Purpose/significance How to promote the transformation of intelligence research paradigms through the integration of intelligence cognition and machine intelligence, and to achieve a more profound and broader transformation in intelligence research, is a major challenge that the intelligence discipline urgently needs to address. Method/process First, this paper analyzes the background against which big intelligence was proposed; on this basis, it puts forward and interprets the conceptual characteristics of big intelligence and clarifies its logical starting point, ideological guidance and technical driving force; then, it proposes the scientific research paradigm of big intelligence and discusses the research methods and paths of big intelligence at the current stage. Result/conclusion “Big Intelligence” is the specific manifestation of human cognitive wisdom and machine emergent intelligence in the field of intelligence. The research paradigm under big intelligence can break through the dual barriers of physical boundaries and technological levels in traditional intelligence, relying on big data and large model technologies to build a unified intelligent foundation, thereby achieving standardized and engineering-based processing of multi-modal, cross-domain, and multi-agent intelligence.
Purpose/significance In the era of artificial intelligence, intelligence research faces significant and urgent challenges. There is an urgent need to re-examine the intelligence research model from the perspectives of technological development, engineering-oriented practice, and the integration of theory and technology. Method/process This paper analyzes the practical difficulties faced by intelligence research in the digital and intelligent era and proposes a big intelligence research model based on the “system-experiment dual cycle”. On this basis, a big intelligence system is constructed and big intelligence experimental methods are designed. Result/conclusion The big intelligence research model relies on new-generation information technologies, with big data and large language models as its two pillars. It generates preliminary conclusions of intelligence research through a multi-agent collaborative big intelligence system, then verifies these conclusions and optimizes system functions through big intelligence experiments, ultimately forming the final conclusions of intelligence research.
Purpose/significance In the digital and intelligent era, intelligence work shows a development trend toward big intelligence, and information science needs to realize disciplinary reform through “adapting to changes” and “seeking changes”. Method/process This study reviews the basic understanding of the origin, positioning, disciplinary paradigm and disciplinary system of information science, demonstrates the necessity of information science discipline construction in the context of big intelligence, explores the positioning and key directions of discipline construction, and puts forward countermeasures and suggestions for strengthening discipline construction. Result/conclusion Under the framework of big intelligence, information science takes DIKW as its guiding context, focusing on how information technology enables the efficient transformation from data to knowledge and how human thinking dominates the sublimation from knowledge to wisdom. The discipline of information science needs to focus on such directions as intelligence data governance, intelligence knowledge engineering, intelligence cognition and intelligent intelligence generation, so as to adapt to the challenges and opportunities of big intelligence work.
Purpose/significance This study proposes an interpretable interactive fake review detection model that incorporates DeepSeek-based reasoning clue features. The goal is to enhance the effectiveness of detecting and managing fake reviews in complex interactive scenarios, thereby strengthening user trust in the platform. Method/process On one hand, RoBERTa is employed to extract semantic features from interactive review texts. On the other hand, DeepSeek-R1 is utilized to infer fake review clues, which are then transformed into vector representations. These two types of features are subsequently fused through a cross-attention mechanism. Finally, the fused representations are fed into a multi-label classification layer to obtain the identification results of interactive fake reviews. Result/conclusion Experimental results on a real-world interactive fake review detection dataset show that the proposed model achieves an improvement of approximately 14.22% in F1-score, significantly outperforming baseline models. Moreover, the natural language fake clue texts generated through DeepSeek reasoning provide interpretable evidence for the detection results. Innovation/value This study models the interactive fake review detection task as a multi-label text classification problem, offering a novel methodological perspective and approach for research in fake review detection. The proposed model partially addresses the limitations of previous methods, such as the restricted contextual understanding caused by unidirectional comment propagation and the lack of model interpretability. It provides a practical and effective solution for platforms to identify and manage fake reviews.
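As an illustration of the fusion step described above, the following minimal PyTorch sketch fuses RoBERTa review features (queries) with encoded DeepSeek clue features (keys/values) through cross-attention and feeds the result into a multi-label classification layer; the dimensions, label count and pooling choice are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse RoBERTa review features (queries) with DeepSeek clue features (keys/values)."""

    def __init__(self, dim: int = 768, heads: int = 8, num_labels: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, review_feats: torch.Tensor, clue_feats: torch.Tensor) -> torch.Tensor:
        # review_feats: (batch, review_len, dim); clue_feats: (batch, clue_len, dim)
        fused, _ = self.cross_attn(review_feats, clue_feats, clue_feats)
        pooled = fused.mean(dim=1)                     # simple mean pooling over tokens
        return torch.sigmoid(self.classifier(pooled))  # independent per-label probabilities

# toy shapes only: 2 reviews, 64 review tokens, 16 clue tokens
probs = CrossAttentionFusion()(torch.randn(2, 64, 768), torch.randn(2, 16, 768))
print(probs.shape)  # torch.Size([2, 4])
```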
Purpose/significance This study investigates how different prompt frameworks influence user behavior in conversational search. The goal is to inform the development of personalized prompting strategies in real-world applications and to provide empirical support for optimizing human–AI collaboration in information-seeking tasks. Method/process The study designed two types of search tasks, analytical and creative, and used ChatGPT as the experimental platform. The experiment followed a between-subjects design, in which thirty participants were randomly divided into three groups, with each group using one of three prompt frameworks: ICIO, CRISPE, or QCIPSPE. Researchers collected participants’ evaluations of system performance and user experience before and after the tasks, and then compared the results to evaluate the effectiveness of each framework. Result/conclusion Overall, the use of structured prompt frameworks significantly improved both the efficiency and quality of conversational search. The ICIO framework was particularly effective in helping users break down complex problems and retrieve deeper information, making it well-suited for analytical tasks. CRISPE excelled in reducing search time and enhancing result clarity, fitting time-sensitive scenarios. QCIPSPE improved search accuracy and minimized the need for user corrections, making it ideal for precision-oriented tasks. The findings also highlight the importance of role-based interaction and contextual cues in shaping user satisfaction and retrieval outcomes. Moreover, users’ prior knowledge emerged as a key factor influencing the effectiveness of different prompt strategies. Innovation/value This study addresses the practical gap in understanding the mechanisms by which prompt frameworks influence conversational search, proposes operational evaluation metrics, and offers empirical insights for prompt design and system optimization.
Purpose/significance Data asset inventory, as the first part of carrying out data asset listing, is an important step in identifying, discovering and measuring data resources with real and potential value, and clarifying the issues related to data asset inventory has a driving effect on data asset listing. Method/process Focusing on the current difficulties and controversial issues of data asset inventory, this paper analyzes the connotation, object, and content of data asset inventory and puts forward suggestions for conducting it. Result/conclusion Data asset inventory should take “data” with semantic integrity, relative independence and clear business objects as its object, unify basic data, derived data and implicit data that have asset attributes and application scenarios into the scope of inventory, and adhere to a data value-oriented approach. In addition, it should determine the management attributes of data assets from the organizational scope, identify the application scenarios of data assets within the business context, and ascertain the technical attributes of data assets from the system perspective.
Purpose/significance Application scenarios serve as concrete carriers for aggregating, analyzing, and fulfilling user needs. They play a vital role in enhancing the effectiveness of data collaborative governance. Method/process Based on scenario theory and collaborative governance theory, this paper employs literature analysis and other methods to construct a framework for data collaborative governance mechanisms driven by application scenarios, and analyzes the underlying conceptual design. It argues that scenario-driven approaches and their iterative development can accelerate the formation of data collaborative governance mechanisms. This process enables a dynamic governance structure composed of diverse elements such as multi-stakeholder collaboration, integration of heterogeneous data sources, interconnection of varied platforms, and multi-dimensional technological support. The paper also analyzes the operational characteristics and implementation pathways of scenario-driven data collaborative governance. Result/conclusion The main innovative conclusion of this study is that the design and innovation of application scenarios act as key driving forces for data collaborative governance. The governance process itself is characterized by multi-element matching and the enhancement of data value. Iterative evolution of application scenarios can continuously drive the improvement and functioning of data collaborative governance mechanisms, thereby effectively enhancing the efficiency of data governance.
Purpose/significance The development of information science is closely related to the progress of the times. Against the backdrop of the digital intelligence era, it is necessary to re-examine the traditional concept of the intelligence chain and its limitations. Method/process The article introduces emergence theory from complex systems science, analyzes the complex adaptability of the intelligence chain, and deconstructs the characteristics of multi-agent interaction, nonlinear transmission, and self-organizing evolution in the operation of the intelligence chain. By constructing a hierarchical model of the functional emergence effects of the intelligence chain, it reveals that the essence of the functional emergence of the intelligence chain lies in a hierarchical transition process of “element interaction→structural reorganization→functional emergence”. The core mechanisms of the functional emergence of the intelligence chain are elaborated from four aspects: information integration, core transformation, efficiency amplification, and external catalysis. It is confirmed that the functional system of the intelligence chain achieves evolution from lower to higher levels through internal and external interactions. Result/conclusion The essence of the intelligence chain is a complex system with self-organization, nonlinearity, and adaptability. The core function of the intelligence chain emerges from the interaction of intelligence elements rather than from simple linear superposition. Its ultimate value lies in achieving a qualitative change from scattered information to a basis for decision-making. This study enhances the theoretical depth of intelligence research by adopting an interdisciplinary approach and expands the cross-disciplinary research between information science and management.
Purpose/significance China introduced specific legislation on the AIGC labelling system in 2025, but existing research has paid limited attention to this development, with some even questioning the necessity of the system. In this context, it is essential to study the significance of the AIGC labelling system, evaluate the current legal framework, and propose optimization strategies. Method/process This study adopts functionalism, interpretative theory, and legislative theory to clarify the system’s role and analyses the shortcomings of existing norms in light of the negative impacts of AIGC proliferation. Result/conclusion The AIGC labelling system plays a fundamental role in controlling information pollution, maintaining copyright incentives, rebuilding societal trust, and optimizing technological iteration. However, current laws suffer from issues such as low legislative hierarchy, regulatory conflicts, insufficient control over dissemination stages, and inadequate operational mechanisms. Hence, it is recommended to elevate the legislative hierarchy in a timely manner, integrate existing norms, employ methods such as the combination of subject-based and technology-based measures, detailed labelling rules, and labelling logs to strengthen governance over the dissemination stage, balance responsibilities among all parties, and clarify the legal consequences of violations.
Purpose/significance In view of the fact that current policy evaluation is mostly completed by experts through small-sample manual qualitative analysis, this paper proposes an automatic policy evaluation method based on the PMC index, emphasizing the use of automatic assignment rules to calculate the PMC index, in order to realize objective quantitative evaluation of large samples and to innovate the existing policy informatics method system. Method/process By constructing a PMC index system, we design and implement an automatic scoring method based on keyword recognition, and plot surface diagrams for quantitative analysis and comprehensive evaluation after calculating the PMC index. Taking China’s 2018 biosafety policies as examples, this paper evaluates the distribution of scores and the overall performance of the biosafety policies, and analyzes the shortcomings of the current biosafety policies. Result/conclusion It is found that the overall biosafety policies in China are in good condition, with high scores for policy nature, policy perspective and policy evaluation, and 93% of the policies are at an acceptable level; however, the scores for incentives and constraints and for issuing organizations are low, reflecting the governance characteristics of our country, which emphasizes evaluation, assessment and administrative penalties while being light on constraints and incentives. The automatic policy evaluation method based on the PMC index proposed in this paper provides an efficient means for evaluating large volumes of policy texts and enriches the policy informatics methodology.
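The keyword-recognition scoring described above can be sketched as follows; the primary/secondary variable names and keyword lists are hypothetical placeholders (a real implementation would use the paper's full PMC variable system and Chinese policy vocabulary), while the scoring rule, binary assignment of secondary indicators followed by summing their per-primary-variable means, follows the usual PMC-index construction.

```python
# Keyword-based automatic assignment for a PMC index.
# Variable names and keyword lists are illustrative placeholders only.
PMC_SCHEME = {
    "policy_nature": {
        "prediction": ["forecast", "planning"],
        "supervision": ["supervision", "regulation"],
    },
    "incentives_and_constraints": {
        "incentive": ["subsidy", "reward"],
        "constraint": ["penalty", "accountability"],
    },
}

def pmc_index(policy_text: str) -> float:
    """Score each secondary indicator 1 if any keyword appears in the policy
    text, then sum the per-primary-variable means of those binary scores."""
    text = policy_text.lower()
    score = 0.0
    for secondaries in PMC_SCHEME.values():
        hits = [1 if any(k in text for k in keywords) else 0
                for keywords in secondaries.values()]
        score += sum(hits) / len(hits)
    return score

print(pmc_index("The plan strengthens supervision and provides a subsidy scheme."))
```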
Purpose/significance Under the background of the deep intelligent transformation in scientific and technical (S&T) intelligence research, clarifying the intelligent analytical approach for complex S&T intelligence problems is crucial for efficiently applying AI technologies, represented by Large Language Models (LLMs), to empower S&T intelligence studies. Method/process Based on summarizing the current status of AI-empowered S&T intelligence analysis and the fundamental logic of complex problem resolution, this study formalized the cognitive processes of S&T intelligence experts through business practices. By dynamically combining and comprehensively applying analytical logic methods following the workflow of “problem definition→logic adaptation→element deconstruction→verification and iteration”, an LLM-based intelligent analytical framework for complex S&T intelligence problems was proposed. Result/conclusion A chain-of-thought-like analytical process for complex S&T intelligence problems was constructed. The basic framework for intelligent S&T analysis was proposed, integrating prompt engineering, business models, contextual learning, and fine-tuning, providing support for the application development of LLMs and AI agents in the S&T intelligence domain.
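A minimal sketch of how the “problem definition→logic adaptation→element deconstruction→verification and iteration” workflow could be chained over an LLM is shown below; call_llm is a hypothetical client placeholder and the prompts are illustrative, not the framework's actual prompt engineering.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with a real API or local model call."""
    raise NotImplementedError("plug in an actual LLM backend here")

def analyze_complex_problem(problem: str) -> str:
    """Chain the four workflow stages as successive, context-carrying LLM calls."""
    definition = call_llm(f"Define the scope and boundary of this S&T intelligence problem: {problem}")
    logic = call_llm(f"Select and combine suitable analytical logic methods for: {definition}")
    elements = call_llm(f"Deconstruct the problem into analyzable elements following this logic: {logic}")
    return call_llm("Verify the element-level findings against evidence, flag gaps, "
                    f"and refine the conclusions:\n{elements}")
```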
Purpose/significance To effectively address the phenomenon of technological suspension in the application of large language models, promote the deep integration of large language models with user needs, and achieve high-quality development of large language models. Method/process This paper conducted field investigations on practical cases of large language model technology suspension, carried out in-depth interviews with 24 respondents, and systematically interpreted the characteristics and influencing factors of the phenomenon of large language model technology suspension by coding and deconstructing the interview data. Result/conclusion The results show that the phenomenon of large language model technology suspension is closely correlated with the digital divide, digital disorder, and digital misplacement, and includes three specific characteristic forms: technology embedding suspension, cognitive suspension, and digital suspension. It was also found that information quality, information literacy, technological availability, technological usefulness, and community influence are the key driving forces behind the phenomenon of technological suspension in large language models. Based on this, strategies and suggestions are proposed to optimize information services, enhance technological value, and improve the technological environment so as to address the issue of technology suspension in large language models.
Purpose/significance Accurately measuring original innovation in scientific papers is essential for enhancing national innovation capacity and improving research management and decision-making. Method/process Building on a clear definition of original innovation, this paper focuses on three key dimensions, Original Pioneering (OP), Original Breakthrough (OB), and Original Leadership (OL), and proposes a quantitative measurement framework that integrates both the internal and external features of scientific papers. The main process includes OP assessment using domain-specific knowledge extraction, OB assessment through the integration of local structure entropy and local outlier factor, and OL assessment by combining citation frequency with the quality of citing literature. Finally, the CRITIC entropy weighting method integrates these three dimensions to generate a comprehensive originality score. Result/conclusion An empirical study in the attosecond science domain shows that the proposed method effectively captures original innovation in scientific papers and provides objective, quantitative support for research management. Innovation/limitation This study demonstrates exploratory value in providing a quantifiable and computable measure of original innovation. However, the OL indicator relies on citation information, making it difficult to assess the originality of recent research, and the method’s scalability requires further validation and refinement.
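For the final aggregation step, a sketch of a combined CRITIC-entropy weighting over a papers-by-indicators matrix is shown below; the multiplicative combination of the two weight sets is an assumption made for illustration, since the abstract does not specify the exact combination rule.

```python
import numpy as np

def critic_entropy_weights(X: np.ndarray) -> np.ndarray:
    """Indicator weights for a (papers x indicators) matrix, combining CRITIC
    and entropy weighting; the multiplicative combination is an assumption."""
    # min-max normalize each indicator column
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    # CRITIC: contrast intensity (std) times conflict (sum of 1 - correlation)
    std = Z.std(axis=0)
    conflict = (1 - np.corrcoef(Z, rowvar=False)).sum(axis=0)
    critic = std * conflict
    critic /= critic.sum()
    # entropy weights: lower-entropy indicators carry more information
    P = Z / (Z.sum(axis=0) + 1e-12)
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(Z))
    entropy = (1 - e) / (1 - e).sum()
    combined = critic * entropy
    return combined / combined.sum()

# toy example: 50 papers scored on the OP, OB and OL dimensions
weights = critic_entropy_weights(np.random.rand(50, 3))
print(weights, weights.sum())
```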
Purpose/significance Artificial intelligence opens up a new pattern of the space economy, and spatial computing leads a new era of digitalization. In the field of intelligence mining and knowledge discovery, it is of great significance to promote the deep integration of data-driven approaches and spatial intelligence technologies. Method/process Based on relevant research at home and abroad, and from the perspective of data fusion theory, this paper analyzes the conceptual system, operational logic and mechanism of multi-source heterogeneous spatial data fusion, focusing on its connotation boundary, element structure and technical approach. Result/conclusion A multi-dimensional inference chain spanning the spatial, semantic and temporal dimensions is proposed, which helps traditional intelligence mining transition to spatial intelligence mining. As an important driving force for knowledge discovery, multi-source heterogeneous spatial data fusion is mainly reflected in methodological innovation, technical tool upgrading and application scenario expansion. Innovation/value Based on the multi-dimensional inference chains of the spatial, semantic, and temporal dimensions, and through the deep integration of multi-source heterogeneous spatial data, traditional intelligence mining will transition into spatial intelligence mining, promoting the evolution of information science from two-dimensional to multi-dimensional space and enabling a paradigm shift in knowledge discovery.
Purpose/significance This study analyzes users’ expectation items for AI search and their satisfaction priorities across different dimensions, providing theoretical support and practical guidance for improving the user experience and satisfaction of AI search. Method/process Through online research, 29 AI search expectation items were extracted, sorted and divided into 5 dimensions. A questionnaire was designed based on the Kano model and a user survey was conducted. The impact of each expectation on user satisfaction was analyzed through Kano data analysis and satisfaction index analysis. Result/conclusion AI search user expectations include 7 association types, 11 charm types, 4 indifference types, and 7 essential types. A “dimension × hierarchy” model of AI search user expectations was constructed and AI search optimization strategies were proposed. Innovation/limitation The study provides a multi-dimensional and multi-level theoretical framework and strategic basis for AI search optimization, but it does not fully explore the dynamic changes of user expectations. Future research should examine their temporal evolution and group differences to improve the optimization reference.
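The satisfaction index analysis mentioned above is commonly operationalized with Kano better/worse coefficients; the sketch below computes them from category counts for a single expectation item, and the counts are hypothetical example data rather than the paper's survey results.

```python
def kano_indices(counts: dict) -> tuple:
    """Better/Worse satisfaction coefficients from Kano category counts for one
    expectation item: A attractive, O one-dimensional, M must-be, I indifferent."""
    a, o, m, i = counts["A"], counts["O"], counts["M"], counts["I"]
    total = a + o + m + i
    better = (a + o) / total    # how much the item can raise satisfaction
    worse = -(o + m) / total    # how much its absence lowers satisfaction
    return better, worse

# hypothetical tallies for one AI-search expectation item
print(kano_indices({"A": 12, "O": 8, "M": 5, "I": 5}))  # (0.666..., -0.433...)
```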
Purpose/significance Objective and accurate identification of technological opportunities is crucial for enterprises to continuously enhance the efficiency of technological innovation. However, existing literature generally suffers from imprecise extraction of patent text information and low utilization of innovation elements. To this end, this paper proposes a technological opportunity identification method that combines BERTopic with generative topographic mapping to enhance the precision of judgments on technological opportunities. Method/process Firstly, the BERTopic model is utilized to conduct topic clustering on technical patents in a specific domain, and the topic information is transformed into a How-to model to extract the corresponding functional elements. Secondly, based on the function-oriented search method, patent data in analogous technical fields are collected, and relevant combinations of technical elements are obtained through generative topographic mapping. Then, a multi-dimensional patent space map is constructed to identify potential technological opportunities. Finally, taking the assembly of aircraft fuselage structures as an example, the feasibility of the proposed method is verified. Result/conclusion The technological opportunity identification method based on patent text clustering and mining combinations of technical elements can effectively reduce the risk of irrational innovation by enterprises and provide a scientific decision-making reference for enterprises to efficiently carry out concrete technological innovation.
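A minimal sketch of the first step, BERTopic topic clustering over domain patents, is shown below; the hyperparameters and the placeholder patent_abstracts list are illustrative, and the downstream How-to modeling and generative topographic mapping are not included.

```python
from bertopic import BERTopic

# patent_abstracts: cleaned abstract texts of patents in the target domain
# (placeholder list here; BERTopic needs a few hundred documents to work well)
patent_abstracts = ["Abstract of patent 1 ...", "Abstract of patent 2 ..."]

topic_model = BERTopic(language="english", min_topic_size=15)  # hyperparameters illustrative
topics, probs = topic_model.fit_transform(patent_abstracts)

print(topic_model.get_topic_info().head())  # topic ids, sizes and keyword labels
```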
Purpose/significance With the rapid development of network technology and artificial intelligence, religious extremist organizations have accelerated their penetration through various social media platforms, and their spreading of extremist ideas has become increasingly covert and technical. Traditional governance means face the severe challenge of being “difficult to find, trace and intervene in”. Method/process Taking public accounts on the “X” platform as the object, this study captures their social relations and communication data with open-source intelligence techniques and, combined with the social network analysis (SNA) method, quantifies node centrality, propagation levels and cluster characteristics to identify key nodes. Result/conclusion The dynamic SNA method integrating open-source intelligence effectively compensates for the deficiencies of traditional static social network analysis in terms of timeliness and the characterization of dynamic evolution. Cracking down on the “key nodes” in religious extremist propaganda networks has a multiplier effect on the disintegration of the organization. Facing the “decentralized” networks of extremist propaganda organizations, governance should focus on cracking down on key hub nodes, gradually divide and disintegrate small groups within the organization, and use artificial intelligence technology to intervene in the internal information exchange of the organization.
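The centrality-based identification of key nodes can be sketched with networkx as follows; the toy edge list stands in for the propagation relations captured via open-source intelligence, and ranking by betweenness centrality is one reasonable way to surface bridging hub accounts.

```python
import networkx as nx

# Directed propagation network: an edge u -> v means content from account u
# reached account v (forward, reply or mention); toy edges stand in for the
# relations harvested with open-source intelligence tools.
G = nx.DiGraph()
G.add_edges_from([
    ("acct_a", "acct_b"), ("acct_a", "acct_c"),
    ("acct_b", "acct_d"), ("acct_c", "acct_d"), ("acct_d", "acct_e"),
])

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# rank candidate key nodes by their bridging role in the propagation network
key_nodes = sorted(betweenness, key=betweenness.get, reverse=True)[:3]
print(key_nodes, {n: round(degree[n], 2) for n in key_nodes})
```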
Purpose/significance As the core confluence of scientific and technological innovation, the frontier interdisciplinary field has increasingly become a strategic focal point in global technological competition. Amidst the escalating intensity of global scientific and technological rivalry, nations are actively vying to gain the upper hand in researching and developing frontier interdisciplinary topics to erect technological barriers. These topics engender disruptive innovation through the reorganization of disciplinary knowledge. They not only facilitate breakthroughs in the development bottlenecks of individual fields but also foster the impetus for industrial transformation via the integration of multiple technologies. This process provides robust support for countries to optimize the allocation of scientific research resources and formulate forward-looking strategies. Consequently, the accurate and expeditious identification of frontier interdisciplinary topics is of paramount importance for seizing the commanding heights of science and technology. Method/process This research proposes a method for identifying frontier interdisciplinary topics by integrating the perspective of knowledge networks. It identifies such topics through the synthesis of knowledge network characteristics and literature-based index features. First, the BERTopic model and hierarchical clustering methods are applied to conduct topic identification and clustering. Second, a knowledge network is constructed based on topic clusters, and the metrics of degree centrality and betweenness centrality are introduced to quantitatively analyze the influence and connectivity of topic clusters within the network. Third, network-based indicators are integrated with traditional metrics to construct an index system for positioning frontier interdisciplinary topics along the two dimensions of frontierness and interdisciplinarity. Finally, an empirical study is performed in the domain of generative artificial intelligence to validate the effectiveness of the proposed research framework. Result/conclusion The frontier interdisciplinary topics in the field of generative artificial intelligence are primarily distributed in the areas of technological innovation, social governance, and biomedical applications. Substantiated by evidence from authoritative biomedical industry reports and extensive literature and news, this framework has demonstrated its efficacy in accurately identifying frontier interdisciplinary topics within the domain, thereby exhibiting significant practical application value. The study not only expands the theoretical approaches for identifying frontier interdisciplinary topics but also offers novel perspectives for the allocation of scientific research resources and the prediction of technological trends in practical scenarios.
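The topic identification and clustering step can be sketched as agglomerative clustering over topic embeddings; the random embeddings and the distance threshold below are placeholders, and the subsequent knowledge network construction and centrality analysis are omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# topic_embeddings: one vector per BERTopic topic (random placeholders here)
topic_embeddings = np.random.rand(40, 384)

condensed = pdist(topic_embeddings, metric="cosine")
tree = linkage(condensed, method="average")
clusters = fcluster(tree, t=0.6, criterion="distance")  # threshold is illustrative

print(clusters)  # cluster label per topic, later used as knowledge-network nodes
```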
Purpose/significance Technological videos are an important part of video intelligence. As a video type that takes science and technology as its subject of expression, they integrate information across visual and auditory dimensions, significantly enhancing the integrity of intelligence in the field of intelligence research. With the breakthrough development of artificial intelligence generated content (AIGC), the creative paradigm of technological videos is undergoing profound changes. The aim of this study is to analyze the application of AIGC in the creation of technological videos, providing theoretical and practical guidance for its development. Method/process The article conducts research from three dimensions: dataset construction, technical application, and reflection on issues. By constructing high-quality datasets of technological videos and using digital conversion and dynamic web crawling technologies, it provides data for AIGC model training. It also conducts an in-depth analysis of the technical paths and typical cases of AIGC in various stages of creation, including pre-production, production and post-production. At the same time, it examines issues such as the crisis of scientific authenticity, ethical risks, and blurred liability attribution in AIGC applications. Result/conclusion The research reveals that AIGC has significant advantages in improving creative efficiency and expanding the boundaries of artistic expression. However, it is necessary to center on human subjectivity and establish a dual-track collaboration framework of “creativity-led, technology-executed” to balance efficiency and values. In the future, the integration of AIGC and technological videos should move towards an intelligent creative ecosystem of human-machine collaboration, achieving both efficiency improvement and ethical adherence through technological iteration and interdisciplinary norms.
Purpose/significance In the context of technological innovation, accurate prediction of emerging topics helps to grasp the future development trends of science and technology, and is of great value in guiding research directions and optimizing industrial layout. Method/process Based on the logical consistency between the generation of emerging topics and the co-occurrence of patent classifications, this study migrates the emerging topic prediction task to a link prediction scenario, designs an end-to-end graph representation learning framework for emerging topic prediction, and verifies the prediction effect with patent data in the field of genetic engineering. Result/conclusion Experimental results show that this method can capture future emerging topics, and the model has good robustness and generalization ability. The prediction results are scientifically sound, and the method can serve as an effective path to assist the prediction of emerging topics.
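A minimal sketch of a graph representation learning link predictor in the spirit described above is given below (PyTorch Geometric); nodes would be patent classification codes, edges their observed co-occurrences, and negative sampling plus the training loop are omitted. The architecture and dimensions are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class CooccurrenceLinkPredictor(torch.nn.Module):
    """GCN encoder with a dot-product decoder for predicting future
    co-occurrence links between patent classification codes."""

    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)

    def encode(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

    def decode(self, z, pairs):
        # pairs: (2, num_candidate_links); score = dot product of node embeddings
        return (z[pairs[0]] * z[pairs[1]]).sum(dim=-1)

# toy graph: 5 classification codes, 3 observed co-occurrence edges
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
model = CooccurrenceLinkPredictor(in_dim=16)
z = model.encode(x, edge_index)
print(model.decode(z, torch.tensor([[0, 3], [4, 4]])))  # scores for candidate future links
```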
Purpose/significance In the era of big data, analyzing the mechanism of the multidimensional correlated evolution of crisis public opinion has become an important basis for public opinion governance. Method/process This study proposes a big data analysis framework for crisis public opinion based on mixed methods, designing a three-stage progressive analysis mode of “feature extraction, mechanism mining and reverse validation”. High-precision extraction of crisis features in the data space is achieved based on adaptive deep learning methods. Econometric models are constructed to systematically explore the internal conduction mechanism of public opinion evolution in the real space. QCA is introduced to reversely validate the robustness of the conclusions. Result/conclusion The Accuracy, Precision, Recall and F-Measure indicators of the automatic feature extraction models are all above 0.942. The econometric models reveal the full-chain conduction mechanism in which risk signals in the data space activate psychological factors such as social identity, risk perception and protective motivation, catalyze anger and fear reactions, and ultimately drive the risk of public opinion diffusion in the real space. The moderating effect of psychological distance exhibits complex nonlinearity and situational dependence. The QCA results further demonstrate the comprehensive reliability of the mechanism discovery. Innovation/value The framework achieves breakthroughs in the accuracy of automatic feature extraction, the depth of mechanism interpretation and the reliability of conclusions, providing theoretical support and methodological reference for the precise governance of crisis public opinion.
Purpose/significance In contemporary social media, users’ opinions and emotional expressions are increasingly presented in multimodal forms. To address the challenges of incomplete feature representation and semantic inconsistency across modalities in multimodal sentiment analysis tasks, this study investigates the application and performance of multimodal large language models in social media multimodal sentiment analysis. Method/process Specifically, two models are constructed: a multimodal sentiment analysis model based on llama3.2-vision fine-tuned using LoRA, and another based on GPT-4o combined with prompt strategy optimization. The effectiveness of the proposed models was evaluated through experiments on the public datasets MVSA-Single and MVSA-Multiple. Result/conclusion Experimental results show that the sentiment analysis models under both schemes significantly outperform the baseline models in terms of accuracy, precision, and recall. The study provides a new analytical perspective and theoretical framework for multimodal sentiment analysis on social media in the context of generative artificial intelligence, which is of great significance for multimodal sentiment monitoring and guidance in cyberspace.
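A minimal sketch of LoRA fine-tuning with the peft library is shown below; it uses a text-only stand-in checkpoint and illustrative hyperparameters and target modules, whereas the paper fine-tunes llama3.2-vision loaded through its multimodal interface.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Text-only stand-in checkpoint; the paper's actual base is llama3.2-vision,
# which would be loaded through its multimodal model class instead.
base_id = "meta-llama/Llama-3.2-1B"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16, lora_alpha=32, lora_dropout=0.05,                   # hyperparameters illustrative
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```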
Purpose/significance With the acceleration of information technology development, enterprises face an increasingly complex policy environment, and achieving accurate policy matching and personalized recommendation has become an important issue to be addressed urgently. This paper proposes a policy recommendation method that integrates DeepSeek and Sentence-BERT, aiming to improve the accuracy and efficiency of policy matching. Method/process The study constructs an enterprise profile by characterizing enterprises along four dimensions: enterprise type, location, industry, and main business. Based on prompt engineering with DeepSeek, key information such as applicable targets, regions, industries, and business areas is extracted from policy texts. The Sentence-BERT model is introduced to generate semantic embeddings, and a multi-head attention mechanism is applied to align the semantic features between enterprises and policies, thereby constructing an enterprise-policy matching model. Result/conclusion Experimental results show that the proposed model achieves excellent performance in precision, recall, and F1 score, significantly outperforming traditional models, which verifies the effectiveness and feasibility of the method. Innovation/limitation This study innovatively applies DeepSeek to policy information extraction and integrates it with Sentence-BERT to build a policy recommendation framework, providing an efficient technical approach for personalized policy recommendation for enterprises and helping enterprises better utilize policy resources. Future work may explore directions such as dynamic updating, real-time recommendation, and multi-modal data fusion to further enhance the timeliness and comprehensiveness of policy recommendation.
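The semantic embedding and matching step can be sketched with sentence-transformers as below; the checkpoint name, example enterprise profile, and extracted policy fields are illustrative assumptions, and the paper's multi-head attention alignment is not reproduced here, only a cosine-similarity baseline.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed checkpoint

enterprise_profile = "Small software enterprise in Hangzhou building industrial IoT platforms"
policy_fields = [  # fields assumed to be extracted from policy texts via DeepSeek prompts
    "Applicable targets: small and medium-sized software enterprises",
    "Applicable region: Zhejiang Province",
    "Supported business: industrial internet and IoT platform development",
]

profile_vec = model.encode(enterprise_profile, convert_to_tensor=True)
field_vecs = model.encode(policy_fields, convert_to_tensor=True)
print(util.cos_sim(profile_vec, field_vecs))  # similarity of the profile to each policy field
```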