Purpose/significance In the traditional knowledge spiral theory (the SECI model), the subjects of knowledge production are divided into three levels: individual, team, and organization, all of which are composed of humans. In the era of artificial intelligence (AI), however, generative AI also participates in the knowledge production process, forming a different knowledge spiral paradigm. Method/process This paper comprehensively applies the case study method, theoretical analysis and model construction, and future scenario prediction to construct the EPIC model of the human-intelligence symbiotic knowledge spiral. Result/conclusion Large models represented by DeepSeek promote the transformation of knowledge from unidirectional reception to bidirectional synergy through technical mechanisms such as web search, deep thinking, human-intelligence synergy, and knowledge distillation, realizing a paradigm upgrade toward the “knowledge exporter” role, which is the core driving force of the bidirectional knowledge spiral. Compared with the SECI model, the EPIC model is innovative in that it extends from a single human subject to “human-AI” dual-subject synergy, upgrades from a two-dimensional, flat “field” within the organization to a three-dimensional, dynamic knowledge creation space, and leaps from a one-way local loop to a two-way global loop.
Purpose/significance Social media influenced by generative artificial intelligence (GAI) has exerted a profound impact on the current information ecology of social networks. By asserting subjectivity within social networks, GAI technology actively engages as a cognitively influential actor in shaping group opinions, thereby reshaping the information ecology of the online public opinion landscape. Method/process Drawing on a large body of literature, a systematic analysis is conducted around three scientific questions: the formation mechanism, social influence, and governance pathway of network group polarization under the influence of GAI. Result/conclusion At the theoretical level, the study constructs a three-dimensional integrated analytical framework of technology drivers, meso-level influence, and macro-level structure, and deconstructs four formation mechanisms: cognitive adjustment, social emotion, technology enhancement, and network reconstruction. At the level of mechanism deepening, a triple evolution mechanism of GAI technology alienation is proposed, encompassing technological militarization, spatial disembedding, and systemic deregulation. At the level of governance innovation, a governance pathway for network group polarization influenced by GAI is introduced, alongside a three-dimensional governance framework of technological governance, institutional innovation, and capacity building. This study deepens the understanding of the formation mechanism of GAI-driven network group polarization, offering both theoretical and practical guidance for mitigating and governing the risks of social media network group polarization driven by GAI.
Purpose/significance In the AIGC (Artificial Intelligence Generated Content) era, the new information dissemination models brought by artificial intelligence technologies have endowed information pollution with new, era-specific connotations. Past definitions of the connotations and analyses of the denotations of information pollution were constrained by the limitations of their times. With the rapid development of AI technologies, exploring and summarizing the connotations and formation mechanisms of new types of information pollution is of great significance for future research in this area. Method/process Taking the new connotations of information pollution in the AIGC environment as the point of departure, this study explores the formation mechanisms of information pollution from the perspectives of the limitations of AI technologies and manipulation behaviors in information activities, and proposes corresponding prevention and control strategies. Result/conclusion Regarding formation mechanisms, the five-dimensional defects in AI model data and the technical and algorithmic limitations of AI together leave vulnerabilities open to the invasion of polluted information and manipulation intentions. Manipulators achieve deceptive aims through two paths, pre-manipulation and post-manipulation, leading to the spread of information pollution, which ultimately consolidates into pollution phenomena. In terms of governance, corresponding prevention and control strategies are proposed for the four links of input, source, channel, and destination, including establishing an input data standard system and implementing pollution prevention and blocking governance. Limitations This study has explored user manipulation behaviors only to a limited extent. Future work plans to supplement relevant qualitative research to extract and summarize the characteristics of information manipulation in the AIGC era and propose corresponding governance strategies.
Purpose/significance Exploring the relationship between post-publication evaluation sentiment and the impact of academic papers can contribute to improving the scientific research evaluation system. Method/process Based on the recommended biomedical papers in H1 Connect, this study proposes metrics including the overall sentiment score, the positive/negative sentiment ratio, and the degree of sentiment consensus. The papers are grouped by citation count, and statistical analysis is conducted to reveal differences in post-publication peer review and academic citation sentiment characteristics across groups. Furthermore, correlation analysis and interpretable machine learning methods are employed to explore the relationship between sentiment features and academic impact. Result/conclusion There is a correlation between post-publication evaluation sentiment and the academic impact of papers. Papers with higher sentiment scores, more positive evaluations, and higher sentiment consensus demonstrate substantially greater academic impact. This study proposes a quantitative method for assessing post-publication evaluation sentiment, explores the sentiment characteristics of evaluations across citation-based impact tiers, and reveals the relationship between evaluation sentiment and academic impact. The study can provide theoretical support and methodological insights for research evaluation.
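The three sentiment metrics named in this abstract could be operationalized as in the following minimal Python sketch. The abstract does not give the study's exact formulas, so the definitions below (mean score, positive-to-negative count ratio, and a dispersion-based consensus degree over per-review scores in [-1, 1]) are illustrative assumptions, not the authors' method.

```python
from statistics import mean, pstdev

def sentiment_metrics(scores):
    """Compute three assumed metrics from per-review sentiment scores in [-1, 1]."""
    overall = mean(scores)                      # overall sentiment score
    pos = sum(1 for s in scores if s > 0)
    neg = sum(1 for s in scores if s < 0)
    ratio = pos / neg if neg else float("inf")  # positive/negative sentiment ratio
    # Consensus degree: 1 minus dispersion; pstdev on [-1, 1] is at most 1,
    # so the result stays in [0, 1], with 1 meaning unanimous evaluations.
    consensus = 1 - pstdev(scores)
    return overall, ratio, consensus
```

A paper with reviews scored [0.8, 0.6, 0.7, -0.1] would get an overall score of 0.5, a positive/negative ratio of 3.0, and a consensus degree below 1 reflecting the dissenting review.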
Purpose/significance The effective transformation of intelligence requirements into intelligence cognition problems plays a critical role in enhancing the efficiency of intelligence work. In-depth research on transformation mechanisms can help bridge the gap between requirements and cognition, advance the paradigm of human-machine collaborative intelligence research in the era of large language models, and provide robust support for intelligence operations. Method/process By closely analyzing concepts such as intelligence requirements and intelligence cognition problems, this study delves into the challenges encountered during the transformation process. It clarifies the intrinsic mechanisms of the “three transitions” and constructs a procedural model centered on “context anchoring–requirement analysis–boundary definition–framework construction–iterative calibration”, while proposing targeted recommendations. Result/conclusion This research reveals the essence of the “three transitions” in transforming intelligence requirements into cognition problems and establishes a corresponding transformation model. The findings facilitate the conversion of ambiguous intelligence requirements into precise problem definitions, improve the efficacy of intelligence research, and promote the advancement of intelligence work in the era of large models.
Purpose/significance Intelligence facts are crucial for intelligence work. Exploring the specific requirements for, and implementation approaches to, secure data delivery from fact-based databases, a key area of supply, so as to meet the demand for intelligence facts, is an urgent practical issue for the intelligence business. Method/process Based on the theory of ternary space, this study elaborates the requirements for secure and trusted data delivery from fact-based databases and constructs a framework for such delivery. It systematically proposes methods for trusted data delivery from fact-based databases and validates them through case analysis. Result/conclusion To ensure the secure and trusted delivery of data from fact-based databases, the physical space requires technical support for trusted data circulation, the cyber space demands that the data constitute high-quality intelligence, and the social space calls for the normalization of transparent data management. Specifically, with data from fact-based databases as the main element, blockchain, privacy computing, and other technologies are employed, along with data trusts, to achieve a trusted data circulation process spanning multiple stages, including acquisition and aggregation, transmission and trading, analysis and utilization, and archiving or reuse.
Purpose/significance In the digital economy era, where data has emerged as a novel production factor, investigating the evolutionary frameworks of data element circulation product morphology holds critical significance for advancing ownership authentication, value quantification, operational fluidity, and holistic market ecosystem development for data elements. Method/process By systematically reviewing product morphologies on domestic and international data element trading platforms and categorizing them according to the intrinsic logic of data element value transformation, this research constructs a value-oriented evolutionary framework for data element circulation product morphology. Guided by value augmentation and grounded in the DIKW (Data-Information-Knowledge-Wisdom) theory, the framework encompasses four core product morphology tiers, namely raw data, thematic knowledge graphs, task-oriented lightweight models, and industry-specific large models, alongside their derivatives, forming a comprehensive and refined product matrix. Result/conclusion By deepening the understanding and application of data element circulation product morphologies, this study contributes a forward-looking and operational theoretical framework for the healthy development of data element trading markets. The proposed system optimizes rights confirmation processes and pricing safeguards while enhancing transactional efficiency, offering Chinese wisdom for the global construction of data trading and circulation product frameworks.
Purpose/significance To clarify the generative mechanisms of social media users’ information dietary bias behavior under stratified public opinion polarization, gain insight into the internal and external dynamics of opinion surges within stratified circles amid complex public opinion environments, and provide references for preventing circle solidification and fostering a healthy public opinion ecosystem. Method/process Based on Wilson’s information behavior framework, this study integrates Engeström’s activity model and Foster’s nonlinear model. It employs specific cases to illustrate the chain of effects of user information dietary bias behavior under stratified public opinion polarization. A generative mechanism model for this behavior is constructed, encompassing five key stages: demand continuity, algorithmic ordering, emotional circulation, cognitive processing, and behavioral feedback. Furthermore, the psychological dynamic perception process of user information dietary bias is analyzed across three stages: habituation, sensitization, and compulsion. Result/conclusion Focusing on user information dietary bias behavior under stratified public opinion polarization, this research uses case studies to reveal the intrinsic nature of information bias across five dimensions: the situational cognition chain, platform algorithm chain, emotional dissemination chain, stratified hierarchy chain, and demand-bearing chain. It discusses the generative mechanisms and psychological perception stages of user information dietary bias, offering novel perspectives for research on this behavior within polarized stratified circles.
Purpose/significance AI Model Cards are designed to systematically document and share metadata about artificial intelligence models. As a structured and standardized tool for describing algorithms, they are pivotal in enhancing algorithmic transparency and advancing algorithm governance. Method/process This study systematically reviewed and synthesized the existing literature on AI Model Cards, categorizing the research into three key dimensions: element structure, content quality assessment, and value justification. Result/conclusion This study identifies the structural elements and intrinsic attributes of AI Model Cards, highlighting shared principles across academia, industry, and government while underscoring their distinct priorities in content development. It also reveals a gap between conceptual design and practical implementation. Moreover, Model Cards demonstrate varying value dimensions tailored to different stakeholder needs. By deeply examining their advantages, limitations, and future development, this study provides conceptual insights and methodological support for building model information frameworks and advancing algorithm transparency.
Purpose/significance The academic community of information science generally agrees that the underlying logic of patent citations to scientific papers is technical innovation’s reliance on scientific foundations. However, there is no consensus on the underlying logic of scientific papers citing patents. This paper aims to infer that underlying logic by analyzing the scientific effects that follow when scientific papers cite patents. Method/process Taking all 14,628 SCI papers that cited patents as the research sample, Spearman correlation analysis, zero-inflated negative binomial regression, and other methods were used to calculate the correlation between the academic impact of the sample papers after citing patents and the value, breadth, intensity, and speed of the patents they cited. Result/conclusion The scientific effect/academic impact of a sample paper citing patents is not related to the value or breadth of the cited patents, but is positively correlated with the intensity and speed of the citation. That is, when scientific papers cite patents, it is not the case that the higher the value of the cited patents, the better, nor that the more fields the cited patents span, the better. Instead, the more focused and rapid the citation of newer patents, the more it stimulates scientific effects/academic impact. Based on the viewpoint of “science explains technology - technology stimulates science” in the philosophy of science and technology, the conclusion is explained and the underlying logic of such citations accounted for. It is proposed that the intensity and speed of scientific papers’ patent citations be applied as indicators in innovation evaluation. Limitations The sample papers are drawn only from SCI, which limits the representativeness of the sample.
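Of the correlation methods this abstract names, Spearman rank correlation is the most self-contained to illustrate. The pure-Python sketch below uses average ranks for ties; it is a generic implementation for demonstration and does not reproduce the study's 14,628-paper sample or its zero-inflated negative binomial regression.

```python
def _ranks(xs):
    """Average 1-based ranks, with tied values sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the run of tied values
        avg = (i + j) / 2 + 1           # mean of the tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it works on ranks, the measure detects any monotone association (e.g. citation counts against patent citation speed), not just linear ones.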
Purpose/significance Intelligence is crucial support for emergency management in accidents and disasters. Exploring the problems of intelligence failure in responding to accidents and disasters can help fully leverage the value of intelligence, promote the scientific prevention of accidents and disasters, and advance the modernization of emergency management. Method/process Based on 21 investigation reports on major and extremely major accident and disaster emergencies from 2014 to 2024, this study identified the issues of inadequate data collection, inaccurate information assessment and judgment, untimely intelligence transmission, and insufficient intelligence application in responding to accidents and disasters, along with their specific manifestations. Result/conclusion From the perspective of resilience thinking and drawing on the TOE framework, the results indicate that defects in the intelligent management platform, barriers to technology application, imbalance in organizational structure, low intelligence literacy, and structural shortcomings of the bureaucracy lead to intelligence failure in responding to accident and disaster emergencies at the three levels of “technological system - organizational system - environmental system”, which in turn trigger the occurrence and development of accidents and disasters. On this basis, countermeasures are proposed, including strengthening the data and information correction mechanism, optimizing risk assessment work, smoothing the collaborative linkage mechanism, and deepening democratic decision-making throughout the process.
Purpose/significance Exploring the impact of public data on the development of regional data element markets is beneficial for optimizing data resource allocation and promoting the balanced development of the regional data economy. Method/process Taking eastern Chinese cities with rapidly developing data element markets as the research object, and based on TOE theory, a framework of factors influencing the data element market is constructed by integrating indicators of public data authorized operation, open utilization, and so on. By comprehensively applying bivariate analysis, crisp-set qualitative comparative analysis (csQCA), and multiple linear regression, this study explores the key factors and paths through which public data affects the development of China’s regional data element markets. Result/conclusion The bivariate correlation results show significant positive correlations among the three TOE dimensions, especially between the service layer and the data layer, and between the data layer and the utilization layer. The csQCA results reveal three paths to high data element market development: data and information infrastructure with authorized operation policy support, data and information infrastructure with authorized operation platform collaboration, and authorized operation platform core drive. Multiple linear regression analysis showed that all configuration paths for high data element market development passed the test. Based on these results and the development characteristics of eastern Chinese cities, suggestions are put forward to strengthen the construction of infrastructure and authorized operation platforms, focus on the drive of authorized operation platforms, and balance infrastructure development with public data protection, so as to promote the comprehensive and rapid development of China’s data element market.
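The sufficiency test underlying the csQCA configuration analysis above can be sketched in a few lines. In crisp-set QCA, cases have 0/1 membership in a condition configuration and in the outcome; a path is accepted when its consistency is high enough. The city data and configuration names below are hypothetical stand-ins, not the study's actual cases.

```python
def consistency(condition, outcome):
    """Share of cases exhibiting the configuration that also show the outcome."""
    both = sum(1 for c, o in zip(condition, outcome) if c and o)
    present = sum(condition)
    return both / present if present else 0.0

def coverage(condition, outcome):
    """Share of outcome cases that the configuration accounts for."""
    both = sum(1 for c, o in zip(condition, outcome) if c and o)
    out = sum(outcome)
    return both / out if out else 0.0

# Hypothetical memberships for five cities: does the configuration
# "infrastructure AND policy support" suffice for a high data element market?
infra_and_policy = [1, 1, 1, 0, 1]
high_market      = [1, 1, 1, 0, 0]
```

Here consistency is 0.75 (three of the four configuration cases reach the outcome) and coverage is 1.0 (the configuration accounts for every outcome case); real csQCA studies typically require consistency above a threshold such as 0.8 before interpreting a path.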
Purpose/significance This paper studies the influencing factors and their synergy in the data security of government data open platforms from the temporal dimension, in order to provide a reference for the data security governance of such platforms. Method/process The paper constructs an analytical framework based on information ecosystem theory, takes provincial panel data from 2020 to 2024 as case samples, and uses the dynamic QCA method to explore the configuration paths and dynamic change processes of the key factors affecting the data security of local government data open platforms. Result/conclusion The research finds that no single factor constitutes a necessary condition for the data security of local government data open platforms. The conditional configuration analysis yields four typical configuration paths: the economy- and technology-driven type under government leadership, the digital talent- and technology-driven type with policy support, the multi-element collaborative driven type, and the digital talent- and technology-driven type empowered by the environment. The configuration paths show strong stability in the temporal dimension.
Purpose/significance Online healthcare communities serve as crucial platforms for patients to exchange medical experiences and obtain health advice, yet they face severe challenges from the proliferation of fake reviews. The scarcity of high-quality labeled data for fake reviews has constrained improvements in the performance of identification models. Method/process This study proposes a training method for fake review identification models based on large language model (LLM) synthetic data, combining crowdsourced annotation with LLM data synthesis to construct a high-quality Chinese medical review dataset containing authentic reviews and multiple types of fake reviews. Comparative experiments were conducted with four machine learning models, logistic regression, random forest, XGBoost, and gradient boosting, across different proportions of synthetic data. Result/conclusion Fake review identification models trained with synthetic data achieved a maximum F1 score of 0.983, representing a 1.55% improvement. All models achieved optimal performance at the highest proportion of LLM synthetic data, validating the effectiveness of LLM synthetic data in addressing the scarcity of labeled data in the healthcare domain. Limitations This study did not thoroughly investigate the differential impacts of the various types of fake reviews on model performance, and the quality assessment of synthetic data still has room for improvement. Future research could further explore the semantic features of fake reviews and synthetic data optimization methods based on adversarial learning.
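The abstract reports model quality as an F1 score, the standard harmonic mean of precision and recall used to compare classifiers trained on different synthetic-data proportions. The binary computation below (1 = fake review, 0 = authentic) is the textbook definition, shown for reference; the toy labels in the test are not the study's data.

```python
def f1_score(y_true, y_pred):
    """Binary F1 for fake-review detection (positive class = fake, label 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

In a study design like the one described, this metric would be computed on a held-out test set for each synthetic-data proportion, and the proportions compared by their resulting F1.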
Purpose/significance Compared with common misinformation, health misinformation is more complex and difficult to identify. A recognition method based on multi-feature fusion helps improve the accuracy of health misinformation recognition. Method/process Centering on health misinformation, this paper introduces persuasion strategy theory and a health vocabulary index, and constructs a health misinformation recognition model based on multi-feature fusion. The model comprehensively uses statistical analysis, machine learning, and deep learning algorithms to extract and integrate title strategy types, health and medical features, emotional features, and semantic features, on which basis health misinformation is identified. Finally, health information articles on WeChat are taken as an example for empirical analysis. Result/conclusion The discriminative performance of this method is better than that of any single feature. Analysis shows that the semantic, emotional, and medical features of the text are the most important.
Purpose/significance Government-funded enterprise product development project data harbors rich insights into new product research and development. Mining technical themes from such data provides critical reference value for market stakeholders, including governments and enterprises, in formulating strategies for technological innovation and industrial development planning. Method/process The S-BERT topic model is used to identify technical topics in government-funded enterprise product development project data, and the Dynamic Topic Model (DTM) is employed to reveal the evolutionary trends of distinct technical topics. The analysis uses data from genetic technology projects funded by the U.S. SBIR program. Result/conclusion A total of 15 technology topics are identified, including RNA interference (RNAi) technology, zinc finger nuclease (ZFN) technology, and others. Most of these topics exhibit significant fluctuations in funding activity. Notably, transcription activator-like effector nuclease (TALEN) technology demonstrates a marked increase in funding traction, while a few technologies, such as genome-wide association technology and gene cloning technology, experience a decline in financial support.
Purpose/significance Identifying and forecasting potential co-opetition relationships is key to gaining a competitive edge for enterprises. This study proposes a systematic co-opetition analysis method to address the limitation of existing research that tends to treat competition and cooperation separately. It reveals the multidimensional interactions among enterprises in complex and dynamic environments, offering both theoretical support and methodological guidance for informed decision-making. Method/process From a patent-based perspective, a dynamic analytical framework is constructed to examine competition and cooperation simultaneously. An extended semantic analysis technique, the Author-Topic Model, is employed to mine patent specifications, uncovering the associations between enterprises and technological themes. Similarities in both market and technological resources are calculated to map potential co-opetition relationships across the entire industry chain. Result/conclusion A firm’s technological competitors include not only direct industry rivals but also potential entrants from upstream and downstream of the industry chain. Technology partners may comprise not only upstream suppliers but also international peers and cross-sectoral organizations adopting similar technologies. Under the influence of market forces and technological innovation, competition and cooperation can dynamically transform into each other. This method expands the analytical perspective on co-opetition and helps enterprises optimize their technological deployment and enhance the scientific rigor of their decision-making.
Purpose/significance The many intelligence clues contained in open source information provide the informational foundation for intelligence acquisition. This article proposes a solution and method for the problem of discovering intelligence clues in the field of open source intelligence research. Method/process The article analyzes the concept of intelligence clues and constructs a basic model for open source scientific and technical intelligence. The process of discovering scientific and technical intelligence clues is divided into three basic stages: topic extraction, clue recognition, and clue inference. Focusing on the specific research question of scientific and technical frontiers, the paper proposes a research approach and method that applies D-S evidence theory to achieve thematic reasoning about scientific and technical frontiers based on intelligence clues. Result/conclusion Empirical analysis on scientific and technical papers, patents, and project data in the field of deep learning shows that the method described in this paper has a certain degree of credibility.
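The clue-inference stage above rests on D-S (Dempster-Shafer) evidence theory, whose core operation is Dempster's rule of combination: fusing mass functions from independent evidence sources (e.g. papers, patents, projects) into one belief assessment. The sketch below is a generic implementation of that rule; the frontier/non-frontier hypotheses and the masses are illustrative, not taken from the study's data.

```python
def combine(m1, m2):
    """Dempster's rule: fuse two mass functions over the same frame.

    Mass functions map frozensets of hypotheses to belief mass summing to 1.
    """
    fused = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict                       # normalization constant
    return {s: m / k for s, m in fused.items()}
```

For instance, if the paper source assigns mass 0.7 to "topic T is a frontier" (0.3 undecided) and the patent source assigns 0.6 (0.4 undecided), combination raises the fused belief in the frontier hypothesis to 0.88, which is how corroborating clues strengthen a thematic inference.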
Purpose/significance This paper constructs an index measurement model for proactive information services in smart Q&A systems of the digital-intelligent government, aiming to provide directions for improving and optimizing proactive information services in the digital-intelligent government context. Method/process Based on the three-dimensional framework of “needs-data-government readiness”, this study proposes an index measurement model for proactive information services in smart question-answering systems of the digital-intelligent government. The model includes 3 dimensions and 27 indicators, integrating the theoretical framework with practical evaluation elements. Result/conclusion The model is applied to the government smart Q&A system platforms of Shanghai, Beijing, and Zhejiang province. The empirical results demonstrate that the model is rational and feasible, providing a scientific basis for evaluating and improving proactive information services in digital-intelligent government systems.
Purpose/significance This study proposes a recommendation framework that integrates knowledge graph embedding with retrieval-augmented generation based on large language models, aiming to enhance the accuracy, professionalism, and interpretability of health information recommendations under the complex demand scenarios of mental health platforms. Method/process A domain-specific knowledge graph for depression is constructed and embedded using the TransE model. Combined with a semantic understanding module for user-generated posts, the framework leverages vector matching and DeepSeek-R1 to build a three-stage pipeline of “knowledge reasoning–semantic retrieval–generation optimization”. Experiments are conducted on 9,316 real user interactions and 100 expert-annotated Q&A pairs. Result/conclusion The proposed framework demonstrates superior performance in semantic similarity (0.786) and expert scoring (81.17). Ablation studies confirm the complementary effectiveness of each module. Innovation/limitation By deeply integrating domain knowledge graph embeddings with retrieval-augmented generation, the study provides multi-perspective evidence of effectiveness in the depression context. However, the current knowledge graph construction relies heavily on public data sources, leading to limited coverage of certain specialized concepts.
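The TransE embedding named above scores a knowledge graph triple (head, relation, tail) by the translation principle h + r ≈ t: the smaller the distance ||h + r - t||, the more plausible the triple. The sketch below shows that scoring step with hand-picked 3-dimensional vectors; the (depression, has_symptom, insomnia) example and its embeddings are hypothetical, not trained vectors from the study's graph.

```python
def transe_distance(h, r, t):
    """L2 distance ||h + r - t||; lower means a more plausible triple."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Hypothetical embeddings for the triple (depression, has_symptom, insomnia).
h = [0.2, 0.1, 0.5]      # head entity: depression
r = [0.3, 0.4, -0.1]     # relation: has_symptom
t = [0.5, 0.5, 0.4]      # tail entity: insomnia (exactly h + r here)
t_bad = [0.9, -0.8, 0.1] # an unrelated entity, far from h + r
```

In a pipeline like the one described, such distances let the knowledge-reasoning stage rank candidate tail entities before the retrieved facts are passed to the generation model.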
Purpose/significance From the perspective of behavioral theory, this study aims to reveal the generation mechanism of Altmetrics data in diverse contexts and to construct a data usability evaluation model, which is beneficial for understanding the value of Altmetrics data from the standpoint of its generation scenarios. Method/process Adopting the normative analysis method, the study introduces behavioral theory to analyze the mechanism of Altmetrics data generation in scientific research, work, and life contexts, summarizes a theoretical model of Altmetrics data generation in diverse contexts, and constructs a usability evaluation function for Altmetrics data. Result/conclusion Although the generation mechanism of Altmetrics varies across contexts, overall, Altmetrics data is a quantitative representation of the information behavior trajectory points of stakeholders driven by contextual goals. Data generation proceeds as an ordered transformation through the contextual matrix, motivational decision-making, behavioral transformation, and data emergence. The four-dimensional “context-motivation-behavior-data” function constructed on this basis can determine the value and usability of Altmetrics data.
Purpose/significance Since the late 1950s, research on intelligence warning theory in the United States has flourished, producing a continuous stream of achievements. While leading the worldwide innovation of intelligence warning theory, it has also continuously driven the development of U.S. intelligence warning practice. Examining U.S. research on intelligence warning theory can provide reference and inspiration for the innovation of intelligence theory and the improvement of practical intelligence warning work in China. Method/process Through literature review and comparative analysis, U.S. intelligence warning theory research is examined vertically along its development trajectory and horizontally across its basic propositions and research characteristics. Result/conclusion Vertically, research on intelligence warning theory in the United States has gone through four stages of development. Horizontally, the research focuses on five basic propositions and presents five basic characteristics.
Purpose/significance The information resource management discipline in the era of artificial intelligence faces new opportunities and challenges. Analyzing the artificial intelligence literacy courses of the core iSchools and summarizing their curriculum profiles and characteristics can provide a reference for talent cultivation in the information resource management discipline in China. Method/process This study takes 20 core iSchools abroad as its research objects. Data were collected through web-based research and an email survey to analyze course characteristics across four dimensions: course content, teaching objectives, teaching methods, and assessment methods. Result/conclusion The artificial intelligence literacy courses offered by the core iSchools abroad are characterized by cross-disciplinary course content, clear teaching objectives, rich teaching methods, and diverse assessment methods. On this basis, it is recommended that artificial intelligence literacy courses in China’s information management discipline be optimized in four areas: creating diverse and integrated curricula, designing multidimensional teaching objectives, promoting diverse teaching methods, and implementing multidimensional assessment approaches.