[Purpose/significance] This study explores the mechanisms underlying digital hoarding behavior in the workplace, aiming to provide references for organizations to enhance information management efficiency and mitigate the risk of digital redundancy. [Method/process] Drawing on proximal-distal frameworks and regulatory focus theory, 347 valid questionnaires from full-time employees were collected. This study then adopts a two-step fsQCA approach to examine the hierarchical pathways linking organizational pressures (such as time pressure, upward social comparison, and data protection responsibility) and individual psychology (including fear of missing out and rumination) to digital hoarding behavior in the workplace. Additionally, the study explores the moderating role of leaders’ regulatory focus (promotion focus and prevention focus) on these pathways. [Result/conclusion] The findings reveal three distinct configurations leading to digital hoarding behavior in the workplace: a responsibility-anxiety configuration, a comparison-anxiety configuration, and a comparison-rumination configuration. Notably, no single necessary condition leads to digital hoarding behavior in this context. Furthermore, the study demonstrates that leaders’ prevention focus effectively moderates the impact of employees’ psychology on digital hoarding behavior within groups. This study enriches the theoretical exploration of the mechanisms influencing digital hoarding behavior in the workplace. It also extends the application of two-step fsQCA and QCA moderation methods, offering practical insights for strategies to mitigate digital hoarding behavior in work settings. [Limitations] This study does not address the configurations that lead to low levels of digital hoarding behavior, pays limited attention to socio-cultural factors, and does not clarify the downstream impacts of such behavior. These aspects could be explored in future research.
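The calibration stage of fsQCA can be illustrated with a short sketch. The anchors, the log-odds convention, and the consistency check below are generic fsQCA conventions, not values from this study; the function names and sample numbers are illustrative.

```python
import math

def calibrate(x, full_non, crossover, full_in):
    """Direct calibration of a raw score into fuzzy-set membership (0..1).

    Uses the common fsQCA convention: log-odds of +3 at the
    full-membership anchor, 0 at the crossover point, and -3 at the
    full-non-membership anchor.
    """
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

def consistency(condition, outcome):
    """Sufficiency consistency of a calibrated condition for an outcome:
    sum of min(x, y) over cases, divided by the sum of x."""
    return (sum(min(x, y) for x, y in zip(condition, outcome))
            / sum(condition))
```

In a two-step fsQCA analysis, raw survey scores would first be calibrated this way before truth-table construction; consistency is then used to screen candidate configurations against a threshold.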
[Purpose/significance] With the increase of AI-generated content on social media, users are increasingly likely to be exposed to AIGC. In particular, AI-generated images are widely used on social media, which has affected users’ access to information. However, users’ attitudes toward the application of AI-generated images remain unclear. Exploring users’ identification and cognition of AI-generated images on social media is important for their use, optimization, and management. [Method/process] This study adopts a mixed-methods design combining an eye-tracking experiment with interviews. 60 social media users participated in an eye-tracking experiment on the identification of portrait images, followed by in-depth interviews on their identification bases and cognition. The interview results were then coded and analyzed. [Result/conclusion] Users rely on comprehensive criteria, primarily focusing on the subject’s face and clothing, to identify portrait images. However, these features fail to improve users’ identification accuracy and confidence. In terms of cognition, users display varied attitudes toward the use of AI-generated images, with negative attitudes predominating. AI-generated images are mainly embraced in visual-assistance scenarios but are often rejected in situations that require emotional engagement, practical experience, and authenticity. The findings highlight the importance of protecting users’ right to be informed and improving governance to facilitate the application of AI-generated images on social media, ensuring users’ access to factual information.
[Purpose/significance] With the intensification of globalization and technological competition, leveraging big data technology to grasp the dynamic information of scientific and technological talents, accurately identify and predict talent security risks, and ensure the security of national scientific and technological talents has become an important component of the strategy for technological security. [Method/process] This paper elaborates on the conceptual characteristics of national scientific and technological talent security from the “state-capability” dimension, studies the types of risks faced in international competition and the content of intelligence services, and proposes a multi-source information intelligence situation awareness system serving national scientific and technological talent security based on the SA-ISRM theoretical framework. [Result/conclusion] The system consists of three parts: situation awareness, situation understanding, and situation prediction. It integrates multi-source information processing and analysis technology into the workflow of talent security intelligence, achieving perception, prediction, and response to the security risks of scientific and technological talents, which provides a solid foundation for ensuring national technological security.
[Purpose/significance] This study explores the factors influencing users’ value perception of AIGC oriented to empirical knowledge acquisition and the relationships among them, in order to improve users’ value perception of AIGC from key perspectives. [Method/process] Samples were collected through hybrid experiments and user evaluations. Following the grounded-theory process of open coding, axial coding, and selective coding, a three-level coding analysis of the original data was conducted to obtain the main categories, sub-categories, and logical relationships of the factors that affect users’ perception of the value of social media content, and an influencing-factor model of user value perception was constructed. [Result/conclusion] Users’ value perception of AIGC results from the combination of five influencing factors: content quality, information presentation, value standpoint, practical value, and user experience. To improve the quality of AIGC, attention should be paid to these five aspects.
[Purpose/significance] With the rapid popularisation of generative artificial intelligence, how to effectively avoid its potential risks and achieve precise prevention and control has become an important issue. This paper provides an in-depth analysis of the regulatory policies for generative artificial intelligence at home and abroad, with a view to improving China’s regulatory capabilities for generative artificial intelligence. [Method/process] Using text mining and the PMC model, this paper sorts out and summarises the regulatory policies and experiences of China, the United States, the European Union and the United Kingdom for generative artificial intelligence, compares the advantages and characteristics of different policies, and on this basis proposes a path for China to respond to the regulation of generative artificial intelligence. [Result/conclusion] There are significant differences in the regulatory attitudes and policy-making logic of different countries and regional organisations towards generative AI. Therefore, it is necessary to combine the advantages of different policies and consider their limitations, and to further improve the policy-making for generative AI in different fields and industries, taking into account the current state of technological development and the risks of social applications of generative AI in China.
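The PMC-model stage of such policy evaluations is commonly computed as a PMC index: each primary variable is scored as the mean of its binary secondary variables, and the index is the sum of primary-variable scores. The sketch below follows that standard convention; the variable names are hypothetical, not the paper’s actual indicator system.

```python
def pmc_index(scores):
    """Compute a PMC index.

    scores: dict mapping each primary variable name to a list of 0/1
    secondary-variable ratings. Each primary variable contributes the
    mean of its secondary ratings; the PMC index is the sum of these
    contributions, so a higher index indicates a more complete policy.
    """
    return sum(sum(ratings) / len(ratings) for ratings in scores.values())

# Hypothetical policy rated on three primary variables.
policy = {
    "policy_nature":   [1, 1, 0, 1],  # e.g. forecast, guidance, advice, support
    "policy_timeliness": [1, 0],
    "incentive_tools": [1, 1, 1, 0],
}
```

Policies are then ranked and compared by their indices, as in the cross-country comparison the abstract describes.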
[Purpose/significance] Synthetic data is a critical type of training data for artificial intelligence (AI). Its extensive application in AI model training has given rise to multiple risks, severely impacting the healthy development of AI models. Therefore, it is essential to explore governance pathways for these risks based on an analysis of the current governance landscape. [Method/process] By employing methods such as literature review, case studies, and qualitative analysis, this study elucidates the risk profiles associated with the use of synthetic data in AI training and identifies shortcomings in the existing governance framework. Based on a shift in governance paradigms, it proposes specific pathways for risk governance. [Result/conclusion] Currently, synthetic data in AI model training faces quality risks, security risks, and misuse risks. However, the governance system under the command-and-control model exhibits significant deficiencies in three dimensions: institutional design, technical governance, and multi-stakeholder collaboration. To address these challenges, a transition to an agile governance model is imperative. Under this new model, a systemic governance framework for synthetic data can be established by constructing a hybrid institutional system, leveraging technical tools for flexible governance, and optimizing mechanisms for multi-stakeholder collaboration.
[Purpose/significance] Based on the 15th National Symposium on Scientometrics and Scientific Evaluation, this study focuses on the innovative role of digital intelligence technology in scientometrics and scientific evaluation, and explores paths for the innovative development of the discipline. [Method/process] By reviewing and analyzing the keynote speeches, accepted papers, and relevant literature of the symposium, this paper explores the paradigm shift in scientometrics and the application of digital technology in scientific and educational evaluation. [Result/conclusion] Scientometrics has evolved from empirical statistics to data-driven intelligent analysis. At the same time, scientific and educational evaluation has achieved indicator innovation and process upgrading through open data and AI empowerment. Corresponding countermeasures are proposed to address the heterogeneity of multi-source data, AI algorithm black boxes, data ethics risks, and insufficient policy coordination in the transformation process. It is pointed out that in the future, scientometrics and scientific and educational evaluation should be promoted from “observing history” to “foreseeing the future”, providing theoretical and tool support for a human-machine collaborative scientific research ecology.
[Purpose/significance] Taking the EU’s standardized approach to online disinformation governance, from policy to system, as a blueprint can provide useful references for improving China’s online information content governance system and mechanism construction. [Method/process] With the timeline of policy texts as the main thread, this study combs through the process of building social consensus on the EU’s online disinformation governance, analyzes the methods of integrating disinformation into legal systems, and examines the specific measures of transparent and diversified governance. [Result/conclusion] Drawing on the EU’s standardized approach, China should use continuous and unified policy texts to clarify the connotations of harmful and illegal information content, providing domain-specific conceptual support for regulatory construction. In addition, online information content governance should adhere to a “human-centric” approach, value incentive- and guidance-oriented thinking, and use standardized methods to invite more diversified entities to participate in the governance of online information content, thereby enhancing the effectiveness of governance.
[Purpose/significance] The field of national defense science and technology information (STI) is characterized by limited open-source training data, high timeliness, and high requirements for professional knowledge, so examining the application capability and effect of large language models (LLMs) for its special needs is an urgent problem. [Method/process] This paper constructs a data set covering three dimensions of national defense STI: domain knowledge, dynamic research, and thematic research, and selects 1,557 subjective and objective questions to assess the application effect of eight LLMs developed by commercial organizations, research institutes, and universities in the field. [Result/conclusion] The LLMs perform well on domain knowledge, but there is still a significant gap in dynamic and thematic information research. It is necessary to actively seek and promote the integration, adaptation, and application of LLM technical capabilities in the field, so as to provide strong support for high-quality and efficient national defense STI services.
[Purpose/significance] In the context of frequent emergencies and disasters, emergency intelligence services serve as a critical component influencing the effectiveness of emergency management activities. They are a fundamental prerequisite for the successful implementation of emergency responses, providing essential support for informed decision-making and efficient action. [Method/process] Based on resilience thinking, the emergency intelligence service mechanism is constructed from four aspects: emergency intelligence acquisition and monitoring, intelligence analysis, intelligence research and judgment, and intelligence feedback. Combined with the research framework of resilience theory, the specific process and operating logic of the emergency intelligence service system are described across four stages: demand understanding, data mining, comprehensive research and judgment, and service feedback. [Result/conclusion] With adaptive ability, absorption ability, and recovery ability as the internal thrust, resilience thinking is embedded in emergency intelligence services, and the enabling role of emergency intelligence in emergency response is strengthened to improve the resilience of emergency management.
[Purpose/significance] Accurately identifying “bottleneck” key core technologies and analyzing their competitive situation in a forward-looking manner are of great strategic significance for breaking through the constraints of scientific and technological embargoes and fostering new quality productivity led by scientific and technological innovation. [Method/process] This paper proposes a method for identifying “bottleneck” key core technologies and analyzing their competitive situation based on weak signal theory, utilizing patent data and the CCL. Firstly, it establishes an index system across the five dimensions of complexity, innovation, security, externality, and legality, and constructs a key core technology selection model based on a DNN. Secondly, it uses the CCL to identify explicit “bottleneck” key core technologies and a weak signal combination diagram to screen implicit “bottleneck” key core technologies. Finally, through comparison with the United States, Japan, Britain, France, and other developed countries, it analyzes the competitive situation of the screened “bottleneck” key core technologies and clarifies the direction for China’s “bottleneck” key core technologies. [Result/conclusion] This paper takes deep-sea submersibles as an example to carry out an empirical analysis, identifies 8 types of explicit “bottleneck” key core technologies and 3 types of implicit “bottleneck” key core technologies in the field, and verifies the feasibility and validity of the proposed method. The approach can reveal the breakthrough direction of “bottleneck” key core technologies at a deep level, providing references for national development policies and industrial layout.
[Purpose/significance] This study explores the deviation between the technology layer and the demand layer in the industry chain, clarifies the key technology areas that urgently need investment in the current industry chain, and provides directional guidance for the overall development of the industry chain. [Method/process] The study constructs demand-subject cooperation networks, demand-topic co-occurrence networks, industry chain correlation networks, patent IPC co-occurrence networks, and innovation-subject cooperation networks, connecting the network layers to form a three-dimensional “knowledge cooperation” complex network architecture based on “demand-industry-technology”. It analyzes the degree of deviation between technology and demand at the macro level, and comprehensively uses technology co-occurrence intensity, network structure indicators, and time-series dynamic analysis methods to identify the development trends of specific technologies at the micro level, thereby achieving a comprehensive assessment of technology development trends. [Result/conclusion] The empirical results indicate that the demand layer for new energy vehicles mainly focuses on performance technologies such as range, charging, safety, and power, followed by design technologies such as appearance, interior, model, and configuration. The key driving forces for the development of new energy vehicle technology are universities and technology-based enterprises. From the perspective of technological trends, technology is gradually shifting from hybrid power to electrification, mainly focusing on the research and development of battery structures and application devices. From the perspective of the industrial chain, the upstream emphasizes electric power drive, fuel cell, and energy storage battery technology; the midstream focuses on power system integration and control; and the downstream focuses on electric power drive and transmission, as well as mechanical component testing.
Overall, the demand layer focuses on the downstream of the industrial chain, while the technology layer invests heavily in the upstream, with significant differences in focus.
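The technology co-occurrence intensity used at the micro level can be sketched with a Salton (cosine) normalization over patent IPC codes. The normalization choice is an assumption, since the abstract does not specify one, and the IPC codes below are merely illustrative.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cooccurrence_intensity(patents):
    """patents: list of IPC-code sets, one set per patent.

    Returns the Salton (cosine) co-occurrence intensity for each IPC
    pair: joint count divided by the geometric mean of the two codes'
    individual patent counts.
    """
    single = Counter()   # how many patents carry each code
    pair = Counter()     # how many patents carry each code pair
    for codes in patents:
        codes = set(codes)
        single.update(codes)
        pair.update(frozenset(p) for p in combinations(sorted(codes), 2))
    return {tuple(sorted(p)): n / sqrt(single[min(p)] * single[max(p)])
            for p, n in pair.items()}
```

A pair that always appears together scores 1.0; rarely co-assigned codes score near 0, which is what lets the network layer weight edges by co-occurrence strength.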
[Purpose/significance] This paper presents a model for early identification of science and technology security risks in emerging technologies, offering valuable insights for strategic decision-making as China seeks to capitalize on the opportunities presented by the new wave of technological revolution and secure a leading position in future industries. [Method/process] Adopting a “science-technology” linkage perspective, the study integrates sentence-level scientific and technological text analysis with citation relationships to construct a current science and technology knowledge network. Using graph neural network-based link prediction methods, the paper builds a future science and technology knowledge network. Based on the temporal evolution of nodes, changes in network topology, and the linkage of science and technology, an index system is constructed to identify emerging technologies. Furthermore, the model considers factors such as development potential, technological gaps, and substitutability to construct an early warning system for science and technology security risks from the perspective of the innovation chain. This model is empirically validated within the domain of CNC machine tools. [Result/conclusion] Five emerging technologies are identified: CNC machine fault diagnosis technology, CNC machine digital twin technology, 3D printing CNC machines, CNC machine chatter suppression technology, and cloud-based CNC systems. The results indicate that China faces significant science and technology security risks in the areas of chatter suppression and fault diagnosis technologies for CNC machine tools. By comparing the identification results with current market research, good consistency was observed, demonstrating the scientific validity and forward-looking nature of this methodology.
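As a simplified stand-in for the GNN-based link prediction step, a classical neighborhood heuristic such as Adamic-Adar illustrates how candidate future links in the current knowledge network can be ranked; the paper’s actual model is a graph neural network, not this heuristic, and the toy graph is hypothetical.

```python
from math import log

def adamic_adar(adj, u, v):
    """Adamic-Adar score for a candidate future link (u, v).

    adj maps each node of the current knowledge network to its set of
    neighbours. Shared neighbours contribute 1/log(degree), so rare
    shared neighbours count more than hub nodes. Higher scores suggest
    a more likely future link.
    """
    return sum(1.0 / log(len(adj[w]))
               for w in adj[u] & adj[v]
               if len(adj[w]) > 1)
```

Ranking all unlinked node pairs by such a score yields a predicted future network, on which emerging-technology indicators (node evolution, topology change) can then be computed.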
[Purpose/significance] As an important trend in the development of search engines, conversational search engines bring a new human-AI interactive experience to the user information retrieval process. Analyzing the differences in users’ retrieval experience between conversational and traditional search engines, and summarizing the factors that cause these differences, will help promote the development and adoption of conversational search engines and advance research on user information behavior in the field of information retrieval. [Method/process] This study adopts a task-driven experimental method. With task complexity as the gradient, three types of task situations, namely factual, explanatory, and exploratory ones, are set up. In each situation, user experiences of the two types of search engines are compared. The subjects used Bing and Bing Copilot to complete the retrieval tasks and participated in post-task interviews. [Result/conclusion] As the task types become more complex, compared with traditional search engines, the number of questioning rounds, the number of questioning information items, perceived interactivity, and perceived likability significantly increase when users use conversational search engines. At the same time, however, the number of answer points, the length of answers, the similarity of answers, and perceived usefulness are lower. Users are more willing to use convenient, interactive conversational search engines to meet their personalized information acquisition and knowledge growth needs, but insufficient information literacy also gives rise to a series of problems.
[Purpose/significance] With the continuous increase of trade friction, to improve China’s advantage in technological competition, it is essential to accurately identify technical gaps in order to break through technical barriers and realize the high-quality development of technology. [Method/process] Firstly, data collection and pre-processing were carried out, and a Llama2-BERTopic model was constructed to identify the technical segments of concern at home and abroad. Secondly, five evaluation indexes, including the degree of technological monopoly, the degree of technological novelty, the ability of technological innovation, the degree of technological importance, and the scale of technological activities, were constructed, and a PCA-VIKOR evaluation model was used to evaluate the technical subdivision fields. Finally, according to the evaluation results, the existing technical gaps in China were identified. [Result/conclusion] Taking lithium-ion battery separators as an example, this study verifies the method’s feasibility. The method can dynamically determine changes in the gaps in technical subdivision fields and provide a reference for China to obtain a comparative advantage in technological competition.
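The VIKOR stage of the PCA-VIKOR model can be sketched as follows for benefit-type criteria. The weights and the compromise coefficient v are inputs the paper would derive elsewhere (e.g. via PCA), so the values used here are placeholders, not the study’s actual parameters.

```python
def vikor(matrix, weights, v=0.5):
    """VIKOR ranking for benefit-type criteria.

    matrix: rows = alternatives (technical subfields), columns =
    criteria scores. Returns the Q value per alternative; a lower Q
    indicates a better compromise solution.
    """
    cols = list(zip(*matrix))
    best = [max(c) for c in cols]     # ideal value per criterion
    worst = [min(c) for c in cols]    # anti-ideal value per criterion
    S, R = [], []
    for row in matrix:
        # weighted normalized distance to the ideal on each criterion
        terms = [w * (b - x) / (b - a) if b != a else 0.0
                 for x, w, b, a in zip(row, weights, best, worst)]
        S.append(sum(terms))          # group utility
        R.append(max(terms))          # individual regret
    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    def q(s, r):
        qs = (s - s_best) / (s_worst - s_best) if s_worst != s_best else 0.0
        qr = (r - r_best) / (r_worst - r_best) if r_worst != r_best else 0.0
        return v * qs + (1 - v) * qr
    return [q(s, r) for s, r in zip(S, R)]
```

Subfields whose Q values lag behind those of foreign benchmarks would then be flagged as candidate technical gaps.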
[Purpose/significance] This paper constructs a scientific and accurate screening framework for “Neck Sticking” technologies, providing decision-making references and intelligence support for future scientific research management in deploying scientific and technological resources to break through the “Neck Sticking” problem. [Method/process] Taking patent data as the object, Comprehensive Centrality indexes are designed based on the node and structural features of the IPC classification co-occurrence hypernetwork, and key core technologies are selected through K-means clustering identification. An index system covering the three dimensions of Basic Superiority, Competitive Superiority, and Innovative Superiority is designed, screening standards are set, and a Cloud Matter Element Model is built to map the index data to technology types, so as to screen “Neck Sticking” technologies and determine the level of “Neck Sticking” of each technology in each dimension. [Result/conclusion] Taking the field of lasers as an example, the “Neck Sticking” technologies in this field are identified, verifying that the methodology of this paper is scientific and reasonable. In addition, the screening framework will provide decision-making references for the policy formulation of relevant departments.
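The K-means clustering step used to select key core technologies can be sketched in plain Python. The feature vectors (standing in for per-technology Comprehensive Centrality indexes) and the choice of k are assumptions for illustration only.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over small feature vectors (tuples).

    Each point might be a technology's (node-centrality, structural-
    centrality) pair; returns one cluster label per input point.
    """
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign every point to its nearest centre (squared distance)
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centers[c])))
                  for pt in points]
        # recompute each centre as the mean of its cluster members
        for c in range(k):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members)
                                   for dim in zip(*members))
    return labels
```

The cluster with uniformly high centrality values would then be treated as the key core technology set fed into the superiority screening.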
[Purpose/significance] Traditional policy conflict detection methods, primarily reliant on manual analysis, suffer from limitations such as suboptimal efficiency and insufficient coverage, making it challenging to address the intricate relationships and sheer volume of policies. [Method/process] This paper introduces an intelligent approach for identifying and providing early warnings of potential policy conflicts by proposing a full-element network-based calculation scheme. [Result/conclusion] This scheme comprises three core modules: semantic feature modeling of policy conflicts, policy content parsing and deep semantic association analysis, and both explicit conflict detection and implicit conflict reasoning. The method’s feasibility is corroborated through preliminary experimental results, and the paper suggests two strategic implementations for early warning of policy conflicts: the first involves embedding functional modules to facilitate conflict prevention and management throughout the document circulation process, while the second entails the creation of a provincial-level policy conflict monitoring and early warning system via independent platform development. The calculation of potential policy conflicts should be widely promoted and applied throughout the entire process of official document circulation, providing services such as pre-event prevention, mid-event resolution, and post-event summary, and offering strong support for the coordinated optimization and intelligent governance of China’s policy system.
[Purpose/significance] Classifying scientific data into specific disciplinary domains enhances the effectiveness of information retrieval and improves the discoverability of scientific data. However, human-driven classification of scientific data struggles to meet the demands of processing massive data. Therefore, it is imperative to explore effective automated classification methods for scientific data. [Method/process] This paper proposes a research framework for automatic scientific data classification driven by large language models (LLMs). First, high-quality annotated datasets are constructed using the data journal Data in Brief. Next, prompt templates are designed and few-shot data are selected to rapidly adapt the open-source model Qwen2.5-7B to the classification task, followed by supervised fine-tuning of the LLMs using annotated data. Finally, experiments are conducted to evaluate few-shot learning performance and the classification efficacy of fine-tuned LLMs on both metadata and full-text data. [Result/conclusion] The integration of prompt templates, few-shot learning, and supervised fine-tuning with annotated data significantly improves the automatic classification performance of LLMs. Additionally, the quantity and disciplinary distribution of annotated data used for model fine-tuning determine the classification accuracy of LLMs across different scientific domains.
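The prompt-template and few-shot stage can be sketched as simple prompt assembly. The instruction wording, field names, and labels below are illustrative, not the paper’s actual templates.

```python
def build_prompt(examples, record):
    """Assemble a few-shot classification prompt for an LLM.

    examples: list of (metadata_text, label) pairs used as in-context
    demonstrations; record: the dataset metadata to classify. The
    labels and field names here are hypothetical.
    """
    lines = ["Classify the scientific dataset into one disciplinary domain."]
    for text, label in examples:
        lines.append(f"Dataset: {text}\nDomain: {label}")
    # leave the final Domain slot empty for the model to complete
    lines.append(f"Dataset: {record}\nDomain:")
    return "\n\n".join(lines)
```

A prompt built this way would be sent to Qwen2.5-7B for few-shot inference; the same (prompt, label) pairs can then be reused as supervised fine-tuning data.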
[Purpose/significance] The problems and methods contained in scientific papers are an important part of describing scientific research results. Mining new combinations of problems and methods can yield scientific research ideas, which can inspire researchers’ thinking and promote scientific research innovation. [Method/process] This study proposes a scheme to mine scientific research ideas. First, a universal information extraction model is used to identify problems and methods from scientific papers under few-shot training conditions, and a problem-method network is established. Second, a recommendation algorithm based on graph neural networks is used to mine new combinations of problems and methods as scientific research ideas, while improving the ranking mechanism of the recommendation algorithm. [Result/conclusion] Representative journals in the field of information science were selected for empirical research, which proved that the proposed problem-method combination recommendation scheme can mine new scientific research ideas. The improved recommendation algorithm based on graph neural networks performs well in mining scientific research ideas.
[Purpose/significance] Based on a large language model, the semi-supervised generation of keywords such as problem words and method words is implemented, and the generated keywords are applied to the semantic novelty measurement of scientific and technological papers. [Method/process] This paper proposes a semantic novelty measurement model based on a large language model. LoRA and prompt words are co-fine-tuned to improve the accuracy and discrimination of the large language model’s keyword generation and structural measurement. [Result/conclusion] The recall rate, precision rate, and F1 of the keywords generated by the model are 62.5%, 73.3%, and 67.5%, respectively. Recall and the other indicators increase with the number of training samples, but the growth rate first rises and then falls; a training set of 3,000 samples achieves high effectiveness at low cost. The experimental results show that the large language model co-fine-tuned with LoRA and prompt words performs better, and the semantic novelty measurement method proposed in this paper is effective and robust.
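Recall, precision, and F1 of the kind reported above can be computed at the keyword-set level as follows; this is a generic evaluation sketch with made-up keywords, not the paper’s code or data.

```python
def keyword_prf(generated, gold):
    """Set-level precision, recall, and F1 for generated vs. gold keywords.

    A generated keyword counts as a hit only on exact match with a gold
    keyword; duplicates are ignored via the set conversion.
    """
    hits = len(set(generated) & set(gold))
    precision = hits / len(set(generated)) if generated else 0.0
    recall = hits / len(set(gold)) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

In practice such scores would be averaged over all test papers, and partial-match or semantic-similarity variants could replace the exact-match criterion.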
[Purpose/significance] Integrated innovation serves as a critical engine for driving scientific, technological, and economic development, playing a pivotal role in enhancing the competitiveness of both enterprises and nations. A comprehensive and objective assessment of the progress in measuring integrated innovation in scientific and technological achievements can provide robust support for the evaluation of integrated innovation practices. [Method/process] This study systematically reviews the current research landscape of integrated innovation in scientific and technological achievements, exploring characterization methods across multiple dimensions, including disciplinary integration, technological convergence, and knowledge fusion. The analysis focuses on various perspectives such as knowledge overlap and fusion, topic co-occurrence analysis, and knowledge citation diffusion, systematically organizing measurement indicators from both domestic and international studies. [Result/conclusion] The study summarizes the strengths and limitations of different integrated innovation measurement methods in practical applications. Furthermore, it identifies four key trends in the measurement of integrated innovation indicators for scientific and technological achievements: the shift from single-indicator to multi-indicator measurement, the transition from simple frequency counting to deep semantic analysis, the move from generic indicators to personalized and specialized indicators, and the progression from surface-level feature metrics to full-text feature metrics.