[Purpose/significance] With the emergence of ChatGPT, algorithmic applications increasingly dominate human life, and risks such as algorithmic black boxes, algorithmic manipulation, algorithmic collusion, algorithmic bias, and algorithmic discrimination follow. These risks seriously affect social stability and even national security. Researching and assessing the global algorithmic risk situation helps prevent and identify algorithmic risks, and provides Chinese wisdom and ideas for addressing global algorithmic risk governance problems. [Method/process] The article systematically reviews 915 core publications from major domestic and foreign databases, constructs an algorithmic risk research framework of "subject areas-research topics-governance tools-governance measures", and analyzes the interdisciplinary, complexly interwoven, prominently anthropomorphic, and broadly uncertain characteristics of algorithmic risk. [Result/conclusion] The paper looks ahead to the challenges of algorithmic risk from five aspects, including strengthening research on algorithmic risks in the field of intelligence science, enhancing the interpretability of artificial intelligence algorithms, strengthening research on algorithmic application and service improvement, and strengthening research on global algorithmic risk governance and China's wisdom and vision.
[Purpose/significance] In the era of digital intelligence, with the widespread application of algorithms in intelligence analysis, revealing the current state of algorithm usage in intelligence analysis not only helps scholars in the intelligence field grasp academic research hotspots, but also guides more researchers to make better use of algorithms to solve practical problems in intelligence analysis. [Method/process] This paper focuses on the literature related to intelligence analysis algorithms published in five core journals in the field of information science over the past decade, sorts out the classification of algorithms applied to intelligence analysis in the era of data intelligence, and presents the characteristics of these algorithms from the perspectives of algorithm evolution and algorithm application. [Result/conclusion] From the evolution perspective, the number of papers applying algorithms in the field of information science has continued to rise over the past decade, and LDA is the most used algorithm in intelligence analysis; since 2020, the BERT algorithm has emerged as a prominent new direction of evolution. From the application perspective, applications in online public opinion under emergencies and patent analysis show a trend of continuous refinement and inherited evolution, while applications such as libraries, library and information services, e-commerce, and logistics information are gradually declining, replaced by digital humanities, smart libraries, and disruptive technologies.
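As a minimal illustration of how LDA, the most frequently used algorithm identified above, is typically applied to an intelligence-analysis corpus, the following sketch uses the gensim library; the toy corpus, the number of topics, and the preprocessing are illustrative assumptions, not the setup of any surveyed paper.

```python
# Minimal LDA topic-extraction sketch with gensim (corpus and settings are illustrative).
from gensim import corpora, models

# Toy tokenized corpus standing in for abstracts of intelligence-analysis papers.
docs = [
    ["patent", "analysis", "technology", "trend"],
    ["public", "opinion", "emergency", "weibo"],
    ["patent", "technology", "cluster", "topic"],
    ["opinion", "dissemination", "emergency", "rumor"],
]

dictionary = corpora.Dictionary(docs)                  # map tokens to ids
corpus = [dictionary.doc2bow(doc) for doc in docs]     # bag-of-words representation

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=42)      # topic count is an assumption

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)                             # top words per latent topic
```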
[Purpose/significance] Food security is the foundation and an important component of national security, and the emergency intelligence system is an important support for emergency management. Building a food security emergency intelligence system can provide scientific solutions and efficient operational guarantees for responding to food security emergencies. [Method/process] Firstly, the normative research method is used to explain the relevant concepts and summarize existing research. Secondly, based on an analysis of food security emergency intelligence needs, the key elements of the system are refined, and the intelligence services, the large model agent platform architecture, and the organizational structure are discussed in detail in combination with a cloud-edge architecture. Finally, the operational process of the food security emergency intelligence system based on a large model agent is expounded. [Result/conclusion] The food security emergency intelligence system based on a large model agent intelligently aggregates intelligence services, intelligently adapts to the needs of multiple agents, and intelligently assists emergency decision-making. It not only provides full life-cycle intelligence support for food security emergency management, but also lays a research foundation for large model agents in the field of emergency intelligence, thereby contributing to the modernization of emergency management and the maintenance of national food security.
[Purpose/significance] This study aims to help ensure national food security using information science methods, enhance the professionalism of think tank services, and promote service innovation and upgrading in library and information institutions. [Method/process] Through online surveys and interviews, the study summarizes the shortcomings of existing think tanks in supporting food security policy consultation and analyzes their service tasks, and then proposes a think tank service model and implementation path for food security policy consultation based on the multiple streams theory. [Result/conclusion] The think tank service model constructed in this study can accommodate the complex logical relationships of food security issues. It responds to policy consultation needs through content Q&A, briefing services, and solution services, thereby promoting scientific and high-quality think tank services.
[Purpose/significance] The aim is to explore the functional implementation of a think tank for food security policy consultation. By analyzing and mining policy texts, it seeks to provide scientific advice to policymakers, enrich the theoretical framework of policy informatics, and offer a reference for intelligence work to play a think tank role in food security policy consultation. [Method/process] By integrating text semantic analysis techniques such as the BERTopic model and knowledge association computation, a text analysis scheme is proposed covering enacted policies, agenda policies, and proposed policies. The approach uncovers policy themes and evolution trends, and matches reasonable recommendations to agenda policies based on text similarity. [Result/conclusion] This study analyzes the tasks, methods, and processes of applying policy text analysis to the services of think tanks for food security policy consultation. Through data collection and empirical analysis, it uncovers key themes in Chinese food security policies, evaluates their trends, and supports think tank recommendations, providing new perspectives for think tank policy research.
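The matching step described above (aligning agenda policies with reasonable recommendations by text similarity) could be sketched roughly as below; the policy sentences, encoder model name, and matching logic are illustrative assumptions, and the BERTopic theme-discovery step is omitted here.

```python
# Hedged sketch of the text-similarity matching step: embed policy texts and
# retrieve the most similar enacted policy for an agenda policy (data illustrative).
from sentence_transformers import SentenceTransformer, util

enacted_policies = [
    "Stabilize grain sown area and improve the minimum purchase price for rice and wheat.",
    "Strengthen farmland protection and accelerate high-standard farmland construction.",
    "Improve grain reserve management and the emergency supply system.",
]
agenda_policy = "Proposal to adjust grain emergency reserve management measures."

# Multilingual encoder chosen for illustration; the study's actual model may differ.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
agenda_vec = encoder.encode(agenda_policy, convert_to_tensor=True)
enacted_vecs = encoder.encode(enacted_policies, convert_to_tensor=True)

scores = util.cos_sim(agenda_vec, enacted_vecs)[0]     # cosine similarity per enacted policy
best = int(scores.argmax())
print(f"Best match ({float(scores[best]):.2f}): {enacted_policies[best]}")
```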
[Purpose/significance] Data is a new type of production factor; fully developing and utilizing data resources and expanding the application scenarios of standardized data are necessary means to release the value and activate the potential of data elements. [Method/process] Following the idea of "scenario identification-model design-model interpretation", qualitative analysis is used to code textual data and identify application scenarios, forming a classification system of data application scenarios. A data application scenario model is then constructed by combining innovative application cases, data life cycle theory, the CRISP-DM model, and the data value chain, and the model is explained from top to bottom along the three dimensions of demand orientation, data support, and technology drive. [Result/conclusion] We form a classification system covering 4 first-level scenarios, 14 second-level scenarios, and 41 third-level scenarios, construct a scenario model framework that is demand-oriented, data-supported, and technology-driven, and analyze and explain the logic and connotation of the model in depth, providing reference and guidance for broadening the application scenarios of standardized data.
[Purpose/significance] This paper aims to clarify the differences in understanding of the concept of intelligence failure, accurately define its connotation and extension, promote the academic community to reach a consensus on the definition of intelligence failure, and provide support for further deepening research on intelligence failure. [Method/process] By reviewing the mainstream literature in Chinese and English, the author compares and analyzes the basic understandings of the concept of intelligence failure at home and abroad, explores the relevant controversial issues, and gives targeted responses. [Result/conclusion] Defining intelligence failure as errors made by intelligence agencies in the course of intelligence production and distribution, counterintelligence, and other intelligence operations is more consistent with its essential nature. The inappropriate use of intelligence by decision-makers is more a matter of decision-making than of intelligence, and thus should not be categorized as an intelligence failure. The term intelligence failure should not be translated as intelligence oversight but rather as failure of intelligence, which is more appropriate. Provided that the subtle difference between intelligence failure and intelligence error is accurately observed, the translation intelligence failure may also be used selectively to a certain extent.
[Purpose/significance] Based on the Cynefin framework and integrating complexity leadership theory, this paper reveals the internal logic of knowledge hoarding formation in communities of practice, which has important theoretical value and practical significance for understanding the knowledge-leading system and the holistic enabling function of complexity leadership in communities, and for building a knowledge hoarding formation process model and governance mechanism. [Method/process] According to the characteristics of knowledge and the complexity of the environment, the community of practice is divided into different knowledge spaces. Within these knowledge spaces, the paper discusses the process of knowledge hoarding at the individual and environmental levels under the effect of complexity leadership, builds a knowledge hoarding formation process model, and proposes specific knowledge hoarding governance mechanisms. [Result/conclusion] There are four types of knowledge hoarding formation processes in communities of practice: compulsive hoarding, restrictive hoarding, adaptive hoarding, and perceptual hoarding. Governance mechanisms such as decentralization, diversification, collaboration, and intervention are proposed to regulate knowledge hoarding and enhance knowledge interaction so as to adapt to dynamic environmental changes. Knowledge subjects, objects, and the environment together shape the knowledge ecology of the community of practice; making full use of the interaction of these elements to avoid and regulate knowledge hoarding helps form the advantage of multi-source knowledge resources and provides a resource guarantee for the smooth operation of the community.
[Purpose/significance] The rapid development of generative artificial intelligence represented by ChatGPT brings great opportunities for the intelligent transformation of information retrieval. [Method/process] This paper explores the functional principles by which ChatGPT empowers information retrieval, analyzes the compatibility between ChatGPT's supporting technologies and information retrieval, evaluates ChatGPT's capability level in the field of information retrieval using the expert scoring method, and takes four scenarios (government affairs, academia, teaching, and business) as examples to further reveal the application of ChatGPT in empowering information retrieval. [Result/conclusion] Starting from the four levels of the resource layer, processing layer, application layer, and optimization layer, it is proposed to advance information retrieval by customizing information sets for professional fields, focusing on information extraction capability, building a dual-engine structure of "search superimposed on ChatGPT", and strengthening user interaction.
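A minimal sketch of the proposed "search superimposed on ChatGPT" dual-engine idea, assuming a keyword retriever feeds retrieved passages to the model as context; the documents, model name, prompt wording, and the `keyword_search` helper are illustrative assumptions, and the client call follows the current openai Python SDK rather than any setup described in the paper.

```python
# Hedged sketch of a "search + ChatGPT" dual engine: retrieve first, then ask the
# model to answer strictly from the retrieved context (all data illustrative).
from openai import OpenAI

documents = {
    "doc1": "The municipal government portal publishes open data on public transport.",
    "doc2": "Academic retrieval requires precise citation of peer-reviewed sources.",
}

def keyword_search(query: str) -> str:
    """Toy retriever standing in for a real search engine (hypothetical helper)."""
    hits = [text for text in documents.values()
            if any(w.lower() in text.lower() for w in query.split())]
    return "\n".join(hits) or "No matching documents."

query = "Which portal publishes open transport data?"
context = keyword_search(query)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ],
)
print(resp.choices[0].message.content)
```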
[Purpose/significance] This study explores the influencing factors and formation process of the privacy paradox in the quantified-self scenario, in order to help resolve the dilemma of personal information protection, promote the digital ecological governance of health management platforms, and provide a new perspective for the theory and practice of user-platform interaction. [Method/process] The study used the mobile experience sampling method to collect data, and target subjects were selected for semi-structured interviews via a rank sum test. The interview data were then analyzed with three-level coding to extract the influencing factors of the privacy paradox in the quantified-self scenario. [Result/conclusion] The influencing factors of the privacy paradox in the quantified-self scenario fall into four dimensions: context, task, conflict, and coordination. The privacy paradox arises from the evolution process of "privacy boundary change-privacy disclosure behavior-privacy paradox", and privacy boundary change plays an important role in the formation of the privacy paradox.
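For reference, the rank sum test used to select interviewees could be run with SciPy roughly as below; the two groups of scores are invented for illustration and do not reflect the study's experience-sampling data.

```python
# Hedged sketch: Wilcoxon rank sum (Mann-Whitney U) test between two participant
# groups, e.g. high vs. low self-tracking engagement (numbers are illustrative).
from scipy.stats import mannwhitneyu

group_high = [72, 65, 80, 77, 69, 74]   # illustrative scores
group_low = [58, 61, 55, 63, 60, 57]

stat, p_value = mannwhitneyu(group_high, group_low, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # small p suggests a group difference
```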
[Purpose/significance] In the wave of digitization, digitally disadvantaged groups who cannot effectively access digital technology for subjective or objective reasons often reject or resist digital technology. It is therefore of great significance to explore the key factors influencing this group's digital rejection and to analyze the current state and optimization paths of digital inclusion policy tools. [Method/process] From the perspective of theory-policy coordination degree and drawing on the theory of the personal information world, this study adopts meta-ethnography and the DEMATEL method to identify the key factors influencing digital exclusion among digitally disadvantaged groups. A content analysis of the focus of current digital inclusion policies is then carried out, and policy optimization paths are finally proposed. [Result/conclusion] Nine key factors are identified, including digital literacy, function-requirement matching degree, information infrastructure, and volunteer service resources. The theory-policy coordination degree of some influencing factors is relatively low, and corresponding policy optimization paths are proposed at the end of the paper.
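The DEMATEL step can be summarized numerically: normalize the expert direct-influence matrix, compute the total-relation matrix T = D(I - D)^(-1), and rank factors by prominence (r + c) and relation (r - c). The tiny matrix below is an invented example, not the study's expert data.

```python
# Hedged DEMATEL sketch: from a direct-influence matrix to prominence/relation scores.
import numpy as np

# Illustrative 4-factor direct-influence matrix (rows influence columns), 0-4 scale.
A = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [1, 2, 1, 0],
], dtype=float)

D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())  # normalized direct influence
T = D @ np.linalg.inv(np.eye(len(A)) - D)              # total-relation matrix T = D(I-D)^-1

r = T.sum(axis=1)   # influence dispatched by each factor
c = T.sum(axis=0)   # influence received by each factor
for i, name in enumerate(["F1", "F2", "F3", "F4"]):
    print(f"{name}: prominence={r[i] + c[i]:.2f}, relation={r[i] - c[i]:.2f}")
```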
[Purpose/significance] In an uncertain and adversarial border security environment, the traditional intelligence analysis paradigm based on strong signals urgently needs to be complemented and optimized. Exploiting weak signals promotes a change in the traditional paradigm of intelligence analysis and serves the perception of and response to border security risks. [Method/process] Drawing on related theories of weak signals, weak signals are characterized along four dimensions and three stages. Taking the meaning representation and user perception of weak signals as the starting point, horizon scanning and data fusion are used to perceive anomalous signals in the initial stage; four modes of exploitation are adopted to construct the meanings of weak signals in the development stage; and in the mutation stage, decision makers are assisted in predicting risk scenarios by continuously building on the results of previous exploitation and eliminating falsified interpretations. [Result/conclusion] Through active exploitation, the connections between weak signals can be uncovered, their meanings constructed, and possible risk scenarios brought into focus before weak signals turn into strong signals, thereby assisting decision makers in perceiving and responding to risks.
[Purpose/significance] Identifying key core technologies is a challenge that must be faced in the process of technological breakthrough. Exploring the key core technologies of each application in an industry from the perspective of patents is of theoretical and practical significance for enterprises seeking technological breakthroughs. [Method/process] Patents are taken as a bridge to construct the "technology-application" co-occurrence relationship; the Leiden community detection algorithm is adopted to group technologies according to their application similarity, thereby clarifying the characteristic technology classes of each application; a composite index evaluation method is used to evaluate the key-core degree of each application's technologies, and the top 15 technologies by key-core degree for each application are taken as the candidate set of key core technologies; technologies in the candidate set that belong to an application's characteristic technology classes are recognized as its key core technologies. [Result/conclusion] The key core technologies of each application in the industry are identified by taking the technical differences among applications into account, providing useful references for enterprises to identify key core technologies at the overall level.
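A minimal sketch of the Leiden community-detection step on the "technology-application" co-occurrence network, using python-igraph and leidenalg; the node names, edge list, and weights are illustrative placeholders for patent-derived co-occurrence counts.

```python
# Hedged sketch: Leiden clustering of a technology co-occurrence network
# (nodes and co-occurrence weights are illustrative, not real patent data).
import igraph as ig
import leidenalg as la

nodes = ["tech_A", "tech_B", "tech_C", "tech_D", "tech_E"]
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]     # technologies with similar application profiles
weights = [5, 3, 4, 6]                       # co-occurrence counts

g = ig.Graph(n=len(nodes), edges=edges)
g.vs["name"] = nodes
g.es["weight"] = weights

partition = la.find_partition(g, la.ModularityVertexPartition, weights="weight")
for cluster_id, members in enumerate(partition):
    print(cluster_id, [nodes[i] for i in members])   # candidate characteristic technology classes
```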
[Purpose/significance] Although intelligent recommendation services greatly improve the efficiency with which users obtain news, they are not always satisfactory. From the perspective of dissatisfaction, this study explores the influencing factors and mechanism of users' dissatisfaction with the intelligent recommendation services of mobile news clients, aiming to enrich research on dissatisfaction with intelligent recommendation services and provide references for the management practice of mobile news platforms. [Method/process] By collecting review data for Toutiao from mobile app stores and applying the three-level coding of grounded theory, this study constructs a theoretical model of the mechanism influencing users' dissatisfaction with the intelligent recommendation services of mobile news clients. [Result/conclusion] It is found that information quality, platform quality, and algorithm quality affect users' dissatisfaction through the mediating role of service quality. Meanwhile, information narrowing positively affects privacy concerns through the moderating effect of user persona, and privacy concerns positively affect users' dissatisfaction through the moderating effect of platform reputation. Moreover, both information quality and privacy concerns lead to users' algorithm manipulation behavior, thereby increasing users' perception of algorithm uncontrollability.
[Purpose/significance] A behavioral phenomenon of information cocoon retention exists in the context of network stratification. Studying the characteristics and influencing factors of this retention behavior helps uncover the reasons for the formation and consolidation of information cocoons, helps social media users enhance their awareness of "breaking the cocoon", and provides ideas and strategies for the governance of cyberspace. [Method/process] Taking Douban group users as a sample, this paper first explores the characteristics of social media users' information cocoon retention behavior through in-depth interviews and open coding. Based on the qualitative results, a model of the factors influencing information cocoon retention behavior is constructed, questionnaire data are collected to test the hypotheses, and fsQCA is used for supplementary analysis from a configurational perspective. [Result/conclusion] The study finds that the characteristics of social media users' information cocoon retention behavior are diverse; the factors affecting this behavior include system function, circle building, performance expectations, hedonic motivation, usage habits, information homogeneity, and information overload; and the causal paths leading to this behavior are complex.
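The fsQCA supplementary analysis hinges on consistency and coverage; a minimal sketch of these two standard formulas on fuzzy-set membership scores is given below, with invented membership values rather than the study's questionnaire data.

```python
# Hedged sketch: fsQCA consistency and coverage for "condition X -> outcome Y".
# consistency = sum(min(x, y)) / sum(x); coverage = sum(min(x, y)) / sum(y).
import numpy as np

x = np.array([0.9, 0.7, 0.6, 0.2, 0.8])   # fuzzy membership in the condition (illustrative)
y = np.array([0.8, 0.9, 0.5, 0.3, 0.7])   # fuzzy membership in the outcome (illustrative)

overlap = np.minimum(x, y).sum()
consistency = overlap / x.sum()
coverage = overlap / y.sum()
print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```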
[Purpose/significance] The digital society places new requirements on the development of college students' core literacies. It is necessary to further clarify the connotation of AI literacy and construct an AI literacy evaluation index system for students in colleges and universities, so as to support the assessment of students' AI literacy development, ability improvement, and cultivation. [Method/process] From the perspective of the literacy continuum, the study systematically reviewed the origin and development of AI literacy and, by deconstructing and reorganizing the content and elements of the continuum, mapped it onto the construction of the AI literacy index system. An AI literacy evaluation framework for college students was then constructed based on the KSAVE model, and the connotation of each element was defined through literature research. The Delphi method was used to refine the preliminary evaluation index system, and the analytic hierarchy process was used to determine the weights of the evaluation indicators. [Result/conclusion] An evaluation system for college students' AI literacy consisting of 5 first-level indicators and 19 second-level indicators is constructed. Cultivation strategies for AI literacy are proposed from three aspects: cognitive positioning, internal driving paths, and diversified scenario creation.
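The analytic hierarchy process step for indicator weights can be illustrated as below: weights are derived from the principal eigenvector of a pairwise comparison matrix and the consistency ratio is checked. The 4x4 matrix is an invented example, not the study's expert judgments.

```python
# Hedged AHP sketch: weights from the principal eigenvector of a pairwise
# comparison matrix, plus consistency ratio (matrix values are illustrative).
import numpy as np

A = np.array([
    [1,   3,   5,   2],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized indicator weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # random index (excerpt of the standard table)
print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```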
[Purpose/significance] To address the issue of single-dimensional user profiling based on online reviews, a method for network user profiling based on heterogeneous attribute propagation is proposed. [Method/process] A graph model is constructed based on users, movies, and tags. User attributes are initialized from multiple dimensions, including basic attributes, movie preferences, emotional preferences, and rating behaviors, and serve as user node attributes. These user attributes are then continuously updated through iterative propagation. [Result/conclusion] Experimental results show that the proposed method can significantly enrich the dimensions of user profiling. Compared to the current best deep learning model, the mean squared error (MSE) is reduced from 0.113 to 0.083. Through attribute augmentation and propagation, this method provides rich and accurate user profiling capabilities. [Limitations] The experimental data is sourced from movie reviews, and the user profiles are based on movie rating users; this scenario is relatively limited and lacks validation in other domains.
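A minimal sketch of the iterative attribute-propagation idea on a user-movie graph is given below, assuming user attribute vectors are repeatedly mixed with their neighbors' attributes; the graph, attribute dimensions, damping factor, and iteration count are illustrative assumptions, not the paper's exact update rule.

```python
# Hedged sketch: iterative propagation of user attribute vectors over a small
# user-movie graph (structure, attributes, and alpha are illustrative).
import numpy as np
import networkx as nx

G = nx.Graph()
G.add_edges_from([("u1", "m1"), ("u2", "m1"), ("u2", "m2"), ("u3", "m2")])

# Initial attribute vectors, e.g. [genre preference, sentiment, rating behavior].
rng = np.random.default_rng(0)
attrs = {n: rng.random(3) if n.startswith("u") else np.zeros(3) for n in G.nodes}

alpha = 0.6   # weight kept for a node's own attributes (assumed value)
for _ in range(10):                                  # fixed iteration count for illustration
    new_attrs = {}
    for node in G.nodes:
        neigh = [attrs[v] for v in G.neighbors(node)]
        new_attrs[node] = alpha * attrs[node] + (1 - alpha) * np.mean(neigh, axis=0)
    attrs = new_attrs

print({n: np.round(v, 2) for n, v in attrs.items() if n.startswith("u")})
```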
[Purpose/significance] The large language model (LLM) has triggered global interest in generative artificial intelligence (GAI). Integrating LLMs with information analysis opens new research opportunities in the era of data intelligence. Natural language prompting plays a key role in effectively enhancing LLM performance and meeting the needs of information analysts. [Method/process] This paper discusses prompt-driven intelligent information analysis, outlines its key features, and proposes a natural language prompt-driven mode. The mode includes a human-in-the-loop workflow and a natural language prompt engineering framework. The paper also explores the challenges and safeguards in implementing this mode. [Result/conclusion] The proposed mode leverages natural language prompts and a human-in-the-loop approach, aligning analysts' expertise and judgement with the capabilities of LLMs. This human-computer collaborative approach offers a novel perspective for information analysis in the data intelligence era.
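The human-in-the-loop, prompt-driven workflow could be outlined in schematic Python as below; `call_llm` is a hypothetical stub standing in for any LLM client, and the refinement loop reflects the general idea of analyst-steered prompting rather than the paper's specific framework.

```python
# Hedged sketch of a human-in-the-loop prompt workflow: draft a prompt, get an LLM
# answer, let the analyst accept it or add instructions (call_llm is a stub).
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM API call."""
    return f"[model output for prompt: {prompt[:60]}...]"

prompt = ("Role: technology-intelligence analyst.\n"
          "Task: summarize emerging risks in the attached patent abstracts.\n"
          "Constraints: cite evidence, flag uncertainty.")

for round_no in range(3):                       # bounded number of refinement rounds
    answer = call_llm(prompt)
    print(f"Round {round_no}: {answer}")
    feedback = input("Accept (y) or add an instruction: ")
    if feedback.strip().lower() == "y":
        break                                    # analyst signs off on the output
    prompt += f"\nAdditional instruction: {feedback}"   # analyst expertise steers the model
```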
[Purpose/significance] In recent years, with the emergence of online social platforms, information dissemination has changed rapidly. In order to provide high-quality content and promote information dissemination, online social platforms have applied various recommendation mechanisms, so the way information disseminates differs from the previous topology-based dissemination. Mastering the characteristics of information dissemination under recommendation mechanisms is conducive to the effective control of rumors and other information. [Method/process] This article proposes an SCIR information dissemination model based on topology and a recommendation mechanism, and theoretically analyzes the final number of disseminating users and the dissemination probability of dissemination nodes. Through a series of numerical simulations, the influence of factors such as the number of dissemination nodes, the probability of liking, the probability of favoriting, and the probability of creating content is studied. [Result/conclusion] With recommendation mechanisms, rumors can spread quickly through the social network, but increasing the probability of behaviors such as liking and favoriting does not necessarily improve dissemination; on the contrary, it may hinder the spread. [Limitations] This paper only considers a specific recommendation mechanism in the information dissemination model; future research can comprehensively consider the specific impact of different recommendation mechanisms to construct a universal information dissemination model based on recommendation mechanisms.
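A discrete-time simulation sketch of an SCIR-style dissemination model is given below; the compartment meanings (susceptible, contacted via recommendation, infected/spreading, recovered) and the transition probabilities tied to liking, favoriting, and creating are assumptions made for illustration, not the paper's actual equations or parameter values.

```python
# Hedged SCIR-style simulation: fractions of Susceptible, Contacted (reached via
# recommendation), Infected (spreading), and Recovered users over discrete time steps.
# Transition structure and parameter values are illustrative assumptions.
p_contact = 0.30   # recommendation exposes susceptible users
p_like = 0.20      # liking/favoriting pushes contacted users toward spreading
p_create = 0.10    # creating derivative content also converts contacted users
p_recover = 0.15   # spreaders lose interest and become immune

S, C, I, R = 0.97, 0.02, 0.01, 0.0
for t in range(50):
    new_C = p_contact * S * (C + I)            # exposure driven by active users
    new_I = (p_like + p_create) * C            # contacted users start spreading
    new_R = p_recover * I                      # spreaders recover
    S, C, I, R = S - new_C, C + new_C - new_I, I + new_I - new_R, R + new_R
    if t % 10 == 0:
        print(f"t={t:2d}  S={S:.2f}  C={C:.2f}  I={I:.2f}  R={R:.2f}")
```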
[Purpose/significance] The study of the EU data policy system aims to identify key directions, points, and strategies for the construction of a data policy system, and to promote the development of data-related theories and methods. [Method/process] By reviewing and summarizing policy texts, the main components of the policy system are described and analyzed from the aspects of the overall guidance of digital transformation, the panoramic setting of data strategy, the construction of data rules, the guidance of multiple data scenarios, and the guidance of data practice optimization. [Result/conclusion] On this basis, the characteristics and limitations of the EU data policy system are analyzed in terms of main bodies, policy forms, construction orientation, and content points, and some instructive strategies are put forward based on China's reality: deepen the collaborative leadership mechanism of data institutions, strengthen the construction of multiple rules guided by data laws and regulations, fully integrate the overall requirements of national development, and integrate the layout based on data elements and the frontiers of data governance.
[Purpose/significance] With the advent of the data-driven era, open government data has gained increasing attention. Existing research has explored open government data policies from various perspectives, but little attention has been given to the homogeneity and heterogeneity of policy narratives. [Method/process] In this study, we take the United States open government data policy as an example and employ a structural topic model approach to identify policy topics. We then use the narrative policy framework to analyze the policy narratives along four dimensions: background, plot, characters, and moral. We also examine the homogeneity and heterogeneity of policy narratives between the federal government and state governments in the United States. [Result/conclusion] The research findings indicate that there are five main policy narratives concerning the US open government data policy, and that regional differences in policy narratives exist due to varying levels of economic development, geographical locations, and other factors.
[Purpose/significance] Analyzing "the IC OSINT Strategy 2024-2026", together with the "National Security Strategy", the "National Intelligence Strategy", and other official U.S. strategies involving open source intelligence, helps us better understand the evolutionary logic of the U.S. open source intelligence strategy, grasp the future development trend of U.S. open source intelligence work, and provide references for China's open source intelligence work. [Method/process] Using methods such as content analysis and logical analysis, we sort out the narrative evolution of the U.S. open source intelligence strategy, examine the strategic turn of open source intelligence, predict the future development trend of U.S. open source intelligence work, and draw useful inspirations for China's open source intelligence work. [Result/conclusion] The U.S. open source intelligence strategy has evolved in stages, passing through a period as a complementary means to secret-source intelligence, a period as a constituent element of all-source intelligence, and a period as a strategic tool for great power competition. In the future, the U.S. will continue to implement a data-centric working concept, strengthen data protection, focus on innovation, and treat the OSINT field as a priority field of the intelligence community. China should, on the one hand, deepen its data-centric orientation, implement a data-driven intelligence model, set up a full-time intelligence coordination department, and cooperate with open source intelligence organizations to win the cognitive competitive advantage among great powers through intelligence disclosure; on the other hand, it should strengthen counter-open-source work and pay attention to data security.