Journal of Zhejiang University (Humanities and Social Sciences)
 
Journal of Zhejiang University (Humanities and Social Sciences)  2024, Vol. 54 Issue (5): 42-58    DOI: 10.3785/j.issn.1008-942X.CN33-6000/C.2023.05.101
Special Column: Digital Communication Research
Manufacture and Dissemination of Fake Information vs. Protection of National Security and People’s Rights and Interests in the AIGC Era
Lu Jianping1, Dang Ziqiang2
1.College of Media and International Culture, Zhejiang University, Hangzhou 310058, China
2.College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China

Full text: PDF (2316 KB)
Abstract: After ChatGPT, released by OpenAI, attracted hundreds of millions of users within a short time, generative artificial intelligence rapidly entered the public eye. Generative AI can produce content in response to human instructions, a capability that has transformed the way content is produced. However, the development and spread of generative AI has also made the manufacture and dissemination of fake information easier, more varied, and more realistic. AI can be used to generate fake text, images, audio, and video, and can even imitate the voices and behavior of real people, making it very difficult to tell whether something is AI-generated content or an actually existing fact or opinion. If AI-generated fake information is used for rumour-mongering, fraud, or social manipulation, it will do great harm to national security and to citizens’ rights and interests. Improving the monitoring and handling system for fake information in the new AI era is therefore a matter of urgency.
Abstract: With the impressive debut of ChatGPT, the idea of artificial intelligence-generated content (AIGC) assisting or even replacing human authors in content creation is becoming a reality. ChatGPT quickly attracted hundreds of millions of users, and various AIGC applications rapidly went online and became popular, marking the advent of a new era in content production.

Artificial intelligence technology has passed through three major stages since the 1950s: the shift from machine learning to deep learning, the introduction of the Transformer architecture, and the arrival of the foundation-model era. As the technology advanced, the parameter counts of AI models grew continuously, from 117 million to 1.5 billion and then to 175 billion, culminating in the birth of ChatGPT. At the same time, a wide range of impressive AI systems has emerged that can be applied flexibly in creative fields such as writing, music arrangement, painting, and video production. However, while foundation models are advancing the cognitive capabilities of intelligent systems, they also pose risks and challenges: the training data and objectives of large language models may be ambiguous and uncertain, so the models may exhibit misleading and biased behavior; and when large language models are easy to manipulate, they are likely to be abused, producing fake news and online fraud and, more seriously, undermining social stability and endangering national security.

Because AI technology is both easily accessible to the public and extremely powerful, malicious actors can misuse it to fabricate AI-generated text and image rumours and to commit AI video, audio, and scene fraud, even imitating the voices and behavior of real people, making it difficult to distinguish AI-generated content from actual facts and genuine opinion.

In addition, the unchecked deployment of anthropomorphic AI “water armies” of coordinated fake accounts not only distorts normal public cognition but also disrupts the existing social order. Owing to the power of AIGC, fabricating fake information has become easier and simpler, and the fabricated content has become more diverse and realistic. The world has already seen serious cases of rumour-mongering and fraud that gravely jeopardize the legitimate rights and interests of citizens and threaten national security. In view of this, the study, after a brief introduction to the mechanism of AIGC, analyses several cases and proposes the following measures: (1) At the technical level, research investment should be increased to improve the effectiveness of deepfake detection, for example by designing tit-for-tat “AI of justice” systems to counter criminal AI. (2) At the legislative level, relevant laws and regulations should be improved to clarify the boundary between crime and non-crime; for example, the responsibility and penalty provisions of the newly promulgated Interim Measures for the Management of Generative Artificial Intelligence Services should be combined with China’s existing laws and regulations to identify the actors responsible for the manufacture, use, and dissemination of AI-generated false information. (3) At the law-enforcement level, specialized institutions should be established to handle emergencies and to strictly enforce laws and regulations against AI crimes. (4) In publicity and education, knowledge of AIGC should be popularized to improve citizens’ ability to recognize AI-generated content and their alertness to it, so that an effective early-warning mechanism and a hazard-handling mechanism can be established to avoid and reduce harm to people’s interests and to safeguard national security.
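The first proposed measure, improving deepfake detection, can be illustrated with a toy heuristic. The sketch below is purely illustrative and not the authors’ method: it flags text whose sentence lengths are unusually uniform, one of many weak signals sometimes cited in machine-generated-text detection. The function names and the threshold value are invented for this example.

```python
import math

def burstiness(text: str) -> float:
    """Ratio of the standard deviation to the mean of sentence lengths.
    Human prose tends to vary sentence length more than much
    machine-generated text does -- a rough heuristic only."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    mean = sum(lengths) / len(lengths)
    var = sum((x - mean) ** 2 for x in lengths) / len(lengths)
    return math.sqrt(var) / mean if mean else 0.0

def looks_generated(text: str, threshold: float = 0.2) -> bool:
    # Very uniform sentence lengths (low burstiness) are flagged as suspicious.
    return burstiness(text) < threshold
```

In practice, production detectors combine many such signals (or train classifiers on model outputs); a single heuristic like this is easily fooled and serves only to show the idea of statistical fingerprinting.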
Key words: AIGC; ChatGPT; deepfake; fake information; national security; people’s interests
Received: 2023-05-10
Funding: Special Project of the Fundamental Research Funds for the Central Universities under the “Key Areas Research Funding Program” (fourth batch, 2023)
About the authors: Lu Jianping (https://orcid.org/0000-0002-5732-2866), female, is a professor and doctoral supervisor at the College of Media and International Culture, Zhejiang University. She holds a PhD in communication, and her research covers international communication, film and television communication, cultural communication, translation studies, and English education. Dang Ziqiang (https://orcid.org/0009-0008-6994-4559, corresponding author), male, is a master’s student at the College of Computer Science and Technology, Zhejiang University; his research covers artificial intelligence and computer vision.
Cite this article:
Lu Jianping, Dang Ziqiang. Manufacture and Dissemination of Fake Information vs. Protection of National Security and People’s Rights and Interests in the AIGC Era. Journal of Zhejiang University (Humanities and Social Sciences), 2024, 54(5): 42-58.
Link to this article:
https://www.zjujournals.com/soc/CN/10.3785/j.issn.1008-942X.CN33-6000/C.2023.05.101     or     https://www.zjujournals.com/soc/CN/Y2024/V54/I5/42
