Abstract: The rapid development of generative artificial intelligence (AI) presents immense potential but also significant risks, particularly with respect to content. These risks manifest primarily in the generation of discriminatory and inappropriate content, which can undermine social equality, justice, and public morality. China has already enacted legislation to regulate generative AI, but the effectiveness of these laws remains to be seen. Academic research is divided: some scholars argue that generative AI poses no new risks and advocate lenient regulation, while others propose innovative governance approaches combining legal and technical measures. Overall, existing research lacks a clear logic and approach for content governance. First, the goals and boundaries of content governance must be clarified. Content governance must address the needs of generative AI both as a medium of communication and as a foundational infrastructure. The former constitutes the goal of content governance: to proactively prevent and address the societal risks that may arise from unrestricted content generation. This involves safeguarding individual freedom of information while ensuring the authenticity and effectiveness of information flows and preventing the homogenization of societal information. The latter defines the boundaries of content governance, emphasizing the need to avoid over-regulation that would hinder the utility of generative AI as digital infrastructure and the healthy development of China’s AI industry. Second, a logical framework for content governance must be constructed, namely integrated classification governance. Current content governance for generative AI follows a “piecemeal social engineering” methodology, resulting in an arbitrary and fragmented governance landscape.
The lack of smooth connections among governance rules significantly weakens the overall effectiveness of content governance. To achieve more scientific and effective content governance, the governance logic must shift from an individualistic perspective of fragmented governance to a holistic perspective of integrated governance. While emphasizing the connection and interoperability of technological, industrial, and legal logics, it is also necessary to consider the technical characteristics and state of industrial development of generative AI, combined with the goals and boundaries of content governance, so as to classify the entities in the AI industry chain that are subject to governance according to legal, technological, and industrial logics. Based on a functionalist paradigm, differentiated governance methods should then be configured for the different classes of entities. Finally, a scientifically sound governance approach should be established and appropriate governance measures proposed. For technology developers, soft-law norms should predominate, with greater emphasis on ex-ante regulation to incentivize and encourage them to engage in technological governance. For technology users, hard-law norms should predominate, with a focus on ex-post regulation so that their application behavior is supervised, corrected, and penalized in accordance with the law.