A Logical Inquiry into Artificial Intelligence Governance on the Path of Humanism

Cheng Le, Gong Xuan

Guanghua Law School, Zhejiang University, Hangzhou 310008, China
Abstract: In the era of large language models (LLMs), the relentless advancement of the intelligence and generalization capabilities of artificial intelligence (AI) models is catalyzing the emergence of a brand-new type of intelligent entity, thereby establishing a species-level antagonism with human collectives. In this context, the philosophical theory of humanism has been endowed with richer theoretical connotations, reflecting the dynamics of the intrinsic needs and value aspirations of human society. AI models are characterized by emergent properties, enabling the transfer of model performance across diverse societal domains and stimulating extensive digital transformation. Concurrently, the proactive deployment of AI generates a spectrum of ancillary challenges, creating a disjunction between existing social expectations and governance anticipations. This paper interrogates the logical dilemmas inherent in the governance of AI from the viewpoint of social values, exploring the capacity of humanism to elucidate the tensions arising from disparate societal values within the framework of technological governance. Overall, the governance of AI not only grapples with an imbalance between technological advancement and technological security but also confronts the contradiction between the demand for proactive governance and the unpredictability of AI technology. Consequently, it is imperative to make reliable predictions of technological trajectories so as to achieve a harmonious synthesis of technological development and security, and to establish a composite governance framework for AI rooted in humanistic principles. Specifically, the value-related dilemmas and governance challenges posed by AI can be systematically categorized into four dimensions: geographical, technical, ethical, and legal.

By engaging humanistic theory to dissect the value conflicts pertinent to these four dimensions, the conceptual foundations and implications of contemporary humanistic discourse can be further enriched, leading toward a theory of "digital humanism". Firstly, at the geographical level, intensifying regional competition in AI technology has revealed intrinsic incompatibilities among the governance strategies of various stakeholders, creating a discordant relationship with the imperative for a collaborative and integrated AI governance framework. In this light, it is important to advocate the principles of inclusiveness and reciprocity to foster cross-regional cooperation in AI research and application, thereby facilitating the establishment of a cohesive international governance framework. Secondly, at the technical level, the pronounced incomprehensibility and uncontrollability of AI present fundamental challenges to effective AI governance. It is imperative to uphold the humanistic tenet of "technology for good", enhancing the explainability and controllability of AI models through sound governance mechanisms and technological interventions. Thirdly, at the ethical level, the risks posed by multifaceted biases in AI may lead to significant algorithmic discrimination, undermining foundational values such as social equity and justice. Moreover, advances in the sophistication and realism of AI models could engender novel human-machine relationships that disrupt existing ethical paradigms. Consequently, the governance of AI ethics should emphasize a human-centered framework to ensure that AI prioritizes human welfare. Finally, at the legal level, the continuous evolution of social values in the digital age generates tensions with the requisite stability of legal systems. The potential catastrophic risks associated with AI may also transcend the governance capacities of existing legal frameworks. To mitigate these challenges, it is vital to enhance the adaptability of the legal system through "democratic dynamism" and to expand the applicability and regulatory potency of the legal framework to achieve effective AI governance.
Received: 08 July 2024