Abstract In the era of large language models (LLMs), the relentless advancement of the intelligence and generalization capabilities of artificial intelligence (AI) models is catalyzing the emergence of a brand-new type of intelligent entity, thereby establishing a species-level antagonism with human collectives. In this context, the philosophical theory of humanism has been endowed with richer theoretical connotations, reflecting the evolving intrinsic needs and value aspirations of human society. AI models are characterized by emergent properties, enabling their capabilities to transfer across diverse societal domains and stimulating extensive digital transformation. Concurrently, the proactive deployment of AI generates a spectrum of ancillary challenges, producing a disjunction between existing social expectations and governance anticipations.
This paper interrogates the logical dilemmas inherent in the governance of AI from the viewpoint of social values, exploring the capacity of humanism to elucidate the tensions arising from disparate societal values within the framework of technological governance. Overall, the governance of AI not only grapples with an imbalance between technological advancement and technological security but also confronts the contradiction between the demand for proactive governance and the unpredictability of AI technology. Consequently, it is imperative to make reliable predictions of technological trajectories so as to achieve a harmonious synthesis of technological development and security. Moreover, a composite governance framework for AI rooted in humanistic principles must be established. Specifically, the value-related dilemmas and governance challenges posed by AI can be systematically categorized into four dimensions: geographical, technical, ethical, and legal. By engaging humanistic theory to dissect the value conflicts pertinent to these dimensions, the conceptual foundations and implications of contemporary humanistic discourse can be further enriched, leading toward a theory of "digital humanism".
To be specific, firstly, at the geographical level, intensifying regional competition in AI technology has unveiled intrinsic incompatibilities in the governance strategies employed by various stakeholders, creating a discordant relationship with the imperative for a collaborative and integrated AI governance framework. In this light, it is important to advocate for the principles of inclusiveness and reciprocity to foster cross-regional cooperation in AI research and application, thereby facilitating the establishment of a cohesive international governance framework. Secondly, at the technical level, the pronounced incomprehensibility and uncontrollability associated with AI present fundamental challenges to effective AI governance. It is imperative to uphold the humanistic tenet of "technology for good", enhancing the explainability and controllability of AI models through sound governance mechanisms and technological interventions. Thirdly, at the ethical level, the risks posed by multifaceted biases in AI may lead to significant algorithmic discrimination, undermining foundational values such as social equity and justice. Moreover, advances in AI models' sophistication and realism could engender novel human-machine relationships that disrupt existing ethical paradigms. Consequently, the governance of AI ethics should emphasize a human-centered framework to ensure that AI prioritizes human welfare. Finally, at the legal level, the continuous evolution of social values in the digital age engenders tensions with the requisite stability of legal systems, and the potential catastrophic risks associated with AI may transcend the governance capacities of existing legal frameworks.
To mitigate these challenges, it is vital to enhance the adaptability of the legal system through "democratic dynamism" and to expand the applicability and regulatory potency of the legal framework so as to achieve effective AI governance.
Published: 20 January 2026