Abstract

With the rapid development of global artificial intelligence (AI) technologies, their profound
and tangible impacts on socioeconomic progress and human civilization are undeniable. AI has become
a battleground for technological competition among nations and a key indicator of comprehensive
national power and competitiveness. The future of humanity hinges on the development and governance
of AI, as the risks and challenges it presents are a shared concern of the international community. The
“Collingridge Dilemma” highlights the regulatory balancing act between innovation and control, which is
a critical issue for the high-quality and ethical advancement of AI. To break free from this dilemma,
there is an urgent need to reach consensus on human ethical values, improve the supervisory governance
system, and strike a balance between standardization and development. Firstly, identifying the risks posed
by generative AI and determining the areas and methods of governance are prerequisites to overcoming
the challenge. Risks extend beyond algorithmic and data issues to include national security, public
safety, social trust systems, and employment, necessitating a more comprehensive and forward-looking
risk assessment. Secondly, the “Collingridge Dilemma” possesses both epistemological and axiological
dimensions, encompassing profound issues of value orientation. It is imperative to ensure that AI remains
conducive to the advancement of human civilization. China must incorporate value considerations into
the governance of generative AI, adhere to a people-centered approach, and promote the values of
socialism to advocate “intelligence for good”. This involves drawing on constructive technology
assessment to incorporate values such as fairness, justice, and harmony into algorithm design and the
moralization of generated content, thereby upholding the social trust system. Finally, the existing governance
framework must be refined to mitigate the potential risks of generative AI. On a macro level, the
governing bodies should be further strengthened, clarifying the principal role of the State Scientific and
Technological Commission of the People’s Republic of China in fulfilling its management functions, for
instance by establishing an AI Security Review Committee (or Bureau) responsible for reviewing and
supervising AI safety.
Specialized institutions under unified leadership are essential for enhancing governance efficiency,
reducing costs, and facilitating policy implementation. Legislative support should be strengthened and the
development of AI safety laws and regulations expedited, building a comprehensive governance mechanism
that encompasses preventive review, intervention, and post-event punishment. On a micro level, introduce
access systems for preventive review, establish new safe harbor regulations to clarify responsible parties
and methods of accountability, and shift the focus of data governance toward building data-sharing mechanisms,
exploring public data development, and compelling private entities to open their data for the public
good, setting the stage for a future market in data sharing. Through these macro- and micro-level
regulatory measures, AI development can be kept safe, trustworthy, and controllable, in line with the
common values of peace, development, justice, and the aspiration for goodness, thereby promoting the
progress of human civilization.
Published: 23 May 2024