传媒教育网

Representatives of More Than 25 Countries, Including China, the US, the UK and the EU, Sign the Bletchley Declaration on International AI Governance

2023-11-07 21:30 | Posted by: 刘海明 | Views: 31 | Comments: 0 | Source: 必达智库

[Case Study]
On November 1, the first global AI Safety Summit opened in the UK. The two-day meeting was held at Bletchley Park and brought together representatives of more than 25 countries and parties, including China, the United States, the United Kingdom and the European Union, along with tech figures such as Elon Musk and OpenAI co-founder and CEO Sam Altman. On the same day, the participating countries signed the Bletchley Declaration, agreeing to establish approaches to AI regulation through international cooperation.

Wu Zhaohui (吴朝晖), Vice Minister of China's Ministry of Science and Technology, attended the summit and spoke at the opening plenary session on November 1. During the summit, the Chinese delegation took part in discussions on AI safety and related issues, actively presented the Global AI Governance Initiative proposed by China and announced by President Xi Jinping, and held bilateral talks with relevant countries.

According to the Bletchley Declaration, the participating countries agree that AI is already deployed across many domains of daily life and, while creating enormous global opportunities for humanity, also poses significant risks in critical areas such as cybersecurity and biotechnology. "There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models," the declaration reads. "Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent."

The participating countries stressed that, with regard to the specific risks most likely to arise in relation to frontier AI, they resolve to intensify and sustain cooperation, identifying and understanding these risks and acting on them as appropriate through existing international fora and other relevant initiatives. Reuters characterized the Bletchley Declaration as setting out a two-pronged agenda: identifying risks of shared concern and building a scientific understanding of them, while also developing cross-national policies to mitigate them.

The UK's choice of Bletchley Park is richly symbolic: it was the principal site of Allied codebreaking during the Second World War, where more than half a century ago the mathematician Alan Turing, often called the father of modern computer science and artificial intelligence, did his pioneering codebreaking work. AI has remained an international focal point throughout this year, with its development, competition and rule-making engaging many parties.
The full text of the Bletchley Declaration follows:


Bletchley Declaration

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.

AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.

Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.

Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI. In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI. This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. With regard to the specific risks most likely found in relation to frontier AI, we resolve to intensify and sustain our cooperation, and broaden it with further countries, to identify, understand and as appropriate act, through existing international fora and other relevant initiatives, including future international AI Safety Summits.

All actors have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together. Noting the importance of inclusive AI and bridging the digital divide, we reaffirm that international collaboration should endeavour to engage and involve a broad range of partners as appropriate, and welcome development-orientated approaches and policies that could help developing countries strengthen AI capacity building and leverage the enabling role of AI to support sustainable growth and address the development gap.

We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks.

In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:

· identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.

· building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.

In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024.

Agreement
The countries represented were:
Australia

Brazil

Canada

Chile

China

European Union

France

Germany

India

Indonesia

Ireland

Israel

Italy

Japan

Kenya

Kingdom of Saudi Arabia

Netherlands

Nigeria

The Philippines

Republic of Korea

Rwanda

Singapore

Spain

Switzerland

Türkiye

Ukraine

United Arab Emirates

United Kingdom of Great Britain and Northern Ireland

United States of America

References to ‘governments’ and ‘countries’ include international organisations acting in accordance with their legislative or executive competences. (Content sourced from the internet.)

Source: 必达智库
Link: https://mp.weixin.qq.com/s/tA94pO24_RkAiAbXqZog1w
Editor: 秦克峰

