【Case】 Elon Musk and others jointly sign "Pause Giant AI Experiments: An Open Letter"
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

[1] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
Bostrom, N. (2016). Superintelligence. Oxford University Press.
Bucknall, B. S., & Dori-Hacohen, S. (2022). Current and Near-Term AI as a Potential Existential Risk Factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).
Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353.
Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. Norton & Company.
Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3), 282-293.
Eloundou, T. et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
Hendrycks, D., & Mazeika, M. (2022). X-Risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.
Ngo, R. (2022). The Alignment Problem from a Deep Learning Perspective. arXiv preprint arXiv:2209.00626.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Weidinger, L. et al. (2021). Ethical and Social Risks of Harm from Language Models. arXiv preprint arXiv:2112.04359.

[2] Ordonez, V. et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.
Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.

[3] Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early Experiments with GPT-4. arXiv preprint arXiv:2303.12712.
OpenAI (2023). GPT-4 Technical Report. arXiv preprint arXiv:2303.08774.

[4] Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".

[5] Examples include human cloning, human germline modification, gain-of-function research, and eugenics.

Source: 网安寻路人
Link: https://mp.weixin.qq.com/s/WvU0MCzFWwbOp7a7_5iQ1g
Editor: 屈妍君