[Case Study] Thinking About ‘Ethics’ in the Ethics of AI
At the start of 2019, a major international consultancy firm identified ‘AI ethicist’ as an essential position for companies to successfully implement artificial intelligence (AI). It declared that AI ethicists are needed to help companies navigate the ethical and social issues raised by the use of AI. [1] The view that AI is beneficial but nonetheless potentially harmful to individuals and society is widely shared by industry, academia, governments, and civil society organizations. Accordingly, and in order to realize the benefits of AI while avoiding ethical pitfalls and harmful consequences, numerous initiatives have been established to a) examine the ethical, social, legal and political dimensions of AI and b) develop ethical guidelines and recommendations for the design and implementation of AI. [2]
However, terminological issues sometimes hinder the sound examination of the ethical issues of AI. The definitions of ‘intelligence’ and ‘artificial intelligence’ often remain elusive, and different understandings of these terms foreground different concerns. To avoid confusion and the risk of people talking past each other, any meaningful discussion of AI Ethics requires an explication of the definition of AI that is being employed as well as a specification of the type of AI being discussed. Regarding the definition, we refer to the European Commission High-Level Expert Group on Artificial Intelligence, which defines AI as “software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected […] data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions”. [3] To provide specific guidance and recommendations, the ethical analysis of AI further needs to specify the technology, e.g. autonomous vehicles, recommender systems, etc., the methods, e.g. deep learning, reinforcement learning, etc., and the sector(s) of application, e.g. healthcare, finance, news, etc. In this article, we shall focus on the ethical issues related to autonomous AI, i.e. artificial agents which can decide and act independently of human intervention, and we shall illustrate the ethical questions of autonomous AI with several examples.

Consider first the case of autonomous vehicles (AVs). The possibility of accident scenarios involving AVs, in which they would unavoidably harm either the passengers or pedestrians, has forced researchers and developers to consider questions about the ethical acceptability of the decisions made by AVs, e.g. what decisions should AVs make in those scenarios, how can those decisions be justified, which values are reflected by AVs and their choices, etc. [4]

Or, consider the case of hiring algorithms, which have been introduced to automate the process of recommending, shortlisting, and possibly even selecting job candidates. Hiring algorithms typically function by using the criteria they learned from a training dataset. Unfortunately, such training data can be biased, leading to potentially discriminatory models. [5] In order to ensure protection from discrimination, which is not only a human right but also part of many countries’ constitutions, we therefore have to make sure that such algorithms are at least non-discriminatory, but ideally also fair. There are, however, different understandings of fairness: people disagree not only about what fairness means; the adequate conception of fairness may also depend upon the context. Moreover, it has also been shown that different fairness metrics cannot be attained simultaneously (see the illustrative sketch at the end of this introduction). [6] This raises the question of how values such as fairness should be conceived in which context and how they can be implemented.

One of the fundamental questions in the ethics of AI, therefore, can be formulated as a problem of value alignment: how can we build autonomous AI that is aligned with societally held values? [7]
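To make the claim about conflicting fairness metrics more concrete, here is a minimal, illustrative sketch in Python. It is not from the original article; the toy data, group labels, qualification rates and decision threshold are hypothetical assumptions. It merely shows that, when base rates differ between groups, a screening model can satisfy one fairness metric (equal opportunity) while violating another (demographic parity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two applicant groups (A and B) with different
# underlying qualification rates, scored by a simple screening model.
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.5, 0.5])
qualified = np.where(group == "A",
                     rng.random(n) < 0.6,   # 60% of group A is qualified
                     rng.random(n) < 0.4)   # 40% of group B is qualified
# The screening score is noisy but correlated with qualification.
score = qualified * 0.5 + rng.random(n) * 0.5
hired = score > 0.55                        # one group-blind threshold

def demographic_parity_gap(hired, group):
    """Difference in hiring rates between the two groups."""
    return hired[group == "A"].mean() - hired[group == "B"].mean()

def equal_opportunity_gap(hired, qualified, group):
    """Difference in true-positive rates (hired given qualified) between groups."""
    tpr_a = hired[(group == "A") & qualified].mean()
    tpr_b = hired[(group == "B") & qualified].mean()
    return tpr_a - tpr_b

print("Demographic parity gap:", round(demographic_parity_gap(hired, group), 3))
print("Equal opportunity gap: ", round(equal_opportunity_gap(hired, qualified, group), 3))
# With different qualification rates, this single group-blind threshold yields
# (approximately) equal true-positive rates but clearly unequal hiring rates.
```

This toy example does not by itself prove an impossibility, but it illustrates the tension behind the results cited above [6]: once base rates differ, the different fairness notions pull in different directions and cannot simply be satisfied together by a single, group-blind decision rule.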
Virginia Dignum has characterized three dimensions of AI Ethics, namely “Ethics by Design”, “Ethics in Design”, and “Ethics for Design”, [8] and they are useful in identifying two different responses to the value alignment problem. We shall structure the following discussion around these three dimensions and explore the two different directions for answering the value alignment problem in more detail.
Building Ethical AI: Prospects and Limitations
Ethics by Design is “the technical/algorithmic integration of reasoning capabilities as part of the behavior of [autonomous AI]”. [9] This line of research is also known as ‘machine ethics’. The aspiration of machine ethics is to build artificial moral agents, which are artificial agents with ethical capacities and thus can make ethical decisions without human intervention. [10] Machine ethics thus answers the value alignment problem by building autonomous AI that by itself aligns with human values. To illustrate this perspective with the examples of AVs and hiring algorithms: researchers and developers would strive to create AVs that can reason about the ethically right decision and act accordingly in scenarios of unavoidable harm. Similarly, hiring algorithms are supposed to make non-discriminatory decisions without human intervention.

Wendell Wallach and Colin Allen classified three types of approaches to machine ethics in their seminal book Moral Machines. [11] The three types of approaches are, respectively, (i) top-down approaches, (ii) bottom-up approaches, and (iii) hybrid approaches that merge the top-down and bottom-up approaches. In its simplest form, the top-down approach attempts to formalize and implement a specific ethical theory in autonomous AI, whereas the bottom-up approach aims to create autonomous AI that can learn from the environment or from a set of examples what is ethically right and wrong; finally, the hybrid approach combines techniques and strategies of both the top-down and bottom-up approaches (a minimal illustrative sketch of this contrast appears at the end of this section). [12]

These approaches, however, are subject to various theoretical and technical limitations. For instance, top-down approaches need to overcome the challenge of finding and defending an uncontroversial ethical theory among conflicting philosophical traditions. Otherwise, the ethical AI risks being built on an inadequate, or even false, foundation. Bottom-up approaches, on the other hand, infer what is ethical from what is popular, or from what is commonly held as being ethical, in the environment or among examples. Yet such inferences do not ensure that autonomous AI acquires genuine ethical principles or rules, because neither popularity nor being considered ethical offers an appropriate ethical justification. [13] Furthermore, there is the technical challenge of building an ethical AI that can effectively discern ethically relevant from ethically irrelevant information among the multitude of information available within a given context. This capacity would be required for the successful application of ethical principles in top-down approaches as well as for the successful acquisition of ethical principles in bottom-up approaches. [14]

Besides the theoretical and technical challenges, several ethical criticisms have been leveled at building autonomous AI with ethical capacities. First, autonomous AI in general, and ethical AI in particular, may significantly undermine human autonomy, because the decisions made by them for us or about us will be beyond our control, thereby reducing our independence from external influences. [15] Second, it remains unclear who or what should be responsible for wrongful decisions of autonomous AI, leading to concerns over their impacts on our moral responsibility practices. [16]
Finally, researchers have argued that turning autonomous AI into moral agents or moral patients unnecessarily complicates our moral world by introducing into it unfamiliar things that are foreign to our moral understanding, thereby imposing an unnecessary ethical burden on human beings by requiring us to pay undue moral attention to autonomous AI. [17]
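To make the top-down/bottom-up contrast referenced above more tangible, the following is a minimal, purely illustrative sketch that is not part of the original article. The feature encoding, the hand-coded rule, and the labeled training examples are all hypothetical assumptions; the sketch only shows the structural difference between encoding an ethical constraint explicitly and inferring a decision criterion from examples, not a claim that either constitutes genuine machine ethics.

```python
from sklearn.tree import DecisionTreeClassifier

# Candidate actions described by hypothetical features:
# (harm_to_humans: 0/1, breaks_promise: 0/1, benefit_score: 0..1)
actions = [
    {"harm_to_humans": 0, "breaks_promise": 0, "benefit_score": 0.9},
    {"harm_to_humans": 1, "breaks_promise": 0, "benefit_score": 0.9},
    {"harm_to_humans": 0, "breaks_promise": 1, "benefit_score": 0.4},
]

# --- Top-down: an explicit, hand-coded constraint derived from an ethical theory ---
def permitted_top_down(action):
    # A crude deontological rule: never permit actions that harm humans.
    return action["harm_to_humans"] == 0

# --- Bottom-up: a model that infers a criterion from labeled examples ---
# Hypothetical examples labeled by human judges (1 = judged permissible).
X_train = [[0, 0, 0.8], [1, 0, 0.9], [0, 1, 0.3], [1, 1, 0.1], [0, 0, 0.2]]
y_train = [1, 0, 1, 0, 1]
learned_judge = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

def permitted_bottom_up(action):
    features = [[action["harm_to_humans"], action["breaks_promise"], action["benefit_score"]]]
    return bool(learned_judge.predict(features)[0])

for a in actions:
    print(a, "top-down:", permitted_top_down(a), "bottom-up:", permitted_bottom_up(a))
```

Note that the learned judge simply reproduces whatever pattern is in its training labels, which is precisely the worry raised above: popularity among examples, by itself, is not an ethical justification.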
Machine Ethics, Truncated Ethics
Our review of the theoretical, technical, and ethical challenges to machine ethics does not intend to be exhaustive or conclusive, and these challenges could indeed be overcome in future research and development of autonomous AI. However, we think that these challenges do warrant a pause and a reconsideration of the prospects of building ethical AI. In fact, we want to advance a more fundamental critique of machine ethics before exploring another path for answering the value alignment problem.

Recall that the objective of machine ethics is to build autonomous AI that can make ethical decisions and act ethically without human intervention. It zooms in on imbuing autonomous AI with the capacities to make ethical decisions and perform ethical actions, which reflects a peculiar understanding of ‘ethics’ that we want to problematize. More specifically, by focusing only on capacities for ethical decision-making and action, machine ethics is susceptible to a truncated view of ethics that sees ethical decisions and actions as separable from their social and relational contexts. Philosopher and novelist Iris Murdoch, for example, argued long ago that morality is not about “a series of overt choices which take place in a series of specifiable situations”, [18] but about “self-reflection or complex attitudes to life which are continuously displayed and elaborated in overt and inward speech but are not separable temporally into situations”. [19] For Murdoch, what is ethical is inherently tied to a background of values. Therefore, it is essential, in thinking about ‘ethics’, to look beyond the capacities for ethical decision-making and action and the moments of ethical choice and action, and into the background of values and the stories behind the choice and action. Similar arguments have been made to affirm the role of social and relational contexts in limiting ethical choices and shaping moral outcomes, and thus the importance of accounting for them in our ethical reflection. [20]

Following this line of criticism, machine ethics’ emphasis on imbuing autonomous AI with ethical capacities can be viewed as wrongheaded insofar as the emphasis overshadows the fact that ethical outcomes from autonomous AI are shaped by multiple, interconnected factors external to its ethical reasoning capacities, and that there is an extended process of social and political negotiation over the criteria for rightness and wrongness underlying the eventual ethical decisions and actions made by autonomous AI. ‘The Moral Machine experiment’ conducted by researchers at the MIT Media Lab is a case in point. [21] In the experiment, the MIT researchers attempted to crowdsource ethical decisions in different accident scenarios involving AVs, and the results were intended to inform the ethical design of AVs. What is missing, however, are the social, cultural, and political backgrounds and personal stories involved in real accidents, which the accident scenarios in the experiment do not, and often cannot, properly describe. [22] In this respect, ‘The Moral Machine’ experiment is also based on a truncated view of ethics, which only considers the choice to be made in specific situations and neglects the background of values and contextual details that are essential for making ethical judgments.
Indeed, social and relational contexts matter to the ethical analysis of autonomous AI both before and after its implementation. For example, one can devise an impartial hiring algorithm, which assesses job candidates only on the basis of the qualities required by an opening. This impartial hiring algorithm could nonetheless remain discriminatory, and therefore ethically dubious, if the specific qualities required by the opening are inadvertently linked to race, gender, and social class. In this case, care must be taken not to reproduce the pre-existing social bias in the hiring algorithm. Moreover, even the best-intended technologies can bring serious adverse impacts to their (non-)users, as bias and harm can emerge from the interaction between technology and the users and society. [23] Imagine an app which residents can use to report incidents, such as road damage, to the local city council, which then uses an algorithm to sort and rank local problems based on those reports. If we assume that access to smartphones, and thus to the app, is unequally distributed, this may lead to underreporting of problems in areas with poorer residents. If not taken into account in the algorithmic sorting and ranking, this bias in the input data could then further increase inequalities between more and less affluent areas in the city (a toy simulation of this dynamic is sketched at the end of this section). [24]

The key lesson from the two examples is that having some ethical principles or rules inscribed in autonomous AI is insufficient to resolve the value alignment problem, because the backgrounds and contexts do contribute to our overall judgment of what is ethical. We should remind ourselves that autonomous AI is always situated in some broader social and relational contexts, and so we cannot focus only on its capacities for moral decision-making and action. We need to consider not only what decisions and actions autonomous AI should produce, but also (i) why we, or society, think those decisions and actions are ethical, (ii) how we arrive at such views, and (iii) whether we are justified in thinking so. Accordingly, ‘The Moral Machine’ experiment is objectionable as it unjustifiably assumes that the most intuitive or popular response to the accident scenarios is the ethical response. Indeed, this reframing of the questions gives us two advantages. First, we can now easily include other parties and factors beyond the autonomous AI in our ethical reflection. Second, it also makes explicit the possibility of (re-)negotiating which ethical principles or rules should be inscribed in autonomous AI (or even questioning the use of autonomous AI in a specific context altogether).
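To illustrate the reporting-app example above, here is a small, purely hypothetical simulation, not from the original article: the neighbourhood names, incident counts, and smartphone-access rates are invented assumptions, and the point is only to show how unequal access to the reporting channel can make a report-count-based ranking systematically favour more affluent areas.

```python
import random

random.seed(42)

# Hypothetical neighbourhoods: true number of incidents and share of residents
# who have access to the reporting app.
neighbourhoods = {
    "affluent_district": {"true_incidents": 40, "app_access_rate": 0.90},
    "low_income_district": {"true_incidents": 80, "app_access_rate": 0.35},
}

def simulate_reports(info):
    """Each real incident is reported only if a resident with app access observes it."""
    return sum(random.random() < info["app_access_rate"]
               for _ in range(info["true_incidents"]))

reports = {name: simulate_reports(info) for name, info in neighbourhoods.items()}

# A naive council ranking: prioritize areas purely by number of received reports.
ranking = sorted(reports, key=reports.get, reverse=True)

for name in ranking:
    print(f"{name}: {reports[name]} reports "
          f"(true incidents: {neighbourhoods[name]['true_incidents']})")
# Despite having more actual incidents, the low-income district can end up
# ranked lower, so attention and resources flow to the affluent district.
```

A ranking that corrected for unequal reporting rates, or that combined reports with other data sources, would be one possible design response; the broader point made above is that such a correction requires knowledge of the social context, not better ethical reasoning inside the algorithm itself.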
A Distributed Ethics of AI
To be clear, we do not deny the need to examine the values embedded in technology and the importance of designing and building technology with values that are aligned with human interests. [25] As the examples in this article show, autonomous AI can play a role in ethical decision-making and may lead to ethically relevant outcomes, so it is necessary both to examine the values embedded in it and to use shared societal values to guide its design and development. We do, however, want to question the aspiration of delegating ethical reasoning and judgment to machines, thereby stripping such reasoning and judgment from their social and relational contexts. A proper account of the ethics of AI should expand its scope of reflection and include other parties and factors that are relevant to the ethical decision-making and have contributed to the ethical outcomes of autonomous AI. To this end, it is essential for the ethics of AI to include various stakeholders, e.g. policy-makers, company leaders, designers, engineers, users, non-users, and the general public, in the ethical reflection on autonomous AI. Indeed, only by doing so can we sufficiently address the questions: (i) why we think the decisions and outcomes of AI are ethical, (ii) how we arrive at such views, and (iii) whether we are justified in our judgements.

We shall call this expanded AI Ethics a distributed ethics of AI. The term ‘distributed’ aims to capture the fact that multiple parties and factors are relevant to and have contributed to the ethical outcomes of autonomous AI, and thus that the responsibility for them is ‘distributed’ among the relevant and contributing parties and factors. [26] To use the examples of AVs and hiring algorithms: poor urban planning and road facilities should be legitimate concerns in the ethics of AVs, in the same way as existing social and cultural biases are valid considerations for ethical hiring algorithms. Hence, the design and implementation of AI should take existing societal inequalities and injustices into consideration, account for them, and at best even aim at alleviating them through design decisions. The distributed ethics of AI needs what Dignum has labeled “Ethics in Design”, i.e. “the regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems as these integrate or replace traditional social structures”, as well as “Ethics for Design”, i.e. “the codes of conduct, standards and certification processes that ensure the integrity of developers and users as they research, design, construct, employ and manage artificial intelligent systems”. [27] Ethical questions of autonomous AI cannot be solved by ‘better’ individual(istic) ethical capacities but only through collective efforts. To guide such collective efforts, ethical guidelines offer useful means to stir value- and principle-based reflection regarding autonomous AI and to effectively coordinate the efforts among the different relevant and contributing parties. [28]
Conclusions: On the EU’s Trustworthy AI
In April 2019, the High-Level Expert Group released the ‘Ethics Guidelines for Trustworthy AI’, which concretize Europe’s vision of AI. According to these Guidelines, Europe should research and develop Trustworthy AI, which is lawful, ethical, and robust. There are two points in the Guidelines that deserve special mention in the present discussion. First, it is interesting to note that the concerns for trust in the Guidelines are about “not only the technology’s inherent properties, but also the qualities of the socio-technical systems involving AI applications […]. Striving towards Trustworthy AI hence concerns not only the trustworthiness of the AI system itself, but requires a holistic and systemic approach, encompassing the trustworthiness of all actors and processes that are part of the system’s socio-technical context throughout its entire life cycle.” In this respect, the vision of Trustworthy AI clearly matches the distributed ethics of AI as previously described. Second, it is also interesting to note that the four ethical principles identified in the Guidelines are mid-level principles, i.e.:

1. The principle of respect for human autonomy.
2. The principle of prevention of harm.
3. The principle of fairness.
4. The principle of explicability.

The formulation of ethical principles based on mid-level principles is particularly illuminating, because mid-level principles require human interpretation and ordering in their application, and they are not intended to be, and indeed cannot be, implemented within autonomous AI. The need for interpretation and ordering also points to the social and relational contexts, where the resources for interpretation and ordering lie.

While the European vision of Trustworthy AI and the Guidelines have a conceptually sound foundation, there are a number of open problems with them. For instance, the use of mid-level principles in the Guidelines allows considerable room for interpretation, which, in turn, can be misused by malevolent actors to cherry-pick interpretations and excuse themselves from their responsibility. This problem is further compounded by the Guidelines’ emphasis on self-regulation, whereby politicians and companies can pay lip service to the European vision with cheap and superficial measures, such as propaganda and setting up symbolic advisory boards, without substantively addressing the negative impacts of AI. Hence, there are significant issues concerning the actual regulatory and institutional framework for AI Ethics and for realizing this European vision. In particular, there is a need to create a clear framework to fairly distribute the benefits and risks of AI, and a need to introduce ‘hard’ laws and regulations against the violation of basic ethical values and human rights. Notwithstanding these problems, the Guidelines’ focus on humans and beyond technology should be taken as an appropriate normative standpoint for AI Ethics and the European vision. To end this article, we want to remind the reader that the ethical questions about autonomous AI are distributed in nature, and that we, or society, should have a voice in their design and deployment.

REFERENCES

1 —
2 —
3 —
4 — The type of accident scenario is known as ‘the trolley problem’. It is only one of the topics discussed in the ethics of autonomous vehicles, and we only use it as an example to illustrate one of the many ethical issues autonomous AI could raise. See: Lin, P. (2016) Why ethics matters for autonomous cars. In M. Maurer, J. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous Driving: Technical, Legal and Social Aspects (pp. 69-85). Berlin: Springer.
5 —
6 — See: Friedler, S., Scheidegger, C., & Venkatasubramanian, S. (2016) On the (Im)possibility of fairness. arXiv:1609.07236. Chouldechova, A. (2017) Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5 (2): 153-163.
7 — The AI alignment problem was first explicitly formulated by Stuart Russell in 2014; see Peterson, M. (2019) The value alignment problem: a geometric approach. Ethics and Information Technology 21 (1): 19-28.
8 — Dignum, V. (2018) Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology 20 (1): 1-3.
9 — Ibid., p. 2.
10 — See: Winfield, A., Michael, K., Pitt, J., & Evers, V. (2019) Machine ethics: the design and governance of ethical AI and autonomous systems. Proceedings of the IEEE 107 (3): 509-517. Wallach, W., & Allen, C. (2009) Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press. Misselhorn, C. (2018) Artificial morality. Concepts, issues and challenges. Society 55 (2): 161-169.
11 — Wallach, W., & Allen, C. (2009) Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press.
12 — Ibid., pp. 79-81.
13 — For a review of the difficulties of machine ethics, see: Cave, S., Nyrup, R., Vold, K., & Weller, A. (2019) Motivations and risks of machine ethics. Proceedings of the IEEE 107 (3): 562-574.
14 — This is also known as the moral frame problem, see: Horgan, T., & Timmons, M. (2009) What does the frame problem tell us about moral normativity? Ethical Theory and Moral Practice 12 (1): 25-51.
15 — Danaher, J. (2018) Toward an ethics of AI assistants: an initial framework. Philosophy & Technology 31 (4): 629-653.
16 — Matthias, A. (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6 (3): 175-183.
17 — Bryson, J. J. (2018) Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology 20 (1): 15-26.
18 — Murdoch, I. (1956) Vision and choice in morality. Proceedings of the Aristotelian Society, Supplementary 30: 32-58, p. 34.
19 — Ibid., p. 40.
20 — Walker, M. U. (2007) Moral Understandings: A Feminist Study in Ethics. Oxford: Oxford University Press.
21 — Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018) The Moral Machine experiment. Nature 563: 59-64.
22 —
23 — Friedman, B., & Nissenbaum, H. (1996) Bias in computer systems. ACM Transactions on Information Systems 14 (3): 330-347.
24 — Simon, J. (2017) Value-sensitive design and responsible research and innovation. In S. O. Hansson (Ed.), The Ethics of Technology: Methods and Approaches (pp. 219-235). London: Rowman & Littlefield.
25 —
26 — See: Floridi, L. (2013) Distributed morality in an information society. Science and Engineering Ethics 19 (3): 727-743. Simon, J. (2015) Distributed epistemic responsibility in a hyperconnected era. In L. Floridi (Ed.), The Onlife Manifesto (pp. 145-159). Cham: Springer.
27 — Dignum, V. (2018) Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology 20 (1): p. 2.
28 — Floridi, L. (2019) Establishing the rules for building trustworthy AI. Nature Machine Intelligence 1: 261-262.
Source: IDEES
Authors: Judith Simon, Pak-Hang Wong
Link: https://revistaidees.cat/en/thinking-about-ethics-in-the-ethics-of-ai/
Editor: 冯梦玉