The December issue of the SATORI newsletter on policy developments in the ethics of research and innovation covers, as usual, a broad range of topics from Europe, and beyond. Topics include: animal welfare, CIOMS guidelines, corporate social responsibility, CRISPR gene-editing, ethics and fundamental rights, healthcare ethics, human subject research, research integrity, research misconduct, the SATORI CWA, and social egg and cryogenic freezing. You can read the SATORI newsletter here:
Policy developments in the ethics of research and innovation.
Many technology leaders and representatives of the scientific community have voiced concerns about artificial intelligence (AI). Given the numerous questions about the future of AI and where it will take us, there is a pressing need for a well-grounded and inclusive discussion on the topic. Thankfully, it has already begun.
One of the topics covered is the Alan Turing Institute's offer to lead a Commission on Artificial Intelligence (AI) in the UK, whose establishment was recommended in the report on robotics and AI published by the UK House of Commons Science and Technology Committee. Before such a Commission can be established, however, it needs a green light from the government. The task of the Commission would be to “examine the social, ethical and legal implications of recent and potential developments in AI. It would focus on establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression” (page 28 of the report).
While the UK House of Commons Science and Technology Committee was working on its report, a similar process was taking place in the US. The Committee on Technology of the National Science and Technology Council, at the Executive Office of the President, published its report “Preparing for the Future of Artificial Intelligence” in October 2016. It contains 23 recommendations, many of which touch upon the need to engage with different stakeholders, work together, and communicate across borders and across fields. For instance, according to recommendation 12, “Industry should work with government to keep government updated on the general progress of AI in industry, including the likelihood of milestones being reached soon”, and recommendation 20 states that “The U.S. Government should develop a government-wide strategy on international engagement related to AI, and develop a list of AI topical areas that need international engagement and monitoring.”
An increasing number of stakeholders, including, but not limited to, governments, professional organisations, industry, research centres and non-governmental organisations, are becoming involved in the debate on the future of AI. In mid-December 2016, the Institute of Electrical and Electronics Engineers (IEEE) published a draft document, “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”, created by committees of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, which explores a range of ethical challenges posed by AI. The document is open for public comments until 6 March 2017.
There are many other initiatives identifying and analysing the capabilities and values of AI, as well as its potential problems and challenges. For example, Carnegie Mellon University recently announced that it is establishing a new research centre focused on the ethics of AI, while Google, Facebook, Amazon, Microsoft and IBM have formed the Partnership on Artificial Intelligence to Benefit People and Society – a not-for-profit organisation created to educate the public and open up dialogue about AI.
What we are witnessing is undoubtedly a reassurance for those who have been advocating a more responsible development of new technologies, accompanied by open discussion and ethical reflection on the opportunities and challenges of AI. At the same time, some issues, including the question of transparency, are still not addressed in a satisfactory manner. Another challenge is to effectively coordinate the activities of the different actors in the area, so that there is cross-fertilisation of ideas, learning and sharing of good practice between them. We hope that the tools that have been, and are being, developed in SATORI (e.g. an ethical impact assessment that could help identify and analyse the ethical impacts of AI research projects; our guidance on good practice in ethics assessment committees that could inform the setting up of AI ethics committees; a list of shared ethical principles) will provide some practical guidance and inform the ongoing and future debate.
Tags: artificial intelligence, ethical impacts, ethics of AI, transparency