Linda Hogan

31 July 2019


Trustworthy Artificial Intelligence: does Research Integrity have a role to play?

Hardly a day goes by without a newspaper article about how artificial intelligence is changing our lives. True, we have had many false starts in the field of artificial intelligence over the last few decades, but it does seem that a threshold has been reached whereby the combined impact of deep learning, big data and global financial investment makes significant societal roll-out of AI all but inevitable. Some commentators warn of a dystopian future, with the displacement of humans by superintelligences and deepening polarisation and inequality. Others anticipate a future of greater wealth and opportunity and of significant scientific advances. Like virtually all technological developments through history, AI will create value and cut costs. It will likely redistribute and change the nature of work, and there will be beneficiaries and losers. In fact, it will probably affect information and knowledge work in much the same way as automation affected manual work in the past. It will increase the speed and accuracy of diagnostics and personalised medicine, and it will offer personalised education, information and entertainment to those who can afford it. Whatever the ultimate outcome, it is clear that AI will have a fundamental impact on all aspects of our lives and will continue to be deployed in the arenas of policing, health, justice, education, business, media and the military, sometimes with dubious and unanticipated outcomes.

Whatever the ultimate outcome, it is clear that AI will have a fundamental impact on all aspects of our lives.

In universities and research centres across the world, researchers and their students are developing new adaptive technologies, and many of our engineering and other students will find employment in companies and businesses that will develop and implement AI. What then can we do to prepare students for these new contexts in which issues of ethics and research integrity are paramount and which will have such significant societal impact? What kinds of innovative tools and practices can we deploy when teaching students about responsible conduct in research that will not only support them in their student days, but that they can bring forward to their working lives as well?


At least part of the answer is that students and researchers need to be empowered to recognise when issues of research integrity arise, and to have their ethics education future-proofed so that when they encounter new situations they will have the skills to navigate them and will be able to see how the core values of transparency, honesty and responsibility can best be honoured and promoted in each new context. This is a tall order, to be sure. However, it is the responsibility of those of us who educate the next generation of engineers, business leaders and policy-makers to ensure that they have not only the technical and business skills to drive innovation, but also the skills of ethical analysis and a commitment to responsible research and innovation.

Students and researchers need to be empowered to recognise when issues of research integrity arise and to have their ethics education future-proofed.

The practice of value-sensitive design is a new methodology that tries to address this issue, especially in the field of AI. It aims to include ethical considerations as a core part of the technical process, so that technologists are alive to ethics while technologies are being developed, and not just downstream when they are being deployed. At Trinity College Dublin we have experimented with this methodology at the ADAPT Centre, a research centre that develops next-generation digital technologies, including AI. There we have developed what we call the Ethics Canvas, an online tool that helps students and researchers structure their ideas about the ethical implications of their projects. It provides a framework in which they can visualise the ethical issues that arise with a technology, including its future impacts; it also requires them to think about the range of stakeholders who will be affected by their innovations; and it ultimately gives them the tools to address, and hopefully resolve, any ethical dilemmas associated with their new technologies. Of course, initiatives like this, and other value-sensitive design projects, need to scale in meaningful ways if they are to be effective. Nonetheless, they provide new ways of ensuring that the values of transparency, honesty and responsibility are to the fore in the research process.

The practice of value-sensitive design is a new methodology that tries to address this issue, especially in the field of AI.

In April 2019, the European Commission published its Ethics Guidelines for Trustworthy AI, which set out a framework for achieving ethical, human-centric AI. Much of the focus is on research and development in AI, so the guidelines are of great significance for universities and research institutes. They insist that respect for human autonomy, prevention of harm, fairness and explicability should underpin all AI research and development. To achieve this, they require AI systems to respect human agency and oversight; to ensure technical robustness and safety; to protect privacy and data governance; to achieve transparency, diversity, non-discrimination and fairness; to focus on societal and environmental well-being; and to provide accountability. The Commission has added an assessment list that researchers and students can use to judge whether or not their research supports trustworthy AI, and it has also created a forum for exchanging best practices in the implementation of trustworthy AI. These practically-oriented tools are particularly welcome, since they provide new fora for collaboration to advance responsible research and innovation.


If AI is to scale ethically, then universities and research institutes have a vital role to play in ensuring that current and future generations of researchers, industry leaders and policy-makers are attuned to these issues and have the skills to navigate the complex territory of trustworthy AI.
