The importance of ethical AI – a Q&A on our new training programme

The Society has launched a new course on ethics and governance in AI, delivered by Dr Georgina Cosma, a senior lecturer at Loughborough University who teaches modules on natural language processing and AI & big data analytics. We had the opportunity to talk to Georgina about AI, her work in the field and why ethics and governance in AI are so important.

Read more about the course and book your place on the October virtual classroom

Could you tell us a bit about your current research interests in Artificial Intelligence (AI)?
My current research interests centre on developing AI solutions for healthcare. I am currently leading the AI work on several funded projects around healthcare and wellbeing, as well as on other applications in engineering and natural language processing. For these projects we are building AI models, but we are also developing strategies for bias identification, mitigation and management, and frameworks for lifelong learning to ensure that the models continue to learn fairly once deployed in the real world.
 
You’ve worked on AI solutions in healthcare – what are the uses of AI that have had the most positive effects, in your experience?
When developed responsibly, AI can have several positive impacts on healthcare ecosystems. It is utilised to monitor and manage conditions such as diabetes and to predict early symptoms of diseases such as cancer. AI-based tools are helping health professionals decide on the best treatments for patients based on their medical profiles, and have proved effective in assisting screening tests for several types of cancer, including breast and prostate cancer. For example, AI tools are helping doctors interpret mammograms, and research in this area is quickly extending to other cancers.
 
Currently, I am leading the development of AI-based software that can analyse large sets of healthcare reports and identify factors that can lead to adverse maternal incidents. Eventually, these factors will be utilised to train healthcare staff to identify the warning signs of devastating maternal incidents and to improve patient safety within the NHS and other healthcare systems. The project is called I-SIrch and it is funded by the NIHR/THF.
On another project called DECODE that is also funded by the NIHR, my team is developing AI-based tools to identify the clusters and trajectories of Multiple Long-Term Conditions (MLTCs) in people with Intellectual Disabilities. We will identify combinations of risk factors for predicting key events in patient trajectories, and the outputs of the project will influence clinical guidelines.

In both these projects, we are also implementing strategies for bias detection, management and mitigation to ensure fairness in the data and models. These strategies will be different for each project because of the different types of data we are processing, and the different tasks we are dealing with.

Why do we need an ethical framework for AI?
Artificial intelligence is technology concerned with building smart machines that can mimic human intelligence. It is accelerating digital transformation and is having an impact on the economy and society. However, we have to use this technology responsibly and develop systems that do not have harmful consequences for humans or society.

Without a doubt, AI can assist human decision-making, but only if developed and deployed responsibly. It has become relatively easy to develop and apply machine learning models to data; however, we must be careful to train these models fairly, and that means understanding the data used to train them. We need to understand the impact AI can have on stakeholders and society. This applies to all AI models, whether they are implemented to predict adverse maternal incidents or to detect aircraft defects. Importantly, every AI system must have continual lifelong learning strategies embedded within it: we can put a lot of effort into creating a system that makes fair predictions, but things can quickly change once the system is deployed and starts to learn from new batches of data.
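To illustrate one small piece of what such post-deployment monitoring could look like, here is a minimal sketch in Python of a check that recomputes the gap in error rates between groups on each new batch of data and raises an alert if it drifts too far. This is an assumed approach offered for illustration only, not the method used in Georgina's projects; the threshold, function names and batch data are all hypothetical.

import numpy as np

def group_error_gap(y_true, y_pred, group):
    # Absolute difference in error rate between the best- and
    # worst-served groups in this batch of data
    errors = (y_true != y_pred)
    rates = [errors[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical tolerance for how far apart group error rates may drift
THRESHOLD = 0.10

def check_batch(y_true, y_pred, group):
    gap = group_error_gap(y_true, y_pred, group)
    if gap > THRESHOLD:
        print(f"ALERT: fairness gap {gap:.2f} exceeds {THRESHOLD}")
    return gap

# Made-up batch of deployment data to show the check in action
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)
y_pred = rng.integers(0, 2, 200)
group = rng.choice(["A", "B"], 200)
check_batch(y_true, y_pred, group)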
 
Ethical frameworks provide guidelines and techniques that support the responsible design, development, deployment and use of AI systems that contribute to the 'good' of society and do so fairly. These frameworks help us assess the impact of AI systems on stakeholders and society as a whole, and enable us to build AI models that can continue to learn responsibly when scaled and deployed in the real world.

Could you give an example of a situation where badly designed AI might affect someone’s rights or cause unintended harm?
Recently, there has been a lot of attention on ethical AI, particularly after certain AI algorithms made biased decisions. For example, some AI-based facial recognition technologies were found to be less accurate in recognising people with darker skin tones, and AI algorithms developed for screening candidates' resumes favoured men's resumes over women's, leading to candidates being unfairly screened out.

For AI to make correct inferences about a data set, the data must be as accurate and complete as possible and must be representative of the population; otherwise, the system will not generate the desired results. The risk is that an AI model trained on data from one population will perform less well when applied to data from a different population. For example, AI models developed in Europe or America may perform less well in Asia.
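To make that risk concrete, here is a minimal sketch (illustrative only, not drawn from any of the projects mentioned above) of the kind of check that can reveal it: evaluating a trained model's accuracy separately for each population group. The synthetic data deliberately gives the two hypothetical populations different feature-outcome relationships, so a model trained mostly on one population degrades on the other; all names and numbers are made up.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000

# Synthetic patients from two hypothetical populations
X = rng.normal(size=(n, 5))
group = rng.choice(["population_A", "population_B"], size=n)

# The feature-outcome relationship differs between the two populations,
# a deliberate simplification of real demographic differences
y = np.where(group == "population_A",
             X[:, 0] + 0.5 * X[:, 1] > 0,
             X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)

# Hold out 30% of everyone for evaluation, then train almost entirely
# on population A to mimic an unrepresentative training set
test_mask = rng.random(n) < 0.3
train_mask = ~test_mask & ((group == "population_A") | (rng.random(n) < 0.05))
model = LogisticRegression().fit(X[train_mask], y[train_mask])

# Accuracy per population on held-out data: expect a clear gap
for g in ("population_A", "population_B"):
    m = test_mask & (group == g)
    print(g, round(accuracy_score(y[m], model.predict(X[m])), 3))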
 
Also, we need to keep in mind that bias is not always accidental. Attackers can manipulate the data sets used to train an AI system to make it biased, or they may tamper with an AI model's parameters. There are plenty of ways that AI systems can be manipulated into giving false outputs, which can have legal consequences and cause reputational damage to a business or organisation.
 
Is it straightforward to detect unfairness in AI systems or do you need to know what to look for? Is it easy to fix once detected?
Fairness is a socio-technical challenge that involves social aspects, such as people, institutions and communities, as well as technical aspects, such as algorithms. Fairness influences who might benefit from AI systems, and it most probably cannot be achieved with technical tools alone. Therefore, it is not straightforward to detect unfairness in AI systems.
 
From a technical perspective, we need to check for bias in both the data and the algorithms, and the methods we use also depend on the type of data, for example whether we are dealing with images, text, tabular data or even multimodal data. Researchers are working on libraries to assist with the evaluation of fairness, and there is still a lot to be done in this area. We are also in a situation where there are several ethical AI frameworks, and the information can become overwhelming.
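To give a flavour of what one such fairness check can look like in code, the sketch below computes a classifier's selection rate (the fraction of positive decisions) for each group and the difference between them, a metric often called the demographic parity difference; libraries such as Fairlearn provide ready-made versions of metrics like this. This is a minimal sketch, not an example from the course, and all decisions and group labels are hypothetical.

import numpy as np

# Hypothetical model decisions (1 = positive outcome, e.g. shortlisted)
# and the group each person belongs to; all values are made up
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Selection rate: fraction of positive decisions within each group
selection_rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
print("Selection rate per group:", selection_rates)

# Demographic parity difference: 0 would mean equal rates across groups
dpd = max(selection_rates.values()) - min(selection_rates.values())
print("Demographic parity difference:", round(dpd, 2))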
 
Who is this course aimed at, and what should they be able to gain from doing it?
The course is ideal for everyone interested in the ethical aspects surrounding the development of responsible AI. I think AI practitioners and data scientists who design, develop, deploy, use and interact with AI systems would find the course very useful. This could also include researchers and students who want to learn about the concepts, tools and techniques around the design and development of fair and responsible AI systems.

The course could also be helpful for professionals, managers, team leaders and other employees from organisations that are considering integrating AI into their systems and wish to do so responsibly and fairly. And of course, academics who are interested in using AI technology for research or other purposes will also find the course beneficial.