
Computer scientist Francesca Rossi on artificial intelligence: trustworthiness is key, gender diversity matters

This year’s AI for Good Global Summit took place in Geneva, Switzerland, 15-17 May. Hosted by the International Telecommunication Union (ITU), the summit is the leading United Nations platform for dialogue on artificial intelligence (AI). The summit brought together industry, governments, academia and civil society to explore AI’s practical applications in reaching the United Nations Sustainable Development Goals.

AI’s social impact is neither straightforward nor one-sided. The application of AI technology raises many ethical considerations, and some people fear it may cause unemployment and perpetuate human bias. The gender dimension of AI’s social impact should likewise be considered.

Francesca Rossi, board member of the Partnership on AI.

At the AI for Good Global Summit, we caught up with Francesca Rossi, professor of computer science at the University of Padova and AI Ethics Global Leader at IBM Research. She is also a board member of the Partnership on AI, a coalition working to link AI technologies to the improvement of human welfare. Francesca shared her perspective on how to steer the direction of AI research and development to make its benefits available to all – and the importance of bridging the digital gender divide.

What’s the most pressing challenge for you in leading research in machine learning and AI?

One of the most pressing challenges is diversity. The right approach to making AI as beneficial as possible, and to addressing the ethical concerns raised by pervasive AI, is really a multi-stakeholder, inclusive and diverse approach. It achieves much more in terms of defining the issues and brainstorming the best ways to address them. Then the AI people can find the technical solutions.

Gender diversity is very important. I really would like to encourage women to join this area. In my long experience teaching students at the university, classes are made up mostly of male students, with few women. I have to say that women are always in the very top portion of the class. When women break this prejudice of not being suitable for STEM studies, they perform quite well. They can bring a contribution that comes from an emphasis on the social aspect, which right now is particularly important, because we have to understand the social implications of the very pervasive deployment of AI in our lives.

In current debates around AI’s application, there is a lot of distrust and suspicion about its benefiting all. How do you think we can address these concerns?

AI is being built to help humans behave, work and make better decisions. Humans can get the best benefits, and teams of humans and machines can achieve the best results, if we can achieve a level of trust between them. To achieve that, one of the capabilities AI systems should have is the ability to explain why they are doing or not doing something. Some approaches to AI, especially those focusing on data, are less explainable than others but sometimes more accurate. It is important to try to reach the right trade-off between wanting to be very accurate and wanting to provide explanations, which in some domains is required by regulations and laws.

Another important thing is the notion of fairness and being unbiased. We know that if we train an AI system on data that is not diverse and inclusive enough, the AI system makes decisions that are not fair to certain groups, because it doesn’t have enough data on certain parts of the population. Many researchers are already working on detecting and mitigating bias in training data.
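As a rough illustration of the kind of check used when auditing for this sort of bias, the sketch below compares a model’s positive-outcome rate across two groups and flags a large gap (a demographic-parity check). All data, group names and the 0.8 threshold (the so-called “four-fifths rule” heuristic) are illustrative assumptions, not a description of any specific tool mentioned in the interview.

```python
# Minimal sketch of one common bias check: comparing a model's
# positive-outcome rate across demographic groups (demographic parity).
# The data and the 0.8 threshold are illustrative only.

from collections import defaultdict

# Hypothetical (group, predicted_outcome) pairs from a trained model
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in predictions:
    totals[group] += 1
    positives[group] += outcome

# Positive-outcome rate per group
rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# Disparate-impact ratio: lowest group rate over highest group rate.
# Values below 0.8 are a common heuristic flag for potential bias.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")
```

In practice, libraries such as Fairlearn or AIF360 provide richer metrics and mitigation algorithms along these lines; the point here is only that an imbalanced training set shows up directly in per-group outcome rates.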

Francesca met with EQUALS during the AI for Good Global Summit 2018. Photo credit: Colin Mitchell

How can explaining an AI system’s decision-making process ensure trustworthiness?

To be able to trust somebody, you need to know: “why are you telling me this, instead of that?” Otherwise, you are not going to trust the person, because you don’t have access to everything inside their head. So it is important that this person can explain their reasoning to you. Another thing is being able to recognize whether AI is following your principles in making decisions. This is important because when you give an objective to a machine, you cannot spell out everything the machine has to do. So you want to be able to specify a goal to the machine, but at the same time you want the machine to keep in mind that the goal has to be reached following certain values and ethical principles that you would have followed if you had done the same thing yourself. This value alignment is very important, and there are researchers trying to understand how to inject it into machines.

The Partnership on AI, on whose board you sit as an IBM representative, includes some of the world’s major companies at the forefront of AI research and development. What is the role of the partnership in steering the future of AI for good?

The Partnership on AI started with six companies. Now it includes about 53 partners, of which 30% are companies. The rest are scientific associations, universities, research centers, NGOs, civil society organizations and even UN agencies. The goal of the partnership is to provide a collaborative platform where these different stakeholders can discuss the issues together and find best practices for solving them.

There are several themes we are working on. One of them, for example, is ‘Fair, Transparent and Accountable AI’. The working group tries to identify cases that can help us better understand the issues and how they can be solved, allowing us to learn some general lessons on best practices. Another one we have already started is the ‘Impact of AI on Jobs and the Marketplace’, which is also a big concern, since AI is going to transform many jobs.

Many other initiatives have started in the past two years with the same call: help everybody benefit from AI in the best possible way. I think the Partnership on AI is unique because of the presence of corporations that can actually understand what the issues really are – because they live them every day. They can provide use cases, going beyond purely academic initiatives. I think it is a unique contribution to this general brainstorming and discussion about AI ethics.
