For every argument that construes AI as essential, promising and transformative across nearly all industries, there are voiced concerns about this technology. Many people do not trust AI, claiming that intelligent machines will one day dominate humankind. But while that remains a remote possibility, the harm that deep learning algorithms can do to people (albeit mostly unintentionally) is very tangible – and has already occurred in the real world.
Take AI software used by a company to recruit people. If the company’s historical recruitment data fed into the software to “train” the algorithm is gender- or racially biased (implying, for instance, that only men can be executives, or that black people are not eligible for high-level jobs), the trained algorithm will be equally gender- or racially biased.
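A minimal sketch can make this mechanism concrete. The code below is not from any of the systems mentioned in this article; it uses entirely synthetic data and a deliberately naive "model" (per-group hiring rates) to show how a skew in historical records is inherited by whatever is trained on them.

```python
# Toy illustration with synthetic data: a naive model that simply learns
# historical hiring rates per group will reproduce any bias in those records.
from collections import defaultdict

def train(records):
    """Estimate P(hired | group) from (group, hired) historical records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired_count, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

# Synthetic "historical" data: group A was hired far more often than
# group B, regardless of qualifications.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.2} -- the historical skew is now baked in
```

Any candidate ranking built on these learned rates would systematically favour group A, which is precisely the failure mode that led Amazon to withdraw its hiring algorithm.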
“We’ve seen Amazon pull an algorithm used in hiring due to gender bias, Google criticised for a racist autocomplete, and both IBM and Microsoft embarrassed by facial recognition algorithms that turned out to be better at recognising men than women, and white people than those of other races”, reads a story published in Wired magazine a couple of years ago about the gender bias that plagued the Apple Card, whose algorithm was giving men much higher credit limits than women.
Seventy-eight million euros to start solving the problem
This month, during the Web Summit that took place in Lisbon from 1 to 4 November, a Consortium for Responsible AI, led by Unbabel – a Portuguese-founded company specialised in AI language operations (such as translation) – announced that Portugal’s PRR (Plano de Recuperação e Resiliência) is going to award the consortium funding of almost 80 million euros. The goal is to “make Portugal a global leader in technologies, principles and regulations for Responsible AI”, according to a written press statement.
The consortium, according to this document, is composed of 11 startups (Unbabel, Feedzai, Sword Health, Apres, Automaise, Emotai, NeuralShift, Priberam, Visor.ai, YData and YooniK), two of which are so-called “unicorns” (worth over one billion dollars each); eight research centres in Lisbon, Porto and Coimbra (Fundação Champalimaud, IST, INESC-ID, IST-ID/ISR, IT, FEUP, Fraunhofer Portugal AICOS, and CISUC); a law firm (Vieira de Almeida); and five industry leaders in the life sciences, tourism, and retail (BIAL, Centro Hospitalar de São João, Luz Saúde, Grupo Pestana and SONAE).
The person representing the Champalimaud Foundation in the consortium is Joe Paton, director of the Champalimaud Neuroscience Programme.
Named “Centre for Responsible AI”, the consortium is expected to hire 210 highly qualified staff and to award more than 130 master’s and doctoral degrees. It will also develop around 20 AI products by 2030, according to the aforementioned press statement.
In another document published by Unbabel, Paulo Dimas, the company’s Vice-President of Product Innovation, explains that the three core pillars of responsible AI are fairness, explainability and sustainability.
“Fairness”, he writes, “focuses primarily on reducing biases and their negative consequences – both in the machines themselves and in our society as a whole. (...) Explainability deals with transparency about how [AI] works. (...) Sometimes even the engineers who create an AI model cannot explain what is happening inside the system that leads to a certain output. (...) Sustainability is about tackling the environmental implications of AI development.”
“Responsible AI technologies should also allow for the resolution of challenges that until now were unattainable by AI given the risks involved”, the press statement further emphasises. “An example of this is the automatic translation of clinical content, where a translation error can put people’s lives at risk. This will be one of the 21 innovative AI products that the consortium intends to develop.”
An AI hub is taking shape at the Champalimaud Foundation
Right beside the Champalimaud Foundation (CF), a new project, the Ocean Campus, is being born. As we can read on the CF website, the project will bring together the CF and the Calouste Gulbenkian Foundation in “an amalgam of activities linked to science, technology, and the economy, creating a new space for innovation and entrepreneurship”.
Most notably for the matter at hand here, “in the CF part of the campus there will be a Centre dedicated to AI, together with a Scientific Incubator, which will include a computer and data storage centre, experimental laboratories, facilities for hardware development, spaces for advanced training courses, areas for startups and areas for commercial, service or industrial partnerships,” the document reads.
“The Champalimaud Foundation wants to be the leader in digital therapeutics, combining AI, immersive technologies, neuroscience and clinical know-how to improve health through software and specialised hardware”, says neuroscientist Paton, who is also in charge of the CF’s side of the Ocean Campus. “This is a new field, with massive potential, and our existing strengths in neuroscience and clinical practice position us perfectly for the kind of innovation, grounded in hard science, that will be required”. One of the goals will be the development of AI that reflects the science being done in neuroscience and biomedicine, two areas that can strongly benefit from artificial intelligence.
The Centre for Responsible AI, of which the CF is a founding partner, will be right on time.
By Ana Gerschenfeld, Health&Science Writer of the Champalimaud Foundation.