24 April 2024

Benefits and Challenges of Living with AI

What do specialists from various fields – neuroscience, health, engineering, education, the arts, the law – expect and fear from having artificial intelligence everywhere in our lives? That was the theme of the latest Ar Event at the Champalimaud Foundation.

Living with AI - Panel Speakers

Part I of the series of Ar Events under the motto “Roots of AI” was about the past – the history of artificial intelligence. Now, the “Creative Directors” and the invited speakers of Part II of the series, which took place at the Champalimaud Foundation (CF) last month, undertook to discuss the now of AI, sharing their views on the pros and cons of the technology. Part III (the last event of the series) will take place on May 7th and will deal with the future of AI.

About the present, Creative Director Eric DeWitt – a research associate of the Cognitive Decision Science Group at the CF’s Systems Neuroscience Lab – started by putting things into perspective, showing that AI has already invaded our lives much more thoroughly than we might think.

“We asked you [the audience, before the event] how many times you interact with AI each day”, said DeWitt. “And most of you replied about one to three times. I'm not convinced.”

It’s much more often than that, he went on to explain. “Here's a list of things that you may interact with on a daily basis, from a smartphone, Bolt, Instagram, to the robot that can clean your floors. All of these use some form of AI. You are surrounded by algorithms that are acting on data to make predictions, whether it's about the stock market or a warehouse and its inventory, from detecting barcodes or detecting faces, and of course, personalizing your entire digital world.” So much for thinking we only occasionally deal with artificial intelligence in our daily lives. It’s literally everywhere, in our physical as much as in our virtual environments. “Every day I use DeepFace, an AI algorithm for face detection, to unlock my iPhone”, DeWitt reminded the public.
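DeWitt's point is easy to make concrete. The snippet below is a minimal sketch of machine vision in everyday software – not the DeepFace network he mentions, nor Apple's proprietary Face ID models, but a classical face detector shipped with the open-source OpenCV library, shown only to illustrate what “detecting faces” looks like in code:

```python
# Minimal sketch of everyday machine vision: classical face detection
# with OpenCV's pre-trained Haar cascade. Illustration only; this is
# not the DeepFace network DeWitt mentions, and modern phone unlocking
# uses far more sophisticated proprietary models.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")  # any local photo (assumed path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, width, height) rectangle per detected face
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")
```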

A few minutes earlier, and after the viewing of an impressive real-time AI-generated interactive video, Creative Director Lydia Fettweis Neto, research technician, also from the Systems Neuroscience Lab, presented the invited speakers. A very diverse panel: Simone Lackner, multidisciplinary researcher with a neuroscience background “working as a diplomat between science, art, society and policy”; João Santinha, senior biomedical engineering researcher at the Digital Surgery Lab of the Champalimaud Clinical Centre’s Breast Unit; James Anstis, head of Theory of Knowledge at Oeiras International School; Germán Méndez, Machine Learning Lead and Project Manager at Siemens Energy; Rodolfo Quintas, Portuguese artist “whose work uses interactive, generative and artificial intelligence techniques”; and Iakovina Kindylidi, Senior Associate at the law firm VdA, who specialises in cybersecurity and intellectual property.

Each of these guests presented their view of AI as it is used today, and it was clear from what they said that, while they all believe great good can come from using AI-based algorithms in most areas of personal and professional life, they are also concerned about the potential risks and abuses the technology can lead to.

Fake or true: can we tell them apart?

For Simone Lackner, it was the issue of fake versus true information that was of immediate concern. “We are in the middle of a massive global digital transformation colliding with anti-science and post-truth movements”, she said. “Social media is bombarding us with information that is driven by algorithms that are biased by our own implicit cognitive biases, playing with our emotions, our wellbeing and our understanding of reality. I'm strongly concerned with how the concept of objective truth is fading out and how lies are not only passing into History, but also jeopardizing our immediate present, affecting individual psychology, group dynamics, and ultimately democracy.” Election manipulation is a very real problem in this context.

Fake news, propaganda and disinformation are nothing new. But today, Lackner explained, thanks to the development of the Internet and AI, information can be personally targeted. “Now, with new language model tools such as ChatGPT, which are trained on our biased data traffic on the Internet and return curated and convincing information as objective truth with high confidence, there is a clear danger of conscious and unconscious mass production of misinformation”, she warned.

She ended on a more positive note about what we can do to counter the danger. “I'm convinced that AI is a tool”, she said. “It's made by humans and should be used to increase our wellbeing. So it should be on us to decide how the algorithms are trained. We need to reflect, to be emotionally aware, and to practice intellectual humility before we like, share, or comment on information. Because all our inherited cognitive biases go into AI, we need to think about this – not only as scientists, but as online media consumers and producers – and therefore as conscious or unconscious trainers of new, evolving AI tools”, she concluded. The responsibility is ours, in other words.

Transforming surgery

João Santinha, who develops AI and augmented reality tools for breast cancer surgery at the CF, is a strong believer in the good uses AI can be put to – perhaps because in his particular field of work, things are advancing in a positive direction at a faster pace.

“I believe that AI can have a major impact on the workload of health care workers, on improving patient care, and positively impact health care procedures for a more personal, more connected, and more human care”, he stated. Today, he explained, AI is already beginning to be used to take notes from doctor consultations, allowing doctors to pay more attention to their patients and to save tens of hours in paperwork every month! AI can also make doctors' reports easier to understand for patients and for their own colleagues. “At the Champalimaud Foundation’s Digital Surgery Lab”, Santinha exemplified, “we have developed an application that is able to listen to a radiologist's dictated report and, using a large language model [of which ChatGPT is the most well-known example], simplify and structure these normally unstructured reports to make them clearer and more concise.”
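The lab's actual models and prompts are not public, but a minimal sketch of this kind of report-structuring step – assuming an OpenAI-style chat API, a placeholder model name, and a hypothetical report_text produced by an upstream speech-to-text stage – might look like this:

```python
# A hedged sketch of the kind of report-structuring step Santinha
# describes. The Digital Surgery Lab's actual model, prompts and
# speech-to-text stage are not public; an OpenAI-style chat API is
# assumed here purely for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical output of an upstream speech-to-text step
report_text = (
    "there is an irregular spiculated mass in the upper outer quadrant "
    "of the left breast measuring roughly two point one centimetres ..."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Rewrite the dictated radiology report below as a "
                    "structured report with the sections Findings, "
                    "Measurements and Impression. Be concise and do not "
                    "add information that is not in the dictation."},
        {"role": "user", "content": report_text},
    ],
)
print(response.choices[0].message.content)
```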

Santinha also mentioned his lab’s latest developments: AI- and augmented reality-based medical devices that may come to be game-changers in breast cancer surgery and clinical procedures.

A patient who has to undergo breast surgery has two options, he further explained: a mastectomy, which is the complete removal of the breast affected by the cancer, or breast-conserving surgery, which is the preferred procedure. Not all patients need a radical mastectomy, but because of the lack of information about the tumour – namely its exact location and its shape – mastectomies are sometimes prescribed where breast-conserving surgery could have been performed. And conversely, with breast-conserving surgery, a second operation is sometimes needed to fully remove the tumour.

“What we are trying to do is to use medical images from the breast cancer patient to create a personalized 3D model of her torso, a ‘digital twin’ of the patient’s breast that can be used for better surgical planning, leading to fewer mastectomies”, said Santinha. “And in the operating room, during the surgery, the surgeon will actually be able to ‘see’ the tumour and its boundaries through the patient’s skin, and therefore perform a more accurate intervention, removing the tumour and leaving as much healthy tissue as possible.”

Santinha admitted, however, that the translation of research into clinical practice is difficult, and that there is still a long road ahead for the widespread adoption of AI in healthcare. One of the hurdles, he said, is that medical institutions are not yet willing to share their patient data and images in a huge database that could be readily accessed by researchers (everyone agrees that only very large amounts of data can enable the correct training of AI algorithms, in medicine as in all other domains). Concerning the medical field, Santinha thinks tools for achieving this without jeopardising patient privacy are already available – and that they should be put to use as soon as possible to advance research.

What about education?

As for James Anstis, he was not so confident that information is being shared honestly and transparently, at least in the teaching arena – although, like the preceding speakers, he thinks the potential benefits of AI are many. “From providing teachers with creative and exciting lesson ideas to helping them with a stack of marking, it's already changing our working lives. And for students, from helping them to identify the procedural errors in a math problem to acting as a personal tutor, coaching them and guiding them in the specific areas where they need most academic support, it could potentially accelerate learning in a way that would have been unimaginable just two years ago. All of this is revolutionary”, he declared.

But currently, he pointed out, there is also a dark side to AI use in education. “Right now, these potentialities are a secondary concern to the main issue: the issue of academic dishonesty, which is teacher speak for cheating”, he stressed, alluding to students asking AI chatbots to write their essays and other assignments for them. 

There are possible ways to counter this, which involve tighter control and rules on the part of teachers: banning electronic devices from exams, or having students do their assignments on teacher-controlled platforms that can detect cheating. But this will not address the real problem with education: the reasons why students find it important.

He then explained that, in conversations with his students, he discovered a sad truth: education is important to them because they associate it with money – “a big house, a nice car, exotic holidays” – with employment status (becoming a doctor, a lawyer, an engineer), and with their parents’ wishes. 

The keyword here is success, “defined through the lens of money, status, and normative cultural values, values which we know do not promote wellbeing when at school or after it”, Anstis lamented. “And AI is simply amplifying a pre-existing culture whose natural motivation (...) is a grade rather than meaningful learning”. According to a US study, in 2023 43% of first-year college undergraduates used ChatGPT to help them write their coursework. As long as this culture prevails, Anstis expects to receive “a veritable avalanche of essays written by 17 and 18 year-olds with the help of AI”. 

So, in his view, the question of how to ensure students' academic integrity in this new landscape of AI is actually not the right question. “AI should matter to us all right now because it could make us rethink what we want from education itself”, he concluded.

Solving complex energy problems

Germán Méndez’s talk was more optimistic. “AI is doing a little revolution in the engineering world”, he started by saying. As a machine learning engineer, or “AI engineer”, he and his team are “creating the AI solutions to resolve and somehow have an answer for very big complex problems”, he said – such as producing and transmitting energy.

AI can help do things that even thousands of engineers working together could not, such as performing simulations that take all possible scenarios into consideration. Without deep learning algorithms, something like this would be nearly impossible.
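One common way machine learning achieves this in engineering is surrogate modelling: a network is trained on a limited number of expensive simulation runs, then queried cheaply across the whole parameter space. The toy sketch below illustrates the idea under invented assumptions; it shows the generic technique, not Siemens Energy's actual workflow, and expensive_simulation is a stand-in for a real physics solver:

```python
# Toy illustration of surrogate modelling: an "expensive" simulation is
# run a few hundred times, a small neural network learns to approximate
# it, and thousands of scenarios can then be screened in seconds.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_simulation(params):
    """Stand-in for a slow solver: maps (load, temperature) to efficiency."""
    load, temp = params[:, 0], params[:, 1]
    return np.sin(load) * np.exp(-0.1 * temp) + 0.05 * load * temp

# A few hundred "simulation runs" to train on
X_train = rng.uniform(0, 5, size=(300, 2))
y_train = expensive_simulation(X_train)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

# Sweep 100,000 scenarios almost instantly with the trained surrogate
X_sweep = rng.uniform(0, 5, size=(100_000, 2))
predicted = surrogate.predict(X_sweep)
best = X_sweep[np.argmax(predicted)]
print(f"Most promising scenario (load, temp): {best}")
```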

But there is also the problem of atmospheric emissions, and Méndez said he and his team are envisioning AI models to actually control and reduce the carbon footprint of fuel-consuming machines. “We can fulfill the promise of having a low [carbon] impact”, he contended, “by attempting to accelerate the creation of hardware that can capture the carbon.”

“There are a lot of drawbacks and challenges that have to be addressed”, Méndez admitted nonetheless. “One of these challenges is, can we rely 100% on this kind of algorithms?” Another challenge is the cost of implementing them, because training AI algorithms is highly costly. “One of my major concerns is the amount of energy that AI itself is using right now”, he pointed out, stressing the need to create more efficient hardware to train AIs.

The cost of AI is a very legitimate concern. In 2023, the overall training of AI algorithms consumed as much energy as the Netherlands, a country of almost 18 million people! And the computational requirements of AI are expected to keep growing steeply. How will we deal with these challenges? The answers are still an open matter. “But we have to be aware that AI is here to stay. We cannot go off the grid”, Méndez concluded.
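The scale Méndez worries about is easy to sanity-check with back-of-envelope arithmetic. Every number in the snippet below – GPU count, power draw, training time, datacentre overhead – is an illustrative assumption, not a figure from the talk:

```python
# Back-of-envelope arithmetic for the energy cost of training one large
# model. All numbers are illustrative assumptions, not figures from the
# talk or from any specific company.
n_gpus = 10_000          # accelerators running in parallel (assumed)
power_per_gpu_kw = 0.7   # ~700 W per high-end GPU (assumed)
training_days = 90       # assumed wall-clock training time
overhead = 1.5           # cooling/networking overhead, i.e. PUE (assumed)

energy_kwh = n_gpus * power_per_gpu_kw * 24 * training_days * overhead
print(f"One training run: ~{energy_kwh / 1e6:.0f} GWh")
# About 23 GWh under these assumptions, roughly the annual electricity
# use of several thousand European households, for a single training run.
```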

Will artists become obsolete?

Digital artist Rodolfo Quintas, the next speaker, opened with a question. “With all new technological revolutions, the question of whether art will become obsolete always arises”, he said. Throughout the long history of art, technology has challenged not only the way artists make art, but also copyright, royalties and distribution. It has changed the public’s access to art and led to new art movements, he pointed out. “We passed from a period of technology of art production to the 20th century, a time of technology of art reproduction [photography], to the present time of generative technology.”

And what is happening, Quintas said, is that we are now “radically suppressing something that is very important: the creative process, the time during which artists ask questions, deal with their emotions, make mistakes and come up with new thoughts. (...) My belief is that this will very soon generate a new mainstream current of ‘fast food art’, art that is created – sorry, imagined – and forgotten in two seconds”.

Quintas is also worried about the proliferation of deepfakes on the web. These nearly-perfect “descendants” of crudely photoshopped pictures are digitally manipulated images that can convincingly replace one person's face with that of another, or depict human subjects who do not exist in real life. Quintas has produced artwork to raise public awareness of how deepfakes, which are becoming ever more elaborate and difficult to tell apart from genuine pictures, can threaten our democracies by spreading misinformation.

And as an artist, he also thinks about copyright infringement. For instance, users of text-to-image AI generators such as DALL-E or Midjourney may be committing copyright violations simply because the images those companies use to train their algorithms are copyrighted.

But once again, echoing the other speakers, Quintas says that not everything about AI is negative and dangerous: “generative AI will also open new opportunities in the creative industry”, he said. In his personal view, “the question is not about how AI can replace what we already do well, like making paintings, images, or music, but about how AI can help artists apply creativity to data, something we cannot do with traditional mediums”.

A milestone: the European AI Act

The last invited speaker of the evening was the lawyer Iakovina Kindylidi. “AI is also changing the legal profession”, she said. “I've been researching and working on AI for about six years and I've been using more than one AI tool every day, multiple times per day for my work for the past year or so. But will it replace lawyers? I don't think so.”

But even so, its uses present many problems, Kindylidi added: biases in healthcare, the amount of energy needed for training large language models, increased plagiarism and copyright infringement, and the creation of elaborate deepfakes that spread misinformation and fake news. “AI, as a tool, can in reality aggravate our very human problems.”

Another very complex issue in the legal field, she explained, is the so-called black box problem. “In simple terms, when I have a machine learning or a deep learning AI model, I can see the input, I can see the output, but I don't know what happened in there. And if I don't know, I don't know whose fault it is or who is responsible for correcting any error. And most probably, I cannot correct the error either.”  
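Kindylidi's black-box point can be made concrete in a few lines of Python. In the generic sketch below (not a legal or medical system), the input and the output of a trained network are perfectly visible, while everything in between is thousands of learned numbers with no individually explainable meaning:

```python
# Generic illustration of the "black box" problem: input and output of
# a trained network are inspectable, but its internals are thousands of
# numbers that carry no individually attributable meaning.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000,
                      random_state=0).fit(X, y)

print("Input:", X[0][:5], "...")        # visible
print("Output:", model.predict(X[:1]))  # visible
n_params = (sum(w.size for w in model.coefs_)
            + sum(b.size for b in model.intercepts_))
print(f"In between: {n_params} learned weights")  # opaque
# If a prediction is wrong, no single weight tells us why: that is the
# attribution problem regulators now have to grapple with.
```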

She then turned to the European AI Act, very recently approved by the European Parliament. “It is the first comprehensive regulation on AI, not only for Europe, but for the world”, she pointed out. “This is a landmark regulation, marking a pivotal moment in shaping the future of AI”. Not only companies that provide AI, but also those that use it, will be subject to this regulation.

Of course, as Kindylidi noted, AI is already covered by general law: a crime, when committed with AI, will always be a crime, punishable by law. However, the AI Act was needed to clarify “the application of a legal framework that is very anthropocentric to something that is not human”, she said.

When the AI Act comes into force, which she believes will happen in late April or early May, companies producing or using AI will have some months to comply. “I will be very busy!”, Kindylidi joked.

“I think we will also need to raise our own AI awareness, our digital literacy, and evaluate our work and our societal and human values”, she concluded.

All together now

The evening ended with all the speakers sitting on chairs on the stage and answering questions asked by Creative Director Lydia Fettweis Neto. Questions such as: if an AI system fails, who do you think is responsible or accountable? How do you feel about the exponential growth of wellbeing and health apps? Do you think AI will make it easier or harder for us to manage our cognitive biases in the face of so much information? 

Germán Méndez’s answer to the last question – “Is there one benefit you would like AI to bring us in the future?” – was particularly interesting. He wished AI would allow us to “enjoy ourselves” during the time it will free up by sparing us tedious work. But this revealed a potential drawback: companies might respond to that extra time by demanding more productivity from their employees rather than letting them enjoy it or spend it on other meaningful things. “This is probably going to increase the pressure on us for delivering faster, better, or for reducing the time to market”, he speculated.

Should we slow things down and think first?

Right before this general discussion, Creative Director Eric DeWitt summed up the talks. “What will living with AI mean for us over the next few years?”, he asked. Human and societal disruption were a common theme all through the evening, he noted, “though for good or bad, it's not always clear. So I guess I'd like to take a little personal perspective”.

The thing is, he pondered, that all technical revolutions throughout History have created disruptions that required humans and society to adapt. “And so, while our brains haven't changed in the last 50,000 years, our culture has adapted again and again”, he added. But in just 10 years, the growth of AI has been exponential, whereas “our human nature fundamentally evolves quite slowly.”

DeWitt then wondered whether we are ready to adapt fast enough to the disruptions being brought about by AI. “As a neuroscientist who studies humans, human psychology, human decision-making, and group decision-making, I have some questions”, he said. “I don't know that we know enough yet about ourselves and our culture”, or about whether the mechanisms to adapt at the required pace are in place.

“We should [first] try to think about how to equally distribute the benefits that might come from AI, regardless of who happens to have been in control at the time that these things were developed or who happens to have the data. And we might try to slow things down” a bit, he concluded.

Text by Ana Gerschenfeld, Health & Science Writer of the Champalimaud Foundation.
Photos by Carla Emilie Pereira and Catarina Ramos.

 