10 December 2025

Building an AI that “sees” like we do

Artificial intelligence is typically engineered to surpass the capabilities of humans and other primates. Yet, in a compelling shift, researchers are now turning to biological vision for inspiration. A new study, recently published and presented at the NeurIPS conference, introduces EVNets – Early Vision Networks – an AI model designed to mimic the computations of the primate visual system and significantly improve robustness in image analysis.

Example of an image from the ImageNet-C database that is easily recognized by a human as a mushroom, shown alongside several common image corruptions that cause artificial intelligence algorithms to fail.

Vision is central to how humans navigate the world, whether recognizing a familiar face in a photo or driving to a family dinner. For Artificial Intelligence (AI), however, even minor visual distortions, such as changes in brightness or contrast, or other subtle perturbations, can cause object recognition algorithms to fail. Bridging this performance gap has been a major challenge in machine learning.
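
To make this failure mode concrete, the sketch below applies simplified ImageNet-C-style corruptions (brightness, contrast, Gaussian noise) to a single image and checks whether a pretrained CNN's top-1 prediction survives. The file name mushroom.jpg and the corruption severities are placeholders, and these functions are rough stand-ins for the actual ImageNet-C corruption suite.

```python
# Sketch: do small image corruptions flip a pretrained CNN's prediction?
import torch
import torchvision.transforms.functional as TF
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top1(img):
    """Top-1 ImageNet class index for a PIL image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return logits.argmax(dim=1).item()

img = Image.open("mushroom.jpg").convert("RGB")  # placeholder path

# Simplified stand-ins for three ImageNet-C corruption types.
corruptions = {
    "brightness": lambda im: TF.adjust_brightness(im, 1.8),
    "contrast":   lambda im: TF.adjust_contrast(im, 0.3),
    "noise":      lambda im: TF.to_pil_image(
        (TF.to_tensor(im) + 0.1 * torch.randn(3, im.height, im.width)).clamp(0, 1)),
}

clean = top1(img)
for name, corrupt in corruptions.items():
    pred = top1(corrupt(img))
    print(f"{name:>10}: prediction {'unchanged' if pred == clean else 'changed'}")
```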

Responding to this need, researchers from INESC-ID, Instituto Superior Técnico (IST), and the Champalimaud Foundation (CF) in Lisbon, Portugal, developed EVNets – Early Vision Networks – a biologically grounded architecture that better reflects how early visual processing occurs in the primate brain.

The work, developed by Lucas Piper and Arlindo L. Oliveira (INESC-ID and IST), and Tiago Marques (CF), was recently published and presented at the NeurIPS 2025 conference in San Diego, California, one of the world's most prestigious, competitive, and influential conferences in Machine Learning and AI.

Designing algorithms with biological inspiration

AI-based object recognition has advanced rapidly over the past decade, spearheaded by pioneers such as Geoffrey Hinton, recipient of the 2024 Nobel Prize in Physics. Yet despite these breakthroughs, conventional AI approaches remain significantly more fragile than biological vision. The scientific community has therefore converged on two main strategies to overcome this limitation: 

1. Building increasingly large models that require massive datasets and computational power – raising environmental and scalability concerns;

2. Drawing inspiration from neural processes in animals and humans, and incorporating biological mechanisms into algorithmic design. 

“We decided to take the second approach, building biologically inspired models that combine neuroscientific computations into convolutional neural networks (CNNs), a popular architecture of AI models for vision,” explained Tiago Marques.

This research builds on Tiago’s earlier work at MIT, published and presented at NeurIPS in 2020, in which he introduced the VOneBlock, a CNN front-end module designed to emulate the primate primary visual cortex (V1). Expanding this framework, Lucas Piper developed EVNets by combining the VOneBlock with a new Subcortical-Block, modeled after key computations occurring in the retina and the lateral geniculate nucleus, two important structures that form the pathway between the eye and the visual cortex. This added stage helps the system deal with visual distortions in a more human-like way, substantially increasing its overall robustness.
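
For readers who want a concrete picture, here is a minimal PyTorch sketch of the pipeline described above: a subcortical front-end (center-surround filtering with a simple contrast normalization, standing in for retina and LGN computations) feeding a fixed Gabor filter bank (loosely in the spirit of the VOneBlock's V1 model) and then a conventional backbone. The actual EVNets blocks are parameterized from primate physiology; all class names, filter sizes, and constants below are illustrative assumptions, not the published implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size, sigma):
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

class SubcorticalBlock(nn.Module):
    """Center-surround (difference-of-Gaussians) filtering with divisive
    contrast normalization -- an illustrative stand-in for retina/LGN."""
    def __init__(self, size=9, sigma_c=1.0, sigma_s=3.0):
        super().__init__()
        dog = gaussian_kernel(size, sigma_c) - gaussian_kernel(size, sigma_s)
        # One ON-center filter per RGB channel (depthwise convolution).
        self.register_buffer("weight", dog.expand(3, 1, size, size).clone())
    def forward(self, x):
        y = F.conv2d(x, self.weight, padding=self.weight.shape[-1] // 2, groups=3)
        # Divisive normalization by local contrast energy.
        norm = F.avg_pool2d(y ** 2, 5, stride=1, padding=2).sqrt()
        return y / (norm + 0.1)

class GaborBlock(nn.Module):
    """Fixed Gabor filter bank at several orientations -- a simplified
    nod to the VOneBlock's model of V1 simple cells."""
    def __init__(self, n_orient=8, size=15, sigma=3.0, freq=0.2):
        super().__init__()
        ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
        yy, xx = torch.meshgrid(ax, ax, indexing="ij")
        filters = []
        for i in range(n_orient):
            theta = math.pi * i / n_orient
            xr = xx * math.cos(theta) + yy * math.sin(theta)
            env = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
            filters.append(env * torch.cos(2 * math.pi * freq * xr))
        w = torch.stack(filters).unsqueeze(1).repeat(1, 3, 1, 1)
        self.register_buffer("weight", w)
    def forward(self, x):
        return F.relu(F.conv2d(x, self.weight, padding=self.weight.shape[-1] // 2))

# Front-end feeding a backbone (here a toy head stands in for a full CNN).
evnet = nn.Sequential(
    SubcorticalBlock(),
    GaborBlock(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1000),
)
print(evnet(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```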

Interestingly, EVNets were not only better at performing computer vision tasks, but they were also more closely aligned with human vision! To evaluate biological similarity, the team used established benchmarking tools such as the Brain-Score suite, which assesses how closely computational models mirror primate visual processing. EVNets demonstrated marked improvements across these benchmarks, bringing AI models closer to the biological systems they aim to emulate.
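
A core metric in this style of benchmarking is neural predictivity: fit a linear map from model activations to recorded neuron responses on a set of stimuli, then correlate the predictions with held-out responses. The snippet below reimplements that idea on random placeholder data to show the mechanics; it does not use the actual Brain-Score API, and the array sizes are arbitrary.

```python
# Illustrative neural-predictivity score on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_neurons = 200, 512, 50

features = rng.standard_normal((n_stimuli, n_features))    # model activations
responses = features @ rng.standard_normal((n_features, n_neurons)) * 0.1
responses += rng.standard_normal((n_stimuli, n_neurons))   # neural "noise"

X_tr, X_te, y_tr, y_te = train_test_split(features, responses,
                                          test_size=0.25, random_state=0)
pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)

# Median Pearson correlation across neurons serves as the score.
corrs = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(n_neurons)]
print(f"neural predictivity (median r): {np.median(corrs):.2f}")
```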

AI gives back to humans

A major advantage of these biologically grounded AI algorithms is their interpretability. As concerns grow over the use of opaque “black-box” models, the ability to understand the inner workings of an algorithm is increasingly important. “We want to develop models that we can comprehend and explain,” said Lucas. “If these algorithms, such as ours, are aligned with how the human brain works, we are already in a position that makes them inherently more understandable.”

By modeling biological processes, these algorithms may help researchers explore the very systems that inspired their creation, forming a virtuous cycle between neuroscience and AI.

Beyond understanding the brain, EVNets can also be used for other purposes. One such application is already underway within the Breast Cancer Research Program (BCRP) at the Champalimaud Foundation, where Tiago Marques is co-leading the Digital Surgery Lab with João Santinha, a medical imaging and AI researcher, and Pedro Gouveia, a breast cancer surgeon. The aim of this new project is to study whether EVNets can analyze scans obtained from machines produced by different manufacturers, long a challenge for conventional AI models. If the improvements in robustness and accuracy observed in computer vision tasks translate to medical imaging problems, EVNets could ultimately enhance diagnostic support and patient care.
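
In machine learning terms, the cross-manufacturer question is a domain-shift evaluation: score the same model separately on scans grouped by scanner vendor and compare the gaps. The helper below is a hypothetical sketch of such an evaluation; the model and the (image, label, vendor) record format are assumptions for illustration, not part of the actual BCRP project code.

```python
# Sketch: per-vendor accuracy as a simple measure of domain robustness.
from collections import defaultdict
import torch

@torch.no_grad()
def accuracy_by_vendor(model, records):
    """records: iterable of (image tensor [3, H, W], int label, vendor str)."""
    hits, totals = defaultdict(int), defaultdict(int)
    model.eval()
    for image, label, vendor in records:
        pred = model(image.unsqueeze(0)).argmax(dim=1).item()
        hits[vendor] += int(pred == label)
        totals[vendor] += 1
    # A robust model shows small accuracy gaps between vendors.
    return {v: hits[v] / totals[v] for v in totals}
```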

Original Paper here.
