Efstratios Gavves · Associate Professor · Physical AI · University of Amsterdam

01 World Models & Robot Learning
Digital Twins · Embodied AI
  • DreMa: Compositional World Models · ICLR 2025
  • CaPo: Cooperative Plan Optimization · ICLR 2025
  • Graph Switching Dynamical Systems · ICML 2023
02 Causal Vision
Causal Representations · Interventions
  • BISCUIT: Causal Representation Learning · UAI 2023
  • CITRIS: Causal Identifiability · ICML 2022
  • ENCO: Neural Causal Discovery · ICLR 2022
03 Mechanisms & Safety
Physics · Interpretability · Safety
  • Mechanistic Neural Networks · ICML 2024
  • Mechanistic Interpretability for AI Safety · TMLR 2024
  • Modulated Neural ODEs · NeurIPS 2023
04 Physical AI for Biomedical
Brain Stroke · Cancer · Neural Fields
  • Physics-Informed Neural Fields for CT Perfusion · MIDL 2024 ★
  • Spatio-temporal Physics-Informed Learning · MeDIA 2023
  • VISA: Video Object Segmentation · ECCV 2024
About

Building AI that understands the physical world.

I am an Associate Professor at the University of Amsterdam and co-founder & Chief Science Officer of Calli Labs. My research centers on Learning Dynamics in Computer Vision, with Physical AI as my north star — algorithms that understand cause-and-effect and physical dynamics, enabling robust embodied agents to act safely and reliably.

Awarded the ERC Starting Grant and NWO VIDI Career Grant, I direct the QUVA Lab and POP-AART Lab at UvA. My work on causal representations, world models, and mechanistic interpretability forms the theoretical foundation for the next generation of Physical AI.

14K+ Citations
48 h-index
137 Publications
20+ PhD Graduates
Manifesto

The physical world is not a dataset.

The dominant paradigm of modern AI has achieved remarkable things by treating intelligence as a function from observations to outputs, refined through oceans of data. But the physical world is causal. It is dynamic. It resists manipulation, conserves energy, flows according to differential equations older than any neural network. And crucially: it acts back.

"True intelligence requires not just pattern recognition, but an understanding of why things happen, what causes what, and what would happen if."

I believe the next frontier of AI is not wider models but deeper understanding — machines that model cause and effect, that embed governing physical laws into their architecture, that build compressed, interpretable representations of how the world actually works. Algorithms that are not only accurate but controllable, auditable, and safe.

This is Physical AI: not AI applied to physical problems, but AI that is physically grounded — in causality, in mechanism, in the language of dynamics. My research advances this vision through four interlocking programs, each essential to the larger whole:

01
Research Thrust

World Models & Robot Learning

Learnable digital twins that simulate the world, enabling robots and embodied agents to reason about the consequences of actions before executing them. Our compositional approach generalizes and transfers to new tasks even from scarce real-world data.

ICLR 2025

Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination

Barcellona, Zadaianchuk, Allegro, Papa, Gavves

Read paper →
ICLR 2025

CaPo: Cooperative Plan Optimization for Efficient Embodied Multi-Agent Cooperation

Liu, Zhou, Du, Tan, Snoek, Sonke, Gavves

Read paper →
ICML 2023

Graph Switching Dynamical Systems

Liu, Magliacane, Kofinas, Gavves

GitHub →
NeurIPS 2023

Latent Field Discovery in Interacting Dynamical Systems with Neural Fields

Kofinas, Bekkers, Nagaraja, Gavves

GitHub →
CVPR 2016

Siamese Instance Search for Tracking

Tao, Gavves, Smeulders

1,467 citations · paradigm-shifting

Read paper →
02
Research Thrust

Causal Vision

Discovering causal structure in visual data through temporal interventions and interaction signals. Our representations identify true cause-and-effect relationships, enabling agents that generalize robustly to unseen scenarios far beyond the training distribution.

UAI 2023

BISCUIT: Causal Representation Learning from Binary Interactions

Lippe, Magliacane, Löwe, Asano, Cohen, Gavves

GitHub →
ICML 2022

CITRIS: Causal Identifiability from Temporal Interventions in Latent Spaces

Lippe, Magliacane, Löwe, Asano, Cohen, Gavves

Read paper →
ICLR 2022

ENCO: Efficient Neural Causal Discovery under Interventions

Lippe, Cohen, Gavves

Read paper →
ICLR 2021

Efficient Neural Causal Discovery without Acyclicity Constraints

Lippe, Cohen, Gavves

Read paper →
03
Research Thrust

Mechanisms & Safety

Neural networks that incorporate physical governing mechanisms — conservation laws, differential equations, structured priors. Controllable, auditable, interpretable AI systems that are not merely accurate but fundamentally trustworthy.

ICML 2024

Mechanistic Neural Networks for Scientific Machine Learning

Pervez, Locatello, Gavves

Read paper →
TMLR 2024

Mechanistic Interpretability for AI Safety — A Review

Bereska, Gavves

300+ Citations in One Year
Read paper →
NeurIPS 2023

Modulated Neural ODEs

Auzina, Yildiz, Magliacane, Bethge, Gavves

GitHub →
CVPR 2016 · Oral

Dynamic Image Networks for Action Recognition

Bilen, Fernando, Gavves, Vedaldi, Gould

1,067 citations combined

Read paper →
04
Research Thrust

Physical AI for Biomedical

Applying physics-informed learning to medical imaging and biomedical data — from CT perfusion in acute ischemic stroke to radiation therapy planning for cancer. Algorithms that not only predict but respect the physical laws governing the body.

MIDL 2024

Accelerating Physics-Informed Neural Fields for Fast CT Perfusion Analysis in Acute Ischemic Stroke

de Vries, van Herten, Hoving, Išgum, Emmer, Majoie, Marquering, Gavves

Best Paper Runner-Up
Read paper →
MeDIA 2023

Spatio-temporal Physics-Informed Learning for CT Perfusion

de Vries, van Herten, Hoving, Išgum, Emmer, Majoie, Marquering, Gavves

arXiv →
ECCV 2024

VISA: Reasoning Video Object Segmentation via Large Language Models

Yan, Wang, Yan, Jiang, Hu, Kang, Xie, Gavves

Read paper →
Partnerships & Collaborations

Industry
Research Labs

Long-term research collaborations bridging fundamental AI with industrial-scale deployment.

TOYOTA
Foundation Robotics
Toyota Research Institute

€1M collaboration on transferring world models research to real-world robot learning systems. Joint work on compositional scene understanding and generalizable manipulation.

NXAI
World Models at Scale
Next Generation AI Austria

Ongoing research dialogue on foundation models and the future of general-purpose world models. Invited speaker at NXAI@NeurIPS on world models and Physical AI.

Qualcomm
QUVA 2.0 Lab · UvA
Qualcomm Research

Joint research laboratory at the University of Amsterdam exploring efficient, scalable AI for computer vision and embodied intelligence. Co-supervising 3 PhD students.

elekta
POP-AART Lab · NKI
Elekta & Netherlands Cancer Institute

AI-driven radiation therapy optimisation and oncological imaging. Joint research lab with NKI, co-supervising 3 PhD students on physics-informed learning for cancer treatment.

Recognition

Awards & Grants

2020
ERC Starting Grant — "Expectational Visual Artificial Intelligence"
European Research Council · Ranked in the top 3% of all candidates across all disciplines
ERC
2020
NWO VIDI Junior Scientist Career Grant — "TIMING: Learning Time in Visual Recognition"
Netherlands Organisation for Scientific Research · Prestigious personal career grant
NWO VIDI
2019–
ELLIS Scholar
European Laboratory for Learning and Intelligent Systems · Member since founding
ELLIS
2024
Best Paper Runner-Up · MIDL 2024
Accelerating physics-informed neural fields for fast CT perfusion analysis in acute ischemic stroke
Best Paper
2025
Toyota Research Foundation Grant — Foundation Robotics
€1,000,000 · Transferring world model research to industrial robotics processes · 20% acceptance rate
€1M
2025
ELLIOT Consortium — European Large Open Multimodal Foundation Models
HORIZON-CL4-2024 · ELLIS-sponsored · €30,000,000 awarded for scalable, robust generalisation
€30M HORIZON
Building the Future

Entrepreneurship

AI Creativity
Co-Founder & Chief Science Officer

Calli Labs

Spun out of the University of Amsterdam and Gunpowder Sky (a leading Hollywood production studio). Building AI for augmenting creativity — transforming how artists, creators, and filmmakers collaborate with machine intelligence.

Focus: Video understanding, world models, creative AI applications

Visit Calli Labs →
Selected Recent Work

Recent Publications

View full list on Google Scholar →
Updates

News

2026

Advisor in NWO Round Table CS — Selected as advisor to the strategic advisory board for Computer Science of the Netherlands Organisation for Scientific Research (NWO).

2026

Member of IPN Network — Joined the IPN (ICT Research Netherlands) network, collaborating on national AI and computing research initiatives.

Jan 2026

Summit with 10+ leading European robotics companies to discuss the future of Physical AI and possible collaborations.

2025

Calli Labs co-founded — AI for augmenting creativity, spun out of UvA and Gunpowder Sky (Hollywood production studio).

2025

Greeks in AI Symposium 2025 — 500+ participants. Founded and chaired the inaugural event connecting Greek AI researchers across academia and industry.

2024

ELLIOT Consortium granted €30M (HORIZON-CL4-2024) — pan-European initiative on open multimodal foundation models.

2024

Best Paper Runner-Up at MIDL 2024 for physics-informed neural fields for CT perfusion analysis in acute ischemic stroke.

2024

PhD cum laude awarded to Phillip Lippe — a rare distinction at the University of Amsterdam — for pioneering causal representations in embodied AI.

2024

Elected Programme Director of the BSc AI at the University of Amsterdam. Mission: modernise the curriculum for the Physical AI era.