


Christian Holz is an Associate Professor at ETH Zurich in the Computer Science Department, where he leads the Sensing, Interaction and Perception Lab. His research focuses on input decoding and interaction in mixed reality settings as well as continuous healthcare monitoring for predictive diagnostics and personalized medicine. Before joining ETH Zurich, Christian Holz was a principal researcher at Microsoft Research, Redmond.


Thomas Hofmann has been a full professor at ETH Zurich since April 2014. He received his Ph.D. in Computer Science from the University of Bonn and subsequently was a postdoctoral fellow at MIT (CBCL, AI Lab) and UC Berkeley (EECS, ICSI). He joined the faculty of Brown University in 1999 and was granted tenure in 2004. He then moved to TU Darmstadt as a full professor and became Director of the IPSI Fraunhofer Institute. In 2006, he moved into the private sector and served as Director of Engineering at Google, joining the leadership of the Zurich R&D center. He left Google in early 2014 to resume his academic career.
In addition, Thomas has been active as an entrepreneur. He co-founded Recommind in 2000, a company that developed e-discovery solutions and was acquired by OpenText in 2016. More recently, he co-founded 1plusX, a Swiss company providing global marketing technology to publishers, media companies, and marketers. He is still active at 1plusX as its Chief Scientist and a member of the board.


Thabo Beeler is currently a senior staff research scientist at Google, where he heads the Syntec team within AR Perception. The team works on digital humans in the context of virtual and augmented reality, focusing on capture, reconstruction, appearance acquisition, generative modeling, and synthesis. Prior to that, Thabo Beeler was a research scientist at Disney Research | Studios, where he built up the Capture and Effects group, focusing on digital humans for film.


Federico Tombari is a Director of Research at Google Zurich, Switzerland, where he leads an applied research team in Computer Vision and Machine Learning across the US, Switzerland, and Germany. He is also affiliated with the Faculty of Computer Science at TUM as a lecturer (Privatdozent) at the CAMP Chair. Federico Tombari's research spans different aspects of computer vision and machine learning, with an emphasis on 3D computer vision (e.g., scene understanding, 3D object recognition, 3D reconstruction, SLAM). The fields of application of his research are mainly robotics, augmented reality, autonomous driving, and healthcare. He is currently particularly excited about unsupervised learning for visual data, Large Multimodal Models, (Neural) Radiance Fields, and scene graphs for scene understanding.


Fiorella Meyer is responsible for the administration of the ETHAR project. As a project administrator, she brings expertise in finance, personnel administration, and secretarial management across various sectors. Since 2006, she has been part of ETH Zurich, contributing extensively with her skills to various groups and departments. Her education includes studies in business administration, as well as accounting and HR diplomas.


Bernd Bickel is a Full Professor of Computational Design, leading the Computational Design Laboratory in the Department of Architecture and equally affiliated with the Department of Civil, Environmental and Geomatic Engineering (D-BAUG) at ETH Zurich. He conducts research into digital technologies with an emphasis on artificial intelligence and extended reality. He has a particular interest in computer graphics at the intersection of robotics, computer vision, machine learning, materials science, and digital fabrication. The aim of his research is to create new and efficient ways of modeling, simulating, and fabricating digital content. His work has won multiple awards, including an Oscar (Academy of Motion Picture Arts and Sciences Technical Achievement Award) in 2019 and an ERC Starting Grant in 2016.


Stelian Coros leads the Computational Robotics Lab (CRL). The lab focuses on developing theoretical foundations that shape the way future generations of robots are made, how they operate in complex environments, and how they interact with us. In this quest, the lab defines Computational Robotics as the fusion of simulation, algorithms, and data. Essentially, the lab works on formalizing advanced simulation models to equip robots with an inherent understanding of physical laws. Building on these models, it designs practical algorithms to address motion planning, motion control, and computational design challenges. Additionally, whenever feasible, data is leveraged to efficiently learn solution spaces, create accurate digital twins through real-to-simulation methodologies, and allow humans to teach robots new skills.


Marco Hutter is a Professor of Robotic Systems and the Head of the Center for Robotics at ETH Zurich. His research interests focus on the development of novel machines and their intelligence for use in harsh and demanding environments. Together with his team, he has developed a range of walking robots, mobile manipulators, and autonomous excavators utilized in industrial inspection, construction and forestry, as household aids, and even for extraterrestrial research. Additionally, Marco is a co-founder of several ETH startups, such as ANYbotics AG and Gravis Robotics AG, which commercialize legged robots and autonomous construction equipment.


Marc Pollefeys is a Professor of Computer Science at ETH Zurich, where he leads the Computer Vision and Geometry Group (CVG). He is also the Director of the Microsoft Mixed Reality and AI Lab in Zurich, where he works with a team of scientists and engineers to develop advanced perception capabilities for HoloLens and Mixed Reality. He was elected Fellow of the IEEE in 2012. He obtained his PhD from KU Leuven in 1999 and was a professor at UNC Chapel Hill before joining ETH Zurich.


Konrad Schindler is a Full Professor in the Department of Civil, Environmental and Geomatic Engineering at ETH Zurich, where he leads the Institute of Geodesy and Photogrammetry. His research focuses on photogrammetry, remote sensing, computer vision, and image understanding, advancing methods for the analysis of geospatial data and imaging technologies.


Olga Sorkine-Hornung’s research focuses on computer graphics, geometric modeling, and geometry processing. She is interested in the theoretical foundations and practical algorithms for digital content creation, such as shape representation and editing, artistic modeling techniques, digital fabrication, computer animation, and image and video processing. She works on fundamental problems in digital geometry processing, including parameterization of discrete surfaces and compression of geometric data. Olga Sorkine leads the Interactive Geometry Lab (IGL) at ETH Zurich.


Siyu Tang leads the Computer Vision and Learning Group (VLG) at the Institute of Visual Computing. Her research focuses on computer vision and machine learning, specializing in perceiving and modeling humans. Her group studies computational models that enable machines to perceive and analyze human pose, motion, and activities from visual input. The group leverages machine learning and optimization techniques to build statistical models of humans and their behaviors. Their goal is to advance algorithmic foundations of scalable and reliable human digitalization, enabling a broad class of real-world applications.




















Dr. Dominik Narnhofer is a senior researcher in computer vision and machine learning at the Photogrammetry and Remote Sensing Group (D-BAUG) at ETH Zurich.








Björn Braun is a PhD student at SIPLAB, ETH Zurich. He specializes in deep learning, computer vision, and non-contact physiological sensing. His research focuses on developing novel deep learning models that predict a person's emotions and physiological signals, such as heart rate, from egocentric systems (e.g., Project Aria glasses), videos, and wearable devices.







