James M. Rehg

Founder Professor

University of Illinois Urbana-Champaign

Biography

James M. Rehg (pronounced “ray”) is a Founder Professor of Computer Science and Industrial and Enterprise Systems Engineering at the University of Illinois Urbana-Champaign. Previously, he was a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he co-directed the Center for Health Analytics and Informatics (CHAI). He received his Ph.D. from Carnegie Mellon University in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005, BMVC 2010, Mobihealth 2014, and Face and Gesture 2015, a Distinguished Paper Award from ACM IMWUT, and a Method of the Year award from the journal Nature Methods. Dr. Rehg served as General Co-Chair for CVPR 2009 and Program Co-Chair for CVPR 2017. He has authored more than 200 peer-reviewed scientific papers and holds 26 issued US patents.

Introduction to the Rehg Lab

We conduct basic research in computer vision and machine learning and work in a number of interdisciplinary areas: developmental and social psychology, autism research, mobile health, and robotics. The study of human social and cognitive behavior is a cross-cutting theme. We are developing novel methods for measuring behavior in real-life settings, along with computational models that connect health-related behaviors to health outcomes in order to enable novel forms of treatment. We are also creating machine learning methods inspired by child development and investigating biologically inspired approaches to robot navigation and control.

Prospective Students: If you are interested in joining our group and are not currently at UIUC, please apply directly to the university. For current/incoming UIUC students, please fill out this form.

People


Principal Investigator

James M. Rehg

Founder Professor


Lab Members

Bikram Boote

Research Engineer

Max Xu

ML PhD

Anh Thai

CS PhD

Bolin Lai

ML PhD

Wenqi Jia

CS PhD

Xu Cao

CS PhD

Xiang Li

CS PhD


Alumni

Projects

AutoRally

Autonomous driving

Developmental Machine Learning

Mobile and Computational Health

Publications

3x2: 3D Object Part Segmentation by 2D Semantic Correspondences

ECCV 2024

LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning

ECCV 2024

Listen to Look into the Future: Audio-Visual Egocentric Gaze Anticipation

ECCV 2024

MAPLM: A Real-World Large-Scale Vision-Language Dataset for Map and Traffic Scene Understanding

CVPR 2024

PointInfinity: Resolution-Invariant Point Diffusion Models

CVPR 2024

Modeling Multimodal Social Interactions: New Challenges and Baselines with Densely Aligned Representations

CVPR 2024 (Oral, Acceptance rate 0.8%)

LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs

CVPR 2024

Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives

CVPR 2024 (Oral, Acceptance rate 0.8%)

RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models

CVPR 2024 (Highlight, Acceptance rate 3.6%)

The Audio-Visual Conversational Graph: From an Egocentric-Exocentric Perspective

CVPR 2024

ZeroShape: Regression-based Zero-shot Shape Reconstruction

CVPR 2024

REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning

ICLR 2024

Low-shot Object Learning with Mutual Exclusivity Bias

NeurIPS 2023

In the Eye of Transformer: Global–Local Correlation for Egocentric Gaze Estimation and Beyond

IJCV 2023

Werewolf Among Us: A Multimodal Dataset for Modeling Persuasion Behaviors in Social Deduction Games

ACL Findings 2023

Datasets

Georgia Tech Egocentric Activity Datasets

Toys4K 3D Object Dataset

CVPR 2021

4,000 3D object instances from 105 categories of developmentally plausible objects

Sponsors

NIH NIBIB P41-EB028242: mHealth Center for Discovery, Optimization, and Translation of Temporally-Precise Interventions (mDOT)

NSF OIA 1936970: C-Accel Phase 1: Empowering Neurodiverse Populations for Employment through Inclusion AI and Innovation Science

NSF CNS 1823201: CRI: mResearch: A platform for Reproducible and Extensible Mobile Sensor Big Data Research

NIH NIMH R01-MH114999: Data-Driven Multidimensional Modeling of Nonverbal Communication in Typical and Atypical Development

Contact