James M. Rehg

Professor

Georgia Institute of Technology

Biography

James M. Rehg (pronounced “ray”) is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he co-directs the Center for Health Analytics and Informatics (CHAI). He received his Ph.D. from Carnegie Mellon University in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005, BMVC 2010, MobiHealth 2014, and Face and Gesture 2015, a 2018 Distinguished Paper Award from ACM IMWUT, and a Method of the Year award from Nature Methods. Dr. Rehg served as General co-Chair for CVPR 2009 and Program co-Chair for CVPR 2017. He has authored more than 200 peer-reviewed scientific papers and holds 26 issued US patents.

Introduction to the Rehg Lab

We conduct basic research in computer vision and machine learning and work in a number of interdisciplinary areas: developmental and social psychology, autism research, mobile health, and robotics. The study of human social and cognitive behavior is a cross-cutting theme. We are developing novel methods for measuring behavior in real-life settings, along with computational models that connect health-related behaviors to health outcomes in order to enable novel forms of treatment. We are also creating machine learning methods inspired by child development and investigating biologically inspired approaches to robot navigation and control.

People


Principal Investigators

James M. Rehg

Professor


Graduate Students

Miao Liu

Robotics PhD

Max Xu

Machine Learning PhD

Anh Thai

CS PhD

Projects

AutoRally

Autonomous driving

Behavioral Imaging

Developmental Machine Learning

Mobile and Computational Health

Publications

In the Eye of the Beholder: Gaze and Actions in First Person Video

IEEE Transactions on Pattern Analysis and Machine Intelligence

Where Are You? Localization from Embodied Dialog

EMNLP 2020

Attention Distillation for Learning Video Representations

BMVC 2020 (oral, acceptance rate 5.0%)

Forecasting Human-Object Interaction: Joint Prediction of Motor Attention and Egocentric Activity

ECCV 2020 (oral, acceptance rate 2.0%)

Tripping through time: Efficient Localization of Activities in Videos

BMVC 2020

Detecting Attended Visual Targets in Video

CVPR 2020

3D Reconstruction of Novel Object Shapes from Single Images

arXiv preprint

Approximate Inverse Reinforcement Learning from Vision-based Imitation Learning

arXiv preprint

Locally Weighted Regression Pseudo-Rehearsal for Adaptive Model Predictive Control

CoRL 2019

Classification of Decompensated Heart Failure from Clinical and Home Ballistocardiography

IEEE Transactions on Biomedical Engineering

Incremental Object Learning from Contiguous Views

CVPR 2019 (oral, Best Paper Finalist)

In the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Video

ECCV 2018

SyncWISE: Window Induced Shift Estimation for Synchronization of Video and Accelerometry from Wearable Sensors

IMWUT 2020

Watching the TV Watchers

IMWUT 2018

Datasets

Georgia Tech Egocentric Activity Datasets

Sponsors

NIH NIBIB P41-EB028242: mHealth Center for Discovery, Optimization, and Translation of Temporally-Precise Interventions (mDOT)

NSF OIA 1936970: C-Accel Phase 1: Empowering Neurodiverse Populations for Employment through Inclusion AI and Innovation Science

NSF CNS 1823201: CRI: mResearch: A platform for Reproducible and Extensible Mobile Sensor Big Data Research

NIH NIMH R01-MH114999: Data-Driven Multidimensional Modeling of Nonverbal Communication in Typical and Atypical Development

Contact

  • rehg@gatech.edu
  • CODA Building, 756 W Peachtree St NW, Georgia Institute of Technology, Atlanta, GA 30308
  • CODA 15th floor, office 1550-B