<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts | Jim Rehg</title><link>https://rehg.org/post/</link><atom:link href="https://rehg.org/post/index.xml" rel="self" type="application/rss+xml"/><description>Posts</description><generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><image><url>https://rehg.org/images/icon_hu437e05b1b543de801281f81b261d0785_1145_512x512_fill_lanczos_center_2.png</url><title>Posts</title><link>https://rehg.org/post/</link></image><item><title>Feb 2026: Six main conference papers and two findings papers accepted at CVPR 2026</title><link>https://rehg.org/post/2026-03-16-cvpr-announce/</link><pubDate>Sat, 28 Feb 2026 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2026-03-16-cvpr-announce/</guid><description>&lt;p>Six papers accepted for publication at the &lt;em>IEEE/CVF Conference on Computer Vision and Pattern Recognition&lt;/em> (
&lt;a href="https://cvpr.thecvf.com/" target="_blank" rel="noopener">CVPR 2026&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>B. Lai, X. Wang, S. Rambhatla, J. M. Rehg, Z. Kira, R. Girdhar, I. Misra. Toward Diffusible High-Dimensional Latent Spaces: A Frequency Perspective.&lt;/li>
&lt;li>X. Li, S. Deng, B. Lai, W. Pian, J. M. Rehg, Y. Tian. Omni-MMSI: Toward Identity-attributed Social Interaction Understanding.&lt;/li>
&lt;li>Y. Chi, X. Li, Z. Huang, J. M. Rehg. Vinedresser3D: Towards Agentic Text-guided 3D Editing.&lt;/li>
&lt;li>Z. Huang, X. Li, Z. Lv, J. M. Rehg. How Much 3D Do Video Foundation Models Encode?&lt;/li>
&lt;li>X. Cao, H. Yang, V. Gunda, Z. Zhou, T. Xu, A. Kowdle, I. Kim, J. M. Rehg. Gaze Target Estimation with Concepts.&lt;/li>
&lt;li>F. Ryan, I. Ananthabhotla, Y. Qian, J. Hoffman, J. M. Rehg, V. K. Ithapu, C. Murdock. Forecasting 3D Scanpaths in Egocentric Video.&lt;/li>
&lt;/ul>
&lt;p>Two Findings papers accepted at CVPR 2026:&lt;/p>
&lt;ul>
&lt;li>W. Jia, B. Lai, M. Liu, D. Xu, J. M. Rehg. Learning Predictive Visuomotor Coordination.&lt;/li>
&lt;li>B. Boote, J. Kim, O. Kara, S. Lee, J. M. Rehg. CoherentHand: Temporally Consistent 3D Hand Trajectory Synthesis with Semantic Motion Priors.&lt;/li>
&lt;/ul></description></item><item><title>Jan 2026: Two abstracts accepted for oral presentation at INSAR 2026</title><link>https://rehg.org/post/2026-01-30-insar-oral/</link><pubDate>Fri, 30 Jan 2026 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2026-01-30-insar-oral/</guid><description>&lt;p>Two abstracts accepted for oral presentation at the &lt;em>International Society for Autism Research Annual Meeting&lt;/em> (
&lt;a href="https://www.autism-insar.org/" target="_blank" rel="noopener">INSAR 2026&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>X. Cao, V. Gunda, F. Ryan, H. Yang, H. Chua, and J. M. Rehg. AI Models Facilitate the Automated Measurement of Social Gaze During Naturalistic Interactions.&lt;/li>
&lt;li>N. Brady, J. McDaniel, O. Boorom, A. Radhakrishnan, S. Martell, A. Southerland, J. M. Rehg, and A. Rozga. AI Models Facilitate the Automated Measurement of Social Gaze During Naturalistic Interactions.&lt;/li>
&lt;/ul></description></item><item><title>Jan 2026: One paper accepted at ICLR 2026</title><link>https://rehg.org/post/2026-01-30-iclr-diffvax/</link><pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2026-01-30-iclr-diffvax/</guid><description>&lt;p>One paper accepted for publication at the &lt;em>International Conference on Learning Representations&lt;/em> (
&lt;a href="https://iclr.cc/" target="_blank" rel="noopener">ICLR 2026&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>O. Kara, T. C. Ozden, and J. M. Rehg. DiffVax: Optimization-Free Image Immunization Against Diffusion-Based Editing (
&lt;a href="https://diffvax.github.io" target="_blank" rel="noopener">Project Page&lt;/a>)&lt;/li>
&lt;/ul></description></item><item><title>Dec 2025: CVEU Workshop proposal accepted at CVPR 2026</title><link>https://rehg.org/post/2026-01-30-cvpr-workshop/</link><pubDate>Mon, 22 Dec 2025 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2026-01-30-cvpr-workshop/</guid><description>&lt;p>Our workshop proposal led by O. Kara and J. Kim for the &lt;em>9th Workshop on AI for Creative Visual Content Generation, Editing, and Understanding&lt;/em> (CVEU) has been accepted at
&lt;a href="https://cvpr.thecvf.com/" target="_blank" rel="noopener">CVPR 2026&lt;/a>. This follows O. Kara&amp;rsquo;s successful co-organization of this same workshop at CVPR 2025.&lt;/p></description></item><item><title>Sep 2025: Four papers accepted at NeurIPS 2025</title><link>https://rehg.org/post/2025-09-23-neurips-announce/</link><pubDate>Tue, 23 Sep 2025 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2025-09-23-neurips-announce/</guid><description>&lt;p>Four papers accepted for publication at the &lt;em>39th Conference on Neural Information Processing Systems&lt;/em> (
&lt;a href="https://neurips.cc/" target="_blank" rel="noopener">NeurIPS 2025&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>&lt;strong>(spotlight)&lt;/strong> X. Li, Z. Wang, Z. Huang, and J. Rehg. Quantifying the Role of Image Cues in Single-Image 3D Generation.&lt;/li>
&lt;li>&lt;strong>(poster)&lt;/strong> Y. Shen, Y. Liu, J. Zhu, X. Cao, X. Zhang, Y. He, W. Ye, J. M. Rehg, and I. Lourentzou. Fine-Grained Preference Optimization Improves Spatial Reasoning in VLMs.&lt;/li>
&lt;li>&lt;strong>(poster)&lt;/strong> H. Nisar, O. Kara, and J. Rehg. DiffEye: Diffusion-Based Continuous Eye-Tracking Data Generation Conditioned on Natural Images.&lt;/li>
&lt;li>&lt;strong>(poster)&lt;/strong> X. Cao, P. Virupaksha, S. Lee, B. Lai, W. Jia, J. Chen, and J. Rehg. Toward Human Deictic Gesture Target Estimation.&lt;/li>
&lt;/ul></description></item><item><title>CS 598 CVH: Computer Vision for Health</title><link>https://rehg.org/post/cs598cvh/</link><pubDate>Fri, 01 Aug 2025 00:00:00 +0000</pubDate><guid>https://rehg.org/post/cs598cvh/</guid><description>&lt;p>CS 598 CVH: Computer Vision for Health (UIUC)&lt;/p></description></item><item><title>Jan 2023: Jim is now Founder Professor of Computer Science and Industrial and Enterprise Systems Engineering at the Univ. of Illinois at Urbana-Champaign, and is the new director of the Health Care Engineering Systems Center at UIUC.</title><link>https://rehg.org/post/2023-01-01-uiuicmove/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2023-01-01-uiuicmove/</guid><description/></item><item><title>Apr 2022: Congrats to my Ph.D. students Fiona Ryan and Max Xu for both winning NSF Graduate Fellowships!</title><link>https://rehg.org/post/2022-05-03-nsfgrfp_max_fiona/</link><pubDate>Tue, 05 Apr 2022 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2022-05-03-nsfgrfp_max_fiona/</guid><description/></item><item><title>CS 3600: Introduction to Artificial Intelligence</title><link>https://rehg.org/post/cs3600/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://rehg.org/post/cs3600/</guid><description>&lt;p>CS 3600: Introduction to Artificial Intelligence (Georgia Tech)&lt;/p></description></item><item><title>CS 7626: Behavioral Imaging</title><link>https://rehg.org/post/cs7626/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://rehg.org/post/cs7626/</guid><description>&lt;div id="course_syllabus" style="margin-bottom: 10px;" class="user_content">
&lt;p>&lt;span style="font-size: 24pt;">Introduction to Behavioral Imaging&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 18pt;">CS 7626 and CS 4803-IBI combined sections&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 24pt;">Covid Information&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 12pt;">The most up-to-date information on Covid-19 is on the&amp;nbsp;&lt;a class="external" href="https://health.gatech.edu/coronavirus" target="_blank">TECH Moving Forward&lt;span class="screenreader-only">&amp;nbsp;(Links to an external site.)&lt;/span>&lt;/a>&amp;nbsp;website and in the&amp;nbsp;&lt;a class="external" href="https://provost.gatech.edu/academic-restart-frequently-asked-questions" target="_blank">Academic Restart Frequently Asked Questions&lt;span class="screenreader-only">&amp;nbsp;(Links to an external site.)&lt;/span>&lt;/a>.&amp;nbsp; If you have not tested positive but are ill or have been exposed to someone who is ill, please follow the&amp;nbsp;&lt;a class="external" href="http://health.gatech.edu/coronavirus/decision-tree" target="_blank">Covid-19 Exposure Decision Tree&lt;span class="screenreader-only">&amp;nbsp;(Links to an external site.)&lt;/span>&lt;/a>&amp;nbsp;for reporting your illness.&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 12pt;">&lt;span>If you are on campus this semester, please consider being tested regularly. Information on free, voluntary testing on campus can be found here:&amp;nbsp;&lt;/span>&lt;a class="external" href="https://mytest.gatech.edu/" target="_blank">&lt;span>https://mytest.gatech.edu/&lt;/span>&lt;/a>&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 12pt;">&lt;span>Some disruption to classes or services is inevitable, but Georgia Tech is making every effort to ensure continuity of operations. As is the case in any semester, faculty may cancel a class if they have an illness or emergency situation and cover any missed material at their own discretion. If an instructor needs to cancel a class, they will notify students as early as possible.&amp;nbsp;&lt;/span>&lt;/span>&lt;/p>
&lt;p>&lt;span>If you are ill and unable to do course work this will be treated similarly to any student illness. The Dean of Students will have been contacted when you report your positive test and will notify your instructor that you may be unable to attend class events or finish your work as the result of a health issue. Your instructor will not be told the reason. We have asked all faculty to be lenient and understanding when setting work deadlines or expecting students to finish work, and so you should be able to catch up with any work that you miss while in quarantine or isolation. Your instructor may make available any video recordings of classes or slides that have been used while you are absent, and may prepare some complementary asynchronous assignments that compensate for your inability to participate in class sessions. Ask your instructor for the details.&lt;/span>&lt;/p>
&lt;p>&lt;span>These uncertain times can be difficult, and many students may need help in dealing with stress and mental health. The &lt;a class="external" href="https://care.gatech.edu/" target="_blank">&lt;strong>CARE Center&lt;/strong>&lt;/a>, the &lt;a class="external" href="https://counseling.gatech.edu/" target="_blank">&lt;strong>Counseling Center&lt;/strong>&lt;/a>, and &lt;a class="external" href="https://health.gatech.edu/" target="_blank">&lt;strong>Stamps Health Services&lt;/strong>&lt;/a> will offer both in-person and virtual appointments. Face-to-face appointments will require wearing a face covering and social distancing, with exceptions for medical examinations. Student Center services and operations are available on the &lt;a class="external" href="https://studentcenter.gatech.edu/" target="_blank">&lt;strong>Student Center&lt;/strong>&lt;/a> website. For more information on these and other student services, contact the Vice President and Dean of Students or the &lt;a class="external" href="https://studentlife.gatech.edu/" target="_blank">&lt;strong>Division of Student Life&lt;/strong>&lt;/a>.&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 24pt;">Instructor&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 12pt;">James M. Rehg (you can call me Jim)&lt;br>Email: &lt;a href="mailto:rehg@gatech.edu" target="_blank">rehg@gatech.edu&lt;/a>&lt;br>Office Hours: Thursdays 9:00-10:00am and by appointment&lt;br>Office Hours Location: MS Teams (link coming soon)&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 24pt;">Course Time and Location&lt;/span>&lt;/p>
&lt;p>This course will be 100% remote and lectures will be synchronous. All lectures will be made available in a recorded format. There is no attendance requirement.&lt;/p>
&lt;p>Classes will be held on Mondays, Wednesdays, and Fridays from 3:30 to 4:30pm Eastern Time. Lectures will meet on Bluejeans at the following link: &lt;a href="https://bluejeans.com/203011482" target="_blank">https://bluejeans.com/203011482&lt;/a>&lt;/p>
&lt;p>All slides will be linked into the lecture schedule at the end of this syllabus.&lt;/p>
&lt;p>&lt;strong>&lt;span style="font-size: 14pt;">Asking Questions During Lectures&lt;/span>&lt;/strong>&lt;/p>
&lt;p>Since our course size is modest, we will use Bluejeans for our initial class meetings. This will allow us to interact directly. I am investigating moving to Microsoft Teams for our lectures and will post more on this as it develops.&lt;/p>
&lt;p>&lt;span style="font-size: 24pt;">Teaching Assistant&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 12pt;">The TA for this course is Shivam Khare (&lt;a href="mailto:skhare31@gatech.edu" target="_blank">skhare31@gatech.edu&lt;/a>)&lt;/span>&lt;/p>
&lt;p>Office hours will be held on Wednesdays, 2:30-3:30 pm.&lt;/p>
&lt;p>Check the calendar for the schedule of TA Office Hours (&lt;a class="instructure_file_link inline_disabled" href="https://bluejeans.com/592632758" target="_blank">https://bluejeans.com/592632758&lt;/a>)&lt;/p>
&lt;p>&lt;strong>&lt;span style="font-size: 14pt;">Piazza&lt;/span>&lt;/strong>&lt;/p>
&lt;p>Link: &lt;a class="instructure_file_link inline_disabled" href="http://piazza.com/gatech/spring2021/cs7626" target="_blank">&lt;span>piazza.com/gatech/spring2021/cs7626&lt;/span>&lt;/a>&lt;/p>
&lt;h2>General Information&lt;/h2>
&lt;p>&lt;span>This course will provide an introduction to Behavioral Imaging, a new research field which encompasses the measurement, modeling, analysis, and visualization of behaviors from multi-modal sensor data. It is tailored for undergraduate and graduate students who are interested in this emerging field. The course is designed to provide:&lt;/span>&lt;/p>
&lt;ul>
&lt;li>A broad introduction to research questions in Behavioral Imaging (BI)&lt;/li>
&lt;li>An in-depth understanding of the key technologies for BI&lt;/li>
&lt;li>An overview of the psychology literature relating to behavior from a computational perspective&lt;/li>
&lt;li>&lt;span>Hands-on experience in working with relevant sensor data&lt;/span>&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>&lt;span style="font-size: 14pt;">Background&lt;/span>&lt;/strong>&lt;/p>
&lt;p>Beginning in infancy, individuals acquire the social and communicative skills which are vital for a healthy and productive life, through face-to-face interactions with caregivers and peers. However, children with developmental delays face great challenges in acquiring these skills, resulting in substantial lifetime risks. A goal of research in Behavioral Imaging is to develop computational methods that can support the fine-grained and large-scale measurement and analysis of social and developmental behaviors, with the potential to positively impact the diagnosis and treatment of developmental disorders such as autism.&lt;/p>
&lt;p>Clinical health domains can also benefit from a computational study of behavior. A key example is chronic health conditions, such as heart disease, asthma, and diabetes, in which health-related behaviors such as smoking or unhealthy eating habits play a critical role. The ability to treat chronic conditions is hampered by an inability to reliably measure health-related behaviors, particularly in naturalistic (field) conditions. Advances in wearable sensing technologies, known as mobile health or mHealth, have the potential to enable a new data-driven approach to the treatment of such conditions. This course attempts to bridge the gap between multiple disciplines, such as mHealth, computational psychology, developmental psychology, and behavioral medicine, which are linked by a common interest in computational methods for understanding and measuring the behavioral underpinnings of adverse health outcomes.&lt;/p>
&lt;p>&lt;strong>&lt;span style="font-size: 14pt;">Objective&lt;/span>&lt;/strong>&lt;/p>
&lt;p>&lt;span>A key goal of this course is to equip data science researchers with the means to access the relevant literature in psychology, physiology, and psychophysiology which provide the theoretical underpinning for most health-related technology development. It is still too common that researchers gain access to health-related datasets and apply machine learning methods without a sufficiently clear understanding of how the data relates to health outcomes. Real-world improvements in health outcomes, which is the promise of advanced ML and sensor technologies, can only arise through a deep collaboration between technologists and domain experts, which this course aspires to enable.&amp;nbsp;&amp;nbsp;&lt;/span>&lt;/p>
&lt;h2>Prerequisites&lt;/h2>
&lt;p>&lt;span>This class will be self-contained with respect to the core concepts and class material. Students should have familiarity with machine learning at the level of the undergraduate ML class CS 4641. If you do not have basic familiarity with classical machine learning methods such as support vector machines or random forest classifiers, then it will be challenging to read the papers and do a final project. The technical papers will also require some familiarity with deep learning in order to understand the readings, but you &lt;em>do not&lt;/em> have to have an extensive deep learning programming background to take this class, since it is still possible to use classical ML methods in health projects (particularly since existing datasets still tend to be small). Background material in psychology, physiology, and related topics will be provided, so there are no subject-matter prerequisites. The projects will require facility with Python and the basics of machine learning such as the use of standard libraries like scikit-learn. Tutorial material will be provided for more advanced content.&lt;/span>&lt;/p>
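&lt;p>As a rough illustration of the expected level of facility with Python and scikit-learn (a minimal sketch only; the built-in breast-cancer dataset stands in here for the kind of small tabular dataset common in health projects):&lt;/p>

```python
# Minimal classical-ML baseline of the kind described above: a random forest
# classifier evaluated with 5-fold cross-validation on a small dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # 569 samples, 30 features
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # one accuracy score per fold
print("mean cross-validated accuracy: %.3f" % scores.mean())
```

&lt;p>If a dozen-line pipeline like this is comfortable territory, you have the programming background the projects assume.&lt;/p>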
&lt;h2>Resources&lt;/h2>
&lt;p>&lt;span>As this is a new research area, there are no textbooks that cover the breadth of this course. Recommended readings will be provided for all lectures and class discussions. The following text is a useful reference for the mobile health content of the course, and some chapters will be provided in the readings:&lt;/span>&lt;/p>
&lt;ul>
&lt;li>&lt;span>&lt;a class="instructure_file_link inline_disabled" href="https://www.springer.com/gp/book/9783319513935" target="_blank">Mobile Health: Sensors, Analytic Methods, and Applications&lt;/a>, Editors: Rehg, JM, Murphy, S, and Kumar, S. Springer 2017&lt;/span>&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Online resources:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Canvas (here): For syllabus, projects and quizzes, and critical announcements.&lt;/li>
&lt;li>GradeScope (LINK TBD): For project submission and grading&lt;/li>
&lt;li>&lt;a class="external" href="http://piazza.com/gatech/spring2021/cs7626" target="_blank">&lt;span>Piazza&lt;/span>&lt;/a>: For general questions and discussions.&lt;/li>
&lt;li>&lt;span>&lt;a class="external" href="https://teams.microsoft.com/l/team/19%3a1cc971a798434dbca8cd80d0263c39ff%40thread.tacv2/conversations?groupId=7e82ff4f-74cd-471e-9d0d-ad0005b066a7&amp;amp;tenantId=482198bb-ae7b-4b25-8b7a-6d7f32faa083" target="_blank">&lt;/a>Teams (LINK TBD)&lt;/span>: For TA and instructor office hours.&lt;/li>
&lt;li>&lt;span>&lt;a class="external" href="https://primetime.bluejeans.com/a2m/live-event/yrpbjqze" target="_blank">&lt;/a>Bluejeans&lt;/span>: For class lectures&lt;strong>&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>&lt;span style="font-size: 24pt;">Organization&lt;/span>&lt;/p>
&lt;p>&lt;span>The course material is divided into units, and each unit will feature some combination of lectures on technology and topics in psychology and physiology, along with discussion periods covering research papers from the technology and psychology literatures. Each discussion period will be led by a team of students and will cover 2 papers, one from the technology literature and one from the behavioral literature. This task supports the goal of gaining experience in reading the original literature in psychology and health. All students are expected to read the papers before each meeting and participate in the class discussion. Each student will make two presentations during the semester as part of their team. A quiz will be given at the end of each unit, which will test the core concepts in psychology, physiology, and related domains that have been covered. There will be a final project in which the class will work in teams, each addressing one area of BI technology.&lt;/span>&lt;/p>
&lt;p>&lt;span style="font-size: 18.6667px;">&lt;strong>Grading&lt;/strong>&lt;/span>&lt;/p>
&lt;ul>
&lt;li>2 In-Class Presentations: 70% (35% x 2)&lt;/li>
&lt;li>3 Quizzes: 30%&lt;/li>
&lt;/ul>
&lt;p>The quizzes will be take-home; you can work on them at any time and take as long as you want, provided you submit your answers by the listed deadlines. However, you may only submit your answers once. We will not have class on the day a quiz is released, so you can use the class period to take the quiz if you'd prefer. Each quiz must be your own work; you are not allowed to collaborate on it or discuss it with anyone.&lt;/p>
&lt;p>Attendance will not be recorded and will not be used in assessing grades.&lt;/p>
&lt;p>Final Project: This was eliminated from the syllabus due to the difficulty of making individualized projects work under the current remote/Covid conditions.&lt;/p>
&lt;h2>Legalese&lt;/h2>
&lt;p>I reserve the right to modify any of these plans as need be during the course of the class; however, I won't do anything too drastic, and you'll be informed as far in advance as possible.&lt;/p>
&lt;p>You must abide by the &lt;a class="instructure_file_link inline_disabled" href="https://osi.gatech.edu/content/honor-code" target="_blank">academic honor code&lt;/a> of Georgia Tech.&amp;nbsp;&amp;nbsp;&lt;/p></description></item><item><title>NIH R01 DC020048: Discovering novel predictors of minimally verbal outcomes in autism through computational modeling</title><link>https://rehg.org/post/nih-r01-dc020048/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://rehg.org/post/nih-r01-dc020048/</guid><description>&lt;p>NIH R01 DC020048: Discovering novel predictors of minimally verbal outcomes in autism through computational modeling&lt;/p></description></item><item><title>DARPA RACER Program: Robotic Autonomy in Complex Environments with Resiliency</title><link>https://rehg.org/post/darpa-racer/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://rehg.org/post/darpa-racer/</guid><description>&lt;p>DARPA RACER Program: Robotic Autonomy in Complex Environments with Resiliency&lt;/p></description></item><item><title>NIH R01 HD104624: Infants' self-generated visual statistics support object and category learning</title><link>https://rehg.org/post/nih-r01-hd104624/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://rehg.org/post/nih-r01-hd104624/</guid><description>&lt;p>NIH R01 HD104624: Infants&amp;rsquo; self-generated visual statistics support object and category learning&lt;/p></description></item><item><title>Aug 2020: Keynote talk at 8th Intl. 
Workshop on Assistive Computer Vision and Robotics (ACVR) @ ECCV 2020</title><link>https://rehg.org/post/2020-08-29-acvr-keynote/</link><pubDate>Sat, 29 Aug 2020 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2020-08-29-acvr-keynote/</guid><description/></item><item><title>Aug 2020: Two papers accepted at BMVC 2020</title><link>https://rehg.org/post/news4/</link><pubDate>Wed, 05 Aug 2020 00:00:00 +0000</pubDate><guid>https://rehg.org/post/news4/</guid><description>&lt;p>Two papers accepted for publication at the &lt;em>31st British Machine Vision Conference&lt;/em> (
&lt;a href="https://bmvc2020.github.io/" target="_blank" rel="noopener">BMVC 2020&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>&lt;strong>(oral)&lt;/strong> M. Liu, X. Chen, Y. Zhang, Y. Li, &amp;amp; J. M. Rehg. Attention Distillation for Learning Video Representations (
&lt;a href="https://aptx4869lm.github.io/AttentionDistillation/" target="_blank" rel="noopener">Project Page&lt;/a>)&lt;/li>
&lt;li>&lt;strong>(poster)&lt;/strong> M. Hahn, A. Kadav, J. M. Rehg, &amp;amp; H. P. Graf. Tripping Through Time: Efficient Localization of Activities in Videos&lt;/li>
&lt;/ul></description></item><item><title>Jul 2020: NIH Funded $5.9M mDOT Center to Advance Mobile Health Research</title><link>https://rehg.org/post/news2/</link><pubDate>Tue, 28 Jul 2020 00:00:00 +0000</pubDate><guid>https://rehg.org/post/news2/</guid><description>&lt;p>The NIH
&lt;a href="https://ic.gatech.edu/news/637122/georgia-tech-6-collaborators-receive-59-million-nih-grant-national-center-ai-based" target="_blank" rel="noopener">announced&lt;/a> that it is funding a new biomedical technology research center (BTRC), called the &lt;em>mHealth Center for Discovery, Optimization &amp;amp; Translation of Temporally-Precise Interventions&lt;/em> (
&lt;a href="https://mdot.md2k.org/" target="_blank" rel="noopener">mDOT&lt;/a>). The $5.9M center is among the first two BTRCs to be funded in mobile health. Georgia Tech will be leading one of the three
&lt;a href="https://mdot.md2k.org/trd1.html" target="_blank" rel="noopener">center projects&lt;/a>, entitled &lt;em>Enabling the Discovery of Temporally-Precise Intervention Targets and Timing Triggers from mHealth Biomarkers via Uncertainty-Aware Modeling of Personalized Risk Dynamics&lt;/em>. The
&lt;a href="https://www.nibib.nih.gov/research-funding/featured-programs/ncbib/supported-centers" target="_blank" rel="noopener">BRTC Program&lt;/a> is designed to advance the development of instrumentation and methodology that addresses the emerging needs of the biomedical research community.&lt;/p></description></item><item><title>NIH NIBIB P41-EB028242: mHealth Center for Discovery, Optimization, and Translation of Temporally-Precise Interventions (mDOT)</title><link>https://rehg.org/post/nih-p41-mdot/</link><pubDate>Tue, 14 Jul 2020 00:00:00 +0000</pubDate><guid>https://rehg.org/post/nih-p41-mdot/</guid><description/></item><item><title>Jul 2020: One paper accepted at ECCV 2020 for oral presentation</title><link>https://rehg.org/post/news3/</link><pubDate>Thu, 02 Jul 2020 00:00:00 +0000</pubDate><guid>https://rehg.org/post/news3/</guid><description/></item><item><title>NSF C-Accel 2033413: Inclusion AI for Neurodiverse Employment</title><link>https://rehg.org/post/nsf-c-accel-2033413/</link><pubDate>Fri, 01 May 2020 00:00:00 +0000</pubDate><guid>https://rehg.org/post/nsf-c-accel-2033413/</guid><description>&lt;p>NSF C-Accel 2033413: Inclusion AI for Neurodiverse Employment&lt;/p></description></item><item><title>Feb 2020: Two papers accepted to CVPR 2020</title><link>https://rehg.org/post/2020-02-26-cvpr-announce/</link><pubDate>Wed, 26 Feb 2020 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2020-02-26-cvpr-announce/</guid><description>&lt;p>Two papers accepted for publication at the &lt;em>IEEE Conference on Computer Vision and Pattern Recognition&lt;/em> (
&lt;a href="http://cvpr2020.thecvf.com/" target="_blank" rel="noopener">CVPR 20&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>&lt;strong>(poster)&lt;/strong> E. Chong, Y. Wang, N. Ruiz, and J. M. Rehg. Detecting Attended Visual Targets in Video (
&lt;a href="https://github.com/ejcgt/attention-target-detection" target="_blank" rel="noopener">Project Page&lt;/a>)&lt;/li>
&lt;li>&lt;strong>(poster)&lt;/strong> R. Lin, W. Liu, Z. Liu, C. Feng, Z. Yu, J. M. Rehg, L. Xiong, and L. Song. Regularizing Neural Networks via Minimizing Hyperspherical Energy&lt;/li>
&lt;/ul></description></item><item><title>Feb 2020: Jim attended the Workshop on the Use of Wearable and Implantable Devices in Health Research @ BIRS (Banff, Canada)</title><link>https://rehg.org/post/2020-02-23-birs-workshop/</link><pubDate>Sun, 23 Feb 2020 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2020-02-23-birs-workshop/</guid><description>&lt;p>Exciting talks on future of mHealth and wearables research. Great to see my fellow MD2K members Dr. Susan Murphy and Dr. Zhenke Wu in attendance.&lt;/p></description></item><item><title>Sep 2019: One paper accepted at NeurIPS 2019</title><link>https://rehg.org/post/2019-09-04-neurips-announce/</link><pubDate>Wed, 04 Sep 2019 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2019-09-04-neurips-announce/</guid><description>&lt;p>One paper accepted for publication at the &lt;em>33rd Conference on Neural Information Processing Systems&lt;/em> (
&lt;a href="https://nips.cc/Conferences/2019" target="_blank" rel="noopener">NeurIPS 19&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>W. Liu, Z. Liu, J. M. Rehg, and L. Song. Neural Similarity Learning.&lt;/li>
&lt;/ul></description></item><item><title>Aug 2019: One paper accepted at 3DV 2019</title><link>https://rehg.org/post/2019-08-29-3dv-announce/</link><pubDate>Thu, 29 Aug 2019 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2019-08-29-3dv-announce/</guid><description>&lt;p>One paper accepted for publication at the &lt;em>International Conference on 3D Vision&lt;/em> (
&lt;a href="http://3dv19.gel.ulaval.ca/" target="_blank" rel="noopener">3DV 2019&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>B. M. Smith, V. Chari, A. Agrawal, J. M. Rehg, and R. Sever. Towards Accurate 3D Human Body Reconstruction from Silhouettes.&lt;/li>
&lt;/ul></description></item><item><title>Aug 2019: NSF Funded $1M award for AI technology to improve employment in autism</title><link>https://rehg.org/post/2019-08-05-nsf-c-accel-announce/</link><pubDate>Mon, 05 Aug 2019 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2019-08-05-nsf-c-accel-announce/</guid><description>&lt;p>Georgia Tech is partnering with Vanderbilt University and Cornell in developing training and assessment tools to support neurodiverse individuals in obtaining employment. Projects include VR-based interview training, assessment of job skills, and assessment of nonverbal communication skills in interview settings.&lt;/p></description></item><item><title>Feb 2019: Four papers accepted at CVPR 2019</title><link>https://rehg.org/post/2019-02-26-cvpr-announce/</link><pubDate>Tue, 26 Feb 2019 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2019-02-26-cvpr-announce/</guid><description>&lt;p>Four papers accepted for publication at the &lt;em>IEEE Conference on Computer Vision and Pattern Recognition&lt;/em> (
&lt;a href="https://cvpr2019.thecvf.com/" target="_blank" rel="noopener">CVPR 2019&lt;/a>):&lt;/p>
&lt;ul>
&lt;li>&lt;strong>(oral, best paper finalist)&lt;/strong> S. Stojanov, S. Mishra, N. A. Thai, N. Dhanda, A. Humayun, C. Yu, L. B. Smith, and J. M. Rehg. Incremental Object Learning from Contiguous Views (
&lt;a href="https://iolfcv.github.io/" target="_blank" rel="noopener">Project Page&lt;/a>)&lt;/li>
&lt;li>&lt;strong>(oral, best paper finalist)&lt;/strong> Z. Lv, F. Dellaert, J. M. Rehg, and A. Geiger. Taking a Deeper Look at the Inverse Compositional Algorithm (
&lt;a href="https://github.com/lvzhaoyang/DeeperInverseCompositionalAlgorithm" target="_blank" rel="noopener">Project Page&lt;/a>)&lt;/li>
&lt;li>&lt;strong>(poster)&lt;/strong> S. Tripathi, S. Chandra, A. Agrawal, A. Tyagi, J. M. Rehg, and V. Chari. Learning to Generate Synthetic Data via Compositing.&lt;/li>
&lt;li>&lt;strong>(poster)&lt;/strong> C.-H. Chen, A. Tyagi, A. Agrawal, D. Drover, R. MV, S. Stojanov, and J. M. Rehg. Unsupervised 3D Pose Estimation with Geometric Self-Supervision.&lt;/li>
&lt;/ul></description></item><item><title>NSF CNS 1823201: CRI: mResearch: A platform for Reproducible and Extensible Mobile Sensor Big Data Research</title><link>https://rehg.org/post/nsf-cri-mresearch/</link><pubDate>Mon, 01 Oct 2018 00:00:00 +0000</pubDate><guid>https://rehg.org/post/nsf-cri-mresearch/</guid><description/></item><item><title>Jul 2018: NSF funded $1.75M award to develop open source mHealth platform to accelerate research progress</title><link>https://rehg.org/post/2018-07-15-nsf-mresearch-announce/</link><pubDate>Sun, 15 Jul 2018 00:00:00 +0000</pubDate><guid>https://rehg.org/post/2018-07-15-nsf-mresearch-announce/</guid><description/></item></channel></rss>