Hello!

Welcome to my homepage! Here are some quick facts about me:

🧑‍🎓 I recently graduated from the University of Houston with a B.S. in Computer Science and Biomedical Engineering.

🧑‍💻 I am a full-time Engineer at Microsoft and a Research Intern at the University of Washington.

📸 My research interests are in embodied AI, spanning computer vision, robotics, policy learning, perception, and generative approaches.

🦾 I am passionate about applying AI and computer vision to robotics, accessibility, and health.

Overview

I am a Vietnamese-American born and raised in Houston, TX. After graduation, I moved to Seattle to conduct predoctoral research at the University of Washington in the Robotics and State Estimation Lab and the Personal Robotics Lab, advised by Dieter Fox (NVIDIA) and Siddhartha Srinivasa (UW). My research investigates robot learning from the perspective of scaling simulation-based learning systems for policy learning and motion-planning data generation, and transferring them to real-world robotic manipulation tasks. As a prospective doctoral student, I am deeply grateful to be supported by the National Science Foundation CISE Graduate Fellowship!

Outside of research, I am a software engineer at Microsoft, working on OneNote security and Copilot Notebook integration.

Before that, I completed my bachelor's in Computer Science and Biomedical Engineering at the University of Houston, focusing on machine learning and neural engineering. Over those four years, I worked on a range of research topics, including brain-machine interfaces, computer vision, and robotics. Notably, I spent over two years with Dr. Shishir Shah (University of Houston) developing pose-invariant methods for face recognition (Bachelor's Thesis, VISAPP 2025).

For work experience, I have spent my past three summers interning at Microsoft, Amazon Web Services, and Northrop Grumman, where I focused on engineering machine learning platforms and applying models to practical enterprise applications.

If you have any questions or are interested in collaborating, please reach out to me at carterung [at] gmail [dot] com.

News

August 2025 I am excited to have been selected as a 2025 National Science Foundation Computer and Information Science and Engineering Graduate Fellow!

July 2025 Introducing my 2nd-author work RoboEval – a structured evaluation framework for bimanual robot manipulation!

February 2025 My first-author paper was published at the 20th International Conference on Computer Vision Theory and Applications (VISAPP) 2025.

November 2024 My research on pose-invariant face recognition was accepted as an oral presentation at the Rice Gulf Coast Undergraduate Research Symposium (GCURS) 2024.

📝 Publications

RoboEval: Where Robotic Manipulation Meets Structured and Scalable Evaluation

Yi Ru Wang, Carter Ung, Grant Tannert, Jiafei Duan, Josephine Li, Amy Le, Rishabh Oswal, Markus Grotz, Wilbert Pumacay, Yuquan Deng, Ranjay Krishna, Dieter Fox, Siddhartha Srinivasa


arXiv 2025 (Under Review)
paper / project page / code

RoboEval introduces a unified evaluation suite covering 18 bimanual manipulation tasks with fine-grained metrics so researchers can compare models beyond binary success/failure.

Minimizing Number of Poses for Pose-Invariant Face Recognition

Carter Ung, Pranav Mantini, Shishir Shah


International Conference on Computer Vision Theory and Applications (VISAPP) 2025
paper

We study how many viewpoint images are truly needed for robust, pose-invariant face recognition and show that a compact pose set dramatically reduces data-collection overhead.

Minimizing the Number of Poses for Pose-Invariant Face Recognition

Carter Ung


Undergraduate Dissertation, University of Houston, 2024
paper

My honors thesis extends the VISAPP study and details the full experimental pipeline for low-redundancy face-data acquisition.