I am a second-year Ph.D. student in Robotics at Georgia Tech, advised by Dr. Sehoon Ha and working closely with Dr. Hae-Won Park.
Before my Ph.D., I received my Master's degree in Electrical and Computer Engineering from Georgia Tech and my Bachelor's degrees in Mechanical Engineering and in Computer Science and Engineering from Seoul National University.
My research focuses on developing learning-based control algorithms that enable robust, agile, and interactive robot behaviors in human-centered environments. I have worked on integrating model-based control with reinforcement learning for quadrupedal and humanoid locomotion. Currently, I am exploring whole-body humanoid manipulation using human data.
We introduce a learning framework for effectively training a humanoid locomotion policy that imitates the behavior of a model-based controller while extending its capabilities to handle more complex locomotion tasks.
We propose a learning framework that bridges model-based and learning-based approaches to legged robot control by imitating an expert model predictive control (MPC) policy and fine-tuning the pre-trained policy with reinforcement learning.
We introduce AdaptManip, a fully autonomous framework that enables humanoid robots to navigate, lift objects, and deliver them in an integrated manner. Its recurrent pose estimator provides robust state estimation under intermittent and noisy visual observations, supporting reliable closed-loop loco-manipulation.
We introduce Opt2Skill, an end-to-end pipeline that combines model-based trajectory optimization with reinforcement learning (RL) to achieve robust whole-body loco-manipulation.