I am passionate about researching Embodied AI for real-world robotics.
My work focuses on label-efficient learning and multi-modal integration, with core interests in Self-supervised Learning, Multi-modal Learning, and World Models.
I also study these problems from an information-theoretic perspective.
Recently, I have been interested in world models for robotics that leverage video data and active search, as well as tactile sensing for a more comprehensive understanding of the world.
LiDAR-Anchored Collaborative Distillation for Robust 2D Representations
Wonjun Jo,
Hyunwoo Ha,
Kim Ji-Yeon,
Hawook Jeong,
Tae-Hyun Oh
Under review
Project Page | Paper
Improving the robustness of self-supervised 2D representation learning.
DarkEQA: Benchmarking Vision-Language Models for Embodied Question Answering in Low-Light Indoor Environments
Yohan Park,
Hyunwoo Ha,
Wonjun Jo,
Tae-Hyun Oh
Under review
Project Page | Paper
Benchmarking the robustness of Vision-Language Models in low-light environments.
Self-Supervised Collaborative Distillation: Enhancing Lighting Robustness and 3D Awareness
Wonjun Jo,
Hyunwoo Ha,
Kim Ji-Yeon,
Hawook Jeong,
Tae-Hyun Oh
Workshop on Wild3D, ICCV, 2025
Paper
Improving a pre-trained 2D image encoder's lighting robustness and 3D awareness.
The Devil is in the Details: Simple Remedies for Image-to-LiDAR Representation Learning
Wonjun Jo,
Kwon Byung-Ki,
Kim Ji-Yeon,
Hawook Jeong,
Kyungdoon Joo,
Tae-Hyun Oh
ACCV, 2024
Project Page | Paper
Proposing simple remedies for self-supervised 3D representation learning guided by 2D representations.