
Ph.D. Student

Machine Learning Department
Carnegie Mellon University
Advisor: Ameet Talwalkar

Email: junhongs@andrew.cmu.edu
Office: Gates Hillman Centers 8015


Google Scholar · GitHub · Twitter

Bio

I am a third-year Ph.D. student in the Machine Learning Department at CMU, advised by Ameet Talwalkar. My current research focuses on data-centric ML and AutoML, in particular building practical and easily accessible model development tools for long-tailed and low-resource ML applications.

I obtained my B.S. in Mathematics of Computation at UCLA, where I was fortunate to work with Lin Yang on sample-efficient reinforcement learning. I also studied multi-agent RL and Theory of Mind, advised by Song-Chun Zhu and Ying Nian Wu.


News


Selected Publications

For a full list of publications, see Research.
Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains
Preprint, 2024.
LLMs have demonstrated remarkable proficiency in understanding and generating natural language. However, their capabilities wane in highly specialized domains underrepresented in the pretraining corpus, such as physical and biomedical sciences. This work explores how to repurpose general LLMs into specialized task solvers through a novel, model-agnostic framework for learning custom input tags to condition the LLM.
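As a rough illustration of the idea (not the paper's actual implementation), the learned tags can be thought of as trainable embedding vectors prepended to the input of a frozen LLM. The class and argument names below are hypothetical, and a HuggingFace-style model that accepts inputs_embeds is assumed.

import torch
import torch.nn as nn

class TaggedLM(nn.Module):
    """Sketch: condition a frozen general-purpose LLM with learned input tags."""
    def __init__(self, base_lm, hidden_dim, num_tag_tokens=4):
        super().__init__()
        self.base_lm = base_lm                     # frozen pretrained LLM
        for p in self.base_lm.parameters():
            p.requires_grad = False
        # The "tag": a small set of trainable embedding vectors.
        self.tag = nn.Parameter(torch.randn(num_tag_tokens, hidden_dim) * 0.02)

    def forward(self, input_embeds):               # (batch, seq, hidden)
        batch = input_embeds.size(0)
        tag = self.tag.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned tag so the frozen LLM is conditioned on it.
        return self.base_lm(inputs_embeds=torch.cat([tag, input_embeds], dim=1))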
Cross-Modal Fine-Tuning: Align then Refine
In ICML, 2023 (Oral).
ORCA is a general cross-modal fine-tuning framework that extends the applicability of a single large-scale pretrained model to diverse modalities. It adapts to a target task via an align-then-refine workflow. Given the target input, ORCA first learns an embedding network that aligns the embedded feature distribution with the pretraining modality. The pretrained model is then fine-tuned on the embedded data to exploit the knowledge shared across modalities.
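A minimal sketch of the align-then-refine workflow is below. The alignment objective here (simple first-moment matching between embedding batches) and all function names are stand-ins for illustration, not ORCA's actual components.

import torch
import torch.nn.functional as F

def alignment_loss(x, y):
    """Crude distribution-alignment loss: match the mean embeddings."""
    return (x.mean(0) - y.mean(0)).pow(2).sum()

def align_then_refine(embedder, pretrained_model, head, target_loader,
                      reference_embeds, align_steps=1000, lr=1e-4):
    # Stage 1: align embedded target data with the pretraining modality.
    opt = torch.optim.Adam(embedder.parameters(), lr=lr)
    for _, (x, _) in zip(range(align_steps), target_loader):
        loss = alignment_loss(embedder(x), reference_embeds)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: fine-tune the whole stack on the target task.
    params = (list(embedder.parameters()) + list(pretrained_model.parameters())
              + list(head.parameters()))
    opt = torch.optim.Adam(params, lr=lr)
    for x, y in target_loader:
        loss = F.cross_entropy(head(pretrained_model(embedder(x))), y)
        opt.zero_grad(); loss.backward(); opt.step()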
Efficient Architecture Search for Diverse Tasks
In NeurIPS, 2022.
DASH efficiently solves diverse ML problems outside well-researched domains such as vision and natural language processing. Fast, simple, and broadly applicable, it fixes a standard CNN topology and searches for the kernel sizes and dilation rates its operations should use. This expands the network's capacity to extract features at multiple resolutions for different types of data while requiring a search only over the operation space.
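The sketch below illustrates the kind of operation-level search involved: each convolution mixes candidate kernel sizes and dilation rates with learned architecture weights. The naive aggregation shown here is only illustrative; DASH's contribution is making this search efficient.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedConv(nn.Module):
    """Sketch: one searchable conv op inside a fixed CNN topology."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7), dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList()
        for k in kernel_sizes:
            for d in dilations:
                pad = d * (k - 1) // 2          # keep the spatial size fixed
                self.convs.append(nn.Conv2d(channels, channels, k,
                                            padding=pad, dilation=d))
        # One architecture weight per (kernel size, dilation) candidate.
        self.alpha = nn.Parameter(torch.zeros(len(self.convs)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * conv(x) for wi, conv in zip(w, self.convs))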
NAS-Bench-360: Benchmarking Neural Architecture Search on Diverse Tasks
In NeurIPS Datasets and Benchmarks Track, 2022.
Neural architecture search (NAS) benchmarks and methods prioritize performance on well-studied tasks, e.g., image classification on CIFAR and ImageNet. To mitigate this bias, NAS-Bench-360 is a benchmark suite for evaluating state-of-the-art NAS methods on a diverse set of tasks. The selection spans different application domains, dataset sizes, problem dimensionalities, and learning objectives.
Iterative Teacher-Aware Learning
In NeurIPS, 2021.
In this paper, we propose a gradient-optimization-based, teacher-aware learner that incorporates the teacher's cooperative intention into its likelihood function and provably learns faster than the naive learners used in previous machine teaching work.
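A toy sketch of what a teacher-aware likelihood update could look like is below, assuming the teacher softly prefers examples that score well under the learner's current parameters. The names and the exact form of the teacher model are assumptions for illustration, not the paper's algorithm.

import numpy as np

def teacher_aware_step(theta, pool, shown_idx, loglik, grad_loglik,
                       lr=0.1, eta=1.0):
    """One gradient step on the probability that a cooperative teacher
    would have shown the observed example, instead of the raw data
    likelihood alone."""
    # Teacher model: softmax over the candidate pool, favoring examples
    # that are more likely under the learner's current parameters.
    scores = np.array([eta * loglik(theta, x) for x in pool])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    # Gradient of log P(shown example | cooperative teacher, theta).
    g = grad_loglik(theta, pool[shown_idx]) - sum(
        p * grad_loglik(theta, x) for p, x in zip(probs, pool))
    return theta + lr * eta * g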
Theoretically Principled Deep RL Acceleration via Nearest Neighbor Function Approximation
In AAAI, 2021.
We propose a theoretically principled nearest neighbor (NN) function approximator that can replace the value networks in deep RL methods. Inspired by human similarity judgments, the NN approximator estimates the action values using rollouts on past observations and can provably obtain a small regret bound that depends only on the intrinsic complexity of the environment.
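The sketch below shows the flavor of such an estimator: action values are averaged over the Monte Carlo returns of the k nearest stored state-action pairs. It is a simplified stand-in for the approximator analyzed in the paper.

import numpy as np

class NNValueApprox:
    """Sketch: nearest-neighbor action-value estimates from past rollouts."""
    def __init__(self):
        self.keys = []      # observed (state, action) feature vectors
        self.returns = []   # Monte Carlo returns from rollouts starting there

    def add(self, state_action, ret):
        self.keys.append(np.asarray(state_action, dtype=float))
        self.returns.append(float(ret))

    def estimate(self, state_action, k=5):
        """Average the returns of the k nearest stored (state, action) pairs."""
        query = np.asarray(state_action, dtype=float)
        dists = [np.linalg.norm(query - key) for key in self.keys]
        nearest = np.argsort(dists)[:k]
        return float(np.mean([self.returns[i] for i in nearest]))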
Emergence of Pragmatics from Referential Game between Theory of Mind Agents
In Emergent Communication Workshop, NeurIPS, 2019.
We integrate theory of mind (ToM) into a cooperative multi-agent pedagogical setting and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.