I hold a graduate degree in Engineering Science & Applied Mathematics from Northwestern University and an undergraduate degree in Applied Mathematics from UC San Diego, two places that shaped who I am, academically and beyond.

I come to research not from a traditional academic path, but from a persistent habit of not letting questions rest. When something doesn’t make sense to me, I’ll rebuild it from scratch, find the right person to ask, or keep reading until the confusion becomes clarity. I move fast, correct often, and care more about understanding deeply than looking certain.

I am actively looking for Research Assistant positions.

More than a position, I am looking for the right fit — a lab that takes its time with ideas, a collaborator who wants to build something meaningful over the long run, or a mentor genuinely invested in helping someone learn to think independently. I believe this kind of match has to go both ways.

I am available to work fully onsite for six months or more, and I take that commitment seriously — good research takes time, and I am not looking to pass through.

If any of this resonates, I would love to find a time to talk: email · calendly

Research Interests

I am early in my research journey, and the directions below reflect where I have spent time so far — not boundaries. I hold them loosely, and am genuinely open to where good questions lead.

Scalable Alignment & Reward Modeling. A reward signal that works today may not hold tomorrow. I am interested in what makes alignment robust across model scales and domains — not because it is a clean research problem, but because getting this wrong has real consequences for the people these systems serve.
Failure Modes in Preference Optimization. Methods like GRPO and DPO optimize proxies, not true objectives (a minimal sketch of this gap follows the list below). I want to understand when that gap produces deceptive or brittle behavior, and whether we can catch it before systems are deployed into the world.
Evaluation & Interpretability for Aligned Models. We cannot trust what we cannot measure. I am drawn to the question of how we build evaluation frameworks rigorous enough to actually verify that alignment training worked — a prerequisite for AI that reliably benefits rather than harms.
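
To make the proxy gap in the second item concrete, here is a minimal sketch of the DPO objective (Rafailov et al., 2023). The function name and signature are illustrative, not from any particular library; the point is that the loss rewards a growing margin on an implicit, reference-relative reward, which is a proxy derived from preference labels rather than the true objective we care about.

```python
# A minimal sketch of the DPO loss, assuming per-example summed log-probs
# for chosen and rejected responses under the policy and a frozen reference.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss (illustrative sketch).

    The 'reward' DPO implicitly optimizes is beta times the log-ratio
    between the policy and the reference model: a proxy fit to pairwise
    preference labels, not the true objective.
    """
    # Implicit rewards: beta * (log pi(y|x) - log pi_ref(y|x))
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry likelihood of the labels: the loss falls whenever the
    # margin grows, whether or not the preferred response is actually
    # better -- this is exactly where the proxy gap lives.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with made-up log-probabilities for two preference pairs.
pc = torch.tensor([-12.0, -10.0]); pr = torch.tensor([-14.0, -9.5])
rc = torch.tensor([-12.5, -10.2]); rr = torch.tensor([-13.0, -10.0])
print(dpo_loss(pc, pr, rc, rr))  # scalar loss on the proxy objective
```

Nothing in this loss verifies that the chosen response is genuinely better; noisy or gameable labels push the policy just as hard, which is the failure mode I want to study.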