@jwkirchenbauer | LinkedIn | Google Scholar | GitHub | [email protected]
PhD Student at University of Maryland, College Park
Advised by Professor Tom Goldstein.
NLP, Graphs, mostly Deep Learning
Research
I am motivated by the belief that attempting to teach machines to understand and generate natural language (and sometimes failing) is a good way to learn what the real building blocks of general intelligence are. Determining whether current progress in language modeling is simply the result of web-scale memorization in parameter space, or the actual emergence of analogs to reasoning and cognition, is one of the most pressing open questions for the field to pursue.
In Tom’s lab, I recently spent the better part of a year working on techniques to discern whether the thing you’re currently reading or looking at was created by a human or generated by an AI system, because all of a sudden, that has become a real challenge. More generally, my research has explored various aspects of deep learning for discrete data like natural language and graphs.
About Me
Before starting my PhD at UMD, I worked at Carnegie Mellon University as a research engineer, and I completed my MS and BS in Computer Science at Washington University in St. Louis. Even before that, I received a diploma in Violin from Oberlin College and Conservatory of Music.
When I’m not coding, I like heading to the mountains ⛷🥾, spending an afternoon on a crazy recipe 🧑🏼‍🍳, and listening to Mahler’s symphonies straight through 🎻.
Papers
Baseline Defenses for Adversarial Attacks Against Aligned Language Models
arXiv 2023
Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein
[Paper]
Bring Your Own Data! Self-Supervised Evaluation for Large Language Models
arXiv 2023
Neel Jain*, Khalid Saifullah*, Yuxin Wen, John Kirchenbauer, Manli Shu, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein
On the Reliability of Watermarks for Large Language Models
arXiv 2023
John Kirchenbauer*, Jonas Geiping*, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein
Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust
NeurIPS 2023
Yuxin Wen, John Kirchenbauer, Jonas Geiping, Tom Goldstein
A Watermark for Large Language Models
ICML 2023, Outstanding Paper Award
John Kirchenbauer*, Jonas Geiping*, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein
Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery
NeurIPS 2023
Yuxin Wen*, Neel Jain*, John Kirchenbauer, Micah Goldblum, Jonas Geiping, Tom Goldstein
GOAT: A Global Transformer on Large-scale Graphs
ICML 2023
Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Renkun Ni, C Bayan Bruss, Tom Goldstein
[Paper]
How to Do a Vocab Swap? A Study of Embedding Replacement for Pre-trained Transformers
arXiv 2022
Neel Jain*, John Kirchenbauer*, Jonas Geiping, Tom Goldstein
[Paper]
What is Your Metric Telling You? Evaluating Classifier Calibration under Context-Specific Definitions of Reliability
ICLR 2022 Workshop on ML Evaluation Standards
John Kirchenbauer, Jacob R Oaks, Eric Heim