@jwkirchenbauer | LinkedIn | Google Scholar | GitHub | [email protected]

Kirchenbauer_Academic_CV.pdf

PhD Student at University of Maryland, College Park

Advised by Professor Tom Goldstein.

NLP, Graphs, mostly Deep Learning


Research

I am motivated by the belief that attempting to teach machines to understand and generate natural language (and sometimes failing) is a good way to learn more about what the real building blocks of general intelligence are along the way. Determining whether current progress in language modeling is simply the result of web-scale memorization in parameter space, or the actual emergence of analogs to reasoning and cognition, is one of the most pressing open questions for the field to pursue.

In Tom’s lab, I recently spent the better part of a year working on techniques to discern whether the thing you’re currently reading or looking at was created by a human or generated by an AI system, because all of a sudden, that has become a real challenge. More generally, my research has explored various aspects of deep learning for discrete data like natural language and graphs.


About Me

Before starting my PhD at UMD, I worked at Carnegie Mellon University as a research engineer, and I completed my MS and BS in Computer Science at Washington University in St. Louis. Even before that, I received a diploma in Violin from Oberlin College and Conservatory of Music.

When I’m not coding, I like the mountains ⛷🥾, spending an afternoon on a crazy recipe 🧑🏼‍🍳, and listening to Mahler’s symphonies straight through 🎻.


Papers

Baseline Defenses for Adversarial Attacks Against Aligned Language Models

arXiv 2023

Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein

[Paper]

Bring Your Own Data! Self-Supervised Evaluation for Large Language Models

arXiv 2023

Neel Jain*, Khalid Saifullah*, Yuxin Wen, John Kirchenbauer, Manli Shu, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein

[Paper][Code]

On the Reliability of Watermarks for Large Language Models

arXiv 2023

John Kirchenbauer*, Jonas Geiping*, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein

[Paper][Code]

Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust

NeurIPS 2023

Yuxin Wen, John Kirchenbauer, Jonas Geiping, Tom Goldstein

[Paper][Code]

A Watermark for Large Language Models

ICML 2023, Outstanding Paper Award

John Kirchenbauer*, Jonas Geiping*, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein

[Paper][Code][Demo]

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery

NeurIPS 2023

Yuxin Wen*, Neel Jain*, John Kirchenbauer, Micah Goldblum, Jonas Geiping, Tom Goldstein

[Paper][Code][Demo]

GOAT: A Global Transformer on Large-scale Graphs

ICML 2023

Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Renkun Ni, C Bayan Bruss, Tom Goldstein

[Paper]

How to Do a Vocab Swap? A Study of Embedding Replacement for Pre-trained Transformers

arXiv 2022

Neel Jain*, John Kirchenbauer*, Jonas Geiping, Tom Goldstein

[Paper]

What is Your Metric Telling You? Evaluating Classifier Calibration under Context-Specific Definitions of Reliability

ICLR 2022 Workshop on ML Evaluation Standards

John Kirchenbauer, Jacob R. Oaks, Eric Heim

[Paper][Poster][Talk][Code]