@jwkirchenbauer | LinkedIn | Google Scholar | GitHub | [email protected]

Kirchenbauer_Academic_CV.pdf

PhD Student at University of Maryland, College Park

Advised by Professor Tom Goldstein.

Deep Learning &/in/for Language Modeling


Research

In Tom Goldstein’s lab at the University of Maryland, I spent the first part of my PhD working on techniques to discern whether the thing you’re currently reading or looking at was created by a human or generated by an AI system. With the release of ChatGPT in 2022, that suddenly became a very practical challenge. More generally, my research has explored robustness, reliability, and safety in deep learning, as well as understanding how training data impacts model behavior. I am predominantly motivated by the belief that attempting (and often failing) to teach machines to understand the world is a good way to discover, along the way, what the real building blocks of general intelligence are.


About

Before starting my PhD at UMD, I worked as a research engineer at the Software Engineering Institute (an FFRDC) at Carnegie Mellon University. I completed a BS and MS in Computer Science at Washington University in St. Louis in 2020 and received a diploma in Violin from Oberlin College and Conservatory of Music in 2017. When not doing research, I like being in the mountains and listening to Mahler.


Papers

Baseline Defenses for Adversarial Attacks Against Aligned Language Models

arXiv 2023

Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein

[Paper]

Bring Your Own Data! Self-Supervised Evaluation for Large Language Models

arXiv 2023

Neel Jain*, Khalid Saifullah*, Yuxin Wen, John Kirchenbauer, Manli Shu, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein

[Paper][Code]

On the Reliability of Watermarks for Large Language Models

arXiv 2023

John Kirchenbauer*, Jonas Geiping*, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein

[Paper][Code]

Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust

NeurIPS 2023

Yuxin Wen, John Kirchenbauer, Jonas Geiping, Tom Goldstein

[Paper][Code]

A Watermark for Large Language Models

ICML 2023, Outstanding Paper Award

John Kirchenbauer*, Jonas Geiping*, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein

[Paper][Code][Demo]

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery

NeurIPS 2023

Yuxin Wen*, Neel Jain*, John Kirchenbauer, Micah Goldblum, Jonas Geiping, Tom Goldstein

[Paper][Code][Demo]

GOAT: A Global Transformer on Large-scale Graphs

ICML 2023

Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Renkun Ni, C Bayan Bruss, Tom Goldstein

[Paper]

How to Do a Vocab Swap? A Study of Embedding Replacement for Pre-trained Transformers

arXiv 2022

Neel Jain*, John Kirchenbauer*, Jonas Geiping, Tom Goldstein

[Paper]

What is Your Metric Telling You? Evaluating Classifier Calibration under Context-Specific Definitions of Reliability

ICLR 2022 Workshop on ML Evaluation Standards

John Kirchenbauer, Jacob R Oaks, Eric Heim

[Paper][Poster][Talk][Code]

A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs

NeurIPS 2021 DistShift Workshop (Spotlight)

Mucong Ding*, Kezhi Kong*, Jiuhai Chen*, John Kirchenbauer, Micah Goldblum, David Wipf, Furong Huang, Tom Goldstein

[Paper][Code]