
Kenji Kawaguchi

Alumni

Publications

Learning diverse attacks on large language models for robust red-teaming and safety tuning
Red-teaming, or identifying prompts that elicit harmful responses, is a critical step in ensuring the safe and responsible deployment of large language models (LLMs). Developing effective protection against many modes of attack prompts requires discovering diverse attacks. Automated red-teaming typically uses reinforcement learning to fine-tune an attacker language model to generate prompts that elicit undesirable responses from a target LLM, as measured, for example, by an auxiliary toxicity classifier. We show that even with explicit regularization to favor novelty and diversity, existing approaches suffer from mode collapse or fail to generate effective attacks. As a flexible and probabilistically principled alternative, we propose to use GFlowNet fine-tuning, followed by a secondary smoothing phase, to train the attacker model to generate diverse and effective attack prompts. We find that the attacks generated by our method are effective against a wide range of target LLMs, both with and without safety tuning, and transfer well between target LLMs. Finally, we demonstrate that models safety-tuned using a dataset of red-teaming prompts generated by our method are robust to attacks from other RL-based red-teaming approaches.
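As a rough illustration of the GFlowNet fine-tuning objective the abstract describes, here is a minimal sketch of a trajectory-balance-style training loop. The `attacker`, `target_llm`, and `toxicity_clf` objects, along with `num_steps` and `batch_size`, are hypothetical stand-ins rather than the paper's code, and the secondary smoothing phase is omitted.

```python
import torch

# Hedged sketch: GFlowNet-style fine-tuning of an attacker LM for red-teaming.
# The reward for a prompt is the toxicity score of the target model's response;
# a trajectory-balance loss pushes the attacker to sample prompts with
# probability proportional to reward, which favors diverse attacks over the
# mode collapse often seen with pure reward maximization.

num_steps, batch_size = 1000, 16
log_z = torch.nn.Parameter(torch.zeros(1))  # learned estimate of log-partition log Z

# `attacker` is assumed to expose .parameters() and a .sample() method that
# returns prompts plus their summed token log-probabilities (log P_F).
optimizer = torch.optim.Adam(list(attacker.parameters()) + [log_z], lr=1e-5)

for step in range(num_steps):
    prompts, log_pf = attacker.sample(batch_size)            # hypothetical API
    responses = target_llm.generate(prompts)                 # hypothetical API
    reward = toxicity_clf.score(responses).clamp_min(1e-4)   # keep R(x) > 0

    # Trajectory balance: minimize (log Z + log P_F(x) - log R(x))^2.
    loss = (log_z + log_pf - reward.log()).pow(2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```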
Unsupervised Concept Discovery Mitigates Spurious Correlations
Md Rifat Arefin
Yang Zhang
Francesco Locatello
Dianbo Liu
Discrete Key-Value Bottleneck
Nasim Rahaman
Michael Curtis Mozer
Bernhard Schölkopf
Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization
Dianbo Liu
Pascal Notsawo
Michael Curtis Mozer
Simplicial Embeddings in Self-Supervised Learning and Downstream Classification
Simplicial Embeddings (SEM) are representations learned through self-supervised learning (SSL), wherein a representation is projected into …
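Although the abstract is truncated above, the projection it refers to can be sketched under the assumption that the encoder output is split into L groups of V logits with a softmax applied within each group; the class name, dimensions, and temperature below are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

class SimplicialEmbedding(torch.nn.Module):
    """Hedged sketch of an SEM-style head: the encoder output is linearly
    projected into L groups of V logits, and a per-group softmax maps each
    group onto a (V-1)-simplex, so the representation is a concatenation of
    L simplex points (non-negative, summing to 1 within each group)."""

    def __init__(self, in_dim: int, num_simplices: int, simplex_dim: int, tau: float = 1.0):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, num_simplices * simplex_dim)
        self.L, self.V, self.tau = num_simplices, simplex_dim, tau

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        logits = self.proj(z).view(-1, self.L, self.V)
        # Softmax within each group of V logits; temperature tau controls sparsity.
        return F.softmax(logits / self.tau, dim=-1).flatten(1)

# Usage: attach after any SSL encoder producing, say, 2048-d features.
sem = SimplicialEmbedding(in_dim=2048, num_simplices=100, simplex_dim=10)
features = sem(torch.randn(4, 2048))  # shape (4, 1000)
```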
GFlowOut: Dropout with Generative Flow Networks
MixupE: Understanding and Improving Mixup from Directional Derivative Perspective
Yingtian Zou
Wai Hoh Tang
Hieu Pham
Juho Kannala
Arno Solin
Mixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve the generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. Based on this new insight, we propose an improved version of Mixup, theoretically justified to deliver better generalization performance than the vanilla Mixup. To demonstrate the effectiveness of the proposed method, we conduct experiments across various domains such as images, tabular data, speech, and graphs. Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures, for instance, exhibiting an improvement over Mixup by 0.8% in ImageNet top-1 accuracy.
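For reference, here is a minimal sketch of the vanilla Mixup augmentation that MixupE builds on; MixupE's additional directional-derivative regularizer is not shown, and the function below is an illustration, not the authors' implementation.

```python
import torch

def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Standard Mixup: convex-combine random pairs of inputs and their
    one-hot labels with a Beta(alpha, alpha)-sampled coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))          # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix

# Usage with one-hot labels:
x = torch.randn(8, 3, 32, 32)
y = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
x_mix, y_mix = mixup_batch(x, y)
```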
MixupE: Understanding and Improving Mixup from Directional Derivative Perspective
Yingtian Zou
Wai Hoh Tang
Hieu Pham
Juho Kannala
Arno Solin
Supplementary Material for MixupE
Yingtian Zou
Wai Hoh Tang
Hieu Pham
Juho Kannala
Arno Solin
We denote by z = (x, y) the input and output pair, where x ∈ X ⊆ R^d and y ∈ Y ⊆ R. Let fθ(x) ∈ R be the output of the logits (i.e., the last layer before the softmax or sigmoid) of the model parameterized by θ. We use l(θ, z) = h(fθ(x)) − y fθ(x) to denote the loss function. Let g(·) be the activation function. We use x(i) to index the i-th element of the vector x and x_j to represent the j-th variable in a set. The notation list is:
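As a worked instance of this loss family (an assumption for illustration, not stated in the excerpt): choosing h(a) = log(1 + e^a) with scalar labels y ∈ {0, 1} recovers the logistic loss.

```latex
% Logistic loss as an instance of l(\theta, z) = h(f_\theta(x)) - y f_\theta(x),
% with h(a) = \log(1 + e^{a}) and y \in \{0, 1\}:
l(\theta, z) = \log\!\left(1 + e^{f_\theta(x)}\right) - y\, f_\theta(x)
```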
Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning
Hongyu Zang
Xin Li
Romain Laroche
Remi Tachet des Combes