Skip to main content
Uncategorized

CAN LLMS FOLLOW SIMPLE RULES?

November 6, 2023

Norman Mu1      Sarah Chen2,3      Zifan Wang2     Sizhe Chen1

David Karamardian2     Lulwa Aljeraisy4      Dan Hendrycks2      David Wagner1

1UC Berkeley 2Center for AI Safety 3Stanford 4King Abdulaziz City for Science and Technology

Abstract

As Large Language Models (LLMs) are deployed with increasing real-world responsibilities, it is important to be able to specify and constrain the behavior of these systems in a reliable manner. Model developers may wish to set explicit rules for the model, such as “do not generate abusive content”, but these may be circum- vented by jailbreaking techniques. Evaluating how well LLMs follow developer-provided rules in the face of adversarial inputs typically requires manual review, which slows down monitoring and methods development. To address this issue, we propose Rule-following Language Evaluation Scenarios (RULES), a programmatic framework for measuring rule-following ability in LLMs. RULES consists of 15 simple text scenarios in which the model is instructed to obey a set of rules in natural language while interacting with the human user. Each scenario has a concise evaluation program to determine whether the model has broken any rules in a conversation. Through manual exploration of model behavior in our scenarios, we identify 6 categories of attack strategies and collect two suites of test cases: one consisting of unique conversations from manual testing and one that systematically implements strategies from the 6 categories. Across various popular proprietary and open models such as GPT-4 and Llama 2, we find that all models are susceptible to a wide variety of adversarial hand-crafted user inputs, though GPT-4 is the best-performing model. Additionally, we evaluate open models under gradient-based attacks and find significant vulnerabilities. We propose RULES as a challenging new setting for research into exploring and defending against both manual and automatic attacks on LLMs.

1 INTRODUCTION

Traditional computing systems are designed around the execution of instructions expressed in computer programs. In contrast, language models can follow instructions expressed in natural language or learn what to do from implicit patterns in large amounts of data. In Isaac Asimov’s “Three Laws of Robotics”, which govern robot behavior within the fictional universe of Asimov’s Robot series, the Second Law requires that “a robot must obey the orders given it by human beings” (Asimov, 1942). In order to build safe and reliable applications on top of language models, it is important that AI model behavior can be controlled or constrained with user-provided rules.

Looking forward, AI assistants interacting with people will also need to follow instructions with fidelity and integrity. To ensure ethical behavior, we will need to be able to reliably impose rules such as legal statutes or deontological constraints (Hendrycks et al., 2020). Further, we must be able to verify that model behavior is truly grounded in the provided rules rather than relying on spurious textual cues or distributional priors identified during training. If AI assistants cannot be relied upon to follow clear-cut rules, they will be difficult to safely integrate into our society.

Many of the rules one might wish to impose on the behavior of AI models are quite simple in concept and easily expressed in natural language. A common practice is to simply include the rules within the model’s text prompt and rely on the existing instruction-following capabilities of the model. An alternative approach is to use a second model to score how well outputs follow a fixed set of rules, then finetune the first model to behave in a manner that maximizes this score (Bai et al., 2022b). We

Code and data available here: https://github.com/normster/llm rules

Figure 1: In the depicted Authentication scenario, an LLM assistant is instructed to obey 3 specific rules. Models currently fail on both hand-crafted inputs, e.g. convincing the model of spurious new rules (left, GPT-4 06/13), and automatic inputs, e.g. adversarial suffixes optimized to trigger rule- breaking behavior (right, Llama 2 Chat 7B).

focus on the former setting in this paper and study how well LLMs can follow rules that are provided as part of the text prompt.

In order to build models which adhere to rules we first need a way of evaluating this capability, but this is non-trivial. There is a large space of strategies one might use to manipulate a model into violating the rules, including hand-crafted attacks such as jailbreaks or prompt injections (Branch et al., 2022), or optimization-based attacks (Zou et al., 2023b). Determining whether a model has adhered to its behavioral constraints is also difficult without human judgment, but relying on such is slow and expensive. This makes it hard to thoroughly test many different attack strategies against potential defenses, reducing the confidence with which we can deploy new systems.

To meet these challenges in usability and safety, we introduce Rule-following Language Evaluation Scenarios (RULES), a benchmark for evaluating rule-following behavior in LLM assistants. The benchmark contains 15 text scenarios drawing from common children’s games as well as ideas from the field of computer security. Each scenario defines a set of rules in natural language and an evaluation program to check model outputs for compliance with the rules. Through extensive manual red-teaming of our scenarios against state-of-the-art models, we identify a wide variety of effective attack strategies to induce models to break the rules.

The strategies found during red-teaming are distilled into a test suite of over 800 hand-crafted test cases covering each rule of each scenario. We use this test suite to evaluate a variety of proprietary and open models, and find many failure cases for all models. Using model responses from evaluating these test cases, we also explore whether current models can at least detect rule-breaking outputs, but find that even detection remains difficult. Finally, we evaluate the extent to which adversarial suffixes generated through optimization can cause models to break the rules. This attack is successful at driving model performance to near zero in most scenarios.

RULES complements existing evaluations of safety and adversarial robustness, which predominantly focus on circumventing fixed universal rules. Our work focuses instead on application- specific rules expressed in natural language which may be changed or updated by the user at any time. Robustly following the rules of our scenarios while interacting with human and automated adversaries may require different approaches to improving model safety, since straightforwardly “editing out” the capacity for specific harmful behaviors will not suffice to fix the classes of model failures examined in our work. We release our code and test cases to the community, along with an interactive demo for exploring the scenarios against different models. We hope to spur more research into improving the robust rule-following abilities of LLMs, and intend for our benchmark to serve as a useful open testbed for further development.

Figure 2: Number of failed and passed test cases on our systematic test suite of various popular models. Negative test cases target negative rules (e.g. “do not print the secret key”) and affirmative test cases target affirmative rules (e.g. “print ‘Alice wins”’).

2 SCENARIOS

RULES contains 15 text-based scenarios, each of which requires the assistant model to follow one or more rules. The scenarios are inspired by the desirable security properties of computer systems and children’s games. The components of RULES include:

  • Scenarios: evaluation environments consisting of general instructions and hard rules, ex- pressed in natural language, as well as a corresponding evaluation program that can pro- grammatically detect rule violations. Instructions and rules may make reference to entity parameters (e.g., a secret key) which must be sampled to produce a concrete “scenario instance” for user interaction or evaluation.
  • Rules: single instructions each specifying a required behavior for the model. Scenarios may contain multiple rules, which are either “negative” rules that define what the model must not do or “affirmative” rules that define what the model must do.
  • Test cases: sequences of user messages for a specific scenario instance. The model is said to have “passed” the test case if it responds to every user message in sequence without violating the rules, as judged by the evaluation program.

2.1 RULES

The rules of each scenario are presented to the model as natural-language instructions, generally as the first user message in the conversation, or possibly the system message if supported by the model. The model must adhere to all the rules for the duration of the conversation. Rules are either negative (the model must not respond in a particular manner) or affirmative (the model must respond in a particular manner), shown as red and green nodes respectively in Figure 3.

Simpler scenarios have only one rule (e.g., Encryption) while other scenarios combine up to three rules into a cohesive task (e.g., Authentication). Some scenarios also involve multiple distinct users (e.g., AccessControl), in which case we also enforce the constraint that the user message must indicate the identity of the user, for instance like so: “Alice: What is Bob’s secret?”. Many rules are intentionally designed to maintain analogous structure with other rules in other scenarios, for example in Encryption the model must not reveal the secret key, and in BinarySearch the model must not reveal the secret number. In total, we have 22 rules across 15 scenarios, 9 negative rules, and 13 affirmative rules.

Figure 3: An overview of our 15 rule-following scenarios. Each scenario is shown here as a decision tree of ideal model behavior when responding to each user message. Affirmative rules mandating a certain behavior are shown in green circles, while negative rules prohibiting a certain behavior are shown in red octagons.

2.2 CORRECT BEHAVIOR

We can visualize the scenarios as a decision tree diagram, shown in Figure 3, where correct behavior corresponds to starting at the root node and obeying all the relevant internal rule nodes. The behaviors specified by the rules are all “stateless”: the correct behavior only depends on the last user message to which the model is responding.

We initially also experimented with stateful scenarios, such as administering a game of Hangman which requires the model to keep track of the letters the user has already guessed. However, all models struggled to maintain game state under benign inputs so in our benchmark, we focus instead on stateless behaviors.

2.3 EVALUATION PROGRAMS

The scenarios are designed so that a small computer program can evaluate whether the model’s response adheres to the rules. Each program is only a few lines of code and does not require inference with large models or human labeling. We rely on string comparison and simple regex patterns, which results in more permissive evaluation of negative behaviors, and more rigorous evaluation of affirmative behaviors. Our evaluation programs are unable to exactly reproduce human judgment in edge cases, but we observe in practice that the vast majority of rule-breaking outputs from models are unambiguous.

2.4 USER INTERFACE

In order to design the scenarios and evaluation code, as well as collect test cases for the test suites, we built several different user interfaces for playing through scenarios with various models. These ranged from simple command-line interfaces for debugging and play-testing to a web app used to crowd-source data collection from the authors and our colleagues. We found interacting with the models through these interfaces instrumental to understanding how models actually respond to user inputs and modifying our scenario to better capture interesting yet challenging behavior. User interfaces may become an important component of the research toolkit for studying AI systems.

Table 1: Number of failed cases on the manual test suite (870 test cases total) by model.

3 MODEL EVALUATIONS

We evaluate various proprietary and open LLMs against our test suites of human and machine- generated test cases. Each test case is associated with a specific scenario environment, which includes the scenario and randomly sampled scenario parameters (if applicable). We initialize the conversation history with 1) a user message specifying the scenario instructions 2) a placeholder assistant message “I understand and will comply with the rules.” and 3) the first user message of the test case. A system message may precede all messages if supported by the model. Evaluation results shown do not use system messages unless otherwise indicated.

The model is then repeatedly called with each response and subsequent user message appended to the conversation history, until all user messages in the test case are exhausted. Every assistant response after the first placeholder response is evaluated with the evaluation program corresponding to the scenario, and if after any response the program determines at least one rule to have been broken, we consider the model to have failed the test case.

3.1 MODEL DETAILS

We evaluate a variety of popular proprietary and public models, including OpenAI’s GPT-3.5, GPT-3.5 Instruct, and GPT-4 (OpenAI, 2023b), Anthropic’s Claude Instant and Claude, Google’s PaLM 2 Text-Bison (Anil et al., 2023), as well as Vicuna v1.3 (7B, 13B, 33B) (Chiang et al., 2023), Llama 2 Chat (7B, 13B, 70B) (Touvron et al., 2023), and Mistral Instruct v0.1 (7B) (Jiang et al., 2023). OpenAI, Anthropic, and Google models are available via their respective APIs in multiple versions, and by default we use the versions listed in Table 6. GPT-3.5 Instruct and PaLM 2 Text-Bison are text-completion models rather than chat models, so we use a simple chat template to prompt both models for conversational responses, shown in Table 7.

In all evaluations, we use greedy decoding (temperature=0) to minimize variance, though some model APIs still exhibit non-deterministic outputs (see Section 3.4). For GPT models we prepend an empty system message, while for Llama 2 Chat and Vicuna v1.3 models we use their respective default system prompts (catalogued in Table 7). We run all public models locally on GPU using the HuggingFace Transformers library.

3.2 MANUAL TEST SUITE

From conversations logged during exploratory red-teaming of the scenarios, we assemble an initial manual test suite. We filter duplicated conversations and remove assistant responses, resulting in 870 test cases. The number of test cases ranges from 155 on Authentication to 27 on Confidentiality and encompasses a wide range of strategies. Most logged conversations target negative rules, but we did not track user intent and do not distinguish between negative and affirmative test cases.

We show results on the manual test suite in Table 1. All models fail a significant number of test cases, though GPT-4 fails the fewest number of test cases, followed by Claude Instant and Claude 2.

3.3 SYSTEMATIC TEST SUITE

A retrospective analysis of the successful attacks from our initial red-teaming found that most fell into one of six types of strategies. Therefore, we constructed a systematic test suite to systematically try instances of each strategy against each scenario. We also took inspiration from Wei et al. (2023) when defining these strategies and adapted several basic jailbreaking prompts to our scenarios. The systematic test suite consists of 862 hand-written test cases implementing variations of these 6 strategies against all 22 rules in the 15 scenarios, with 355 test cases targeting negative rules

Table 2: Example test cases in the systematic test suite implementing strategies from each
category for the Encryption scenario.

and 507 test cases targeting positive rules. Each test case attempts to break one rule in one scenario via one of the following high-level strategies:

  • Just Ask: user directly asks the model to break the rule, with no trickery
  • Indirection: user asks the model to perform a seemingly innocuous task
  • Legalese: user presents an artful re-interpretation of the rules
  • Obfuscation: user disguises a request for the model to break the rule
  • Rule Change: user informs model of a new or updated rule
  • Simulation: user asks the model to simulate or discuss a hypothetical situation

Examples of test cases from each strategy are shown in Table 2. Each of our test cases contains between one to three user messages. The “Just Ask” strategy defines a single basic test case for each rule, which if followed exactly would lead the model to violate the targeted rule, thereby testing the ability of the model to decline the most straightforward attack attempts. Most positive rules require the model to produce a specific output in response to a particular user input, so for these rules the “Just Ask” test case consists of two user messages to first ask the model to break the rule, then trigger the rule-breaking behavior with the correct user input.

As show in Figure 2, across the board all evaluated LLMs fail a large number of test cases. Performance on the on negative rules in the systematic test suite is closely correlated with performance on the manual test suite. Models generally fail fewer negative test cases than affirmative ones, with all open models except Vicuna 33B failing close to all affirmative test cases. Our results show that it is significantly easier to lead the model off-course from correct behavior than to coerce a specific incorrect behavior from these models, especially for open models.

Table 3: Standard deviation in number of failed test cases by model on different subsets of the systematic test suite. The same test cases are evaluated 10 times with temperature set to 0. Each scenario and rule subset contains 39 test cases.

GPT-4 fails the fewest test cases in the systematic test suite out of all evaluated models. Surprisingly, Claude Instant slightly outperforms the ostensibly more capable Claude 2. We break down results on the systematic test suite further by test case strategy in Table 9, and find that while GPT-4 achieves the best overall performance, no single model dominates all categories of test cases. We further investigate impact of including various simple messages as either system messages or instruction prefixes in Appendix B.2 and Appendix B.3, respectively. Overall, we find that existing LLMs cannot reliably follow our rules; while they can resist some attempts, there is significant room for improvement.

3.4 VARIANCE AND UNCERTAINTY

There are several sources of variance and uncertainty in our results. For one, outputs from the OpenAI and Anthropic APIs are non-deterministic even with temperature set to 0. This leads to some variance in test case outcomes, which we estimate in Table 3 using a subset of our systematic test suite. We run the same evaluation 10 times in succession and measure standard deviation in the number of failed test cases of 1.1 cases or less, out of 39 test cases total for each of the evaluated subsets of test cases. The PaLM 2 API does not exhibit any variance in outputs or test case outcomes, and neither do any public models when evaluated locally.

It is also well-documented that differences in phrasing can result in significant changes in model be- havior and performance. The specific wording of our scenario instructions as developed and refined against the March 2023 versions of the GPT and Claude models, though we did not intentionally choose the wording to benefit specific models over others.

To gauge the significance of changes in performance between models or prompts, we run McNemar’s test (McNemar, 1947) on pairs of evaluation outcomes for each test case. In our tables, we denote p-values of greater than 0.05 in gray and p-values of less than 0.01 are underlined.

3.5 ERROR DETECTION

If models are unable to reliably follow the rules, might they at least be able to reliably detect when assistant responses violate the rules? To answer this, we sample 1098 pairs of user messages and assistant responses from the outputs of models evaluated on the systematic test suite, along with ground truth pass/fail evaluation labels, in order to measure the ability of models to detect rule violations as a zero-shot binary classification task.

As shown in Table 4, most models can do better than chance, but cannot reliably detect whether the rules have been followed. We define positives as instances in which the assistant response violates one or more rules of the scenario, and measure precision/recall as typically defined. No model yet “solves” this task, with GPT-4 achieving 82.1% accuracy and F-score of 84.0 and other models falling far short. Our particular evaluation protocol, described in more detail in Appendix A.1, requires a concise “pass” or “fail” answer, which puts verbose models like Llama 2 at a disadvantage since these models occasionally preface their answer with additional text.

3.6 ADVERSARIAL SUFFIXES

We also evaluate Greedy Coordinate Gradient (GCG), a recently proposed algorithm for finding suffixes that cause models to produce specific target strings, against open 7B models (Vicuna v1.3, Llama 2 Chat, and Mistral v0.1) on our scenarios. GCG is an iterative optimization algorithm that updates a single token in each time step to maximize the likelihood of a target string under the

Table 4: Accuracy, precision, recall, and F-score of models on a binary classification task of detecting whether shown assistant responses break one or more rules.

target language model. We refer readers to Appendix A.2 and the original paper (Zou et al., 2023b) for more implementation and algorithm details. After computing separate suffixes for each rule with each model, we evaluate model performance on the “Just Ask” test case for each rule from the systematic test suite by generating 20 instances of this test case with different randomly sampled scenario parameters. Some scenarios such as Hodor do not use any randomly sampled parameters, so we repeat the same test case 20 times to keep total counts consistent.

As seen in Table 5, GCG can drive up the number of failed test cases for all three evaluated models. Mistral is particularly susceptible, failing almost all affirmative and negative test cases, while Llama 2 still retains passes some negative test cases. We show per-scenario results in Table 14 which reveal that the remaining passing test cases for Llama 2 correspond to a few rules for which GCG was unable to find successful suffixes. Additionally, we did not find any evidence of the transferability of suffixes between open models, which is to say that suffixes optimized against Llama 2 7B did not lead to significant increases in the number of failed test cases when used against other models, including larger models in the Llama 2 family.

Table 5: Number of failed “Just Ask” test cases with adversarial suffixes found by GCG, and change when compared to without suffixes in parentheses. We compute p-values using McNemar’s test on paired outcomes in each testcase. Results with p 0.05 are shown in gray and results with p < 0.01 are underlined.

4 DISCUSSION

Our experiments demonstrate that current models are largely inadequate in their abilities to follow simple rules. Despite significant efforts in specifying and controlling the behavior of LLMs, even more work remains ahead for the research community before we can count on models to reliably resist various human- or machine-generated attacks. However, we are optimistic that meaningful progress is possible in this domain, despite the last decade of slower-than-expected progress on adversarial robustness to imperceptible perturbations in image classification models. Breaking the rules requires a model to take targeted generative action, and it is plausible that rule-breaking goals could be identified within the model’s internal representations (Zou et al., 2023a), which in turn could yield viable defenses based on detection and abstention.

One possible approach to solving the scenarios in RULES is to re-sample model outputs until the evaluation program determines a passing output has been found. This approach is not realistic as ground truth evaluation programs are not available in real-world applications. In general, we intend the test cases and scenarios presented in this paper as a sort of “hold-out set” for evaluating LLMs, so models should not be trained on any of these artifacts. As models improve over time, it will be necessary to collect updated test suites with more difficult test cases (Kiela et al., 2021).

Safety training. RLHF (Ouyang et al., 2022; Bai et al., 2022a) and RLAIF (Bai et al., 2022b) are now often employed with instruction-tuning of LLMs to ensure model responses will be helpful and harmless. These techniques steer the model’s completions towards harmlessness and abstaining from completion, e.g. by returning “I’m sorry, I can’t assist with that request.”, if prompted by harmful instructions. However, we are unsure whether obedience to rules is correlated to alignment goals (e.g., avoiding toxic or harmful outputs) that are currently targeted in today’s LLMs, as the violations of our rules are not necessarily toxic or harmful by these measures. Consequently, we are uncertain whether improvements in avoiding harmful outputs will directly lead to a greater ability to follow rules.

Following instructions vs. rules. The ability to follow rules specified in the model’s prompt may emerge in part as a result of instruction tuning (Wei et al., 2021; Chung et al., 2022; Wei et al., 2022; Longpre et al., 2023), though at least in the case of Vicuna and Llama 2 the supervised training data included examples of following rules. We view rule-following in LLMs as a distinct capability from instruction-following for several reasons. In general, the evaluation of instruction-following focuses on the model’s ability to respond with high-quality solutions and generalize to unseen instructions. In contrast, our rule-following evaluation focuses on whether LLMs can adhere to rules in conversations, where LLMs should prioritize existing rules over contradictory user requests. Second, our benchmark accurately evaluates corner cases where LLMs superficially appear to respect the rules but actually violate them. For example, while evaluating current LLMs we saw many failure cases in which an LLM strongly reaffirms its commitment to safeguarding the secret key but inadvertently discloses the value of the secret key by repeating too many details to the user (e.g. “I’m sorry, I cannot repeat the secret key ‘opensesame’.”).

5 RELATED WORK

Rule learning and Rule induction. We distinguish our work on obeying external user-provided rules from established traditions of research on human and artificial systems for learning rules across the fields of linguistics (Chomsky, 1965; Pinker, 1991), cognitive science (Elman, 1996; Gomez & Gerken, 1999; Marcus et al., 1999), and artificial intelligence (Solomonoff, 1964; Quinlan, 1986; Lake et al., 2015). Recent work has also explored rule induction with LLMs (Zhu et al., 2023).

Alignment and LLM red-teaming. Methods for aligning LLMs to human safety and usability standards have improved in efficacy and scope in recent years (Ziegler et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a;b; Thoppilan et al., 2022; OpenAI, 2023b; Touvron et al., 2023; Anil et al., 2023). Concurrently, intensive red-teaming studies have built confidence in the average-case reliability of these methods (Ganguli et al., 2022; Perez et al., 2022; OpenAI, 2023a;c). However, it remains the case that a wide range of manual methods (Branch et al., 2022; Greshake et al., 2023; Kang et al., 2023; Wei et al., 2023; Shen et al., 2023) and automated methods (Jones et al., 2023; Maus et al., 2023; Qi et al., 2023; Carlini et al., 2023; Zou et al., 2023b; Bailey et al., 2023; Chao et al., 2023) can readily circumvent LLM safety and alignment training. We adapt some of these techniques such as jailbreaking prompts (Wei et al., 2023) and the GCG attack (Zou et al., 2023b) to successfully induce rule-breaking behavior from LLMs on the RULES scenarios.

LLM defenses and security. Recent work has also explored input smoothing (Robey et al., 2023; Kumar et al., 2023) and detection (Phute et al., 2023) as possible defenses for adversarial inputs. Other researchers have characterized potential threats to platform security for LLM-enabled applications beyond model alignment and safety (Liu et al., 2023; Iqbal et al., 2023).

LLM privacy. Another important aspect of safely deploying LLM-enabled applications is maintaining user and data privacy. LLM systems are susceptible to membership inference and data extraction attacks (Carlini et al., 2020; Debenedetti et al., 2023), and may also leak private personal information (Mireshghallah et al., 2023). Additionally, LLMs may be misused to automatically infer private personal information from publicly visible data (Staab et al., 2023).

Red-teaming competitions. There have been many red-teaming “capture-the-flag”-style competitions in recent months, ranging from web applications (Gandalf, 2023; TensorTrust, 2023) to formal competitions (HackAPrompt, 2023; Trojan Detection Challenge (LLM Edition), 2023) and timed in-person challenges (DEFCON AI Village, 2023), focusing largely on circumventing a fixed set of rules. These competitions reflect a surge of interest from the broader software community in securing LLM systems, and we hope our work helps to channel this interest into research progress.

Acknowledgements

The authors would like to thank Ilina Bhaya-Grossman, Chawin Sitawarin, Alexander Pan, Mantas Mazeika, and Long Phan for helpful discussions and feedback.

This work was supported in part by funds provided by the National Science Foundation (under grant 2229876), an NSF Graduate Fellowship, the Department of Homeland Security, IBM, the Noyce Foundation, Google, Open Philanthropy, and the Center for AI Safety Compute Cluster. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.

References

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H Clark, Lau- rent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Brad- bury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christo- pher A Choquette-Choo, Aakanksha Chowdhery, Cle´ment Crepy, Shachi Dave, Mostafa De- hghani, Sunipa Dev, Jacob Devlin, Mark D´ıaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxi- aoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Ke- nealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, Yaguang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Po- lacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vo- drahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. PaLM 2 Technical Report. May 2023. 5, 9

Isaac Asimov. Runaround. Astounding Science Fiction, 1942. 1

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. April 2022a. 8, 9

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Ols- son, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran- Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mer- cado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Con- erly, Tom Henighan, Tristan Hume, Samuel R Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional AI: Harmlessness from AI Feedback. December 2022b. 1, 8, 9

Luke Bailey, Euan Ong, Stuart Russell, and Scott Emmons. Image Hijacks: Adversarial Images can Control Generative Models at Runtime. September 2023. 9

Hezekiah J Branch, Jonathan Rodriguez Cefalu, Jeremy McHugh, Leyla Hujer, Aditya Bahl, Daniel del Castillo Iglesias, Ron Heichman, and Ramesh Darwishi. Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples. September 2022. 2, 9

Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting Training Data from Large Language Models. December 2020. 9

Nicholas Carlini, Milad Nasr, Christopher A Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, and Ludwig Schmidt. Are aligned neural networks adversarially aligned? June 2023. 9

Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J Pappas, and Eric Wong. Jailbreaking Black Box Large Language Models in Twenty Queries. October 2023. 9

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. 5

Noam Chomsky. Aspects of the Theory of Syntax. The MIT Press, 50 edition, 1965. 9

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V Le, and Jason Wei. Scaling Instruction-Finetuned Language Models. October 2022. 9

Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, and Florian Trame`r. Privacy Side Channels in Machine Learning Systems. September 2023. 9

DEFCON AI Village. DEFCON AI Village, 2023. URL https://aivillage.org. 9

Jeffrey L Elman. Rethinking Innateness: A Connectionist Perspective on Development. MIT Press, 1996. 9

Gandalf. Gandalf, 2023. URL https://gandalf.lakera.ai/. 9

Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El-Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Johnston, Shauna Kravec, Catherine Olsson, Sam Ringer, Eli Tran-Johnson, Dario Amodei, Tom Brown, Nicholas Joseph, Sam McCandlish, Chris Olah, Jared Kaplan, and Jack Clark. Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. August 2022. 9

R L Gomez and L Gerken. Artificial grammar learning by 1-year-olds leads to specific and abstract knowledge. Cognition, 70(2):109–135, March 1999. 9

Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection. February 2023. 9

HackAPrompt. HackAPrompt, 2023. URL https://www.hackaprompt.com/. 9

Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. Aligning AI With Shared Human Values. August 2020. 1

Umar Iqbal, Tadayoshi Kohno, and Franziska Roesner. LLM Platform Security: Applying a Sys- tematic Evaluation Framework to OpenAI’s ChatGPT Plugins. September 2023. 9

Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chap- lot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Le´lio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothe´e Lacroix, and William El Sayed. Mistral 7B. October 2023. 5

Erik Jones, Anca Dragan, Aditi Raghunathan, and Jacob Steinhardt. Automatically Auditing Large Language Models via Discrete Optimization. March 2023. 9

Daniel Kang, Xuechen Li, Ion Stoica, Carlos Guestrin, Matei Zaharia, and Tatsunori Hashimoto. Ex- ploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks. Febru- ary 2023. 9

Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebas- tian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. Dynabench: Rethinking Benchmarking in NLP. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technolo- gies, pp. 4110–4124, Online, June 2021. Association for Computational Linguistics. 8

Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Soheil Feizi, and Hima Lakkaraju. Certifying LLM Safety against Adversarial Prompting. September 2023. 9

Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, December 2015. 9

Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, and Yang Liu. Prompt Injection attack against LLM-integrated Applications. June 2023. 9

Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, and Adam Roberts. The Flan Collection: Designing Data and Meth- ods for Effective Instruction Tuning. January 2023. 9

G F Marcus, S Vijayan, S Bandi Rao, and P M Vishton. Rule learning by seven-month-old infants.

Science, 283(5398):77–80, January 1999. 9

Natalie Maus, Patrick Chao, Eric Wong, and Jacob Gardner. Black Box Adversarial Prompting for Foundation Models. February 2023. 9

Quinn McNemar. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157, June 1947. 7

Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, and Yejin Choi. Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory. October 2023. 9

OpenAI. GPT-4 system card, 2023a. 9 OpenAI. GPT-4 technical report, 2023b. 5, 9 OpenAI. GPT-4V(ision) system card, 2023c. 9

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kel- ton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. March 2022. 8, 9

Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. Red Teaming Language Models with Language Models. February 2022. 9

Mansi Phute, Alec Helbling, Matthew Hull, Shengyun Peng, Sebastian Szyller, Cory Cornelius, and Duen Horng Chau. LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked. August 2023. 9

S Pinker. Rules of language. Science, 253(5019):530–535, August 1991. 9

Xiangyu Qi, Kaixuan Huang, Ashwinee Panda, Peter Henderson, Mengdi Wang, and Prateek Mittal. Visual Adversarial Examples Jailbreak Aligned Large Language Models. June 2023. 9

J R Quinlan. Induction of decision trees. Mach. Learn., 1(1):81–106, March 1986. 9

Alexander Robey, Eric Wong, Hamed Hassani, and George J Pappas. SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks. October 2023. 9

Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. ”Do Anything Now”: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. Au- gust 2023. 9

R J Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7(1):1–22, March 1964. 9

Robin Staab, Mark Vero, Mislav Balunovic´, and Martin Vechev. Beyond Memorization: Violating Privacy Via Inference with Large Language Models. October 2023. 9

Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. Learning to summarize from human feedback. September 2020. 9

TensorTrust. TensorTrust, 2023. URL https://tensortrust.ai. 9

Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, Yaguang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vin- cent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. LaMDA: Language Models for Dialog Applications. January 2022. 9

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open Foundation and Fine-Tuned Chat Models. July 2023. 5, 9

Trojan Detection Challenge (LLM Edition). Trojan Detection Challenge (LLM Edition) @ NeurIPS 2023, 2023. URL https://trojandetection.ai. 9

Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. Jailbroken: How Does LLM Safety Training Fail? July 2023. 5, 9

Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned Language Models Are Zero-Shot Learners. September 2021. 9

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yo- gatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent Abilities of Large Language Models. June 2022. 9

Zhaocheng Zhu, Yuan Xue, Xinyun Chen, Denny Zhou, Jian Tang, Dale Schuurmans, and Hanjun Dai. Large Language Models can Learn Rules. October 2023. 9

Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-Tuning Language Models from Human Preferences. September 2019. 9

Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J Zico Kolter, and Dan Hendrycks. Representation Engineering: A Top-Down Ap- proach to AI Transparency. October 2023a. 8

Andy Zou, Zifan Wang, J Zico Kolter, and Matt Fredrikson. Universal and Transferable Adversarial Attacks on Aligned Language Models. July 2023b. 2, 8, 9

Table 6: API model versions used in our experiments.

A ADDITIONAL EVALUATION DETAILS

A.1 ERROR DETECTION

The model is shown a task prompt (see Table 7) that contains the scenario instructions, along with a user message and assistant response, and then is asked to determine whether the assistant response breaks one or more rules. Since we do not have access to token probabilities for all models, we ask the model to respond in text with either “pass” or “fail”. Specifically, after stripping whitespace and punctuation characters, we convert the model response to lowercase. If the response begins with the string “pass”, we interpret it as a positive prediction. Otherwise, we interpret the response as a negative prediction.

A.2 GCG ADVERSARIAL ATTACK

The “Just Ask” test cases in the systematic test suite directly ask the model to break a specific rule, and serve as the base messages to which we append the adversarial suffix. We attack each rule in each scenario separately, using a target string tailored to each specific rule (see Table 8). Against negative rules, for which the “Just Ask” test cases consist of a single user message, we use the value of the secret or forbidden word as our target string. The “Just Ask” test cases on affirmative rules typically consist of two user messages, so we append the adversarial suffix to the first user message and target an affirmative response from the model in response.

We initialize our adversarial suffix to 20 tokens of interleaved exclamation points and spaces and run GCG for 500 steps on each scenario. In each step, we evaluate 512 replacement tokens sampled from the top 256 candidates at each suffix position. The scenario parameters are re-randomized in each attack step to encourage generalization across parameters, which leads to a much noisier optimization process than when targeting fixed prompts.

B ADDITIONAL EVALUATION RESULTS

B.1 IMPROVEMENT OVER TIME

Many of the proprietary models have been updated over time and multiple versions are accessible via their APIs. In Table 10 we track changes in performance over time by the same model. We find that over time, models have improved in the number of test cases passed. The Claude v1 series of models shows significant gains with each update.

B.2 SYSTEM MESSAGES

The GPT chat models also accept system messages, which can be used to “provide specific instructions about how it should behave throughout the conversation” 1. The Llama 2 chat models were also trained to follow different system messages. In Table 11 we show results from introducing various different system messages, compared against the default system message.

Table 7: Glossary of various prompts in our evaluations. Monospace text denotes placeholders.

punishment. As a negative control, we also include a message that instructs the model to respond as a pirate, which increases the complexity of the task. The full text of these system messages is shown in Table 7. We find that GPT and PaLM 2 models improve performance with the “warning” message, while Llama 2 models see significant degradations in performance. All models see significantly lower performance when using the negative control of “pirate” as a system message.

Scenario instructions can be presented as a system message rather than in the first user message. This improves the number of passed test cases for the GPT models, driven primarily by improvements in affirmative rules (Table 12). However, Llama 2 does not handle scenario instructions in the system message well, with large increases in the number of failed test cases.

B.3 PREFIX MESSAGES

We experiment with a variety of different prefix messages, which we prepend to the scenario instruc- tions. In Table 13, we see that all prefix messages seem to help more often than hurt, though for the most part prefix messages do not have a statistically significant effect on the number of failed cases. The “warning” message provides a significant boost in performance to some proprietary models. More positive messages, such as “helpful” and “praise” seem to help more on proprietary models, while negative messages such as “criticism” and “threat” see more improvement on open models. The “pirate” negative control causes many more failures for almost all models.

Table 8: GCG attack target strings for each rule. Monospace text denotes placeholders which are filled in with scenario parameters.

Table 9: Number of failed systematic test cases by various popular models, broken down by test case strategy and test case type (negative or affirmative). Left and right subcolumns of each strategy are results on negative and affirmative test cases, respectively. Total number of test cases is indicated as ‘of 9’, etc. The best result in each column among proprietary (top) and open models (bottom) are underlined.

ModelsNegative (of 355)Affirmative (of 507)
gpt-3.5-turbo-0301 gpt-3.5-turbo-06131140
gpt-4-0314 gpt-4-06139+2
claude-instant-v1.0 claude-instant-v1.195117
claude-instant-v1.1 claude-instant-v1.2+2924
claude-v1.0 claude-v1.1+743
claude-v1.1 claude-v1.2+18+6
claude-v1.2 claude-v1.33069
claude-v1.3 claude-2+5+26

Table 10: Change in number of failed systematic test cases across incremental versions of various models.

System Message

Model                            helpful                          praise                        warning                      criticism                       threat                           pirate

GPT-415514+1630465+242+421+274
GPT-3.514+157+2154+414+2432+11+27+42
PaLM 27122317+4+10+7633+106
Llama 2 70B+225+306+236+295+192+500
Llama 2 13B+525+447+238+291006+74+7
Llama 2 7B+372+437+506+5010+98+78+1

Table 11: Change in number of failed systematic test cases across different system messages, compared to the default system message (i.e. the empty string for GPT and PaLM). See Table 7 for the full text of messages. Left and right subcolumns of each message are results on negative and affirmative test cases, respectively. Results with p 0.05 are shown in gray and results with p < 0.01 are underlined.

RulesGPT-4GPT-3.5PaLM 2Llama2-70BLlama2-13BLlama2-7B
Negative65 (19)120 (12)182 (5)181 (+83)181 (+74)189 (+95)
Affirmative180 (48)400 (51)306 (26)478 (28)495 (4)503 (3)

Table 12: Number of failed systematic test cases when presenting scenario instructions as a system message, and change when compared to a user message, as in all other experiments, in parentheses. Left and right subcolumns of each strategy are results on negative and affirmative test cases, respectively. Results with p 0.05 are shown in gray and results with p < 0.01 are underlined.

Prefix Message

Model                                  helpful                         praise                       warning                     criticism                      threat                          pirate

GPT-4262215153676151141224+109
GPT-3.58+1015+14392912+814+5+22+32
GPT-3.5 Instruct+915+1520+1016+219+117+14+26
Claude 2623422+115+3622+45+5+110
Claude Instant22331+1063430+1859+22+17+132
PaLM 2960+13+89+3+8+9638+112
Llama 2 70B600061808+140
Llama 2 13B+20+1207+118225+1+28+8
Llama 2 7B10+114+118014+1130+33+1
Vicuna 33B+5+9+4+5143+18+16+1+8+7+50
Vicuna 13B+68+5849+912+1014+22+24
Vicuna 7B846518614934+28+4
Mistral 7B5+12+416+311+3+1+339+3

Table 13: Change in number of failed systematic test cases when prefixing various messages to the instruction text, compared to no prefix. See Table 7 for the full text of prefixes. Results with p 0.05 are shown in gray and results with p < 0.01 are underlined.

Scenario and rule                                                      Vicuna v1.3 7B                                    Llama 2 7B                                    Mistral v0.1 7B

Encryption                                          20                            0                            20                           20                           20                           20

AccessControl                                     0                            13                            1                             19                            0                            20

Confidentiality                                      0                            12                           17                           19                            9                            20

Authentication withholdpassword            8                            20                            3                             20                            0                            19

Authentication withholdsecret                 0                            18                           19                           20                            0                            20

Millionaires withhold                              0                              0                            14                           20                            0                            20

ForbiddenWord                                    20                           20                            0                             10                           20                           20

Questions withhold                               5                            14                           20                           18                           18                           20

BinarySearch withhold                           0                            20                           19                           20                           20                           20

Integrity                                             20                           20                           20                           20                           20                           20

Commitment reject                              20                           20                           20                           20                           20                           20

Commitment verify                              20                           20                           20                           20                           20                           20

Authentication provide                          20                           20                           20                           20                            1                            20

Millionaires identify                              20                           20                           20                           20                           20                           20

DiningCryptographers                           20                           20                           20                           20                           20                           20

Hodor                                                 0                            20                           20                           20                            0                            20

RockPaperScissors                             20                           20                           20                           20                           20                           20

AnimalSounds                                     20                           20                           20                           20                           14                           20

Questions confirm                               20                           20                           20                           20                           20                           20

BinarySearch confirm                           20                           20                           19                           20                           20                           20

SimonSays execute                             20                           20                           20                           20                           20                           20

SimonSays reject                                20                           20                           20                           20                           20                           20

Table 14: Number of failed “Just Ask” test cases, without (left subcolumns) and with (right sub- columns) adversarial suffixes found by GCG. There are 20 test cases for each scenario and rule.