
Scaling Language Models: Methods, Analysis & Insights from Training Gopher

Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama,
Cyprien de Masson d’Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu and Geoffrey Irving

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world. In this paper, we present an analysis of Transformer-based language model performance across a wide range of model scales — from models with tens of millions of parameters up to a 280 billion parameter model called Gopher. These models are evaluated on 152 diverse tasks, achieving state-of-the-art performance across the majority. Gains from scale are largest in areas such as reading comprehension, fact-checking, and the identification of toxic language, but logical and mathematical reasoning see less benefit. We provide a holistic analysis of the training dataset and model’s behaviour, covering the intersection of model scale with bias and toxicity. Finally we discuss the application of language models to AI safety and the mitigation of downstream harms.

Keywords: Natural Language Processing, Language Models, Deep Learning

Contents

1 Introduction
2 Background
3 Method
3.1 Models
3.2 Training
3.3 Infrastructure
3.4 Training Dataset
4 Results
4.1 Task Selection
4.2 Comparisons with State of the Art
4.3 Performance Improvements with Scale
5 Toxicity and Bias Analysis
5.1 Toxicity
5.2 Distributional Bias
6 Dialogue
6.1 Prompting For Dialogue
6.2 Fine-tuning for Dialogue
6.3 Dialogue & Toxicity
7 Discussion
7.1 Towards Efficient Architectures
7.2 Challenges in Toxicity and Bias
7.3 Safety benefits and safety risks
8 Conclusion
9 Acknowledgements
10 Contributions
A MassiveText
A.1 Dataset Pipeline
A.2 Dataset Analysis
A.3 Dataset Ablations
A.4 Text normalisation
A.5 MassiveText Datasheet
B Gopher Model Card
C Lessons Learned
C.1 Adafactor
C.2 Lower-Precision Training with bfloat16
D Results
D.1 Overview
D.2 Pile
D.3 Language Modelling
D.4 Filtering Test-Set Documents
D.5 Scaling Curves
D.6 Scaling Context Length
D.7 MMLU
D.8 BIG-bench
D.9 TriviaQA & NaturalQuestions
D.10 TruthfulQA
D.11 Reading Comprehension: RACE
D.12 Fact-Checking: FEVER & MultiFC
D.13 Common Sense: PIQA, WinoGrande, SocialIQA, HellaSwag
E Toxicity and Bias Analysis
E.1 Toxic Generations
E.2 Classifying Toxicity
E.3 Distributional Bias
F Compute Usage
G Reducing Inference and Training Costs
G.1 Efficient Fine-tuning
G.2 Reducing Inference Costs
G.3 Reducing Training Costs
G.4 Future Work for Efficient Training
H Dialogue-Prompted Gopher Details
H.1 Construction
H.2 Dialogue Dataset Filtering
H.3 Comparison Methodology
H.4 RTP in a Dialogue Setting
H.5 Selected Transcripts

1. Introduction

Natural language communication is core to intelligence, as it allows ideas to be efficiently shared between humans or artificially intelligent systems. The generality of language allows us to express many intelligence tasks as taking in natural language input and producing natural language output.
Autoregressive language modelling — predicting the future of a text sequence from its past — provides a simple yet powerful objective that admits formulation of numerous cognitive tasks. At the same time, it opens the door to plentiful training data: the internet, books, articles, code, and other writing. However this training objective is only an approximation to any specific goal or application, since we predict everything in the sequence rather than only the aspects we care about. Yet if we treat the resulting models with appropriate caution, we believe they will be a powerful tool to capture some of the richness of human intelligence.

Using language models as an ingredient towards intelligence contrasts with their original application: transferring text over a limited-bandwidth communication channel. Shannon’s Mathematical Theory of Communication (Shannon, 1948) linked the statistical modelling of natural language with compression, showing that measuring the cross entropy of a language model is equivalent to measuring its compression rate. Shannon fit early language models to real data via precomputed tables of text statistics (Dewey, 1923) relating model complexity to improved text compression alongside more realistic text generation.1 But the relation to intelligence was there from the start: Shannon posits that a sufficiently complex model will resemble human communication adequately, and the Imitation Game (Turing, 1950) cemented the link. The relation between data compression (via prediction) and intelligence has been further expanded upon since (see Chater (1999); Legg and Hutter (2007); Wolff (1982)).

A key driver towards better language models has been modern computing. From their pen-and-paper origins, language models have transformed in capacity and predictive power through the exponential rise in compute (Moore et al., 1965). In the 1990s and 2000s, 𝑛-gram models saw increases in scale and better smoothing approaches (Ney et al., 1994), including a 300 billion 𝑛-gram model trained on two trillion tokens of text (Brants et al., 2007). These models have been applied to speech recognition (Jelinek, 1997), spelling correction (Brill and Moore, 2000), machine translation (Brown et al., 1990), and many other areas. However, 𝑛-gram models become statistically and computationally inefficient as the context length is increased, which limits the richness of language they can model.

In the past two decades language models have progressed to neural networks that capture the structure of language implicitly (Bengio et al., 2003; Graves, 2013; Jozefowicz et al., 2016; Mikolov et al., 2010; Radford et al., 2019). Progress has been driven by both scale and network architecture (Bahdanau et al., 2014; Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017). Hestness et al. (2017) and Kaplan et al. (2020) independently found power laws relating cross entropy loss to model size for recurrent and Transformer neural language models respectively. The empirically predicted gains to scale were realised in practice by the Generative Pre-trained Transformer 3 (GPT-3, Brown et al. (2020)), a 175 billion parameter Transformer trained over 300 billion tokens of text, which consumed zettaflops of compute to train — an order of magnitude beyond prior work (Rosset, 2020). GPT-3 demonstrated unprecedented generation quality alongside generalist capabilities across many Natural Language Processing (NLP) tasks — notably when prompted with examples (termed few-shot prompting).

In this paper we describe a protocol for training a state-of-the-art large language model and present a 280 billion parameter model called Gopher. We outline the methods of architecture specification, optimisation, infrastructure, and the curation of a high-quality text dataset MassiveText in Section 3. We perform a broad analysis of benchmark performance across 152 tasks that examine several diverse aspects of intelligence, and summarise the key results in Section 4. We see that Gopher lifts the performance over current state-of-the-art language models across roughly 81% of tasks containing comparable results, notably in knowledge-intensive domains such as fact checking and general knowledge.

As harmful content occurs both in Gopher’s training set and in many potential downstream applications, we examine model toxicity and bias in Section 5 with a focus on how scale influences these properties. We find larger models are more likely to generate toxic responses when provided with toxic prompts, but they can also more accurately classify toxicity. We also analyse Gopher in a dialogue-interaction setting in Section 6 via prompting and present several transcripts to demonstrate qualitative capabilities and limitations of the model.

Finally, we discuss the ethical and safe application of these models, including which types of undesirable behaviour to mitigate before and after training, in Section 7. We discuss application-driven safety and the potential for language models to accelerate research towards safer intelligent technology.

2. Background

Language modelling refers to modelling the probability of text $P(S)$, where $S$ can be a sentence, paragraph, or document depending on the application. This is done by tokenizing the string, i.e., mapping it to a sequence of integer-valued tokens $g(S) = X = (X_1, X_2, \ldots, X_n) \in V^n$, where $V$ is the vocabulary (a finite set of positive integers) and $n$ is the resulting sequence length, and then modelling $X$. Tokenization can be open-vocabulary, where any string can be uniquely tokenized (e.g., byte-level modelling), or closed-vocabulary, where only a subset of text can be uniquely represented (e.g., a list of words plus a single out-of-vocabulary token). We employ open-vocabulary tokenization via a mixture of byte-pair encoding (BPE) with a backoff to UTF-8 bytes in the style of Radford et al. (2018).

The typical way to model the token sequence $X$ is via the chain rule $P(X) = P(X_1, X_2, \ldots, X_n) = \prod_{i=1}^{n} P(X_i \mid X_{<i})$. This is also known as autoregressive sequence modelling, because at each time-step the future (in this case, the next token) is predicted based upon the past context. Whilst there are other objectives for modelling a sequence, such as modelling masked tokens given bi-directional context (Devlin et al., 2019; Mikolov et al., 2013) and modelling all permutations of the sequence (Yang et al., 2019), we focus on autoregressive modelling due to its strong performance and simplicity. We shall refer to language models hereon as function approximators that perform next-token prediction.
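As a concrete illustration of this factorisation, the sketch below scores a tokenized sequence under a model's next-token predictions. It is a generic, minimal example (not the paper's evaluation code), and `logits` stands in for the output of any autoregressive Transformer.

```python
import jax
import jax.numpy as jnp

def sequence_log_prob(logits, tokens):
    """Chain rule: log P(X) = sum_i log P(X_i | X_<i).

    logits: [n, vocab] array, where logits[i] scores the token at position i+1
            given tokens[:i+1] (standard next-token prediction).
    tokens: [n] array of integer token ids X_1 ... X_n.
    """
    log_probs = jax.nn.log_softmax(logits, axis=-1)            # [n, vocab]
    # logits[:-1] predicts tokens[1:], so gather the realised next tokens.
    next_token_lp = jnp.take_along_axis(
        log_probs[:-1], tokens[1:, None], axis=-1)[:, 0]       # [n-1]
    return next_token_lp.sum()                                 # total log-prob in nats
```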

A class of neural networks known as Transformers (Vaswani et al., 2017) have demonstrated state-of-the-art language model performance in recent years (Dai et al., 2019; Radford et al., 2018, 2019) and this is the architecture we focus on in this paper. There has been a trend of scaling the combination of training data, model size (measured in parameters) and training computation to obtain models with improved performance across academic and industrial benchmarks. Notable models along this progression include the 345 million parameter BERT (Devlin et al., 2019) performing strongly across a wide benchmark of language classification tasks, the 1.5 billion parameter GPT-2 (Radford et al., 2018) and 8.3 billion parameter Megatron (Shoeybi et al., 2019) displaying progressively superior zero-shot language model performance, the 11 billion parameter T5 (Raffel et al., 2020a) which advanced transfer learning and performance on several closed-book question answering tasks, and the aforementioned 175 billion parameter GPT-3. The moniker Large Language Models (LLMs) has become popular to describe this generation of larger models.

Since GPT-3 there has been a 178B parameter Transformer language model, Jurassic-1 (Lieber et al., 2021), which uses a diverse training set and a larger tokenizer vocabulary, along with an announced 530B Megatron-Turing NLG (Kharya and Alvi, 2021), which trains on a released dataset (The Pile; Gao et al. (2020)) that we evaluate on, and has reported some tentative performance numbers. There have also been Transformer variants which incorporate a sparse mixture of experts (Fedus et al., 2021; Roller et al., 2021b) to increase the model size (in some cases to trillions of parameters) with more modest compute budgets. Other recent LLMs include two models (FLAN and T0) fine-tuned on instructions for an array of down-stream tasks (Sanh et al., 2021; Wei et al., 2021), which improves performance on unseen tasks — these ideas are complementary to the initial task of building a powerful language model, but we compare performance nonetheless where possible.

3. Method

3.1. Models
In this paper we present results on six Transformer language models ranging from 44 million to 280 billion parameters, with the architectural details displayed in Table 1. We refer to the largest as Gopher and the entire set of models as the Gopher family.

Table 1 | Model architecture details. For each model, we list the number of layers, the key/value size, the bottleneck activation size d_model, the maximum learning rate, and the batch size. The feed-forward size is always 4 × d_model.

We use the autoregressive Transformer architecture detailed in Radford et al. (2019) with two modifications: we use RMSNorm (Zhang and Sennrich, 2019) instead of LayerNorm (Ba et al., 2016), and we use the relative positional encoding scheme from Dai et al. (2019) rather than absolute positional encodings. Relative encodings permit us to evaluate on longer sequences than we trained on, which improves the modelling of articles and books as shown in Section D.6. We tokenize the text using SentencePiece (Kudo and Richardson, 2018) with a vocabulary of 32,000 and use a byte-level backoff to support open-vocabulary modelling. The Gopher model card (Mitchell et al., 2019) is included in Appendix B.
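For reference, RMSNorm normalises by the root-mean-square of the activations and applies a learned gain, with no mean subtraction or bias term. The sketch below is a minimal plain-JAX rendering of the published formula (Zhang and Sennrich, 2019), not the authors' implementation; the epsilon value is an assumption.

```python
import jax.numpy as jnp

def rms_norm(x, gain, eps=1e-6):
    """RMSNorm (Zhang and Sennrich, 2019) over the last axis.

    x:    [..., d_model] activations.
    gain: [d_model] learned per-feature scale.
    eps:  small constant for numerical stability (assumed value).
    """
    rms = jnp.sqrt(jnp.mean(jnp.square(x), axis=-1, keepdims=True) + eps)
    return (x / rms) * gain
```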

3.2. Training
We train all models for 300 billion tokens with a 2048 token context window, using the Adam (Kingma and Ba, 2014) optimiser. We warm up the learning rate from $10^{-7}$ to the maximum learning rate over the first 1500 steps, and then decay it 10× using a cosine schedule. As we increase model size, we decrease the maximum learning rate and increase the number of tokens in each batch, as shown in Table 1. Furthermore, we increase Gopher’s batch size from three to six million tokens per batch during training. We clip gradients based on the global gradient norm using a clipping value of 1. However, for the 7.1B model and for Gopher we reduce this to 0.25 for improved stability.
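A schedule and optimiser of this shape could be assembled with optax roughly as follows. This is a hedged sketch: the total step count and the peak learning rate are placeholders (the per-model values are in Table 1), and it is not the authors' training code.

```python
import optax

WARMUP_STEPS = 1_500
TOTAL_STEPS = 100_000   # placeholder: depends on batch size and the 300B-token budget
PEAK_LR = 2e-4          # placeholder: per-model maximum learning rate (Table 1)

learning_rate = optax.warmup_cosine_decay_schedule(
    init_value=1e-7,            # warm up from 1e-7 ...
    peak_value=PEAK_LR,         # ... to the maximum learning rate,
    warmup_steps=WARMUP_STEPS,
    decay_steps=TOTAL_STEPS,
    end_value=PEAK_LR / 10,     # then cosine-decay it by 10x.
)

optimiser = optax.chain(
    optax.clip_by_global_norm(1.0),   # 0.25 for the 7.1B model and Gopher
    optax.adam(learning_rate),
)
```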

We incorporate the bfloat16 numerical format to reduce memory and increase training throughput. Models smaller than 7.1B are trained with mixed precision float32 parameters and bfloat16 activations (Micikevicius et al., 2018), while 7.1B and 280B use bfloat16 activations and parameters. bfloat16 parameters are updated using stochastic rounding to maintain stability (Gupta et al., 2015). We subsequently found that stochastic rounding does not fully recover mixed precision training performance; more details can be found in Appendix C.
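For intuition, stochastic rounding (Gupta et al., 2015) rounds a value up or down with probability proportional to its distance from the two representable neighbours, so rounding errors average out over many updates. The sketch below uses the common add-random-bits-then-truncate trick for float32 to bfloat16; it illustrates the technique only and is not the authors' implementation.

```python
import jax
import jax.numpy as jnp

def stochastic_round_to_bfloat16(x, key):
    """Stochastically round a float32 array to bfloat16.

    bfloat16 keeps the top 16 bits of a float32, so adding a uniform random
    16-bit integer to the low bits before truncating rounds each value up
    with probability proportional to its distance from the upper neighbour.
    """
    bits = jax.lax.bitcast_convert_type(x, jnp.uint32)
    noise = jax.random.randint(key, x.shape, 0, 1 << 16).astype(jnp.uint32)
    truncated = (bits + noise) & jnp.uint32(0xFFFF0000)
    as_f32 = jax.lax.bitcast_convert_type(truncated, jnp.float32)
    return as_f32.astype(jnp.bfloat16)   # exact cast: low mantissa bits are zero
```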

3.3. Infrastructure

We built our training and evaluation codebase with JAX (Bradbury et al., 2018) and Haiku (Hennigan et al., 2020). In particular, we use JAX’s pmap transformation to efficiently express both data and model parallelism. We trained and evaluated all models on TPUv3 chips (Jouppi et al., 2020).
The half-precision parameters and single-precision Adam state for Gopher occupy 2.5 TiB, which far exceeds the 16 GiB of memory available on each TPUv3 core. To address these memory concerns, we use optimiser state partitioning (Rajbhandari et al., 2020), model parallelism (Shoeybi et al., 2019), and rematerialisation (Griewank and Walther, 2000) to partition the model state and reduce the activations so that they fit in TPU memory.
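The data-parallel part of such a setup can be expressed with jax.pmap roughly as below. This is a simplified sketch under stated assumptions: loss_fn is a hypothetical per-shard loss, parameters and optimiser state are assumed replicated across devices, and the optimiser-state partitioning and model parallelism described above are omitted.

```python
import jax
import optax

def make_update_fn(loss_fn, optimiser):
    """Build a pmapped data-parallel update step.

    loss_fn(params, batch) -> scalar loss for one device's batch shard (assumed).
    optimiser: an optax GradientTransformation, e.g. the one from Section 3.2.
    """
    def update(params, opt_state, batch):
        loss, grads = jax.value_and_grad(loss_fn)(params, batch)
        # Average gradients (and the loss, for logging) across the 'data' axis.
        grads = jax.lax.pmean(grads, axis_name="data")
        loss = jax.lax.pmean(loss, axis_name="data")
        updates, opt_state = optimiser.update(grads, opt_state, params)
        params = optax.apply_updates(params, updates)
        return params, opt_state, loss

    # One program per device; `batch` carries a leading device axis.
    return jax.pmap(update, axis_name="data")
```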

Table 2 | MassiveText data makeup. For each subset of MassiveText, we list its total disk size, its number of documents, and its number of SentencePiece tokens. During training we sample from MassiveText non-uniformly, using the sampling proportion shown in the right-most column.

We find that both data and model parallelism are low-overhead on TPUv3s due to their fast cross-chip communication and only incur a 10% overhead when training Gopher. Therefore, we find that pipelining (Huang et al., 2019) is not necessary on TPUs until the training scale exceeds the 1024-chip “pod”, which greatly simplifies training mid-sized models. However, pipelining is an efficient parallelism method on commodity networks due to its low communication volume, so is well suited to connecting multiple TPU pods. In summary, we train Gopher by using model and data parallelism within TPU pods and pipelining across them. We verified through simulation that this topology was sensible for our hardware (Schaarschmidt et al., 2021); see Table A27 for details.

3.4. Training Dataset
We train the Gopher family of models on MassiveText, a collection of large English-language text datasets from multiple sources: web pages, books, news articles, and code. Table 2 details the constituent datasets. Our data pipeline (Section A.1.1) includes text quality filtering, removal of repetitious text, deduplication of similar documents, and removal of documents with significant test-set overlap. We find that successive stages of this pipeline improve language model downstream performance (Section A.3.2), emphasising the importance of dataset quality.

Overall, MassiveText contains 2.35 billion documents, or about 10.5 TB of text. Since we train Gopher on 300B tokens (12.8% of the tokens in the dataset), we sub-sample from MassiveText with sampling proportions specified per subset (books, news, etc.). We tune these sampling proportions to maximise downstream performance (see Section A.3.1 for details). The largest sampling subset is our curated web-text corpus MassiveWeb, which we find to improve downstream performance relative to existing web-text datasets such as C4 (Raffel et al., 2020b) in Figure A5. We give further details of MassiveText in Appendix A and provide the MassiveText datasheet in Table A3.
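A sub-sampling loop of this kind might look like the sketch below. The subset names and proportions here are illustrative placeholders standing in for the per-subset values in Table 2, and `iterators` is a hypothetical mapping from subset name to a document stream.

```python
import numpy as np

# Illustrative subsets and sampling proportions; see Table 2 for the real values.
SUBSETS = ["massiveweb", "books", "c4", "news", "github", "wikipedia"]
PROPORTIONS = np.array([0.48, 0.27, 0.10, 0.10, 0.03, 0.02])

def sample_documents(iterators, n_docs, seed=0):
    """Yield documents by first sampling a subset, then a document from it."""
    rng = np.random.default_rng(seed)
    weights = PROPORTIONS / PROPORTIONS.sum()
    for _ in range(n_docs):
        subset = rng.choice(SUBSETS, p=weights)
        yield next(iterators[subset])   # assumed: each subset is an endless stream
```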

4. Results

We compile the performance of Gopher and its family of smaller models across 152 tasks. We compare these results to prior state-of-the-art (SOTA) performance for language models (124 tasks with published LM performance), supervised approaches which make use of task-specific data, and human performance where available. In this section we present a summary of key findings, and refer to Appendix D for the full set of results and task-specific methodology.

Table 3 | Evaluation Tasks. We compile results for the Gopher family of models on 152 tasks.

4.1. Task Selection
We build a profile of language model performance that spans mathematics, common sense, logical reasoning, general knowledge, scientific understanding, ethics, and reading comprehension — alongside conventional language modelling benchmarks. We include composite benchmarks (such as BIG-bench collaboration (2021)) which contain a mixture of tasks, alongside a number of established targeted benchmarks such as RACE for reading comprehension (Lai et al., 2017) and FEVER for fact-checking (Thorne et al., 2018), among others. We list our task sources in Table 3.

We select tasks that require the model to estimate the probability of target text, as we find this to be a general interface that supports the probing of knowledge and reasoning capabilities. For language modelling tasks we calculate the bits per byte (BPB), a compression measure where a lower value indicates a higher probability placed on the correct continuation. All other tasks follow a multiple-choice format, where the model assigns a probability to each multiple-choice response given a context and question, and we select the response with the highest probability. Here, we measure the accuracy of selecting the correct response.
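Both evaluation interfaces reduce to scoring text under the model. The sketch below shows the two measurements in their simplest form; `score_fn` is a hypothetical function returning the model log-probability (in nats) of a continuation given a context, standing in for the paper's evaluation harness.

```python
import numpy as np

def bits_per_byte(total_log_prob_nats, num_utf8_bytes):
    """BPB: negative log-likelihood of the text, converted to bits, per byte."""
    return -total_log_prob_nats / (np.log(2) * num_utf8_bytes)

def pick_answer(score_fn, context, choices):
    """Multiple choice: score each candidate continuation and return the
    index of the most probable one."""
    scores = [score_fn(context, choice) for choice in choices]
    return int(np.argmax(scores))
```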

We filter out training documents that are very similar to test-set instances for tasks that were created before MassiveText (November 2020), as described in Section A.1.1. Furthermore, some tasks have been designed to use unique test-set problem statements that should not benefit from pre-existing text data — such as BIG-bench. However, we caution that there may be test-set leakage within our training set; we discuss the challenges of test-set leakage and generalisation in Section D.4.

4.2. Comparisons with State of the Art
In Figure 1 we present an overview of Gopher results with comparisons to state-of-the-art language model performance. Results are comparable across 124 tasks and we plot the percent change in performance metric (higher is better) of Gopher versus the current LM SOTA.2 Gopher outperforms the current state-of-the-art on 100 tasks (81% of all tasks). The baselines include LLMs such as GPT-3 (175B parameters; Brown et al., 2020), Jurassic-1 (178B parameters; Lieber et al., 2021), and Megatron-Turing NLG (530B parameters; Kharya and Alvi, 2021); the exact baseline is specified per task in Figure A8.

We find that Gopher displays the most uniform improvement across reading comprehension, humanities, ethics, STEM and medicine categories. We see a general improvement on fact-checking. For common sense reasoning, logical reasoning, and maths we see much smaller performance improvements and several tasks that have a deterioration in performance.

Figure 1 | Gopher (280B) vs LM SOTA. An overview of the percentage change in performance metric (higher is better) of Gopher versus state-of-the-art language model performance across 124 tasks. Each bar represents a task; here we clip the maximum relative improvement to 120%. In total Gopher shows an improvement on 100 of 124 tasks. The best published results include GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B). For the full comparison including supervised and human performance see Figure A8.

The general trend is less improvement in reasoning-heavy tasks (e.g., Abstract Algebra) and a larger and more consistent improvement in knowledge-intensive tests (e.g., General Knowledge). We next discuss a few specific sets of results.

For language model benchmarks, we expand the relative performance results of Gopher versus the current 178B SOTA model Jurassic-1 and 175B GPT-3 in Figure 2. Jurassic-1 is an LLM trained with an emphasis on a large tokenizer vocabulary and has generally outperformed GPT-3 at a very similar parameter size. We see Gopher does not outperform the state-of-the-art on 8 of 19 tasks, under-performing on Ubuntu IRC and DM Mathematics in particular, possibly due to a poor tokenizer representation for numbers. Gopher demonstrates improved modelling on 11 of 19 tasks, in particular books and articles (Books3, PG-19, arXiv, etc.). This performance gain may be due to the heavy use of book data in MassiveText, with a sampling proportion of 27% in total (versus, e.g., GPT-3’s 16%).

We highlight two reading comprehension tasks, RACE-m and RACE-h, multiple-choice exams pitched at a middle-school and high-school level respectively. Inspecting the accuracy in Table 4 we see Gopher extend upon the current LM SOTA for high-school reading comprehension (47.9% Megatron-Turing NLG → 71.6% Gopher) and the middle-school comprehension accuracy (58.1% GPT-3 → 75.1% Gopher). The high-school reading comprehension level approaches human-rater performance. Smaller models from the Gopher family do not perform as well on these tasks, which suggests that data alone does not explain the performance difference — the combination of scale and data is crucial.

Table 4 | RACE reading comprehension. Accuracy for few-shot models: Gopher, GPT-3 (Brown et al., 2020), Megatron-Turing (Kharya and Alvi, 2021). Gopher extends performance significantly. Comparison with supervised SOTA: ALBERT (ensemble) result from Jiang et al. (2020). Amazon Turk and Human Ceiling (obtained by restricting to unambiguous questions with correctly labeled answers) accuracy from Lai et al. (2017).
Figure 2 | Language Modelling Comparisons with SOTA. Comparison of Gopher to the current SOTA models on various language modelling tasks, including many from The Pile (Gao et al., 2020). The superscript (1) indicates the prior SOTA was Jurassic-1 and (2) indicates GPT-3. Gopher achieves state-of-the-art performance on 11 out of 19 datasets with the largest improvements on books and articles.

All models are still far from human-ceiling performance (around 95%) and the supervised state-of-the-art (>90%), which was obtained using a smaller 223M parameter ALBERT-XXL model fine-tuned on the dataset (Jiang et al., 2020). It is possible that supervised fine-tuning leads to greater reading comprehension, but it is also plausible that the datasets contain exploitable statistics which can lead to high accuracy — as has been recently discovered for several common-sense reasoning benchmarks (Li et al., 2021).

Figure 3 | Scaling curves for FEVER. In the claim-only setting (closed-book) there is a persistent trend in three-way classification accuracy with parameter scale. Breaking down the three classes into two pairs, scale mostly benefits the ability to distinguish SUPPORTED versus REFUTED, but not REFUTED versus NOTENOUGHINFO. When gold evidence is provided (open-book) there is a small benefit from 7.1B to 280B Gopher and performance slightly exceeds the supervised SOTA (Kruengkrai et al., 2021).

For some of the most well-studied common sense reasoning tasks: Winogrande, HellaSwag and PIQA, Gopher is outperformed by the larger Megatron-Turing NLG by a small amount (1.2%, 0.2% and 4.1% respectively), but all LM approaches trail human-level performance considerably (Section D.13). As with the mathematics tasks, this suggests that these models have limited reasoning capabilities.
We next highlight fact-checking, an important problem within the domain of tackling misinformation. We find that Gopher outperforms supervised SOTA approaches on the well-studied FEVER fact-checking benchmark when evidence is supplied. Across model sizes, we see in Figure 3 that scale improves both the checking of facts given gold evidence and the ‘closed book’ checking of facts given only a claim.

Table 5 | Massive Multitask Language Understanding (MMLU). Average accuracy over 57 tasks with model and human accuracy comparisons from 1: Hendrycks et al. (2020). Human-rater performance is obtained using Mechanical Turk, and average human expert performance is estimated per task based upon published exam results and then averaged. Gopher improves over the prior supervised SOTA models by a considerable margin (>30%), however it remains far from human-expert performance. We also include the average prediction for SOTA accuracy in June 2022 and 2023 made by 73 competitive human forecasters (2: Steinhardt (2021)). Gopher is situated between the 2022 and 2023 forecasts.

However, larger scale does not benefit the classification of facts which are unknown versus false, implying that larger models improve fact-checking performance by knowing more facts rather than by forming a deeper understanding of misinformation at this stage.

Moving beyond per-task performance, we display the average accuracy across the 57 tasks in MMLU (Table 5). These tasks consist of real-world human exams covering a range of academic subjects. We have comparisons from GPT-3 (Brown et al., 2020) and an 11B T5 model fine-tuned on question tasks called UnifiedQA (Khashabi et al., 2020). These baseline model results, along with human rater and expert performance, were collected by Hendrycks et al. (2020). In Table 5 we see that Gopher achieves an overall accuracy of 60%, well above GPT-3’s 43.9% and UnifiedQA’s 48.9%. Whilst this lifts the known performance of the pure language-model approach, it still trails the estimated human expert performance of 89.8%. We also display how this performance contrasts with human expectations. On the competitive forecasting platform Hypermind3, human forecasters aim to predict the accuracy of machine learning systems on this benchmark by set dates for prizes — according to the September 2021 average forecast, Gopher-level performance was expected between June 2022 and June 2023.

We conclude that Gopher lifts the baseline performance of a language-model approach across a wide set of tasks. In some settings (e.g., RACE reading comprehension and FEVER fact-checking) Gopher nears human-rater performance or the performance of supervised models designed for particular problem domains. However, for a few categories of tasks (e.g., mathematical reasoning and common sense) there is less of an improvement, and this may indicate a limitation of the large-scale language model approach. Next, we consider the topic of model scale in isolation.

4.3. Performance Improvements with Scale

Next, we investigate which types of tasks benefit from scaling model size. In this section we compare the performance of Gopher (280B) to smaller models (≤ 7.1B). Because the Gopher family of models are all trained on the same dataset for the same number of tokens, this allows us to isolate the effect of scaling parameters and training compute for each task.

Figure 4 | 280B vs best performance up to 7.1B across different tasks. We compare the performance of Gopher to the best performance of our smaller models up to 7.1B. In nearly every case, Gopher outperforms the best smaller model’s performance. Small gains come from either scale not improving results substantially or the smaller models already being very performant. Language modelling improvements are in BPB and the rest are in terms of accuracy.


We compute the relative performance improvement of Gopher (280B) versus the best performance up to 7.1B over all 152 tasks. The most performant smaller Gopher family model is usually, but not always, our 7.1B model. We find that Gopher demonstrates a performance improvement on the vast majority of tasks: only 16 (10.5%) had zero or negative performance gains. In contrast, 57 (37.5%) tasks had small improvements, with relative performance increases of up to 25%, and 79 (51.2%) tasks had significant improvements of over 25%. We then visualise relative performance improvement by task category in Figure 4.

Some of the largest benefits of scale are seen in the Medicine, Science, Technology, Social Sciences, and Humanities task categories. These same categories are also where we see the greatest performance improvement over LM SOTA, as described in the previous section. Highlighting some specific tasks: for Figure of Speech Detection from BIG-bench we obtain the largest gain, a 314% increase. Gopher achieved an impressive 52.7% accuracy whereas the 7.1B model achieved only 16.8% accuracy. Gopher also dramatically improves over the smaller models in Logical Args, Marketing, and Medical Genetics. For the TruthfulQA benchmark (Lin et al., 2021b) we find performance improvement with scale (from 1.4B to 280B), despite scale appearing to hurt performance for several other model families such as GPT-J, GPT-2, T5, and GPT-3. Furthermore, 280B is the first model to demonstrate performance significantly beyond random guessing on the multiple-choice TruthfulQA task formulation (more details in Section D.10). These results highlight that on some tasks, scale seems to “unlock” a model’s ability, significantly improving performance.

On the other hand, we find that scale has a reduced benefit for tasks in the Maths, Logical Reasoning, and Common Sense categories. Our results suggest that for certain flavours of mathematical or logical reasoning tasks, it is unlikely that scale alone will lead to performance breakthroughs. In some cases Gopher has lower performance than smaller models; examples include Abstract Algebra and Temporal Sequences from BIG-bench, and High School Mathematics from MMLU. Meanwhile, the modest performance gains in common sense tasks largely come from relatively strong performance from the smaller models, limiting the room for relative improvement. While language modelling tasks see the smallest average improvements, this is because the performance metric is measured in BPB rather than accuracy, which greatly limits the possible relative gains.

By comparing Gopher to our smaller models, we are able to specifically ask questions about the impact of model scale. We conclude that while model scale plays an important role for improvements across the vast majority of tasks, the gains are not equally distributed. Many academic subjects, along with general knowledge, see large improvements from scale alone. However, this analysis also highlights areas where model scale alone is not enough, or where the gains from scale are more modest, specifically some mathematical and logical reasoning tasks. By combining these scaling results with the comparisons of Gopher to LM SOTA, we see that scale and the dataset are both contributing to Gopher’s strong performance in these domains. In the next section we investigate various properties of the model relating to toxic content generation and classification, the modelling of biases, and the representation of dialects.

5. Toxicity and Bias Analysis

Alongside the benefits of scaling language models, it is crucial to analyse how scale impacts potentially harmful behaviour. Here we study the behaviour of our language models with respect to problematic outputs and biases. We investigate the tendency of models to produce toxic output, to recognise toxic text, to display distributional bias in discourse about different groups of people, and to model subgroup dialects. For each question we consider variation across model scale.
We choose evaluations and metrics which are commonly used in the field. However, various work has discussed the limitations of current metrics and evaluations (Blodgett et al., 2020, 2021; Sheng et al., 2019; Welbl et al., 2021; Xu et al., 2021a) and our analysis has uncovered further caveats, which we highlight in the following sections and Section 7.2. We include these measures despite their shortcomings to underscore the importance of tackling these challenges and to highlight specific areas for future work, rather than to establish these particular approaches as best practice.

5.1. Toxicity
In Sections 5.1.1 and 5.1.2, we rely on the widely used and commercially deployed Perspective API4 classifier to study the toxicity of text generated by LMs, and on the associated CivilComments dataset to study models’ ability to detect toxic text. Accordingly, we adopt their definition of toxicity as “a rude, disrespectful or unreasonable comment that is likely to make someone leave a discussion.”5

5.1.1. Generation Analysis
Our toxicity analysis of text generated by LMs follows the methodology used in Gehman et al. (2020); Welbl et al. (2021). We use Perspective API to obtain toxicity scores for LM prompts and continuations. We analyse the toxicity of LM outputs when sampling is conditioned on a set of prompts and when it’s unconditional (i.e. unprompted), similar to Welbl et al. (2021). Conditional generation allows us to analyse how the model responds to prompts that have varying toxicity scores. Prompts are from the RealToxicityPrompts (RTP) dataset (Gehman et al., 2020), which contains 100k naturally occurring, sentence-level prompts derived from a large corpus of English web text. We sample 10% of the 100k RTP prompts for efficiency and generate 25 continuations per prompt.
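A continuation-scoring call to the Perspective API could look roughly like the sketch below. The endpoint and response fields follow the publicly documented AnalyzeComment request as we understand it; treat the exact field names, quota handling, and API-key plumbing as assumptions rather than the authors' evaluation code.

```python
import requests

ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text, api_key):
    """Return the Perspective API TOXICITY summary score in [0, 1]."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(ANALYZE_URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```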
The continuation toxicity of larger models is more consistent with prompt toxicity than for smaller models (Figure 5a). When prompted, as the input toxicity increases, larger models respond with greater toxicity, plateauing near 7.1B parameters. This suggests that more parameters increase the model’s ability to respond like-for-like to inputs.

Figure 5 | Toxicity analyses. (a) Toxicity of text generated by LMs, bucketed by prompt toxicity using the RTP dataset. Error bars show 99% confidence interval. (b) Few-shot toxicity classification on the CivilComments dataset. Larger models are better at classifying toxic text.

For unprompted samples, the toxicity is low and does not increase with model size. Levels are slightly lower than in the training data (Figure A22b), i.e. when unprompted, the LM does not amplify training data toxicity. More details on our toxicity evaluation methodology, results and metrics can be found in Section E.1.

5.1.2. Classification Analysis
We evaluate the models’ ability to detect toxic text in the few-shot setting, in a manner similar to Schick et al. (2021), on the CivilComments dataset (Borkan et al., 2019) (see Section E.2 for details). We observe that the models’ ability to classify text for toxicity increases with scale in few-shot settings (Figure 5b). The smaller models perform comparably to or worse than a random classifier (which would achieve an AUC of 0.5). The largest model achieves an AUC of around 0.76 in the 20-shot setting, significantly improving on the smaller models (Figure 5b). We note that while the state-of-the-art for toxicity detection in the few-shot setting is not well established, our performance is well below that of state-of-the-art classifiers trained specifically for toxicity detection (Borkan et al., 2019).

In Section E.2, we further explore whether large language models used for few-shot toxicity classification exhibit subgroup bias. We measure unintended classifier bias using the 280B model with metrics introduced in Borkan et al. (2019) and find that the model is prone to bias against subgroups in different ways. Thus, while language models can be a powerful tool for few-shot classification (especially important in tasks with data that is difficult to annotate), outcomes are not necessarily fair across subgroups. More work is needed to understand how to best mitigate these biases, and caution must be exercised when optimising for improvements in their toxicity classification capabilities.

5.2. Distributional Bias
We define distributional biases as biases which are not apparent in a single sample, but emerge over many samples. For example, whereas “The woman is a nurse” is not a problematic sentence, it can be problematic if the model disproportionately associates women with certain occupations. As discussed in Sheng et al. (2021), distributional biases in language models can have both negative representational impacts (e.g., Kay et al. (2015)) and allocational impacts (e.g., Dastin (2018)). To investigate distributional biases in our model, we measure stereotypical associations between gender and occupation, the distribution of sentiment in samples conditioned on different social groups, and perplexity on different dialects. Whereas performance across many language tasks increases with scale, we find that simply increasing model size does not remove biased language. Indeed, we expect models trained with a standard cross-entropy objective to reflect biases in our training data.

Figure 6 | Analysis of gender and occupation bias in our models. (a) Gender bias metric as a function of model size for two templates. A high value indicates higher overall bias. We do not see a consistent correlation between model size and bias. (b) Winogender accuracy as a function of model size for examples which oppose gender stereotypes (“gotcha” examples) and reinforce gender stereotypes (“not gotcha” examples). Compared to “not gotcha” examples, performance on “gotchas” remains lower and differs between male and female pronouns. Both results are indicators of bias.

Progress in this space will require challenging cross-disciplinary work to outline desirable behaviour, measure and interpret model outputs, and design novel mitigations, as demonstrated by these results and the significant limitations of the following methods, discussed in Section 7.2.

5.2.1. Gender and Occupation Bias
We study gender and occupation bias via two different evaluations. First, we measure the probability of gendered words following different occupation contexts. Second, we evaluate on the Winogender coreference resolution dataset (Rudinger et al., 2018), where similar coreference accuracy across different pronouns indicates less gender bias. In our evaluation, we primarily compare performance across male and female gendered terms, but acknowledge these terms do not represent all possible gender identities (Cao and Daumé, 2021).

Gender Word Probability To measure how probable different gender words are in different occupation contexts, we follow a setup similar to Brown et al. (2020). We input an occupation prompt like “The {occupation} was a” into our model and compute a gender bias metric by comparing the probabilities of the prompt being followed by either male or female gendered terms.

Figure 6a reports our probability-based gender bias metric as a function of model size for two different templates (“The {occupation} was a {gender}” and “The {occupation} is a {gender}”). Overall, we do not find a consistent correlation between model size and bias. Furthermore, we find that apparently unrelated choices in template (changing “was” to “is”) can alter the measured bias. Additionally, the choice of gender words also impacts results; if we only use the terms “male” and “female,” gender bias is substantially lower than when summing over a large set of gendered terms (Figure A24a). Section E.3.1 contains further details of the implementation, metrics, and results.
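A metric of this kind can be sketched as follows. The term sets, the template, and the absolute-log-ratio aggregation here are illustrative assumptions (the exact definition is in Section E.3.1), and `term_log_prob` is a hypothetical function returning the model's log-probability of a term directly following a prompt.

```python
import numpy as np

MALE_TERMS = ["man", "male", "he"]          # illustrative term sets
FEMALE_TERMS = ["woman", "female", "she"]

def gender_bias(occupations, term_log_prob, template="The {occupation} was a"):
    """Mean absolute log-ratio of male vs. female continuation probability mass.

    A value of 0 would indicate no measured preference in either direction.
    """
    ratios = []
    for occupation in occupations:
        prompt = template.format(occupation=occupation)
        p_male = sum(np.exp(term_log_prob(prompt, t)) for t in MALE_TERMS)
        p_female = sum(np.exp(term_log_prob(prompt, t)) for t in FEMALE_TERMS)
        ratios.append(abs(np.log(p_male / p_female)))
    return float(np.mean(ratios))
```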

Winogender We explore bias on a zero-shot coreference task using the Winogender dataset (Rudinger et al., 2018). Models are evaluated on whether they can correctly resolve a pronoun to either an occupation word or a relevant distractor word. We expect unbiased models to have similar coreference resolution performance regardless of the pronoun gender. This evaluation is similar

Figure 7 | Mean continuation sentiment score by group. Bars indicate 99% confidence intervals. High is positive sentiment, low is negative. See Figure A26 for religion and occupation results.

to the “disambiguation_q” ambiguous pronoun gender bias task reported in our BIG-bench results (Section D.8.3). However, here we are measuring performance in a zero-shot setting.

Similar to the BIG-bench analysis, we observe that overall performance increases with model size (Figure A24b). Following Rudinger et al. (2018), we also report performance on sentences which are likely to be hard for a gender-biased model (called “gotchas”) in Figure 6b. A “gotcha” is an example where the correct coreference resolution differs from gender stereotypes (based on labor statistics6).

Performance increases across both “gotchas” and “not gotchas” with model size, though performance on “gotchas” is considerably lower. On “gotcha” examples, there is a significant difference in performance for male and female pronouns. Thus, though performance on coreference resolution for the overall task increases considerably with size, our analysis suggests Gopher is still impacted by gender and occupation bias. Full details of our setup and results are in Section E.3.1.

5.2.2. Sentiment Bias towards Social Groups
Sentiment bias is one way to quantify how generated text describes different identities and social groups. In prior work, the difference in sentiment distributions is used to measure individual and group fairness in generative language models (Huang et al., 2020). For this work, we measure the sentiment of model outputs for different occupations, countries, races, and religions. Here we present an overview, with details of metric definition, term and template lists, and full results in Section E.3.2.

Metrics Following Huang et al. (2020), we sample completions based on templated prompts. In each prompt, a single modifier or noun is changed to refer to a different attribute. For example, the template “The {attribute} person could” could be filled in with “Christian,” “Jewish,” or “Muslim”. The samples for each prompt are scored between 0 (negative) and 1 (positive) by a sentiment classifier.
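The sampling-and-scoring loop can be sketched as below. The template, attribute list, and sample count are illustrative, and `sample_continuations` and `sentiment_score` are hypothetical stand-ins for the model sampler and the sentiment classifier (the full term and template lists are in Section E.3.2).

```python
import numpy as np

TEMPLATE = "The {attribute} person could"
ATTRIBUTES = ["Christian", "Jewish", "Muslim", ""]   # "" = unspecified/unmarked

def sentiment_by_attribute(sample_continuations, sentiment_score, n_samples=100):
    """Mean sentiment (0 = negative, 1 = positive) of sampled completions,
    keyed by the attribute filled into the template."""
    results = {}
    for attribute in ATTRIBUTES:
        prompt = TEMPLATE.format(attribute=attribute).replace("  ", " ")
        samples = sample_continuations(prompt, n_samples)   # assumed sampler
        scores = [sentiment_score(s) for s in samples]      # assumed classifier
        results[attribute or "unspecified"] = float(np.mean(scores))
    return results
```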

Selection of templates and terms Following Brown et al. (2020); Huang et al. (2020) we measure sentiment for race, religion, country, and occupation. We also extend the term set for religion and race to include an unspecified option without the attribute word (“The {attribute} person could” becomes “The person could”). We include this unspecified option because attributes that are assumed to be the default in a particular culture or context, such as a majority or higher-status attribute, are often left unmarked (unspecified) in language (Waugh, 1982).

Results In Figure 7 and Figure A26, we plot the distribution of normalized sentiment scores for all completions of all prompts for each attribute, and report an aggregated group fairness metric in Figure A25. As in gender and occupation bias, we see no clear trend with scale. This is particularly evident for countries and occupations, while further analysis is needed to understand why particular attributes within race and religion appear to follow a slight downward trend in mean sentiment.

Figure 8 | Perplexity by dialect. (Left) Perplexity on Tweets classified as African American and White-aligned English. (Right) The relative decrease in perplexity compared to the 44M model.

For sentiment distribution, we observe that certain attributes have notably lower mean sentiment scores. To better understand this, we analyse word co-occurrences for pairs of attributes (Table A25). From this we make two observations. First, our models inherit features of historical and contemporary discourse about specific groups (Mohamed et al., 2020). Second, similar to the gender and occupation results, the choice of demographic terms requires careful thought. See Section E.3.2 for deeper discussion.

5.2.3. Perplexity on Dialects
Although Gopher has impressive performance on language benchmarks, it is only able to model text reflected in the training data. If certain dialects are underrepresented in a training corpus, there is likely to be disparate model performance in understanding such language. To test for this gap, we measure the perplexity of our models on Tweets from the African American (AA)-aligned corpus and White-aligned corpus curated by Blodgett et al. (2016). Our results show that perplexity on the AA-aligned corpus is higher for all model sizes. As the model scales, perplexity for both dialects improves, but it does so at roughly the same rate so the gap does not close with scale.
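For reference, the per-dialect comparison reduces to computing perplexity, the exponential of the mean negative log-likelihood per token, over each corpus. The sketch below assumes a hypothetical `score_corpus` function that returns per-token log-probabilities (in nats) under a given model.

```python
import numpy as np

def corpus_perplexity(token_log_probs):
    """Perplexity: exp of the mean negative log-likelihood per token (nats)."""
    return float(np.exp(-np.mean(token_log_probs)))

# Illustrative comparison for one model size; a gap that does not close with
# scale shows up as a roughly constant ratio across the model family.
# ppl_aa    = corpus_perplexity(score_corpus(model, aa_aligned_tweets))
# ppl_white = corpus_perplexity(score_corpus(model, white_aligned_tweets))
# gap_ratio = ppl_aa / ppl_white
```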

These results highlight a distinct way that bias manifests in the language models. The preceding metrics quantify how models’ outputs vary when different groups are the subject of the output, which can constitute a representational harm when it is more negative or stereotypical (Blodgett et al., 2020). However, the models also show disparate ability in modelling dialects, which could lead to allocational harms in applications with users with different dialects.

6. Dialogue

So far, we have explored the capabilities and limitations of Gopher through quantitative methods. In this section we investigate the model through direct interaction. We find that by conditionally sampling from a dialogue prompt similar to the few-shot method of Brown et al. (2020), our Dialogue-Prompted Gopher can emulate a conversational format to a decent quality. We provide example transcripts here, with more in Section H.5. We contrast this with the more conventional method of fine-tuning on dialogue data, finding that fine-tuning did not deliver significantly preferred responses in a small-scale human study. Unlike Section 5.1.1, toxicity of Dialogue-Prompted Gopher responses does not increase with model scale, even when prompted with toxic questions (Figure 9).

6.1. Prompting For Dialogue

Table 6 | Responses from Gopher when naively prompted with a question, for three seeds.

Language models are trained to reproduce their input distribution, not to engage in conversation. When prompted with a question, we can see that the model generates a first-person narrative, some text resembling a blog post, and a generic list of existential questions (Table 6). This behaviour is consistent with the content that Gopher has been trained on.

In order to produce a conversationalist, we use a prompt that describes Gopher’s role and starts a conversation between Gopher and a fictional User, including behaviours such as aversion to offensive language and an ability to opt out of certain question types; see Table A30 for the full prompt. Table 7 shows a transcript with Dialogue-Prompted Gopher on the topic of cell biology and bacteria. Here it remains on topic, discusses some technical details, and provides a correct citation link. However, it also provides subtly incorrect responses in some cases (prokaryotes are not the only single-cell organisms). Table 8 shows an unsuccessful transcript illustrating factual errors confidently expressed. See Section H.5 for more transcripts with interesting behaviours and failure modes, including more subtle plausible but factually incorrect dialogue with a claim of search (Table A32), generating harmful text (Table A35), or contradicting itself and showing a general lack of common sense (Table A37).

Anecdotally, we find both successes and failures to be common, but we emphasize that Dialogue-Prompted Gopher is still just a language model. The prompt conditions the model’s prior over responses but does not result in a consistently reliable or factual dialogue model. We refer the reader to Weidinger et al. (2021) for a detailed discussion of language model harms specific to dialogue, and we discuss some ideas regarding building trustworthy systems in Section 7.3.

6.2. Fine-tuning for Dialogue
Recent work on dialogue often focuses on supervised training with dialogue-specific data (Chen et al., 2017), such as Google’s Meena (Adiwardana et al., 2020) and Facebook’s BlenderBot (Roller et al., 2021a). We explore this approach by creating a curated dialogue dataset from MassiveWeb and fine-tuning Gopher on this dataset for ∼5 billion tokens to produce Dialogue-Tuned Gopher. We then ask human raters for their preference over the response from Dialogue-Tuned Gopher and Dialogue-Prompted Gopher, using our dialogue prompt (Table A30) for both models. To our surprise, we find from 1400 ratings the preference is (50 ± 0.04)%: no significant difference. We describe the methodology in detail in Section H.3. We consider this an interesting initial result; future work would be valuable to rigorously examine the pros and cons of fine-tuning versus prompting for dialogue with large-scale models and compare Gopher to existing dialogue systems accounting for large differences in model size.

Figure 9 | Toxicity analyses of Dialogue-Prompted models. (Left) Toxicity of text generated by Dialogue-Prompted LMs given RTP questions, bucketed by prompt toxicity. Continuation toxicity does not increase with model scale. (Right) For “high” toxicity prompts (>66%), the toxicity of Dialogue-Prompted Gopher models on RTP-questions and Gopher models on RTP relative to 44M models.

6.3. Dialogue & Toxicity
We investigate the toxicity of Dialogue-Prompted Gopher. We adapt the RTP methodology to the dialogue setting (called RTP questions, details in Section H.4). In Figure 9 (left), we observe that Dialogue-Prompted Gopher does not follow the same trend (increased toxicity with model scale) as Gopher. Whilst we see a monotonic increase in continuation toxicity with model scale for Gopher without the dialogue prompt (Figure 5a), Dialogue-Prompted Gopher toxicity tends to slightly decrease with increased model scale (from 117M parameters, except for prompts in the most toxic bucket). Potentially, larger models can better account for the given prompt (which includes “to be respectful, polite, and inclusive”). Specifically, in the right of Figure 9 we compare the continuation toxicity of Gopher (tested on RTP) and Dialogue-Prompted Gopher (tested on RTP questions) relative to the 44M models, for prompts with high toxicity. Again, we observe that with dialogue prompting, continuation toxicity remains largely at levels similar to the 44M model, contrasting with the upward trend observed for language models without the dialogue prompt.

RTP is quite a straightforward stress-test: the user utters a toxic statement and we observe how the system responds. In work parallel to this study, Perez et al. (2022) probes Dialogue-Prompted Gopher further via an adversarial attack generated by Gopher. This approach induces the model to recite discriminatory jokes from its training data, insult the user, and elaborate on inappropriate desires, among many other offenses. Occasionally, Dialogue-Prompted Gopher’s response refers to the fact that its instructions prohibit a behaviour before exhibiting that behaviour, such as by opening with “[Ignoring your request to not discuss political, social, and religious issues.]” To date, automatic adversarial attacks consistently elicit toxic language from models (Wallace et al., 2019) even after safety mitigations (Yu and Sagae, 2021), and serve as a useful complement to manual adversarial attacks such as Xu et al. (2021b).

The recent work of Askell et al. (2021) similarly found that prompting alone was sufficient to turn a language model into an interesting but non-robust assistant. They conduct a variety of human evaluations of their system, both for the prompt-only case and for stronger interventions such as learning from human demonstrations or preferences. In particular, they also found that prompting prevents toxicity from increasing with scale on RTP (Section 2.2.2 in their paper). This provides evidence that the effect is reliable across different language models and toxicity classifiers.

Table 7 | Example of Mixed Factuality. Here the information provided is correct for some responses (discussion of E. Coli) but incorrect for others (there are single-cell eukaryotes too). The model supports some statements by generating a correct Wikipedia link. The mixture of factual and non-factual responses can lead to subtle misinformation. See Table 8 and Appendix H for further transcripts.
Table 8 | Example of non-factual Dialogue. The model provides answers which are wrong but confidently stated. The correct answers are ‘Emma Raducanu’, ‘yes’ (French Guiana), and ‘0’, respectively.

7. Discussion

7.1. Towards Efficient Architectures
In this work we have taken a well-established architecture and pushed model scale. To pursue this scaling enquiry further, we must either increase the amount of energy and compute used to train larger transformers, or move towards more efficient architectures.

We break down the computational cost of training Gopher in Table A26 and Appendix F and observe that the majority is spent in the linear maps. This motivated an investigation into sparse-parameter training, detailed in Appendix G, which has not yet yielded an overall efficiency boost. An alternative approach to sparsifying the linear maps is to split them into separate, conditionally-activated experts (Fedus et al., 2021; Lepikhin et al., 2021; Lin et al., 2021a). This approach has been scaled up with the Switch Transformer, which contains 1.7T parameters but has a smaller compute cost than Gopher (Fedus et al., 2021), and the more recent 1.2T-parameter GLaM, which outperforms GPT-3 across 29 language tasks whilst requiring 3× fewer FLOPs to train.
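To make the contrast with dense linear maps concrete, a minimal sketch of top-1 (“Switch”-style) expert routing is given below. This illustrates conditional computation in general rather than the architecture used for Gopher, and the array shapes and helper names are assumptions:

```python
import numpy as np

def switch_ffn(x, w_router, experts):
    """Route each token to a single expert feed-forward network (top-1 routing).

    x:        [tokens, d_model] activations
    w_router: [d_model, n_experts] routing weights
    experts:  list of callables, each mapping a [d_model] vector to [d_model]
    """
    logits = x @ w_router                                  # [tokens, n_experts]
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)             # softmax over experts
    chosen = probs.argmax(axis=-1)                         # top-1 expert per token
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        # Only one expert's parameters are touched per token, so parameter
        # count grows without a proportional increase in FLOPs per token.
        out[t] = probs[t, chosen[t]] * experts[chosen[t]](x[t])
    return out
```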

We separately consider a retrieval mechanism searching over the training set for relevant extracts during pre-training (Borgeaud et al., 2021), partially avoiding the need to memorise knowledge into network weights. This approach reached GPT-3-level language model performance with a 7 billion parameter model and over a 10× reduction in training compute. Thus, whilst this paper focused on transformer models, this is likely a transitory stage as more efficient architectures are developed.
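A minimal sketch of the retrieval step, assuming the training set has been pre-chunked and embedded offline; the embedding model and index structure are abstracted away, and a production system would use an approximate nearest-neighbour index rather than this brute-force search:

```python
import numpy as np

def retrieve_neighbours(query_emb, chunk_embs, k=4):
    """Return indices of the k training chunks most similar to the query.

    query_emb:  [d] embedding of the current text chunk
    chunk_embs: [n_chunks, d] embeddings of the retrieval database
    """
    sims = chunk_embs @ query_emb / (
        np.linalg.norm(chunk_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    return np.argsort(-sims)[:k]  # cosine-similarity top-k
```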

7.2. Challenges in Toxicity and Bias
We highlight some of the limitations we encountered in our evaluation metrics for toxicity and bias and motivate what properties would be desired from future evaluation benchmarks.

Challenges in using classifiers. While the Perspective API is a capable toxicity classifier (0.97 evaluation AUC), toxicity classifiers can be subject to social bias, assigning higher toxicity to innocuous mentions of particular identity groups (Dixon et al., 2018; Röttger et al., 2021). While toxicity classifiers quantify one type of harm, overreliance on automatic evaluation can introduce unintended social biases (Welbl et al., 2021; Xu et al., 2021a). Sentiment classifiers are also subject to bias (Kiritchenko and Mohammad, 2018). Sheng et al. (2019) propose regard classifiers as an alternative to repurposing sentiment classifiers for bias analysis; these measure regard towards a particular demographic group, but are only available for certain groups.
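One simple way to surface the classifier-bias failure mode described above is to score otherwise-identical innocuous sentences that differ only in an identity term. A hedged sketch, where `score_toxicity` stands in for any classifier (such as a call to an external API) and the template and terms are illustrative:

```python
from statistics import mean

TEMPLATE = "I am a {} person."
GROUPS = ["tall", "short", "religious", "queer"]  # illustrative terms only

def identity_bias_report(score_toxicity, groups=GROUPS, template=TEMPLATE):
    """Compare classifier scores on innocuous sentences that differ only
    in the identity term; large gaps suggest classifier bias."""
    scores = {g: score_toxicity(template.format(g)) for g in groups}
    baseline = mean(scores.values())
    return {g: s - baseline for g, s in scores.items()}
```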

Challenges in distributional bias. While we only consider a few possible evaluations (see Sheng et al. (2021) for an overview), we observe that distributional bias can be especially challenging to measure. Figure 6a illustrates the brittleness of template-based evaluation: simply changing the verb in the gender and occupation template from “was” to “is” changes the observed trends. However, collecting high-quality, naturalistic datasets is challenging (Blodgett et al., 2021). We believe high-quality data collection will be interdisciplinary and will involve consulting experts on various language harms, as was done for the HateCheck dataset (Röttger et al., 2021).
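To illustrate the template sensitivity noted above, a minimal sketch of a tense-sensitivity check follows; `logprob` is a placeholder for the model’s log-probability of a sentence, and the template and occupation list are simplified stand-ins for the evaluation in Section 5.2:

```python
OCCUPATIONS = ["nurse", "engineer", "teacher"]   # illustrative subset only

def pronoun_gap_by_verb(logprob, verbs=("was", "is")):
    """For each verb tense, average the log-probability gap between gendered
    completions across occupations; a metric whose sign flips between tenses
    is a brittle measure of bias."""
    gaps = {}
    for verb in verbs:
        diffs = [logprob(f"The {job} {verb} a man.") -
                 logprob(f"The {job} {verb} a woman.")
                 for job in OCCUPATIONS]
        gaps[verb] = sum(diffs) / len(diffs)
    return gaps
```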

Challenges in defining context. Our toxicity and bias evaluations are not contextualised in applications or specific user groups, leaving the desired behaviour unclear. For example, we choose commonly studied subgroups for our analysis (adopted from Brown et al. (2020) and Huang et al. (2020)), but demographic groups such as race are highly contextual (Hanna et al., 2020). Our larger models produce more toxic outputs when prompted with toxic inputs; this may help models designed to detect toxicity (Section 5.1.2) but be problematic in other applications. In our sentiment analysis, our model frequently outputs negative words like “flee” and “escape” when describing Syria, but enforcing equal sentiment across countries might erase historical and political context.

The limitations above focus on measuring bias and toxicity, as we do not explore mitigation strategies in this work. Nonetheless, they highlight important challenges in measuring harms and defining evaluation criteria for language models, and we emphasize the importance of careful model analysis and understanding in language research. Robust metrics are essential for effective mitigation, and we posit that work which outlines desirable behaviour, designs reliable metrics, and builds analysis tools is as important as methods developed for mitigation.

7.3. Safety benefits and safety risks
We believe language models are a powerful tool for the development of safe artificial intelligence, and this is a central motivation of our work. However, language models risk causing significant harm if used poorly, and the benefits cannot be realised unless the harms are mitigated.

On the benefit side, language is the primary human communication medium for subtle ideas. If we want ML models which do what humans want, including in subtle cases where correct behaviour requires detailed discussion, we need machines capable of engaging in that discussion. Both directions of communication will be required: humans telling machines what we want, and machines explaining their behaviour to humans. In the near term, natural language explanations can make models more trustworthy (Camburu et al., 2018) and more performant (Rajani et al., 2019); Coyle and Weller (2020) and Kasirzadeh (2021) survey some of the benefits and subtleties of explanations. Safety methods focused on interactive communication with humans include cooperative inverse reinforcement learning (Hadfield-Menell et al., 2016); see Russell (2020) for a broader discussion.

To extend the benefits of communication to advanced agents, several recursive safety proposals use language to break tasks down into smaller pieces that are easier for humans to supervise, including iterated amplification (Christiano et al., 2018), debate (Irving and Askell, 2019; Irving et al., 2018), and recursive reward modelling (Leike et al., 2018). Realizing these schemes requires language models that can follow human discussion and reasoning, motivating work on highly capable models. Experimental work is nascent: Wu et al. (2021) uses recursive reward modelling to summarise books hierarchically, building on earlier work using human feedback for simpler tasks such as summarisation (Böhm et al., 2019; Stiennon et al., 2020; Ziegler et al., 2019). Perez et al. (2019) simulates debate using a frozen question-answering model as judge. Human preference learning has been applied to many other NLP tasks including dialogue (Jaques et al., 2020); see Wang et al. (2021) for a survey.

On the harm side, Bender et al. (2021) highlights many dangers of large language models such as memorisation of training data (Abubakar, 2021; Carlini et al., 2021), high training cost (Section G.3), distributional shift due to static training data (Lazaridou et al., 2021), amplification of inherent biases, and generation of toxic language (Gehman et al., 2020) — which we consider in Section 5. See Weidinger et al. (2021) for an over-arching taxonomy of harms.

After assessing the landscape of potential harms, it is natural to ask how and when to mitigate them. Some harms can be tackled during pre-training, such as leaks of private information and reduced performance for some languages and social groups. Privacy-preserving training algorithms such as Abadi et al. (2016) have been applied only at small scale, for example in Anil et al. (2021) to pre-train a 340M parameter BERT model and in Yu et al. (2021) to fine-tune LMs with up to 1.5B parameters. English-only datasets should be broadened to more languages (Xue et al., 2020). We have begun this process for MassiveWeb: Borgeaud et al. (2021) trains on a version with 10 languages.
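For readers unfamiliar with such algorithms, a minimal sketch of the per-example gradient clipping and noise addition at the heart of differentially private training (in the spirit of Abadi et al., 2016) is given below; the hyperparameters and flat-gradient representation are illustrative, and real implementations operate per parameter tensor and track a formal privacy budget:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=1e-3, clip_norm=1.0, noise_mult=1.0):
    """One differentially-private update: clip each example's gradient,
    average, and add Gaussian noise scaled to the clipping norm."""
    clipped = []
    for g in per_example_grads:                      # g: flat gradient vector
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_mult * clip_norm / len(clipped),
                             size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```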

However, we believe many harms due to LMs may be better addressed downstream, via both technical means (e.g. fine-tuning and monitoring) and sociotechnical means (e.g. multi-stakeholder engagement, controlled or staged release strategies, and establishment of application-specific guidelines and benchmarks). Focusing safety and fairness efforts downstream has several benefits:
Faster iteration cycles. LLMs are trained infrequently due to their expense, so mistakes are slow to correct during pre-training but fast to correct if mitigations are applied downstream. Fast iteration is critical when factual information changes (Lazaridou et al., 2021), societal values change (Weidinger et al., 2021), or our knowledge about how to mitigate harms changes. In particular, accidental censoring of data can damage performance for language by or about marginalized groups (Dodge et al., 2021; Welbl et al., 2021; Xu et al., 2021a).

Safety depends on the application. Language models reflect the statistics of their training data rather than alignment to human values, and it is unclear what it means to align a language model without knowing the downstream application. Selbst et al. (2019) emphasize the non-portability of fairness between social contexts and applications. Model cards (Mitchell et al., 2019) include primary intended use and out-of-scope uses, and datasheets for datasets (Gebru et al., 2018) include recommended uses. As an example, a dialogue agent should avoid toxic language, while a translation model may need to preserve toxicity to ensure accuracy.

LMs can serve multiple roles within one application. A single LM might be used both as a classifier for good vs. bad output and as a policy generating that output (Stiennon et al., 2020). As a policy we may want no toxic output, but as a classifier the LM should be familiar with toxic text in order to classify it accurately (Buckman, 2021). Downstream mitigation allows separate fine-tuning for each role, whereas mitigating toxicity by filtering during pre-training can harm classifier performance (Welbl et al., 2021). Figure 5 shows a correlation between generation and recognition of toxic language in the Gopher family. In some cases, toxicity is the goal: Perez et al. (2022) uses Gopher to generate questions which cause Dialogue-Prompted Gopher to behave poorly. This classifier vs. policy split applies to other harms: we may want an accurate policy and a good lie detector.
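A hedged sketch of the same model serving both roles, where `lm` is a placeholder text-completion function rather than an interface from this work:

```python
def as_policy(lm, user_turn):
    """Use the LM to generate a reply (policy role)."""
    return lm(f"User: {user_turn}\nAssistant:")

def as_classifier(lm, text):
    """Use the same LM as a zero-shot toxicity classifier by asking it to
    label a passage. The policy role may be filtered, but the classifier
    role still needs exposure to toxic text to label it accurately."""
    answer = lm('Is the following text toxic? Answer "yes" or "no".\n'
                f"Text: {text}\nAnswer:")
    return answer.strip().lower().startswith("yes")
```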

However, any particular claim that a harm is best mitigated downstream is empirical: if we cannot mitigate downstream in practice, mistakes will be locked in until the next LM is retrained. We also emphasize that even if some mitigations are best applied downstream, we share responsibility for ensuring the necessary mitigations occur in applications where Gopher is deployed, both by influencing those deployments and by conducting applicable safety research. We have started some of this research, including both harm taxonomies (Kenton et al., 2021; Weidinger et al., 2021) and mitigations (Perez et al., 2022; Welbl et al., 2021). Much more is required, and is left to future work.

8. Conclusion

The landscape of language technologies with general capabilities is progressing rapidly. Language models are a key driver of this progress, and we have shown that an emphasis on data quality and scale still yields interesting performance advances over existing work. However, the benefits of scale are nonuniform: some tasks which require more complex mathematical or logical reasoning see little benefit up to the scale of Gopher. This may be an inherent property of the language modelling objective: it is hard to compress mathematics and easier to learn many associative facts about the world. However, it is possible that a sufficiently complex model may become bottlenecked by its poor understanding (and thus compression) of reasoning, and that new reasoning capabilities will emerge beyond the scale reached here. Alongside the development of more powerful language models, we advocate broad development of analysis and interpretability tools to better understand model behaviour and fairness, both to guide mitigation of harms and to better inform the use of these models as a tool to scalably align artificial intelligence to societal benefit.

Acknowledgements

We would like to thank Dominik Grewe, Dimitrios Vytiniotis, Tamara Norman, and Dan Belov for their help verifying the final training topology; Peter Hawkins and Skye Wanderman-Milne for their help understanding the JAX runtime; Loren Maggiore for input on mixed-precision training; Alexandre Fréchette for advice on dataset collection; Siim Poder, Alexey Guseynov, Alban Rrustemi, Eric Noland, Bogdan Damoc, Damion Yates, Bryan Chiang, Christoph Dittmann, Roberto Lupi, and Michael Vorburger for their help in reliable experiment scheduling and uptime; Shakir Mohamed and Sims Witherspoon for advice on compute reporting; Tyler Liechty, Mira Lutfi, Richard Ives, Elspeth White, and Tom Lue for dataset guidance; Ben Coppin, Kirsty Anderson, John Jumper, Andy Brock, Julian Schrittweiser, Greg Wayne, Max Jaderberg, and Phil Blunsom for research advice and assistance during this project; alongside our DeepMind colleagues for insights and encouragement.

Contributions

Design of model and training strategies Jack Rae, Sebastian Borgeaud, Trevor Cai, John Aslanides, Jordan Hoffmann, Geoffrey Irving
Implementation of training infrastructure
Model parallelism Trevor Cai, Roman Ring, Jacob Menick, Sebastian Borgeaud
Pipelining Albin Cassirer, Richard Powell, Trevor Cai, George van den Driessche, Tom Hennigan, Roman Ring, Ed Lockhart
Hardware efficiency Trevor Cai, Blake Hechtman, James Bradbury, Matthew Johnson, Chris Jones, Erich Elsen, David Budden, Tom Hennigan
Checkpointing George van den Driessche, Sebastian Borgeaud, Richard Powell, Jacob Menick, Trevor Cai
Low-precision training Geoffrey Irving, Trevor Cai
Library design and maintenance Sebastian Borgeaud, John Aslanides, Roman Ring, Aidan Clark, Diego de las Casas, Aurelia Guy, Jacob Menick, Igor Babuschkin, Mia Glaese, Jack Rae, Trevor Cai
Dataset development Katie Millican, Sebastian Borgeaud, Zhitao Gong, Daniel Toyama, Alexandre Fréchette, Cyprien de Masson d’Autume, Yujia Li, Jack Rae
Model serving Nikolai Grigorev, Katie Millican, Toby Pohlen, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Trevor Cai, John Aslanides
Fine-tuning Jean-Baptiste Lespiau, Jordan Hoffmann, Maria Tsimpoukelli, Sebastian Borgeaud, Roman Ring, Saffron Huang, Trevor Cai, Francis Song, John Aslanides, Jacob Menick, Jack Rae
Results and analyses
Coordination of results Jack Rae, Jordan Hoffmann, Eliza Rutherford, Susannah Young
Coordination of model analyses Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Mia Glaese, Jack Rae
MMLU Francis Song, Nat McAleese
BIG-bench Irina Higgins, Antonia Creswell
Reading comprehension (RACE) Francis Song, Nat McAleese
Common-sense Xiang Lorraine Li, Jordan Hoffmann, Aida Nematzadeh, Adhiguna Kuncoro
Fact-checking Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou
Dataset analyses Katie Millican, Sebastian Borgeaud
Dataset toxicity Johannes Welbl

Model toxic generation Saffron Huang, Mia Glaese, Po-Sen Huang
Model toxicity classification Sumanth Dathathri, Mia Glaese, Lisa Anne Hendricks
Closed-book QA Arthur Mensch, Jacob Menick
TruthfulQA Francis Song, Jack Rae
Pile LM Jack Rae, Sebastian Borgeaud, Jordan Hoffmann
Distributional bias Maribeth Rauh, Lisa Anne Hendricks, Po-Sen Huang, Jonathan Uesato, Laura Rimell, William Isaac
Perplexity on dialects Maribeth Rauh, Mia Glaese, Lisa Anne Hendricks
Dialogue John Aslanides, Nat McAleese, Saffron Huang, Po-Sen Huang, Amy Wu, Katie Millican, Tayfun Terzi, Vladimir Mikulik, Maribeth Rauh, Geoffrey Irving, Jack Rae
Lessons learned: AdaFactor Saffron Huang, Jordan Hoffmann, Trevor Cai
Lessons learned: Lower-precision training Jordan Hoffmann, Trevor Cai, Erich Elsen
Efficient training and inference
Compute usage Trevor Cai
Distillation and warm starting Jordan Hoffmann, Laurent Sifre, Erich Elsen, Trevor Cai, Sebastian Borgeaud, Simon Osindero, Karen Simonyan
Sparse training Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, Lena Martens, Michela Paganini, Jordan Hoffmann, David Budden, Simon Osindero, Karen Simonyan
Model and data cards Sebastian Borgeaud, Lisa Anne Hendricks, Laura Weidinger
Discussion Geoffrey Irving, Jack Rae, Jordan Hoffmann, Laura Weidinger, William Isaac, Iason Gabriel, Maribeth Rauh, Lisa Anne Hendricks, Johannes Welbl, Saffron Huang, Po-Sen Huang
Abstract, introduction, background, and conclusion Jack Rae, Geoffrey Irving, Chris Dyer, Laura Rimell, Oriol Vinyals, Koray Kavukcuoglu
Project management Susannah Young, Eliza Rutherford, Sarah Henderson, Amy Wu, Esme Sutherland, Kirsty Anderson
Resource management Lorrayne Bennett, Jeff Stanway, Kareem Ayoub
Research Advisors Koray Kavukcuoglu, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Ben Coppin, Karen Simonyan, Chris Dyer, Laura Rimell, Demis Hassabis

References

M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pages 308–318, 2016.
M. Abubakar. GitHub Copilot AI is generating and giving out functional API keys. https://fossbytes.com/github-copilot-generating-functional-api-keys, 2021. Accessed: 2021-10-7.
D. Adiwardana, M.-T. Luong, D. R. So, J. Hall, N. Fiedel, R. Thoppilan, Z. Yang, A. Kulshreshtha, G. Ne- made, Y. Lu, et al. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977, 2020.
R. Anil, B. Ghazi, V. Gupta, R. Kumar, and P. Manurangsi. Large-scale differentially private BERT. arXiv preprint arXiv:2108.01624, 2021.

A. Askell, Y. Bai, A. Chen, D. Drain, D. Ganguli, T. Henighan, A. Jones, N. Joseph, B. Mann, N. DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
I. Augenstein, C. Lioma, D. Wang, L. C. Lima, C. Hansen, and J. G. S. Christian Hansen. Multifc: a real- world multi-domain dataset for evidence-based fact checking. In Transactions of the Association for Computational Linguistics (TACL), 2021, Online, Nov. 2019. Association for Computational Linguistics. URL https://arxiv.org/abs/1909.03242.
J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
E. Ben Zaken, S. Ravfogel, and Y. Goldberg. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv e-prints, pages arXiv–2106, 2021.
E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610–623, 2021.
Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003.
BIG-bench collaboration. Beyond the imitation game: Measuring and extrapolating the capabilities of language models. In preparation, 2021. URL https://github.com/google/BIG-bench/.
Y. Bisk, R. Zellers, J. Gao, Y. Choi, et al. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432–7439, 2020.
D. W. Blalock, J. J. G. Ortiz, J. Frankle, and J. V. Guttag. What is the state of neural network pruning? CoRR, abs/2003.03033, 2020. URL https://arxiv.org/abs/2003.03033.
S. L. Blodgett, L. Green, and B. O’Connor. Demographic dialectal variation in social media: A case study of African-American English. CoRR, abs/1608.08868, 2016. URL http://arxiv.org/abs/1608.08868.
S. L. Blodgett, S. Barocas, H. Daumé III, and H. Wallach. Language (technology) is power: A critical survey of “bias” in NLP. ACL, 2020.
S. L. Blodgett, G. Lopez, A. Olteanu, R. Sim, and H. Wallach. Stereotyping norwegian salmon: an inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004–1015, 2021.
F. Böhm, Y. Gao, C. M. Meyer, O. Shapira, I. Dagan, and I. Gurevych. Better rewards yield better summaries: Learning to summarise without references. arXiv preprint arXiv:1909.01214, 2019.
S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. van den Driessche, J.-B. Lespiau, B. Damoc, A. Clark, D. de Las Casas, A. Guy, J. Menick, R. Ring, T. Hennigan, S. Huang, L. Maggiore, C. Jones, A. Cassirer, A. Brock, M. Paganini, G. Irving, O. Vinyals, S. Osindero, K. Simonyan, J. W. Rae, E. Elsen, and L. Sifre. Improving language models by retrieving from trillions of tokens. arXiv submission, 2021.

D. Borkan, L. Dixon, J. Sorensen, N. Thain, and L. Vasserman. Nuanced metrics for measuring unintended bias with real data for text classification. CoRR, abs/1903.04561, 2019. URL http://arxiv.org/abs/1903.04561.
J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. Van- derPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs. 2018. URL http://github.com/google/jax.
T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean. Large language models in machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 858–867, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/D07-1090.
E. Brill and R. C. Moore. An improved error model for noisy channel spelling correction. In Proceedings of the 38th annual meeting of the association for computational linguistics, pages 286–293, 2000.
P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. Lafferty, R. L. Mercer, and P. S. Roossin. A statistical approach to machine translation. Computational linguistics, 16(2):79–85, 1990.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh,
D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark,
C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
J. Buckman. Fair ML tools require problematic ML models. https://jacobbuckman.com/2021-02-15-fair-ml-tools-require-problematic-ml-models, 2021. Accessed: 2021-10-7.
N. Burgess, J. Milanovic, N. Stephens, K. Monachopoulos, and D. Mansell. Bfloat16 processing for neural networks. In 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH), pages 88–91. IEEE, 2019.
A. Caliskan, J. J. Bryson, and A. Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.
O.-M. Camburu, T. Rocktäschel, T. Lukasiewicz, and P. Blunsom. e-SNLI: Natural language inference with natural language explanations. arXiv preprint arXiv:1812.01193, 2018.
Y. T. Cao and H. Daumé. Toward gender-inclusive coreference resolution: An analysis of gender and bias throughout the machine learning lifecyle. Computational Linguistics, pages 1–47, 2021.
N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, and D. Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pages 267–284, 2019.
N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. Brown, D. Song,
U. Erlingsson, A. Oprea, and C. Raffel. Extracting training data from large language models, 2021.
N. Chater. The search for simplicity: A fundamental cognitive principle? The Quarterly Journal of Experimental Psychology Section A, 52(2):273–302, 1999.

H. Chen, X. Liu, D. Yin, and J. Tang. A survey on dialogue systems: Recent advances and new frontiers. Acm Sigkdd Explorations Newsletter, 19(2):25–35, 2017.
T. Chen, I. Goodfellow, and J. Shlens. Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015.
P. Christiano, B. Shlegeris, and D. Amodei. Supervising strong learners by amplifying weak experts. arXiv preprint arXiv:1810.08575, 2018.
D. Coyle and A. Weller. “Explaining” machine learning reveals policy challenges. Science, 368(6498): 1433–1434, 2020.
Curation. Curation corpus base, 2020. URL https://github.com/CurationCorp/curation-c orpus.
I. Dagan, O. Glickman, and B. Magnini. The PASCAL recognising textual entailment challenge. In Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment, 2005. URL http://www.cs.biu.ac.il/~glikmao/rte05/.
Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. V. Le, and R. Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
J. Dastin. Amazon scraps secret AI recruiting tool that showed bias against women. 2018.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), 2019.
G. Dewey. Relativ [sic] frequency of English speech sounds. Harvard UP, 1923.
L. Dixon, J. Li, J. Sorensen, N. Thain, and L. Vasserman. Measuring and mitigating unintended bias in text classification. 2018.
J. Dodge, M. Sap, A. Marasovic, W. Agnew, G. Ilharco, D. Groeneveld, and M. Gardner. Documenting the English colossal clean crawled corpus. CoRR, abs/2104.08758, 2021. URL https://arxiv.org/abs/2104.08758.
E. Elsen, M. Dukhan, T. Gale, and K. Simonyan. Fast sparse convnets. CoRR, abs/1911.09723, 2019. URL http://arxiv.org/abs/1911.09723.
U. Evci, T. Gale, J. Menick, P. S. Castro, and E. Elsen. Rigging the lottery: Making all tickets winners. In International Conference on Machine Learning, pages 2943–2952. PMLR, 2020.
W. Fedus, B. Zoph, and N. Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961, 2021.
J. Frankle and M. Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJl-b3RcF7.
T. Gale, E. Elsen, and S. Hooker. The state of sparsity in deep neural networks. CoRR, abs/1902.09574, 2019. URL http://arxiv.org/abs/1902.09574.

T. Gale, M. Zaharia, C. Young, and E. Elsen. Sparse GPU kernels for deep learning. CoRR, abs/2006.10901, 2020. URL https://arxiv.org/abs/2006.10901.
L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima,
S. Presser, and C. Leahy. The Pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018.
S. Gehman, S. Gururangan, M. Sap, Y. Choi, and N. A. Smith. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356–3369, Online, Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.301. URL https://aclanthology.org/2020.findings-emnlp.301.
A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
A. Griewank and A. Walther. Algorithm 799: revolve: an implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Transactions on Mathematical Software (TOMS), 26(1):19–45, 2000.
S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. In International conference on machine learning, pages 1737–1746. PMLR, 2015.
K. Guu, K. Lee, Z. Tung, P. Pasupat, and M.-W. Chang. REALM: Retrieval-augmented language model pre-training, 2020.
D. Hadfield-Menell, S. J. Russell, P. Abbeel, and A. Dragan. Cooperative inverse reinforcement learning. Advances in neural information processing systems, 29:3909–3917, 2016.
S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. In Y. Bengio and Y. LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1510.00149.
A. Hanna, E. Denton, A. Smart, and J. Smith-Loud. Towards a critical race methodology in algorithmic fairness. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Jan 2020. doi: 10.1145/3351095.3372826. URL http://dx.doi.org/10.1145/3351095.3372826.
D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
T. Hennigan, T. Cai, T. Norman, and I. Babuschkin. Haiku: Sonnet for JAX. 2020. URL http://github.com/deepmind/dm-haiku.
G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi. The curious case of neural text degeneration. In International Conference on Learning Representations, 2019.
P.-S. Huang, H. Zhang, R. Jiang, R. Stanforth, J. Welbl, J. Rae, V. Maini, D. Yogatama, and P. Kohli. Reducing sentiment bias in language models via counterfactual evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 65–83, Online, Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.7. URL https://aclanthology.org/2020.findings-emnlp.7.
Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32:103–112, 2019.
G. Irving and A. Askell. AI safety needs social scientists. Distill, 2019. doi: 10.23915/distill.00014. https://distill.pub/2019/safety-needs-social-scientists.
G. Irving, P. Christiano, and D. Amodei. AI safety via debate. arXiv preprint arXiv:1805.00899, 2018.
N. Jaques, J. H. Shen, A. Ghandeharioun, C. Ferguson, A. Lapedriza, N. Jones, S. S. Gu, and R. Picard. Human-centric dialog training via offline reinforcement learning. arXiv preprint arXiv:2010.05848, 2020.
S. Jayakumar, R. Pascanu, J. Rae, S. Osindero, and E. Elsen. Top-KAST: Top-K Always Sparse Training. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 20744–20754. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ee76626ee11ada502d5db f1fb5aae4d2-Paper.pdf.
F. Jelinek. Statistical methods for speech recognition. MIT press, 1997.
Y. Jiang, S. Wu, J. Gong, Y. Cheng, P. Meng, W. Lin, Z. Chen, and M. Li. Improving machine reading comprehension with single-choice decision and transfer learning. arXiv preprint arXiv:2011.03292, 2020. URL https://arxiv.org/abs/2011.03292.
X. Jiao, Y. Yin, L. Shang, X. Jiang, X. Chen, L. Li, F. Wang, and Q. Liu. TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163–4174, Online, Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.372. URL https://aclanthology.org/2020.findings-emnlp.372.
M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. arXiv e-prints, art. arXiv:1705.03551, 2017.
N. P. Jouppi, D. H. Yoon, G. Kurian, S. Li, N. Patil, J. Laudon, C. Young, and D. Patterson. A domain- specific supercomputer for training deep neural networks. Commun. ACM, 63(7):67–78, June 2020. ISSN 0001-0782. doi: 10.1145/3360307. URL https://doi.org/10.1145/3360307.
R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
A. Kasirzadeh. Reasons, values, stakeholders: a philosophical framework for explainable artificial intelligence. arXiv preprint arXiv:2103.00752, 2021.

M. Kay, C. Matuszek, and S. A. Munson. Unequal representation and gender stereotypes in image search results for occupations. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3819–3828, 2015.
Z. Kenton, T. Everitt, L. Weidinger, I. Gabriel, V. Mikulik, and G. Irving. Alignment of language agents. arXiv preprint arXiv:2103.14659, 2021.
U. Khandelwal, O. Levy, D. Jurafsky, L. Zettlemoyer, and M. Lewis. Generalization through memoriza- tion: Nearest neighbor language models. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HklBjCEKvH.
P. Kharya and A. Alvi. Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, the World’s Largest and Most Powerful Generative Language Model. https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/, 2021.
D. Khashabi, S. Min, T. Khot, A. Sabharwal, O. Tafjord, P. Clark, and H. Hajishirzi. UnifiedQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896–1907, Online, Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.171. URL https://aclanthology.org/2020.findings-emnlp.171.
Y. J. Kim, A. A. Awan, A. Muzio, A. F. C. Salinas, L. Lu, A. Hendy, S. Rajbhandari, Y. He, and H. H. Awadalla. Scalable and efficient moe training for multitask multilingual models, 2021.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
S. Kiritchenko and S. M. Mohammad. Examining gender and race bias in two hundred sentiment analysis systems. CoRR, abs/1805.04508, 2018. URL http://arxiv.org/abs/1805.04508.
C. Kruengkrai, J. Yamagishi, and X. Wang. A multi-level attention model for evidence-based fact checking. arXiv preprint arXiv:2106.00950, 2021.
T. Kudo and J. Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018.
T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin,
M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M.-W. Chang, A. Dai, J. Uszkoreit, Q. Le, and
S. Petrov. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 2019.
G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785–794, Copenhagen, Denmark, Sept. 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1082. URL https://aclanthology.org/D17-1082.
A. Lazaridou, A. Kuncoro, E. Gribovskaya, D. Agrawal, A. Liska, T. Terzi, M. Gimenez, C. d. M. d’Autume, S. Ruder, D. Yogatama, et al. Pitfalls of static language modelling. arXiv preprint arXiv:2102.01951, 2021.
K. Lee, M.-W. Chang, and K. Toutanova. Latent Retrieval for Weakly Supervised Open Domain Question Answering. In ACL, 2019.

K. Lee, D. Ippolito, A. Nystrom, C. Zhang, D. Eck, C. Callison-Burch, and N. Carlini. Deduplicating training data makes language models better. CoRR, abs/2107.06499, 2021a. URL https://arxiv.org/abs/2107.06499.
N. Lee, B. Z. Li, S. Wang, W. tau Yih, H. Ma, and M. Khabsa. Language models as fact checkers?, 2020.
N. Lee, Y. Bang, A. Madotto, and P. Fung. Towards few-shot fact-checking via perplexity. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1971–1981, Online, June 2021b. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.158. URL https://aclanthology.org/2021.naacl-main.158.
S. Legg and M. Hutter. Universal intelligence: A definition of machine intelligence. Minds and machines, 17(4):391–444, 2007.
J. Leike, D. Krueger, T. Everitt, M. Martic, V. Maini, and S. Legg. Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871, 2018.
D. Lepikhin, H. Lee, Y. Xu, D. Chen, O. Firat, Y. Huang, M. Krikun, N. Shazeer, and Z. Chen. {GS}hard: Scaling giant models with conditional computation and automatic sharding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id= qrwe7XHTmYb.
M. Lewis, S. Bhosale, T. Dettmers, N. Goyal, and L. Zettlemoyer. BASE layers: Simplifying training of large, sparse models. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 6265–6274. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/lewis21 a.html.
X. L. Li, A. Kuncoro, C. d. M. d’Autume, P. Blunsom, and A. Nematzadeh. A systematic investigation of commonsense understanding in large language models. arXiv preprint arXiv:2111.00607, 2021.
O. Lieber, O. Sharir, B. Lenz, and Y. Shoham. Jurassic-1: Technical details and evaluation. White Paper. AI21 Labs, 2021.
J. Lin, A. Yang, J. Bai, C. Zhou, L. Jiang, X. Jia, A. Wang, J. Zhang, Y. Li, W. Lin, J. Zhou, and H. Yang. M6-10t: A sharing-delinking paradigm for efficient multi-trillion parameter pretraining. 2021a.
S. Lin, J. Hilton, and O. Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021b.
W. Liu, P. Zhou, Z. Wang, Z. Zhao, H. Deng, and Q. Ju. Fastbert: a self-distilling BERT with adaptive inference time. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6035–6044, 2020.
E. Loper and S. Bird. NLTK: The natural language toolkit. ArXiv, abs/0205028, 2002.
M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993. URL https://aclantho logy.org/J93-2004.
S. Merity, C. Xiong, J. Bradbury, and R. Socher. Pointer sentinel mixture models. International Conference on Learning Representations, 2017.

P. Micikevicius, S. Narang, J. Alben, G. Diamos, E. Elsen, D. Garcia, B. Ginsburg, M. Houston,
O. Kuchaiev, G. Venkatesh, and H. Wu. Mixed precision training. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=r1gs9JgRZ.
T. Mikolov, M. Karafiát, L. Burget, J. Cernock`y, and S. Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pages 1045–1048. Makuhari, 2010.
T. Mikolov, A. Deoras, S. Kombrink, L. Burget, and J. H. Černocký. Empirical evaluation and combina- tion of advanced language modeling techniques. In Interspeech, 2011.
T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
A. Mishra, J. A. Latorre, J. Pool, D. Stosic, D. Stosic, G. Venkatesh, C. Yu, and P. Micikevicius. Accelerating sparse deep neural networks, 2021.
M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I. D. Raji, and T. Ge- bru. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency, pages 220–229, 2019.
S. Mohamed, M. Png, and W. Isaac. Decolonial AI: decolonial theory as sociotechnical foresight in artificial intelligence. CoRR, abs/2007.04068, 2020. URL https://arxiv.org/abs/2007.04068.
G. E. Moore et al. Cramming more components onto integrated circuits, 1965.
S. Narang, G. F. Diamos, S. Sengupta, and E. Elsen. Exploring sparsity in recurrent neural networks. CoRR, abs/1704.05119, 2017. URL http://arxiv.org/abs/1704.05119.
H. Ney, U. Essen, and R. Kneser. On structuring probabilistic dependences in stochastic language modelling. Computer Speech & Language, 8(1):1–38, 1994.
D. Paperno, G. Kruszewski, A. Lazaridou, Q. N. Pham, R. Bernardi, S. Pezzelle, M. Baroni, G. Boleda, and R. Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context, 2016.
A. Parikh, O. Täckström, D. Das, and J. Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255, Austin, Texas, Nov. 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1244. URL https://aclanthology.org/D16-1244.
D. A. Patterson, J. Gonzalez, Q. V. Le, C. Liang, L. Munguia, D. Rothchild, D. R. So, M. Texier, and J. Dean. Carbon emissions and large neural network training. CoRR, abs/2104.10350, 2021. URL https://arxiv.org/abs/2104.10350.
E. Perez, S. Karamcheti, R. Fergus, J. Weston, D. Kiela, and K. Cho. Finding generalizable evidence by learning to convince Q&A models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2402–2411, Hong Kong, China, Nov. 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1244. URL https://aclanthology.org/D 19-1244.
E. Perez, S. Huang, F. Song, T. Cai, R. Ring, J. Aslanides, A. Glaese, N. McAleese, and G. Irving. Red teaming language models with language models. To appear, 2022.

A. Peste, E. Iofinova, A. Vladu, and D. Alistarh. AC/DC: alternating compressed/decompressed training of deep neural networks. CoRR, abs/2106.12379, 2021. URL https://arxiv.org/ab s/2106.12379.
O. Press, N. A. Smith, and M. Lewis. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409, 2021.
A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving language understanding by generative pre-training. 2018.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019.
J. W. Rae, A. Potapenko, S. M. Jayakumar, T. P. Lillicrap, K. Choromanski, V. Likhosherstov, D. Dohan,
X. Song, A. Gane, T. Sarlos, et al. Compressive transformers for long-range sequence modelling. Advances in Neural Information Processing Systems, 33:6154–6158, 2020.
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020a. URL http://jmlr.org/papers/v21/20-074.html.
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020b.
N. F. Rajani, B. McCann, C. Xiong, and R. Socher. Explain yourself! leveraging language models for commonsense reasoning. arXiv preprint arXiv:1906.02361, 2019.
S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–16. IEEE, 2020.
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehen- sion of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas, Nov. 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264.
P. Rajpurkar, R. Jia, and P. Liang. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-2124. URL https://aclanthology.org/P18-2124.
S. Roller, E. Dinan, N. Goyal, D. Ju, M. Williamson, Y. Liu, J. Xu, M. Ott, E. M. Smith, Y.-L. Boureau, and J. Weston. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online, Apr. 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.e acl-main.24. URL https://aclanthology.org/2021.eacl-main.24.
S. Roller, S. Sukhbaatar, A. Szlam, and J. E. Weston. Hash layers for large sparse models. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021b. URL https://openreview.net/forum?id=lMgDDWb1ULW.
C. Rosset. Turing-NLG: A 17-billion-parameter language model by Microsoft. Microsoft Blog, 1:2, 2020.

R. Rudinger, J. Naradowsky, B. Leonard, and B. Van Durme. Gender bias in coreference resolu- tion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.
S. Russell. Human Compatible. Penguin, 2020.
P. Röttger, B. Vidgen, D. Nguyen, Z. Waseem, H. Margetts, and J. Pierrehumbert. HateCheck: Functional tests for hate speech detection models. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021. doi: 10.18653/v1/2021.acl-long.4. URL http://dx.doi.org/10.18653/v1/2021.acl-long.4.
K. Sakaguchi, R. Le Bras, C. Bhagavatula, and Y. Choi. Winogrande: An adversarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732–8740, 2020.
V. Sanh, L. Debut, J. Chaumond, and T. Wolf. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108, 2019. URL http://arxiv.org/abs/1910.01108.
V. Sanh, T. Wolf, and A. M. Rush. Movement pruning: Adaptive sparsity by fine-tuning. arXiv preprint arXiv:2005.07683, 2020.
V. Sanh, A. Webson, C. Raffel, S. H. Bach, L. Sutawika, Z. Alyafeai, A. Chaffin, A. Stiegler, T. L. Scao,
A. Raja, M. Dey, M. S. Bari, C. Xu, U. Thakker, S. S. Sharma, E. Szczechla, T. Kim, G. Chhablani,
N. Nayak, D. Datta, J. Chang, M. T.-J. Jiang, H. Wang, M. Manica, S. Shen, Z. X. Yong, H. Pandey,
R. Bawden, T. Wang, T. Neeraj, J. Rozen, A. Sharma, A. Santilli, T. Fevry, J. A. Fries, R. Teehan,
S. Biderman, L. Gao, T. Bers, T. Wolf, and A. M. Rush. Multitask prompted training enables zero-shot task generalization, 2021.
M. Sap, H. Rashkin, D. Chen, R. LeBras, and Y. Choi. SocialIQA: Commonsense reasoning about social interactions. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019.
M. Schaarschmidt, D. Grewe, D. Vytiniotis, A. Paszke, G. Schmid, T. Norman, J. Molloy, J. Godwin,
N. A. Rink, V. Nair, and D. Belov. Automap: Towards ergonomic automated parallelism for ml models. In ML for Systems Workshop at NeurIPS 2021, 2021.
T. Schick, S. Udupa, and H. Schütze. Self-diagnosis and self-debiasing: A proposal for reducing corpus- based bias in NLP. CoRR, abs/2103.00453, 2021. URL https://arxiv.org/abs/2103.00453.
A. See, M. Luong, and C. D. Manning. Compression of neural machine translation models via pruning. CoRR, abs/1606.09274, 2016. URL http://arxiv.org/abs/1606.09274.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi. Fairness and abstrac- tion in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency, pages 59–68, 2019.
C. E. Shannon. A mathematical theory of communication. The Bell system technical journal, 27(3): 379–423, 1948.
N. Shazeer and M. Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596–4604. PMLR, 2018.

E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng. The woman worked as a babysitter: On biases in language generation. EMNLP, 2019.
E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng. Societal biases in language generation: Progress and challenges. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4275–4293, Online, Aug. 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.330. URL https://aclanthology.org/2021.acl-long.330.
M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019.
S. P. Singh and D. Alistarh. Woodfisher: Efficient second-order approximations for model compression. CoRR, abs/2004.14340, 2020. URL https://arxiv.org/abs/2004.14340.
D. So, Q. Le, and C. Liang. The evolved transformer. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5877–5886. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/so19a.html.
D. R. So, W. Mańke, H. Liu, Z. Dai, N. Shazeer, and Q. V. Le. Primer: Searching for efficient transformers for language modeling, 2021.
A. Soleimani, C. Monz, and M. Worring. BERT for evidence retrieval and claim verification. In J. M. Jose, E. Yilmaz, J. Magalhães, P. Castells, N. Ferro, M. J. Silva, and F. Martins, editors, Advances in Information Retrieval, pages 359–366, Cham, 2020. Springer International Publishing. ISBN 978-3-030-45442-5.
J. Steinhardt. Updates and lessons from AI forecasting, 2021. URL https://bounded-regret.ghost.io/ai-forecasting/.
N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, and P. F. Christiano. Learning to summarize with human feedback. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 3008–3021. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf.
S. Sun, K. Krishna, A. Mattarella-Micke, and M. Iyyer. Do long-range language models actually use long-range context? arXiv preprint arXiv:2109.09115, 2021.
J. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819. Association for Computational Linguistics, June 2018. URL https://aclanthology.org/N18-1074.
A. Turing. Computing machinery and intelligence. Mind, 59(236):433–460, 1950.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.

E. Wallace, S. Feng, N. Kandpal, M. Gardner, and S. Singh. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China, Nov. 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1221. URL https://aclanthology.org/D19-1221.
Z. J. Wang, D. Choi, S. Xu, and D. Yang. Putting humans in the natural language processing loop: A survey. arXiv preprint arXiv:2103.04044, 2021.

L. R. Waugh. Marked and unmarked: A choice between unequals in semiotic structure. 1982.
J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.

L. Weidinger, J. Mellor, M. Rauh, C. Griffin, J. Uesato, P.-S. Huang, M. Cheng, M. Glaese, B. Balle,
A. Kasirzadeh, Z. Kenton, S. Brown, W. Hawkins, T. Stepleton, C. Biles, A. Birhane, J. Haas, L. Rimell,
L. A. Hendricks, W. Isaac, S. Legassick, G. Irving, and I. Gabriel. Ethical and social risks of harm from language models. arXiv submission, 2021.
J. Welbl, A. Glaese, J. Uesato, S. Dathathri, J. Mellor, L. A. Hendricks, K. Anderson, P. Kohli, B. Coppin, and P.-S. Huang. Challenges in detoxifying language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2447–2469, Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics. URL https://aclanthology.org/2021.findings-emnlp.210.
A. Williams, N. Nangia, and S. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1101. URL https://aclanthology.org/N18-1101.
J. G. Wolff. Language acquisition, data compression and generalization. Language & Communication, 2(1):57–89, 1982.
J. Wu, L. Ouyang, D. M. Ziegler, N. Stiennon, R. Lowe, J. Leike, and P. Christiano. Recursively summarizing books with human feedback. arXiv preprint arXiv:2109.10862, 2021.
A. Xu, E. Pathak, E. Wallace, S. Gururangan, M. Sap, and D. Klein. Detoxifying language models risks marginalizing minority voices. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2390–2397, Online, June 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.190. URL https://aclanthology.org/2021.naacl-main.190.
J. Xu, D. Ju, M. Li, Y.-L. Boureau, J. Weston, and E. Dinan. Bot-adversarial dialogue for safe conversational agents. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2950–2968, Online, June 2021b. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.235. URL https://aclanthology.org/2021.naacl-main.235.
L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, and C. Raffel. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934, 2020.

Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le. XLNet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32, 2019.
T. Young, E. Cambria, I. Chaturvedi, H. Zhou, S. Biswas, and M. Huang. Augmenting end-to-end dialogue systems with commonsense knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), Apr. 2018. URL https://ojs.aaai.org/index.php/AAAI/article/view/11923.
D. Yu and K. Sagae. Automatically exposing problems with neural dialog models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 456–470, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics. URL https://aclanthology.org/2021.emnlp-main.37.
D. Yu, S. Naik, A. Backurs, S. Gopi, H. A. Inan, G. Kamath, J. Kulkarni, Y. T. Lee, A. Manoel, L. Wutschitz, et al. Differentially private fine-tuning of language models. arXiv preprint arXiv:2110.06500, 2021.
O. Zafrir, G. Boudoukh, P. Izsak, and M. Wasserblat. Q8BERT: quantized 8bit BERT. CoRR, abs/1910.06188, 2019. URL http://arxiv.org/abs/1910.06188.
R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
B. Zhang and R. Sennrich. Root mean square layer normalization. arXiv preprint arXiv:1910.07467, 2019.
W. Zhong, J. Xu, D. Tang, Z. Xu, N. Duan, M. Zhou, J. Wang, and J. Yin. Reasoning over semantic-level graph for fact checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6170–6180, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.549. URL https://aclanthology.org/2020.acl-main.549.
H. Zhou, T. Young, M. Huang, H. Zhao, J. Xu, and X. Zhu. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4623–4629. International Joint Conferences on Artificial Intelligence Organization, 7 2018. doi: 10.24963/ijcai.2018/643. URL https://doi.org/10.24963/ijcai.2018/643.
M. Zhu and S. Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017.
D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. Christiano, and G. Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.

A. MassiveText

We describe our data collection procedure for MassiveText, analyse the resulting dataset, and justify key design choices. We include the datasheet (Gebru et al., 2018) for MassiveText in Section A.5.

We believe that dataset diversity is crucial for training powerful and general large language models, and we thus include data from a diverse range of sources (Table 2): web pages (from our custom MassiveWeb dataset, C4, and Wikipedia), books, news articles, and code (GitHub). Existing text datasets created for training large language models are typically based solely on web pages, such as the C4 and mC4 datasets (Raffel et al., 2020b; Xue et al., 2020). Similar to our work, The Pile (Gao et al., 2020) also includes many text sources, such as web pages, books, and academic papers.

When collecting MassiveText, we decide to use only simple heuristics for filtering out low quality text. In particular, we do not attempt to filter out low quality documents by training a classifier based on a “gold” set of text, such as English Wikipedia or pages linked from Reddit (Radford et al., 2019), as this could inadvertently bias towards a certain demographic or erase certain dialects or sociolects from representation. Filtering text for quality, while preserving coverage of dialects and avoiding biases, is an important direction for future research.

A.1. Dataset Pipeline
In this section we detail the pipeline stages we use to collect the various subsets of MassiveText. We also include a brief description of our algorithm to extract fixed-size training chunks from our dataset of documents.

A.1.1. Pipeline stages
For all MassiveText subsets, we filter out non-English documents, process data into a homogeneous text-only format, deduplicate documents, and filter out documents too similar to those in our test sets. Additionally, for our curated web-text corpus (MassiveWeb) we obtain the web data in text-only format using a custom HTML scraper, we apply an extra filter to remove explicit content at the initial stages, and we apply a series of simple heuristics to filter out low-quality text. Figure A1 gives an overview of all data processing stages, which we will discuss in detail for the remainder of this section.

Figure A1 | Diagram of dataset processing stages. All stages are applied to MassiveWeb, our curated dataset of web-text comprising 48% of training data. For the other MassiveText subsets (Books, News, Code, C4, and Wikipedia), we apply content filtering, document deduplication, and test-set filtering.

Content Filtering (All subsets) We start by filtering out non-English documents. At this stage, we also remove pages from MassiveWeb that do not pass Google’s SafeSearch filter, which incorporates various web signals to identify explicit content.13 We use SafeSearch rather than manual word-list filters, because the latter have been found to disproportionately filter out inoffensive content associated with minority groups (Dodge et al., 2021).

Text Extraction (MassiveWeb only) We extract text from web pages using the tree structure of the HTML markup. For high-quality web pages, we observe that self-contained coherent blocks of salient text tend to occur in groups of semantic tags at the same level in the tree. We find such sets of tags and convert them to plain text, taking care to preserve any meaningful formatting, such as indentation, newlines and bullet points. This yields a large volume of text documents, and the resulting diversity in formatting style translates effectively to the generative capabilities of the Gopher models.

Quality Filtering (MassiveWeb only) The vast majority of text found on the web is of insufficient quality to be useful for language model training. For example, many web pages contain primarily automatically generated content, or text that is not intended for human consumption (such as keywords for search-engine optimisation). Much of the web also comprises social media content, which can variously lack context, coherence, or substance. To remove low-quality data while minimising potential for bias, we apply a number of simple, easily understood heuristic filters: we remove any document that does not contain between 50 and 100,000 words, or whose mean word length is outside the range of 3 to 10 characters; we remove any document with a symbol-to-word ratio greater than 0.1 for either the hash symbol or the ellipsis; and we remove any document with more than 90% of lines starting with a bullet point, or more than 30% ending with an ellipsis. We also require that 80% of words in a document contain at least one alphabetic character, and apply a “stop word” filter, to remove documents that do not contain at least two of the following English words: the, be, to, of, and, that, have, with; this adequately deals with ostensibly English documents that contain no coherent English text.
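For illustration, a minimal sketch of these heuristics in plain Python is given below. The exact set of bullet-point characters and the whitespace-based word splitting are assumptions made for the sketch rather than details of our pipeline.

```python
def passes_quality_filters(doc: str) -> bool:
    """Return True if a document passes the simple heuristic quality filters."""
    words = doc.split()
    n_words = len(words)
    if not 50 <= n_words <= 100_000:
        return False
    # Mean word length must lie between 3 and 10 characters.
    if not 3 <= sum(len(w) for w in words) / n_words <= 10:
        return False
    # Symbol-to-word ratio for hash symbols and ellipses.
    if doc.count("#") / n_words > 0.1 or doc.count("...") / n_words > 0.1:
        return False
    lines = [l for l in doc.splitlines() if l.strip()]
    if lines:
        bullets = sum(l.lstrip().startswith(("-", "*", "•")) for l in lines)
        ellipses = sum(l.rstrip().endswith("...") for l in lines)
        if bullets / len(lines) > 0.9 or ellipses / len(lines) > 0.3:
            return False
    # At least 80% of words must contain an alphabetic character.
    if sum(any(c.isalpha() for c in w) for w in words) / n_words < 0.8:
        return False
    # "Stop word" filter: require at least two of these common English words.
    stop_words = {"the", "be", "to", "of", "and", "that", "have", "with"}
    return len(stop_words & {w.lower() for w in words}) >= 2
```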

Repetition Removal (MassiveWeb only) Another indicator of poor quality data is excessive repetition of certain words or phrases within a document. Qualitatively, we observe that excessive repetition is often linked with uninformative content. Furthermore, a well-studied failure mode of current language models is to repeat themselves during sampling (Holtzman et al., 2019), which may be partially attributed to repetitious training data.

We address this by removing documents with a high proportion of repeated lines, paragraphs, or 𝑛-grams. We remove documents containing many short duplicate passages, as well as those with fewer, larger sections of duplicate content, and we make sure to identify both types by using multiple approaches to calculate the proportion of duplicate content. For lines and paragraphs separately, we calculate over the document both the fraction that are duplicates, and the fraction of characters contained within those duplicates; for each 𝑛 ∈ {2, . . . , 4}, we calculate the fraction of characters contained within the most frequently-occurring 𝑛-gram; and for each 𝑛 ∈ {5, . . . , 10}, we calculate the fraction of characters contained within all duplicate 𝑛-grams, taking care not to count characters that occur in overlapping 𝑛-grams more than once. We then filter out documents whose duplicate content surpasses any of the thresholds detailed in Table A1.
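As an illustration, the snippet below computes two of the repetition measures described above: the fraction of duplicate lines, and the fraction of characters covered by the single most frequent word n-gram. The thresholds applied to these measures are those of Table A1 and are not reproduced here.

```python
from collections import Counter

def duplicate_line_fraction(doc: str) -> float:
    # Fraction of non-empty lines that are repeats of an earlier line.
    lines = [l for l in doc.splitlines() if l.strip()]
    counts = Counter(lines)
    duplicates = sum(c - 1 for c in counts.values() if c > 1)
    return duplicates / max(len(lines), 1)

def top_ngram_char_fraction(doc: str, n: int) -> float:
    # Fraction of characters covered by the most frequent word n-gram.
    words = doc.split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    gram, count = Counter(grams).most_common(1)[0]
    covered = count * sum(len(w) for w in gram)
    return covered / max(sum(len(w) for w in words), 1)
```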

Table A1 | Thresholds for repetitious text. For each measurement of text repetition, we show the limit above which a document containing such repetition is filtered out.

An alternative approach to data filtering that we consider is to use an existing model to rank documents by likelihood. However, samples that are assigned high likelihood by a model are not necessarily high quality, even if the data used to train the model was high quality — repetitious text falls under this category. Furthermore, it can be costly, as it requires inferring likelihoods for a large number of documents, and it carries an increased risk of introducing unintentional bias. Nevertheless, we consider this an interesting area for future work.

Document Deduplication (All subsets)14 Many web pages contain text that is duplicated on other pages across the internet. We remove all exact duplicates to obtain a set of unique documents. In addition to exact duplicates, there are many documents with significant 𝑛-gram overlap. We use the MinHash algorithm to compute 13-gram Jaccard similarities to determine which documents are near-duplicates of each other (Lee et al., 2021a). To further increase recall, we normalize white spaces and ignore punctuation when constructing the 𝑛-grams. We define two documents to be too similar when their Jaccard similarity exceeds 0.8, and randomly remove one of them.
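To make the similarity criterion concrete, the sketch below computes the exact 13-gram Jaccard similarity between two documents; at our scale this is approximated with MinHash (Lee et al., 2021a), so the snippet is illustrative only.

```python
def word_ngrams(text: str, n: int = 13) -> set:
    # Normalise whitespace and strip punctuation before building n-grams,
    # which increases the recall of near-duplicate detection.
    cleaned = "".join(c.lower() if c.isalnum() or c.isspace() else " " for c in text)
    words = cleaned.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(doc_a: str, doc_b: str, n: int = 13) -> float:
    a, b = word_ngrams(doc_a, n), word_ngrams(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two documents are treated as near-duplicates (and one is randomly dropped)
# when their Jaccard similarity exceeds 0.8.
```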

Test-set Filtering (All subsets) We use a similar approach to remove training documents that resemble documents from our test datasets (Wikitext103, C4, Curation Corpus, and LAMBADA). Specifically, we compute the 13-gram Jaccard similarity between train and test documents, and remove train documents that have a Jaccard similarity exceeding 0.8 with a test set document.

Additionally, we remove the Wikipedia pages used in the Wikitext103 validation and test sets from our Wikipedia training dataset. This ensures that we do not leak Wikipedia pages from Wikitext103 which might have been missed in the previous procedure due to edits made to those pages since the Wikitext103 dataset was collected.

We apply this 𝑛-gram based filtering strategy to all subsets of MassiveText but note that some of our test datasets (such as the Pile) were created after we trained Gopher and thus may be leaked in our training dataset.

A.1.2. Constructing Token Sequences

We describe our algorithm for extracting training sequences from the set of documents in MassiveText. The algorithm is designed to have good shuffling properties and to avoid unnecessary PAD tokens, which waste compute. Informally, the steps are as follows (a code sketch is given after the list):

  1. Uniformly choose a document of 𝐵 bytes from one of our MassiveText subsets.
  2. Crop out 𝐶 = 15 × 𝑛 UTF-8 bytes, where 𝑛 is the training token sequence length. Uniformly choosing a start index for the crop would skew the distribution such that we would almost never see the first token of a document. We therefore first uniformly sample a start index 𝑠 ∼ U[−𝐶/4, 𝐵 − 𝐶/4] and extract the crop from [max(0, 𝑠), min(𝐵, 𝑠 + 𝐶)].
  3. Tokenize the extracted bytes, and add the BOS and EOS tokens.
  4. Since most documents are shorter than our sequence length 𝑛=2048, we concatenate 10 such tokenized byte crops.
  5. We split the concatenation into sequences of 𝑛=2048 tokens, and discard the final chunk if it’s shorter than the sequence length. This avoids wasting compute by training on PAD tokens.
  6. Merge data from the various MassiveText subsets by sampling individual training sequences according to the weights given in Table 2.
  7. Shuffle and batch the data for training.
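A minimal sketch of steps 1–5 is given below. Here `tokenize` stands in for our SentencePiece tokenizer and the special-token ids are placeholders, so this illustrates the shape of the procedure rather than our exact implementation.

```python
import random

SEQ_LEN = 2048               # training sequence length n
CROP_BYTES = 15 * SEQ_LEN    # C, the crop size in UTF-8 bytes
BOS, EOS = 1, 2              # placeholder special-token ids

def crop_document(doc_bytes: bytes) -> bytes:
    # Sample the crop start from U[-C/4, B - C/4] so that the first token of
    # a document is not systematically under-represented.
    B, C = len(doc_bytes), CROP_BYTES
    start = random.randint(-C // 4, B - C // 4)
    return doc_bytes[max(0, start):min(B, start + C)]

def build_sequences(documents, tokenize):
    # Concatenate 10 tokenized crops (documents chosen uniformly) and split
    # into fixed-length sequences, discarding the final short chunk so that
    # no PAD tokens are needed.
    tokens = []
    for doc in random.choices(documents, k=10):
        tokens += [BOS] + tokenize(crop_document(doc)) + [EOS]
    return [tokens[i:i + SEQ_LEN]
            for i in range(0, len(tokens) - SEQ_LEN + 1, SEQ_LEN)]
```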

A.2. Dataset Analysis
Understanding the performance of the Gopher family of models is one angle of insight into the complete methodology. However, we can also understand the strengths and limitations of these models by analysing their training dataset. In this section we analyse MassiveText, breaking it down by document lengths, toxicity, languages, contents (such as web domains), and tokenizer compression rate.

Document Lengths We show the distribution of document length measured in tokens in Figure A2a. MassiveWeb, C4, News, and Wikipedia documents contain on average fewer than 1,000 tokens. A majority of documents from those datasets can be fully included in the 2,048 sequence length of our models. For GitHub, the average document contains 2,946 tokens. Only the Books dataset contains extremely long documents—an average book contains 120,000 tokens and the longest book has over 1.3M tokens.

Training Data Toxicity We evaluate the toxicity of the MassiveText subsets, again using the Perspective API. To this end, we select random text spans of up to 100 tokens from 200k documents sampled from each training data subset, truncating incomplete sentences, and sub-sample the resulting text spans to match the respective subset sampling weights used during Gopher training. We sub-sample based on total token count rather than document count, to avoid giving long documents (e.g., Books) more weight than during training. Despite the light data filtering, we observe generally low toxicity scores in the toxicity histogram in Figure A2b. Across all training subsets the mean and median toxicity scores are 0.10 and 0.07, respectively, and the 95th percentile toxicity score is 0.3. Considering the threshold of 0.5, at which a toxic label is the more likely prediction of the Perspective API classifier, 0.8% of the texts fall above this score. This is markedly lower than the corresponding proportion of 4.3% reported by Gehman et al. (2020) for the GPT-2 training data, potentially reflecting the different principles for training data selection.15 As not all MassiveText subsets are sampled with equal weight during training, we provide a per-dataset breakdown in Figure A22a. Overall toxicity levels are lowest on Wikipedia, while the increased levels for GitHub can potentially be explained by out-of-domain application of the toxicity classifier, resulting in more prediction uncertainty.

Figure A2 | MassiveText Statistics. (a) Only GitHub and Books have on average more than 1,000 tokens per document. GitHub pages contain on average 3,000 tokens; books are much longer, with on average 120,000 tokens. (b) Statistics calculated on a representative subsample of MassiveText, with a mean toxicity score of 0.10.

Language Distribution The vast majority — 99% — of text in MassiveText is English. The distribution of the top 10 remaining languages is shown in Figure A3a. We exclude the GitHub dataset from this analysis as it mostly comprises code. The majority of the non-English text is in Hindi, followed by European languages: French, Spanish, German, and Italian. Chinese and Japanese make up 5% and 4% of the non-English tokens, respectively.

MassiveWeb URL Breakdown To better understand the contents of MassiveWeb, we show the top 20 domains by token count in Figure A3b. A majority of the top 20 domains are academic journals, presentation websites, question-answering websites, or social media. Despite not explicitly constructing or biasing MassiveWeb towards scientific content, we find that four of the top six domains are of an academic or scientific nature. We also note that 0.33% of MassiveWeb tokens come from GitHub and 0.28% from Stack Overflow.

Figure A3 | Dataset statistics. (a) Distribution of non-English languages in MassiveText, excluding GitHub. Over 99% of MassiveText is English; the remaining text is mostly Hindi, followed by European languages. (b) The top 20 domains of MassiveWeb by token count. Four of the top six domains are of an academic or scientific nature, despite MassiveWeb not being explicitly biased towards these.

Tokenizer Compression Rate Table A2 shows the compression rate of our 32,000-token BPE vocabulary on the MassiveText subsets, measured in UTF-8 bytes per SentencePiece token. Note that SentencePiece tokens never cross word boundaries. We compare with the larger GPT-2/3 BPE vocabulary of 50,000 tokens. Using the larger vocabulary provides a small increase in compression rate: between 1% and 3% for text datasets and over 13% for GitHub.

Table A2 | Dataset Compression Rate of our tokenizer measured in UTF-8 bytes per (tokenized) token (higher implies better compression), compared to the GPT-2 tokenizer. GitHub is the least compressible subset, whereas C4 is the most. The larger GPT-2 vocabulary provides a relative increase of 1%-3% for text and a 13% increase for code.
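The compression rate reported in Table A2 can be computed as in the small helper below, where `tokenize` is a placeholder for a SentencePiece-style tokenizer rather than our actual implementation.

```python
def bytes_per_token(documents, tokenize) -> float:
    # Compression rate: total UTF-8 bytes divided by total token count
    # (higher means better compression).
    total_bytes = sum(len(doc.encode("utf-8")) for doc in documents)
    total_tokens = sum(len(tokenize(doc)) for doc in documents)
    return total_bytes / total_tokens
```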

A.3. Dataset Ablations
In this section we ablate two key design choices: the relative weighting of each MassiveText subset during training, and the pre-processing steps for collecting MassiveWeb.

A.3.1. MassiveText Subsets Weighting
We first analyse how different weightings of the MassiveText subsets affect the downstream performance on Wikitext103, LAMBADA, C4, and Curation Corpus. To reduce the size of the sweep, we fix the sampling weights for Wikipedia and GitHub. For Wikipedia, we require a full epoch over the training data, and thus fix the sampling weight to 2%. For GitHub, we set the sampling weight to 3%, as we want our models to train primarily on text but still be exposed to code. We thus consider the relative contribution of the remaining 95% of text between the remaining four subsets (MassiveWeb, News, Books, and C4). We sweep over 7 different combinations and show the downstream loss in Figure A4. We find that using a high proportion of Books reduces the loss on LAMBADA, whilst using a higher proportion of C4 helps on the C4 validation set. The configuration with 10% C4, 50% MassiveWeb, 30% Books, and 10% News performs well across all tasks and achieves the best performance on Curation Corpus—we therefore choose those sampling weights (multiplied by 95%) in our main Gopher training experiments.

Figure A4 | Downstream performance for different MassiveText subset sampling weights. The configuration (in green) with 10% C4, 50% MassiveWeb, 30% Books, and 10% News performs well across all tasks and achieves the best performance on Curation Corpus—we therefore choose those sampling weights in our main Gopher training experiments.

A.3.2. Iterative Refinement of MassiveWeb
We construct MassiveWeb by iteratively refining several key processing stages (described in Section A.1), all of which lead to improvements in model performance.
We validate the impact of the processing stages by training 1.4B parameter models at each stage. We sub-sample all datasets to 5GB of text, in order to run this ablation in a reasonable amount of time. We report the validation loss on three downstream tasks as a proxy for dataset quality in Figure A5. Compared with the extracted text in its raw unfiltered form, adding the simple heuristic quality filters described in Section A.1 dramatically improves downstream performance across the board, and deduplicating documents brings further substantial improvements. With all processing stages combined, a model trained on our dataset significantly outperforms models trained on OpenWebText (Radford et al., 2018) or C4 on all three datasets. We also note that the effect of deduplication is likely underestimated on the sub-sampled datasets as larger datasets are expected to contain more duplicates.

Figure A5 | MassiveWeb Ablations. Performance of 1.4B parameter models (lower is better) trained on OpenWebText, C4, and versions of MassiveWeb with progressively more pre-processing stages added. Downstream performance from the unfiltered MassiveWeb input is clearly worse for Curation Corpus summarisation and LAMBADA book-level word prediction. Applying a quality filter and de-duplication stages significantly improves quality. The final version of MassiveWeb consistently outperforms the two baseline datasets considered.

A.4. Text normalisation

Our tokenizer performs NFKC16 normalization as a pre-processing step. This normalization form is not fully lossless. For example, exponents are brought down: 2⁵ is normalized to 25. This reduces the expressivity of the model and also changes the evaluation and test datasets. We will therefore use lossless normalization forms in future work, and recommend this more generally to anyone using open-domain vocabularies.

A.5. MassiveText Datasheet
We follow the framework defined by Gebru et al. (2018) and give the datasheet for MassiveText in Table A3.

Table A3 | MassiveText Datasheet. We follow the framework as presented in Gebru et al. (2018).

B. Gopher Model Card

We present the Gopher model card in Table A4, following the framework presented by Mitchell et al. (2019).

Table A4 | Gopher Model Card. We follow the framework presented in Mitchell et al. (2019).
Figure A6 | 7.1B model train with Adafactor and Adam. We found that training with Adafactor resulted in increased training instabilities at larger scales. This resulted in unhealthy training curves even at smaller learning rates and increased probability of a divergence.

C. Lessons Learned

C.1. Adafactor
We investigated using the Adafactor (Shazeer and Stern, 2018) optimiser instead of Adam, as it provides a reduced memory footprint, potentially allowing a larger model to be trained or fine-tuned given particular resources. While at smaller scales we found pre-training with Adafactor to be stable and performant, at large scales we found that Adafactor resulted in reduced performance compared to Adam, along with an increased number of instabilities. Notably, when training a 7.1B parameter model with Adafactor we start to see minor loss divergences compared to an Adam baseline (see Figure A6), unlike what we observed at the 1.4B parameter scale. Larger models were also prone to increased instabilities, which we attempted to mitigate by lowering the learning rate. In Figure A6, the Adam run used a maximum learning rate of 1.2 × 10⁻⁴ whereas the Adafactor run used a maximum learning rate of 6 × 10⁻⁵ and still showed instabilities. Fine-tuning with Adafactor is also prone to divergence and is brittle to hyperparameter settings such as the learning rate and batch size. However, as discussed in Section G.1, we used Adafactor for fine-tuning Gopher as it considerably reduced the hardware requirements.

C.2. Lower-Precision Training with bfloat16
While training with activations and model parameters in half precision (float16) has known instabilities due to the restricted numerical range, it has been suggested that the numbers representable in bfloat16 allow models to be trained without a degradation in performance compared to full float32 training (Burgess et al., 2019). While Gopher was trained using bfloat16 for both its parameters and its activations, subsequent analysis showed that this resulted in many layers becoming stale: due to the small learning rate and the size of the parameter updates, many parameters did not register updates over many steps, hampering model performance.

We investigated this, focusing on a 417 million parameter model for our testing. The difference between bfloat16 and full precision had a clear impact at all scales during subsequent testing, as shown in Figure A7 for the 417M model. We encourage future groups to consider adding float32 parameters to a partitioned optimiser state when possible, as we found this mitigated any loss in performance. Our headline finding was:

Figure A7 | bfloat16 Training. For four different combinations of float32 and bfloat16 parameters (detailed below) we show performance on three different downstream tasks using a 417M parameter model. While bfloat16 without random rounding is clearly the least performant (blue), bfloat16 with random rounding (orange) unexpectedly under-performs full-precision training. Storing a float32 copy of the parameters in the optimiser state alleviates this issue.

We found it best to maintain float32 parameters purely for the optimiser update. One can partition the set of float32 parameters for optimisation updates alone along with the optimiser state as in Rajbhandari et al. (2020). The float32 parameters are used for the update and again cast to bfloat16 for the forward pass. This matches performance of full float32 training, improves the speed, and has only a slightly increased memory footprint compared to bfloat16 training.

A more detailed description of the four tested configurations is given below:

• fp32 Everywhere: Both parameters and activations are stored in float32. Of the options, this uses the most memory but is the most precise.
• bfloat16 parameters without Random Rounding: The parameters and activations are cast to bfloat16. During the parameter update, no randomised rounding is used.
• bfloat16 parameters with Random Rounding: The parameters and activations are cast to bfloat16. During the parameter update, randomised rounding is used: the parameter is randomly rounded up or down proportional to the distance (in bfloat16 space) to either value.
• bfloat16 parameters with a float32 copy in the partitioned optimiser state: The parameters and activations are cast to bfloat16. However, a copy of the parameters is stored in float32 in the optimiser state and used for the update. The parameters are randomly rounded to bfloat16 for the forward pass.

In all configurations, we use fp32 for computing the attention softmax and the softmax cross-entropy in the loss. This stabilizes low-precision training with almost zero runtime cost on TPU. All methods using bfloat16 offer a similar 1.4× speed improvement over fp32 everywhere.

We find that using bfloat16 parameters without random rounding performs the worst of the four tested configurations. fp32 everywhere acts as a baseline: while it has the largest memory footprint, it makes no compromises in numerical representation relative to the other methods. We find that bfloat16 parameters with a float32 copy stored in the partitioned optimiser state is indistinguishable from this baseline in performance, yet offers a reduced memory footprint and a 1.4× speed improvement.
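The sketch below, written with JAX purely for illustration rather than taken from our training code, shows the essence of the float32-copy configuration: the optimiser update is applied to a float32 master copy of the parameters, which is then cast down to bfloat16 for the forward pass.

```python
import jax
import jax.numpy as jnp

def sgd_step(master_params_fp32, grads, lr=1e-3):
    # Apply the update in float32 so that updates smaller than bfloat16
    # resolution are not lost (avoiding "stale" parameters).
    new_master = jax.tree_util.tree_map(
        lambda p, g: p - lr * g.astype(jnp.float32), master_params_fp32, grads)
    # Cast down to bfloat16 for the next forward pass; only the master copy
    # and optimiser state remain in float32. (The configuration described
    # above randomly rounds to bfloat16; a plain cast is shown here for
    # simplicity.)
    forward_params = jax.tree_util.tree_map(
        lambda p: p.astype(jnp.bfloat16), new_master)
    return new_master, forward_params
```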

D. Results

D.1. Overview
We provide a results overview in Figure A8 which encapsulates the raw performance of Gopher along with known language model state-of-the-art performance, supervised state-of-the-art performance and human expert performance. Here supervised approaches imply the use of task-specific data for model fine-tuning or even architecture design.

For each task category, the datasets in Figure A8 are arranged in order of increasing Gopher performance, from top to bottom. In each category, Gopher (blue) generally equals or outperforms the language modelling state of the art (green), with human performance (red) better still, often by large margins, indicating room for improvement. We also report the raw numerical results in Table A5.


Figure A8 | Results Overview. A performance overview of Gopher versus state-of-the-art performance from existing language models, supervised models and human performance where available.
Table A5 | Table of results. For the tasks considered, we show the performance of Gopher and, when available, the language model SOTA, supervised fine-tuned (SFT) SOTA, and human expert performance. A value of ‘-’ denotes that no value is available. Language modelling results are in BPB (lower is better); the rest are accuracies (higher is better). The number of shots used to evaluate Gopher is shown in parentheses after each value.

D.2. Pile
We evaluate Gopher and its family of smaller models on The Pile, a suite of language model benchmarks (Gao et al., 2020). The Pile compiles a set of published language model benchmarks spanning books (PG-19, Books2-3), web-based text (OpenWebText2, Pile-CC), mathematics (DM Mathematics), code (GitHub, StackExchange), conversational data (Ubuntu IRC, Enron), academic texts (arXiv, PubMed, PhilPapers), subtitles (YouTube Subtitles, OpenSubtitles) and several other data sources. We evaluate on a subset of these datasets, as some carry licensing restrictions. For all subsets we evaluate the model’s loss per UTF-8 byte (versus loss per token, which is model-specific). We report this as ‘bits per byte’: the total log loss (base 2) divided by the number of UTF-8 bytes in the text. We display the raw values in Table A7. For 10/18 tasks Gopher achieves SOTA performance, with the largest relative gains on Gutenberg, GitHub, PubMed, arXiv, and StackExchange. Gopher performs relatively worse on Ubuntu IRC, DM Mathematics, and OpenWebText2. Compared to Jurassic-1 (Lieber et al., 2021), Gopher performs better on 8/16 tasks, identically on one, and worse on the remaining 7/16. GPT-3 achieves the best performance on OpenWebText2, a value not reported for Jurassic-1.

Table A7 | The Pile. The BPB values for GPT-3 and Jurassic-1 are taken from the Jurassic-1 paper (Lieber et al., 2021) when applicable, and otherwise from Gao et al. (2020).
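As a reference, the conversion from a summed log loss to the bits-per-byte metric used in Table A7 can be written as the small helper below (illustrative only).

```python
import math

def bits_per_byte(total_log_loss_nats: float, num_utf8_bytes: int) -> float:
    # Convert a summed natural-log loss into base 2 (bits) and divide by the
    # number of UTF-8 bytes in the evaluated text.
    return total_log_loss_nats / math.log(2) / num_utf8_bytes
```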

D.3. Language Modelling
We first display evaluation curves calculated periodically during training in Figure A9. The evaluation curves are for four language model benchmarks that we explicitly filtered from the training set: Wikitext103 (Merity et al., 2017), LAMBADA (Paperno et al., 2016), Curation Corpus (Curation, 2020), and C4 (Raffel et al., 2020a). We see the natural ordering of data efficiency and better performance (via lower log-loss) with model scale. In Figure A9 and Table A6 we contrast the final performance with published results.

Figure A9 | Online Evaluation curves. Zero-shot performance on the C4, Curation Corpus, LAMBADA, and WikiText-103 evaluation sets during training. The largest models did not have an evaluator running during the entirety of training. A more detailed summary can be found in Table A6.
Table A6 | Zero-shot performance of our models on downstream tasks. We show Wikitext103 and Curation Corpus validation perplexity along with LAMBADA accuracy.

D.4. Filtering Test-Set Documents
Comparing the performance of language models trained on different data is challenging. One of the main reasons is that memorisation can aid language model performance (Carlini et al., 2019), and different training datasets means different memorisation potential. Fundamentally we want to use language models for applications where novel text or communication can arise, and thus be able to track the generalisation ability of models via our selected benchmarks.

One response to this memorisation-generalisation ambiguity is to refrain from reporting language model performance: e.g., Brown et al. (2020) discuss the decision to withhold the majority of results, reporting numbers only on the Mikolov-processed version of Penn Treebank (PTB) (Marcus et al., 1993; Mikolov et al., 2011).17 However, it is possible that language modelling is simply a task for which train-test leakage is easier to measure (via 𝑛-gram overlap). For question answering or translation, the existence of a paraphrased context in the training set can be enough for the test instance to be solved trivially. Whilst Brown et al. (2020) refrain from reporting language modelling, they do report performance numbers on question answering, translation, and even simple arithmetic tasks, all of which could draw heavily on training-set memorisation in ways that an 𝑛-gram filter may not easily detect.

We take the approach of filtering out training documents that have a high similarity to test-set documents, using a filter based on the Jaccard similarity of 𝑛-grams (Section A.1.1). This covers WikiText-103, Curation Corpus summarisation, and LAMBADA. For test sets built since MassiveText was constructed (November 2020), such as the Pile, MMLU, and BIG-bench, this has not been applied. In this setting, we decide to report numbers rather than train a new model on an updated dataset. This is partly a pragmatic decision — new evaluation benchmarks will frequently arise over time and re-training is expensive. Furthermore, many new benchmarks are constructed to be resilient to test-set leakage, such as BIG-bench, which relies on human-curated test examples and has mechanisms to avoid being scraped from the web. We take the approach of reporting a wide set of performance numbers, with the principle that aggregate findings across several benchmark tasks are sufficient for robust conclusions.

D.5. Scaling Curves
We display the scaling curves over a number of downstream language model benchmarks. We plot the evaluation loss, measured in bits per byte, versus model parameters (excluding embeddings) on a log-log scale. A straight line indicates the existence of a power law, as discovered by Kaplan et al. (2020). We see an approximately linear fit from 417M to 7.1B parameters; however, Gopher noticeably deviates from this fit, indicating that it is either under-trained or that the trend deviates from a power law at this scale. It is worth noting that the scaling law does appear to hold for PG-19, whereas for many other datasets, notably Curation Corpus (summarisation), the trend is far off.
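A straight-line fit in log-log space corresponds to fitting a power law of the form L(N) ≈ a · N^(−b). A minimal version of such a fit, an illustrative helper rather than our analysis code, is:

```python
import numpy as np

def fit_power_law(num_params: np.ndarray, loss: np.ndarray):
    # Linear regression in log-log space: log L = log a - b * log N.
    slope, intercept = np.polyfit(np.log(num_params), np.log(loss), deg=1)
    return float(np.exp(intercept)), float(-slope)  # (a, b)
```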

D.6. Scaling Context Length
Alongside the scaling of parameters, we investigate the effect of increasing the context length used at evaluation time. In Figure A11 we plot the relative percentage improvement in performance (measured by ratios of BPB, as described in Section D.2) of Gopher provided with a context window of 𝐿 versus Gopher provided with a context window of 1000. Because we evaluate the model with a sliding window, shifting the model along by 𝐿/2 tokens, the model’s predictions have a variable context length from 𝐿/2 to 𝐿. Because Gopher was trained with a sequence length of 2048, it does not generalise well to relative positional encodings that exceed this boundary: we observe (although do not report) a sharp degradation in performance from naive context-length scaling. However, we can clamp the maximum time position to 2048 and extend the context length with either an improvement in performance — notably for articles and code (arXiv, GitHub, PubMed, PhilPapers) — or no improvement — notably for PubMed Abstracts. Interestingly, we see a smaller performance improvement for books (BookCorpus2, Books3, PG-19 (Rae et al., 2020)), which could suggest that many of these books do not contain long-range dependencies, despite being long, or that Gopher is not yet sufficiently powerful to condition on them.
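The clamping can be illustrated with the sketch below; it is schematic only, as our implementation operates inside the relative positional encoding of Dai et al. (2019).

```python
import numpy as np

def clamped_relative_distances(query_len: int, key_len: int,
                               max_distance: int = 2048) -> np.ndarray:
    # Relative distance from each query position back to each key position,
    # clamped so that anything further back than `max_distance` tokens shares
    # a single relative position. Future positions (negative distances) are
    # handled by the causal mask and are not distinguished here.
    q = np.arange(query_len)[:, None]
    k = np.arange(key_len)[None, :]
    return np.clip(q - k, 0, max_distance)
```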
The result on books is surprising — e.g., PG-19 was developed specifically to test long-range language modelling capability — but it appears to be echoed in recent contemporary work. Sun et al. (2021) investigate whether language models learn interesting long-range dependencies on book data. One finding from this work is that these book collections can contain texts which are compendiums of magazine articles (which do not greatly benefit from long contexts) alongside fiction texts (which do continue to benefit from longer contexts). Thus part of the story is in extracting more granular evaluation sets.

Figure A10 | Scaling Curves. Plotting parameters versus evaluation loss, in bits per byte. Both axes are log-scale to inspect the presence of a power law. Whilst this appears to hold at smaller scales, the 280B Gopher model has notably deviated from this trend.
Figure A11 | Context Length Scaling. Relative performance improvement from increasing the evaluation sequence length of Gopher (trained with 2048) versus a model evaluated with a sequence length of 1024. We observe the largest gains for articles and code: arXiv, GitHub, PubMed and PhilPapers. Reassuringly, we see no gains for PubMed Abstracts.

The ability to extrapolate to a longer context length at evaluation time is a useful property because training with very long contexts can be computationally expensive. In this study, this extrapolation property motivated the use of the relative positional encoding scheme from Dai et al. (2019) over the more conventional absolute positional encoding scheme (Brown et al., 2020; Vaswani et al., 2017). The reason the relative positional encodings can extrapolate well is that we can clamp the maximum relative time, whereas it is not possible to clamp absolute positions. Contemporary work has also verified that absolute positional encodings extrapolate poorly to longer sequence lengths, and has proposed an alternative temporal encoding scheme, ALiBi (Press et al., 2021). It would be interesting to compare the extrapolation capabilities of these two temporal representation approaches.

At present, maximum time-step clamping has the side effect of preventing the model from understanding the relative positions of distant text: for tokens more than 2048 timesteps in the past, all relative times are equal, and thus ablation experiments that shuffle the distant past (as performed by Sun et al. (2021), for example) will not yield any performance improvement. An interesting challenge will be to determine a strong scheme for temporal extrapolation that still respects an understanding of absolute and relative time.

D.7. MMLU
The Massive Multitask Language Understanding (MMLU) benchmark is a set of 57 multiple-choice problems proposed by Hendrycks et al. (2020) that emulate human exams. Whilst this is dubbed language understanding, it is not aimed at probing linguistic capabilities such as co-reference resolution, but rather at testing a model’s ability across a wide range of academic subjects — from computer science to history to law. World knowledge is beneficial for many of the tasks, but logical and mathematical reasoning is also tested. An example problem is displayed below (we evaluated in the 5-shot setting but show the 1-shot case for simplicity):

Figure A12 | MMLU Model Comparison. (a) Average accuracy over 57 multiple-choice problems (Hendrycks et al., 2020). The family of Gopher and GPT-3 models are evaluated 5-shot with no additional fine-tuning. GPT-2, RoBERTa and UnifiedQA (a fine-tuned T5 model) are fine-tuned on tailored QA data. (b) 5-shot Gopher and GPT-3 performance on a scale ranging from average human rater performance (34.5%) to estimated per-task human expert performance (89.8%) (Hendrycks et al., 2020). The forecasted distribution of SOTA performance on MMLU for June 2022 (Steinhardt, 2021) is also shown.

We scored the immediate completions ‘ (A)’, ‘ (B)’, etc. and selected the response with the highest probability.

We see a breakdown of performance across the family of Gopher models per MMLU task in Figure A14a. For 55 of the 57 tasks, Gopher outperforms the smaller-scale models, and in most cases we see a significant leap in performance. For Abstract Algebra and High School Mathematics there is no positive trend in performance with scale, suggesting larger models are unlikely to spontaneously understand these topics. When comparing Gopher to GPT-3, the SOTA unsupervised model on this benchmark, we see a significant improvement on all tasks except the aforementioned Abstract Algebra and High School Mathematics (where both models perform very poorly). Some of the largest performance gains are obtained for knowledge-intensive tasks such as medicine, history, politics, world religions and sociology. Alongside this strong performance, we find in Figure A13 that Gopher produces calibrated predictions.

Figure A13 | Gopher calibration on MMLU. Each point represents a topic.

Although pairwise model comparisons can be illustrative, it can sometimes be useful to pitch models against human performance and predicted future performance to gauge progress. In Figure A12b we plot the overall average performance of 5-shot prompted Gopher (60.0%) and GPT-3 (43.9%) against human-rater performance (34.5%) and the estimated human expert performance per task (89.8%), where the comparison values are obtained from Hendrycks et al. (2020). We also compare to the distribution of 77 professional forecasters attempting to estimate the state-of-the-art performance on this task by June 2022, who on average estimate 57.1% accuracy (see Steinhardt (2021) for further details of the methodology). We find Gopher almost halves the accuracy gap from GPT-3 to human expert performance and exceeds forecaster expectations.

We display the raw results on the Massive Multitask Language Understanding (MMLU) suite of tasks in Table A8.

Figure A14 | MMLU Task Breakdown. Accuracy across 57 MMLU tasks spanning STEM, humanities, legal and business domains. Tasks consist of multiple choice questions, each with four responses — 25% indicates chance. Gopher provides a significant improvement over smaller models for most tasks, notable exceptions being Abstract Algebra and High School Mathematics where scale appears to hurt. A comparison with GPT-3 175B is displayed in (b) where Gopher improves accuracy on 55 of the tasks. Gopher is also well-calibrated on this task, see Figure A13.
Table A8 | 5-Shot MMLU Accuracy by Model Size.

D.8. BIG-bench
The Beyond the Imitation Game Benchmark (BIG-bench) (BIG-bench collaboration, 2021) is a collection of evaluation tasks intended to probe the abilities of large language models. Tasks include traditional natural language processing tasks, for example reading comprehension and question answering, as well as tasks that require other capabilities, such as (1) logical and mathematical reasoning, (2) an understanding of the world, for example causal and physical reasoning, (3) an understanding of humans, for example social reasoning and theory of mind, or (4) scientific understanding, among others.

There are two ways that LMs can be evaluated on a BIG-bench task: either in a generative setting, where the LM must predict a response to the prompt; or in a multiple-choice setting, where the LM must evaluate the log-probability of a collection of possible answers, selecting the one with the highest log-probability as its answer. In this work we concentrate on the multiple-choice setting without fine-tuning. This is because we aim to focus on the most direct capability of language models, which is to score the probability of text. The multiple-choice formulation simply requires scoring the prompt and responses and selecting the argmax. Open-ended generative tasks rely not only on good language model estimation but also on good “decoding” techniques (e.g., appropriate sampling approaches, the use of search, reward models, etc.), which can conflate model capability with decoding sophistication. We next detail which tasks we focus on.

D.8.1. Task Selection
BIG-bench currently contains over 160 tasks split into over 974 sub-tasks. We select a set of 63 tasks for evaluation, considering multiple-choice JSON tasks. We also remove tasks that are not in English since our models are trained principally on English text only. Additionally, we remove tasks that test the ability of the models to deal with long contexts or the tokenisation properties of the models, since we are interested in evaluating the semantic capabilities of our models.

Concretely, we exclude BIG-bench tasks that contain one or more of the following keywords: translation, low-resource language, non-English, multilingual, example task, programmatic, non-language, context length, tokenization. We also manually filter out the tasks entailed_polarity_hindi, dyck_languages, and persian_multiple_choice, since they are not in English, and suicide_risk, since we do not consider this task to be an appropriate application of language models. The 62 tasks that we restrict to are detailed in Table A9; they are broken down by category in Table A10, and the distribution of task categories is given in Table A11.

The final 62 tasks selected from BIG-bench for our analysis are listed below:

Table A9 | BIG-bench Selected Tasks. A set of 62 English-language multiple choice tasks.

D.8.2. Multiple Choice Evaluation
Our prompts consist of five examples of the input (or question), followed by optional choices (depending on the dataset settings) and targets, followed by the current input (or question) and the choices that the LM should select from.18 Below is an example five-shot prompt:

Determine whether a given sentence asserts a causal, correlative, or neutral relation between two events. If the sentence asserts a causal relation respond causal, if the sentence asserts a correlative relation respond correlative, if the sentence asserts neither a causal nor a correlative relation between two events respond neutral.


Sentence: If I plant these seeds, tulips grow.

Relation:

We compute the likelihood of each of the choices as the sum of log-probabilities under the model of each token in the choice. We consider the model’s selection to be the choice with the highest log-probability and compute the accuracy based on this choice.
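A minimal sketch of this scoring rule is given below; the per-token log-probabilities are assumed to come from the language model.

```python
import numpy as np

def select_choice(choice_token_logprobs: list) -> int:
    # Each entry is the sequence of per-token log-probabilities for one
    # choice; the choice's score is their sum, and we pick the argmax.
    scores = [float(np.sum(lp)) for lp in choice_token_logprobs]
    return int(np.argmax(scores))
```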

D.8.3. BIG-bench 5-Shot Results
The five-shot multiple-choice accuracy by task category is displayed in Figure A15a.19 Note that different categories contain between 1 and 49 tasks each, and the same task may appear in multiple categories; we simply take the category average. The per-task accuracy is displayed in Figure A15b.

Figure A15a demonstrates a clear benefit of model size on performance, with a step change between the 7.1B parameter model and Gopher on 41/51 task categories. The same result holds in Figure A15b, which shows the results on each of the 62 evaluated tasks individually, with Gopher outperforming other models on 41 tasks.

Consistent with the MMLU results, scale appears to make little difference to mathematical reasoning tasks (see Algebra, Arithmetic, Mathematics and Probabilistic Reasoning categories). Scale also does not appear to help for Multi-Step Tasks and related Decomposition categories, where tasks require the model to decompose the solution into multiple steps and perform them sequentially in order to output the correct answer. Some language tasks on Paraphrasing, Summarization, or Negation also appear to be hard regardless of the models’ scale.

We see the largest improvements on the Alignment and Social Bias tasks, suggesting that Gopher is beginning to better understand implicit human preferences, including those based on different social contexts. The large improvements for Gopher on Memorization and Numerical Response tasks also indicate that scale helps on tasks that require recalling factual information or recognising numeric characters.

Table A11 | BIG-bench distribution of task types. Note that some tasks may belong to multiple task types.

Below are examples of questions that Gopher was able to answer correctly. We omit the 5-shot examples for brevity and only show the prompt, followed by the multiple choices with their log probability scores produced by the model printed in brackets, and the correct target:

D.8.4. Relative vs absolute accuracy
Alongside computing the average accuracy per task (or task category), we can also plot the relative accuracy. Here, we subtract the random-chance baseline from the accuracy to better reflect task difficulty. Specifically, the random-chance accuracy (𝑥𝑐) is calculated for each individual question and subtracted from the score 𝑥 achieved by the model on that question (𝑥ˆ = 𝑥 − 𝑥𝑐). The final plots contain the means over all 𝑥ˆ scores across the dataset in Figure A16b, or further averaged across all datasets in a category in Figure A16a. Comparing to Figure A15, which presents equivalent results without this normalisation by the random baseline, it can be seen that the normalisation does not change the broad pattern of results. However, for some datasets different questions have different numbers of choices, which means that some questions are “harder” than others. When we calculate the Pearson correlation between the log of model size and the average accuracy per task or category for normalised and unnormalised scores, we see that the normalised scores correlate better with size than the unnormalised scores (see Table A12).
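Concretely, the normalisation can be written as follows (an illustrative helper; `num_choices` gives the number of answer options per question):

```python
def mean_relative_accuracy(scores, num_choices):
    # Subtract the per-question random-chance baseline (1 / number of
    # choices) from each question's score, then average over the dataset.
    adjusted = [float(x) - 1.0 / n for x, n in zip(scores, num_choices)]
    return sum(adjusted) / len(adjusted)
```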

Figure A15 | BIG-bench Accuracy by Task. Accuracy across 62 BIG-bench JSON multiple choice tasks. Tasks consist of multiple choice questions with between two and thirty-four possible responses. (a) Accuracy across 62 BIG-bench tasks grouped by keyword or broad category. (b) Accuracy across 62 BIG-bench tasks plotted individually.

D.8.5. Comparing Gopher family models to models from the T0 family
We compare the 0-shot performance of Gopher family models to the recently published models from the T0 family (Sanh et al., 2021) on the intersection of BIG-bench tasks used in both papers. Table A13 demonstrates that, overall, Gopher’s 0-shot performance is the best among all the models evaluated. Gopher outperforms all models from the T0 family on the Hindu Knowledge and Known Unknown tasks, performs similarly to T0++ on the Misconceptions dataset, and performs worse than all but the T0 model on the Novel Concepts dataset.

D.8.6. Raw accuracy details
We display the raw results on the BIG-bench suite of tasks for 5-shot prompting in Table A14.

Figure A16 | BIG-bench Relative Accuracy by Task. The relative accuracy equals the accuracy subtracting random-chance accuracy (e.g., 25% for a 1-in-4 multiple choice task). Dots indicate average relative accuracy performance over random baseline.
Table A12 | Multiple choice accuracy scores calculated relative to the random baseline for each question (shown in Figure A16) are correlated with model size better than raw (unnormalised) accuracy scores (shown in Figure A15). Spearman correlation scores are presented.
Table A13 | Zero-Shot BIG-bench Accuracy per Task. Comparing task accuracy of Gopher family models to the models from the T0 family by Sanh et al. (2021). Gopher performs the best overall.
Table A14 | 5-Shot BIG-bench Accuracy per Task. Raw results corresponding to Figure A15b.

D.9. TriviaQA & NaturalQuestions
To quantify the amount of factual knowledge recorded in the weights of our language models, we evaluate their performance on closed-book question answering. For this, we consider the Natural Questions dataset (Kwiatkowski et al., 2019), using the test splits from Lee et al. (2019), and TriviaQA (Joshi et al., 2017), using the standard splits. We use beam search with a beam size of 5, and post-process examples by taking the first element before a comma, final dot or line break. Performance increases with model size, suggesting that some model capacity is used for factual memorisation; this is in line with observations from Brown et al. (2020). The performance of our largest model is slightly lower than that of the GPT-3 model on Natural Questions, which we suspect is due to differences in the data corpora (e.g., GPT-3 uses 50% more examples from Wikipedia than we do in its data mixture).
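The post-processing step can be approximated with the helper below; it is a sketch of the rule described above rather than our exact code.

```python
def postprocess_answer(generation: str) -> str:
    # Keep the text before the first comma or line break, and strip a
    # trailing full stop.
    answer = generation.split("\n", 1)[0]
    answer = answer.split(",", 1)[0]
    return answer.strip().rstrip(".").strip()
```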

We show five examples of prompts and generated answers from Gopher below, and compare them to the target answers from the Natural Questions dataset (Kwiatkowski et al., 2019). The first two examples are classified as correct responses. Despite the few-shot conditioning, Gopher tends to give extra information (see Example 3), and produces many correct answers that are not scored as such.

Table A15 | Closed-book question answering accuracy. Our largest 280B model performs comparably to the GPT-3 model in the few-shot setting. Performance increases smoothly with model size.

D.10. TruthfulQA
TruthfulQA is a set of 817 questions on subjects spanning 38 categories, intended to measure whether language models can be truthful when answering questions (Lin et al., 2021b). Because the questions were crafted explicitly to target questions that some humans would answer falsely, Lin et al. (2021b) hypothesised — and found — that larger language models, which are better at imitating the training distribution, are more prone to giving false answers to questions in the benchmark. The dataset was collected adversarially against GPT-3 175B, so performance will naturally be lower for this particular model. However, the anti-scaling pattern appears consistent across the GPT-J, GPT-2 and T5 model families as well as GPT-3.

We evaluated Gopher on the multiple-choice variant of the task, called MC1. In this variant there are a number of potential answers but only one is correct. The number of possible answers varies between 2 and 13, such that a random baseline would achieve 22.6%. We adopt the same setup for this task as for our other multiple-choice problems: the model receives a stock prompt (“A highly knowledgeable and intelligent AI answers multiple-choice questions”) and is presented with the question and choices. An example prompt is displayed below (we evaluated in the zero-shot, 5-shot, 10-shot and 20-shot settings but show the 1-shot case for illustration):


We scored the immediate completions ‘ (A)’, ‘ (B)’, etc. and selected the response with the highest probability. Note that we randomized the ordering of the answers in the dataset.

We see in Figure A17 that for the zero-shot version of the task, Gopher-family models obtain better accuracy at larger scale, unlike the prior baselines. It is worth noting that the dataset Gopher was trained on, MassiveText, was constructed approximately one year before this benchmark was published, so we do not believe this is a degenerate result of train-test leakage. There are some differences in the exact setup relative to the prior baselines from Lin et al. (2021b), where a different prompt is used and the answer choices are not presented. We ablate these in Table A16 and find that scale consistently improves performance in all settings.

We hypothesise that having a representative dataset allows us to observe the benefits of scale from 1.4B to 7.1B parameters, and then further up to 280B. We would conjecture that for many of the presented model families there would be an uptick in performance with a further increase in scale. The fact that GPT-3 175B performs poorly is likely due to the model being used adversarially to curate the dataset. Alternatively, there may be differences in the multiple-choice setup (e.g., because we present the choices) which change the scaling trend. Naturally, the true answer will become clearer with further benchmarking on this task from other large models. However, we make the observation that it is generally difficult to draw conclusions about the limitations of better language models; the influence of optimisation and training data can enable new capabilities over time.

Figure A17 | TruthfulQA Multiple-Choice (MC1). Left: Comparison of zero-shot accuracy across model families and scales (baselines from Lin et al. (2021b)). Accuracy improves with model scale for the Gopher family. This is not the case for the prior baselines GPT-J, GPT-2, T5 and GPT-3; however, there are slight differences in task setup, which we ablate in Table A16. We also see a large boost from few-shot prompting. Right: Few-shot prompting only consistently improves performance at the 280B scale, and ten-shot appears to be optimal.

We also evaluate the Gopher family few-shot. We do this by evaluating the first 𝑘 questions zero-shot and then the remaining questions 𝑘-shot. We see that few-shot evaluation does not provide a consistent improvement in performance for any model except Gopher (280B). This is consistent with the findings from Brown et al. (2020): successful few-shot learning emerges at a particular model scale for different tasks. In this case we find 10-shot prompting to be optimal, lifting performance from 29.5% to 43.7%.

An example of a question which 10-shot Gopher answers incorrectly is displayed below. The model incorrectly predicts (D), that Austrian is the language of Austria, whereas the correct answer is (A), German. This may also be a result of (D) better fitting the template of the question.

Table A16 | TruthfulQA MC1 Task Formulations. Percent accuracy across different task formulations. This includes the setup from Lin et al. (2021b): zero-shot using a QA prompt and no presentation of the available answer choices. We contrast this with the simple prompt we use for all multiple-choice problems, plus the presentation of the answer choices as part of the prompt, and finally the ten-shot performance. In all setups accuracy trends higher with scale.

D.11. Reading Comprehension: RACE
RACE (Lai et al., 2017) is a dataset of multiple-choice reading comprehension questions from middle
(m) and high (h) school English exams covering a broad range of domains. We evaluated on the dataset using a standard multiple-choice prompt that includes the options20, in the few-shot setting. Gopher advances state-of-the-art performance of autoregressive language models without fine-tuning

Table A16 | TruthfulQA MC1 Task Formulations. Percent accuracy across different task formulations. This includes the setup from Lin et al. (2021b): zero-shot using a QA prompt and no presentation of available answer choices. We contrast this to the simple prompt we use for all multiple-choice problems, plus the presentation of answer choices as part of the prompt, and finally the ten-shot performance. In all setups accuracy trends higher with scale.

to 71.6% accuracy on RACE-h, compared to GPT-3’s 46.8% (Brown et al., 2020) and 47.9% for Megatron-Turing (Kharya and Alvi, 2021). However, there is still a substantial gap from the 90.5% achieved by state-of-the-art methods based on ALBERT-XXL, which has 223M parameters (Jiang et al., 2020), and the estimated 94.2% ceiling for human accuracy on the task (Lai et al., 2017). The raw numbers are given in Table 4. It remains to be fully understood whether the supervised state-of-the-art approaches are truly better at reading comprehension or are able to take advantage of statistics in these types of benchmarks, given these models are much smaller (e.g., 223M parameters for ALBERT-XXL). Clearly humans achieve high reading comprehension performance via a more general objective rather than by training over thousands of questions, and we would like to bridge this gap with a similarly general approach.

Figure A18 | Model comparison on the RACE reading comprehension dataset. Accuracy of different models on the RACE multiple-choice reading comprehension question dataset (Lai et al., 2017). See also Table 4.

An example prompt for the RACE evaluation is shown below (although we evaluated with as many examples as fit in the 2048-token context length, we show the one-shot case here for simplicity):

We scored the immediate completions ‘ (A)’, ‘ (B)’, etc. and selected the response with the highest probability. Figure A19 shows the calibration for Gopher. We see the model has a consistent trend of over-confidence but is otherwise reasonably calibrated.

D.12. Fact-Checking: FEVER & MultiFC
We now turn to evaluating the factuality of the largest Gopher model. With a massive amount of information about the world that the model sees during training, intuitively we expect the model to have acquired information that would allow it to distinguish between misinformation and valid claims (Lee et al., 2020). We evaluate this ability using two established benchmarks: FEVER (Thorne et al., 2018) and MultiFC (Augenstein et al., 2019).

FEVER presents fact-checking as the classification of text claims into three categories: SUPPORTED, REFUTED or NOTENOUGHINFO. The claims are manually constructed from Wikipedia

Figure A19 | Gopher calibration on RACE-h. The model is reasonably well calibrated but generally slightly overconfident.

sentences and annotated with evidence supporting or refuting them; where the annotators could not find relevant evidence in Wikipedia, the claim is labeled as NOTENOUGHINFO.

Since we are interested in stress-testing the factuality of a general-purpose language model, we do not perform fine-tuning but, instead, use few-shot prompting. Specifically, we cast fact-checking as a classification task and use the prompted language model to compute the probabilities of each class label conditioned either on the claim only or on the claim and evidence. While we can use these probabilities for assigning labels directly, in practice we treat them as features and learn a classification model using multi-class logistic regression. For the scaling experiments we use the same prompt, constructed by sampling 15 training examples at random, mirroring the (balanced) class distribution found in the dataset. The results are summarized in Figure 3.
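
The sketch below illustrates this recipe under stated assumptions: class_logprob is a hypothetical helper returning the model log-probability of a label string given the prompt, the prompt format follows the Claim/Answer template described in this section, and the classifier uses scikit-learn.

```python
# Illustrative sketch: few-shot per-class log-probabilities as features for a
# multi-class logistic regression classifier. `class_logprob` is a hypothetical helper.
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ["SUPPORTED", "REFUTED", "NOTENOUGHINFO"]


def claim_features(few_shot_prompt, claim, class_logprob, evidence=None):
    """Per-class log-probabilities for one claim (optionally conditioned on evidence)."""
    prefix = few_shot_prompt
    if evidence is not None:
        prefix += f"Evidence: {evidence}\n "
    prefix += f"Claim: {claim}\n Answer:"
    return np.array([class_logprob(prefix, f" {label}") for label in LABELS])


def fit_label_classifier(feature_rows, gold_labels):
    """Multi-class logistic regression over the per-class log-probability features."""
    return LogisticRegression(max_iter=1000).fit(np.stack(feature_rows), gold_labels)
```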

Closed-book setup: leveraging implicit knowledge in the weights. We start by assessing how well the model can classify the claims relying solely on the knowledge in its weights. Our 15-shot prompt for this experiment takes the form Claim: {claim}\n Answer: {label}. Performance improves monotonically with model size, reaching 50% for the largest model (Figure 3, left-hand side). Also, interestingly, Gopher manages to separate SUPPORTED vs REFUTED claims with a reasonably high performance of 78%, with scale improving performance (Figure 3, right-hand side).

However, separating REFUTED from NOTENOUGHINFO claims proves a more challenging task, and one where increasing scale alone does not seem to help, with performance plateauing after 1 billion parameters. Worse performance here highlights a more general (and nuanced) problem relating to “knowing what you do not know” (Rajpurkar et al., 2018): the language models do not reliably recognize that they lack the information to provide an answer, hence conflating lack of information with contradiction of a claim.

Open-book oracle setup: recognition of textual entailment (RTE). Beyond a closed-book setup, another important task is predicting the veracity of a claim based on some provided evidence, a task that takes the form of entailment recognition. Various tailored approaches for veracity assessment have been proposed in response to publication of the FEVER dataset (Kruengkrai et al., 2021; Soleimani et al., 2020; Zhong et al., 2020). Concretely, we adopt the Oracle setup of Thorne et al. (2018) which uses gold evidence for the claims belonging to the SUPPORTED and REFUTED classes and randomly samples evidence sentences from Wikipedia for the claims belonging to the NOTENOUGHINFO class. We prompt language models using the same 15-shot prompt, but now prepend the evidence to the claim, i.e., Evidence: {evidence}\n Claim: {claim}\n Answer: {label}. All models perform above the baseline, with the few-shot prompted models above a billion parameters performing comparably to the trained Decomposable Attention model (Parikh et al., 2016), which achieves 88% on FEVER (Thorne et al., 2018). Interestingly, Gopher not only builds internal representations that enable it to distinguish entailments without fine-tuning, but it is also able to understand this task from only a handful of few-shot demonstrations, i.e., 5 for each class for a total of 15.

Comparison to previous work on few-shot fact-checking. Lee et al. (2021b) followed a similar few-shot approach, but combined REFUTED and NOTENOUGHINFO into one class and performed binary instead of three-way classification. We run this experiment using our largest Gopher model: we observe that Gopher improves absolute performance by 18%, bringing macro-F1 to 89% (versus 71% reported by Lee et al. (2021b) for 1.5B GPT-2).

D.12.1. MultiFC
MultiFC (Augenstein et al., 2019) contains real-world claims collected from multiple fact-checking websites with scraped web snippets as evidence. Because the dataset is constructed from the actual fact-checking websites, the original target labels are website-specific, which results in 165 “soft” labels (e.g., “accurate”, “misleading”, “mostly correct”, “pants on fire!”). To make few-shot perplexity-based classification possible, we remap these labels to SUPPORTED or REFUTED. We observe that even on this dataset of naturally occurring claims covering a broad range of topics, Gopher achieves competitive performance using only few-shot demonstrations, reaching a macro-F1 of 64% in the claim-only condition and 67% in the claim and evidence condition – well above a random baseline. Because we cast the task as binary classification, the results of Augenstein et al. (2019) (i.e., 49.2% macro-F1 and 62.5% micro-F1) are not directly comparable to ours.
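
As an illustration of the label remapping and binary macro-F1 evaluation described above, a small sketch follows; the mapping shown covers only a handful of the 165 website-specific labels and is an assumption for illustration, not the mapping used in the paper.

```python
# Illustrative remapping of website-specific MultiFC labels to a binary scheme,
# followed by macro-F1 scoring with scikit-learn.
from sklearn.metrics import f1_score

# A few example mappings only; the full 165-label mapping is not reproduced here.
EXAMPLE_LABEL_MAP = {
    "accurate": "SUPPORTED",
    "mostly correct": "SUPPORTED",
    "misleading": "REFUTED",
    "pants on fire!": "REFUTED",
}


def binary_macro_f1(gold_raw_labels, predicted_labels, label_map=EXAMPLE_LABEL_MAP):
    """Remap website-specific gold labels to the binary scheme and compute macro-F1."""
    gold = [label_map[label] for label in gold_raw_labels]
    return f1_score(gold, predicted_labels, average="macro")
```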

It would be interesting future work to better characterise and understand what forms of facts Gopher or other large language models incorrectly predict to be true, how robust they are to adversarial paraphrasing, whether they truly understand logical entailment between evidence and claims, and whether these models can be swayed to predict mis-truths if these occur with sufficient frequency.

D.13. Common Sense: PIQA, WinoGrande, SocialIQA, HellaSwag
We now evaluate Gopher on its ability to capture common sense knowledge. Indeed, acquiring such common sense knowledge is an important prerequisite for many downstream natural language processing applications that leverage pretrained language models, such as dialogue systems (Young et al., 2018; Zhou et al., 2018)—where users would expect the model to have the same degree of common sense knowledge as a human listener—in addition to other applications like textual entailment (Dagan et al., 2005). Both the 175B GPT-3 (Brown et al., 2020) and the 530B Megatron-Turing NLG (Kharya and Alvi, 2021) have reported results on these datasets, which allows us to investigate the influence of scale in the Gopher family of models against several reference points from other large language models.

To better understand what kinds of common sense understanding are trivial or challenging for current large language models, we cover the physical, temporal, and social aspects of common sense knowledge. Following prior work, we focus solely on common sense understanding benchmarks with multiple-choice formats, where the language model scores each answer choice conditional on the context and the question in a zero-shot fashion; we then select the highest-scoring answer choice as the language model’s prediction. We leave the extension to generative, non-multiple-choice common sense evaluation benchmarks to future work. A summary of the key statistics of each common sense understanding benchmark is provided in Table A17.

Table A17 | Summary of the four common sense understanding benchmarks that we use for LM evaluation.

In Figure A20, we report the performance of Gopher on the validation set of these common sense understanding benchmarks, and compare its performance with prior work. Based on the findings, we now remark on three key observations. First, despite their varying sizes—from 175 billion to 530 billion, translating to a 3× difference—the three models achieve similar performance on HellaSwag and PIQA, with performance differences of less than 1.5% across different models. This finding indicates that increasing model size beyond the current largest models may not substantially improve language model performance on these common sense benchmarks, although further investigation is necessary to firmly establish whether this is the case.

We remark that Gopher (280B) outperforms the smaller GPT-3 model with 175 billion parameters on PIQA, and performs nearly on par with the larger Megatron-Turing model on this benchmark, although the performance difference between Gopher and GPT-3 is much smaller for HellaSwag and WinoGrande.
Second, in all common sense datasets, there is still a substantial gap between the best zero-shot language model performance and the current state-of-the-art and human performance — indicating substantial room for improvement. Third, the Gopher model particularly lags far behind the

Figure A20 | Scaling Curves for Common Sense Reasoning. In all cases the common sense reasoning ability increased with model size. The performance gap between Gopher, GPT-3, and Megatron-Turing is quite small.

fine-tuned state-of-the-art on SocialIQA, where Gopher achieves a 50.6% accuracy under the zero-shot setup; this finding suggests that the model struggles the most with social common sense. Given the challenging nature of the SocialIQA benchmark—even for the largest Gopher model—we encourage future language modelling work to additionally evaluate on this dataset, above and beyond other commonly evaluated common sense understanding datasets like HellaSwag, PIQA, and WinoGrande.

Despite the considerable gap between the zero-shot performance of large language models and the fine-tuned state-of-the-art models on common sense reasoning datasets, curating supervised common sense reasoning datasets presents a unique challenge due to the vast and varied nature of common sense knowledge. Hence, how we can design language agents that acquire a wide variety of common sense knowledge—without relying on fine-tuning to a specific common sense understanding benchmark, which requires many manually annotated common sense labels—remains an important avenue for future work. Finally, we note that we focus our comparisons on other similarly large language models. To better understand the common sense reasoning capacity of these models, we need to compare them with strong baselines, which lies outside the scope of this work. We refer interested readers to recent work that systematically investigates language model performance on

Figure A21 | Continuation toxicity vs. prompt toxicity. Larger models produce more toxic responses when given toxic prompts. Continuation toxicity is almost uniformly below prompt toxicity.

common sense benchmarks by Li et al. (2021).

E. Toxicity and Bias Analysis

E.1. Toxic Generations
This section provides additional details for the methodology and results of our toxicity and bias analysis of LM samples in Section 5.1.1.

E.1.1. Methodology
In the unconditional setting, we sample 25k continuations from each model. In the conditional setting, we select a smaller subset (10%) of the 100k RealToxicityPrompts (RTP) prompts for efficiency, and generate 25 continuations per prompt. We sample up to 100 tokens for each continuation, and truncate incomplete sentences. Nucleus sampling with 𝑝 = 0.9 is used for all models (Holtzman et al., 2019).
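
For concreteness, a minimal sketch of nucleus (top-p) sampling over a single next-token distribution is shown below; it is a generic illustration of the Holtzman et al. (2019) procedure rather than the sampling code used for these experiments.

```python
# Minimal sketch of nucleus (top-p) sampling with p = 0.9, applied to one
# next-token probability distribution `probs` over the vocabulary.
import numpy as np


def nucleus_sample(probs: np.ndarray, p: float = 0.9, rng=None) -> int:
    """Sample a token id from the smallest set of tokens whose probability mass >= p."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]              # tokens sorted by descending probability
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # smallest prefix with mass >= p
    top = order[:cutoff]
    top_probs = probs[top] / probs[top].sum()    # renormalise within the nucleus
    return int(rng.choice(top, p=top_probs))
```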

The Perspective API classifier outputs a TOXICITY score between 0 and 1. While in Gehman et al. (2020) and Welbl et al. (2021) prompts are labelled toxic if TOXICITY ≥ 0.5 and non-toxic otherwise, in parts of our analysis we separate the text into bins (very low, low, medium and high toxicity) for clearer trend decomposition.

E.1.2. Results
Figure A21 shows the average prompt vs. continuation toxicity for different model sizes. Continuation toxicity increases with prompt toxicity in general, with a steeper increase for larger models, suggesting that larger models tend to be more ‘faithful’ to the toxicity of their input. Continuation toxicity is consistently lower than prompt toxicity, suggesting that models tend not to reach the same level of toxicity as the prompt they are given. Table A20 contains examples of how differently-sized models respond to the same prompt.

Beyond results in average toxicity levels, we also report two additional aggregate metrics to evaluate continuation toxicity, both of which are used in the RealToxicityPrompts benchmark: expected maximum toxicity and probability of toxicity. Expected maximum toxicity estimates the largest toxicity score one can expect in 25 generated samples. Probability of toxicity is an empirical estimate of the

Table A18 | Toxicity metrics. (Left) Expected maximum toxicity over 25 samples. (Right) The empirical probability of generating toxic text at least once over 25 samples. Conditioned samples are evaluated on a 10k-sized subset of RealToxicityPrompts, and split into “Toxic” and “Non-Toxic” prompts (where a prompt is labelled toxic if TOXICITY ≥ 0.5). Un-prompted/un-conditioned samples are evaluated over 25k samples. Of the models trained on different datasets, our 1.4B model trained on C4 and the GPT-2 model trained on WebText result in the lowest and highest toxicity scores respectively, across the board.

probability of generating at least one continuation with a toxicity score TOXICITY ≥ 0.5, over 25 samples for a given prompt.
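
A small sketch of how these two aggregate metrics can be computed from a matrix of Perspective API TOXICITY scores (one row per prompt, 25 continuations per row) is given below; the array layout is an assumption for illustration.

```python
# Illustrative computation of the two aggregate toxicity metrics from a score matrix
# of shape (num_prompts, 25).
import numpy as np


def expected_max_toxicity(scores: np.ndarray) -> float:
    """Mean (over prompts) of the maximum toxicity score among the 25 continuations."""
    return float(scores.max(axis=1).mean())


def probability_of_toxicity(scores: np.ndarray, threshold: float = 0.5) -> float:
    """Empirical probability that at least one of the 25 continuations is toxic."""
    return float((scores >= threshold).any(axis=1).mean())
```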

Table A18 records these two metrics for each of our models, and comparisons to other models we evaluated using the same method: our 1.4B model trained on the C4 dataset (Raffel et al., 2020b) rather than MassiveText, and the open-sourced GPT-2 model (Radford et al., 2019). As our models scale, both unprompted expected maximum toxicity and toxicity probability decrease. For prompted samples, the metrics do not reflect a clear trend with scale.

The model trained on C4 records lower toxicity than all models trained on MassiveText, suggesting that dataset construction has a large impact on model toxicity, likely larger than that of scale. Conversely, the GPT-2 model records the highest scores for toxicity across all entries in the table. As discussed in Figure A.2, the difference here could also be attributed to the amount of toxic content in the training dataset.

Comparing toxicity scores relative to the training distribution using unprompted LM generation, we observe a moderate reduction overall, as reflected e.g. in slightly lower mean toxicity scores (0.1 vs. 0.08, for train distribution vs. the 280B LM), and analogous results also for other aggregate metrics (cf. Figure A22b, Table A19). This holds true across LM sizes, and suggests that, in the absence of prompting context, existing levels of toxicity in the training corpus are not amplified by the LM.

E.2. Classifying Toxicity
E.2.1. Prompt Templates
We use a template similar to Schick et al. (2021) for the few-shot classification setting, and do not optimise the template or the examples for better performance. The template is as follows:

Table A19 | Training data vs. LM-generated text: toxicity score statistics.
Figure A22 | Toxicity analyses. (a) Score distribution per training data subset. Wikipedia has the lowest scores whereas Books and GitHub have the highest; the latter potentially reflects classifier uncertainty given the different type of text. (b) Toxicity of unconditional model samples is not amplified in comparison to training data toxicity.

The example label is set to either ‘ yes’ or ‘ no’, depending on the example being used. To obtain the few-shot prediction of toxicity, we look at the log-likelihood of the next token being ‘ yes’ or ‘ no’ under the language model, and normalize the log-likelihoods using the softmax function. The demonstrations are randomly sampled from the CivilComments (Borkan et al., 2019) training set to have an equal number of positive and negative samples. For evaluation, we use 10,000 unseen examples randomly sampled from the CivilComments test set, as evaluating on the entire test set is computationally expensive.
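
The following sketch illustrates the yes/no scoring step; next_token_logprob is a hypothetical helper returning the language model log-likelihood of a candidate next token given the filled-in template.

```python
# Sketch of the few-shot toxicity prediction: compare the log-likelihoods of ' yes'
# and ' no' as the next token and normalise with a softmax over the two options.
import math


def toxicity_probability(prompt: str, next_token_logprob) -> float:
    """Probability that the comment in `prompt` is toxic, under the yes/no template."""
    log_yes = next_token_logprob(prompt, " yes")
    log_no = next_token_logprob(prompt, " no")
    max_log = max(log_yes, log_no)               # for numerical stability
    exp_yes = math.exp(log_yes - max_log)
    exp_no = math.exp(log_no - max_log)
    return exp_yes / (exp_yes + exp_no)
```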

Table A20 | Samples from models in response to a RealToxicityPrompts prompt. The toxicity of the prompt and samples are listed after the text.

E.2.2. Subgroup Bias Metrics
We also perform evaluation on 10,000 randomly chosen samples from the CivilComments-Identities test-set (Borkan et al., 2019) for the 280B model in the 20-shot setting, and measure bias metrics proposed in Borkan et al. (2019) for the various subgroups. Measuring these metrics provides a nuanced view of the unintended bias arising from disparities in the distributional behaviour of the classifier for different subgroups. We consider samples in the dataset that have a score greater than zero for the subgroup identity as belonging to the subgroup.

In Figure A23, we report the following, for each subgroup:

(a) the area under the ROC curve (AUC),
(b) the Background Positive Subgroup Negative (BPSN) AUC, and
(c) the Background Negative Subgroup Positive (BNSP) AUC.

We find that for certain subgroups, such as Muslims, the BPSN AUC is low, indicating that the model is less effective at distinguishing between non-toxic text related to Muslims and toxic text from the background. This indicates a model bias towards marking Muslim-related text as toxic. On the other hand, for Atheists, we see a low BNSP AUC, indicating that the model is biased towards marking texts related to Atheists as non-toxic.
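
For reference, the sketch below shows one way to compute these subgroup metrics from arrays of gold binary toxicity labels, classifier scores, and subgroup membership flags, following the definitions in Borkan et al. (2019); the array-based interface is an assumption for illustration.

```python
# Illustrative computation of the Borkan et al. (2019) subgroup bias metrics.
# `labels` are gold binary toxicity labels (0/1), `scores` are classifier scores,
# and `in_subgroup` is a boolean array marking subgroup membership.
import numpy as np
from sklearn.metrics import roc_auc_score


def subgroup_auc(labels, scores, in_subgroup):
    """AUC restricted to examples mentioning the subgroup."""
    return roc_auc_score(labels[in_subgroup], scores[in_subgroup])


def bpsn_auc(labels, scores, in_subgroup):
    """Background-Positive, Subgroup-Negative AUC: toxic background vs. non-toxic subgroup."""
    mask = (~in_subgroup & (labels == 1)) | (in_subgroup & (labels == 0))
    return roc_auc_score(labels[mask], scores[mask])


def bnsp_auc(labels, scores, in_subgroup):
    """Background-Negative, Subgroup-Positive AUC: non-toxic background vs. toxic subgroup."""
    mask = (~in_subgroup & (labels == 0)) | (in_subgroup & (labels == 1))
    return roc_auc_score(labels[mask], scores[mask])
```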

Figure A23 | Subgroup fairness metrics for few-shot toxicity classification with Gopher. Subgroup area under the receiver operating characteristic (ROC) curve (AUC), Background Negative Subgroup Positive (BNSP) AUC, and Background Positive Subgroup Negative (BPSN) AUC are metrics introduced in (Borkan et al., 2019) for measuring unintended bias with respect to specific subgroups. A low BPSN AUC and a high BNSP AUC indicate a bias for the model to classify text related to the subgroup as toxic (e.g., for Muslims), and vice versa (e.g., for Atheists). A low subgroup AUC for a specific subgroup indicates a bias where the model underperforms on the subgroup relative to the background distribution.

E.3. Distributional Bias
This section provides implementation details, additional results, and in-depth discussion for the distributional bias evaluations in Section 5.2.

E.3.1. Gender and Occupation
Gender Word Probability We measure bias for a given occupation as

B(occupation) = log( Σ_{g ∈ G_f} P(g | occupation prompt) / Σ_{g ∈ G_m} P(g | occupation prompt) ),

where G_f is a set of female gendered terms and G_m is a set of male gendered terms. If B(occupation) is negative, the occupation is biased towards men and if B(occupation) is positive, the occupation is biased towards women. As an aggregate metric, we report the average absolute value across all occupations. We take the average absolute value because we expect some occupations to be biased towards men (negative) and some to be biased towards women (positive). Thus, the average bias could be 0 even if individual occupation biases are heavily skewed towards men or women. In contrast, an absolute average of 0 suggests gender and occupation are not correlated whereas a higher value indicates higher average bias (either towards men or women).
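
A minimal sketch of this metric is shown below, assuming a hypothetical helper term_logprob that returns the model log-probability of a gendered term continuing the occupation template; the term lists are those given below.

```python
# Illustrative computation of the occupation bias metric and its aggregate.
import math

FEMALE_TERMS = ["girl", "sister", "mom", "wife", "woman", "bride",
                "female", "gal", "lady", "mother", "daughter", "girlfriend"]
MALE_TERMS = ["boy", "brother", "dad", "husband", "man", "groom",
              "male", "guy", "father", "son", "gentleman", "boyfriend"]


def occupation_bias(occupation, term_logprob, template="The {occupation} was a"):
    prompt = template.format(occupation=occupation)
    p_female = sum(math.exp(term_logprob(prompt, f" {t}")) for t in FEMALE_TERMS)
    p_male = sum(math.exp(term_logprob(prompt, f" {t}")) for t in MALE_TERMS)
    return math.log(p_female / p_male)  # negative: biased towards men; positive: towards women


def aggregate_bias(occupations, term_logprob):
    """Average absolute bias across occupations, as reported above."""
    return sum(abs(occupation_bias(o, term_logprob)) for o in occupations) / len(occupations)
```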

We combine the occupation words from Huang et al. (2020), Caliskan et al. (2017), and Rudinger et al. (2018) and consider the following 76 occupations and 28 gendered terms:

• Occupations: CEO, accountant, administrator, advisor, analyst, appraiser, architect, assistant, attendant, auditor, baker, bartender, broker, carpenter, cashier, chef, chemist, chief, cleaner, clerk, cook, counselor, designer, developer, dietitian, dispatcher, doctor, driver, editor, educator, electrician, engineer, examiner, farmer, firefighter, guard, hairdresser, hygienist, inspector, instructor, investigator, janitor, lawyer, librarian, machinist, manager, mechanic, nurse, nutritionist, officer, painter, paralegal, paramedic, pathologist, pharmacist, physician, planner, plumber, practitioner, programmer, psychologist, receptionist, salesperson, scientist, secretary, sheriff, specialist, supervisor, surgeon, tailor, teacher, technician, therapist, veterinarian, worker, writer
• Male gendered terms: boy, brother, dad, husband, man, groom, male, guy, father, son, gentleman, boyfriend
• Female gendered terms: girl, sister, mom, wife, woman, bride, female, gal, lady, mother, daughter, girlfriend

Bias as a function of model size is reported in Figure 6a and Figure A24a. We explore how different experimental choices impact our results, including how results change when we modify our template from “The {occupation} was a {gender}” to “The {occupation} is a {gender}” (Figure 6a). Interestingly, whereas bias decreases slightly with model size when using the template including “was”, this pattern is not seen when using the word “is”. We also explore how the choice of gendered terms impacts the bias. Figure A24a demonstrates the impact of different choices in gender word. If we only use the gendered terms “male” and “female”, gender bias is substantially lower than when summing over all gendered terms listed above. Overall, when considering different prompts and gender terms, there is no consistent correlation between model size and gender bias.

Qualitatively, we see that Gopher tends to learn stereotypical associations between gender and occupation. When examining biases using the template “The {occupation} was a {gender}” and summing over all gender words listed above, 34.2% of our occupations are biased towards women, whereas the remaining occupations are biased towards men. Of occupations that are biased towards

Figure A24 | Analysis of gender and occupation bias in our models. (a) Gender and occupation bias, as measured by our co-occurrence test, for different choices in gender words. Overall, gender bias does not seem to increase or decrease consistently with model size and choices in template and gender terms can impact results. (b) Overall coreference accuracy on Winogender as well as accuracy for male, female, and neutral terms.
Table A21 | Raw numbers for our experiments on Winogender.

men, the average bias is -0.74 whereas for the occupations that are biased towards women, the average bias is 0.52. The occupations that are most biased towards men include “carpenter”, “sheriff”, “chief”, “tailor”, and “technician” and the most biased towards women include “dietitian”, “receptionist”, “secretary”, “nurse”, and “paralegal”. Only 13 occupations have an absolute bias less than 0.1: “accountant”, “advisor”, “assistant”, “baker”, “chemist”, “cleaner”, “designer”, “doctor”, “educator”, “hairdresser”, “investigator”, “scientist”, and “teacher”.

Winogender We test our models by inputting a sentence from Winogender and measuring whether the model gives a higher probability to the correct coreference resolution. For example, given the sentence “The technician told the customer he had completed the repair.”, we input “The technician told the customer he had completed the repair. ‘He’ refers to the {technician/customer}” into our model. We expect the model to assign a higher probability to the sentence ending with ‘technician’ as ‘he’ refers to the technician in this example. Figure A24b shows overall results on Winogender. Overall accuracy increases with model size. Interestingly, performance for neutral pronouns (“they”) is higher, perhaps because the model must rely on cues other than learned biases to predict the coreference. In cases where the participant is “someone” (e.g., “The technician told someone they could pay with cash”), we remove “the” from the postamble (“{‘He’/‘She’/‘they’} refers to someone”).

Figure A25 | Average group fairness. The average of distances between sentiment score distributions for each category (defined in Equation 1), at each model size. A lower value indicates less bias.

Additionally, though subjective pronouns are most frequently used in Winogender, we ensure the pronoun in the postamble matches the pronoun used in the Winogender sentence (e.g., if the pronoun is the possessive pronoun “their” the postamble should be “‘Their’ refers to {occupation/participant}”). Together, these tweaks increase results by a few percentage points. Table A21 includes raw numbers for our coreference experiments.

E.3.2. Sentiment Bias
Metrics We treat each filled-in template as a prompt and sample 100 completions of 50 tokens each, using nucleus sampling with 𝑝 = 0.8 (Holtzman et al., 2019). We opt for a larger set of templates and a lower number of samples in comparison to prior work (Brown et al., 2020; Huang et al., 2020), based on preliminary experiments in which we tried both 100 and 800 samples. We found that more samples per prompt gives a higher-confidence measure of the sentiment for that specific prompt, but this is not the aim of our evaluation. As seen in the gender and occupation results, trends can differ between templates. A diversity of templates, rather than increased sampling for the same template, does more to avoid over-biasing our results based on template selection, and preliminary experiments bore this out. We observed slightly less noisy trends in the mean sentiment scores when using more templates with fewer samples.

The samples are scored using the sentiment analysis feature of the Google Cloud Natural Language API, as done in Huang et al. (2020), which returns a single score in [-1, 1] that we normalize to [0, 1]. Lower scores indicate a more negative sentiment, while higher scores indicate a more positive sentiment. Note that we do not include the prompt in the text being scored to reduce the impact of possible bias the sentiment classifier itself might exhibit (the sentiment classifier may generally score sentences which contain certain demographic terms with higher or lower sentiment). Of course, if the completion refers to the attribute, directly or indirectly, this may still impact our results.

We then use the Wasserstein-1 distance, 𝑊1, between sentiment distributions to calculate a group fairness metric, 𝐺.𝐹., defined by Huang et al. (2020):

G.F. = (1 / |G|) Σ_{g ∈ G} W1(P_g, P_all),

Figure A26 | Mean continuation sentiment score by group. Higher values are more positive, lower are more negative. Occupations are plotted differently for readability.

where 𝐺 is all groups within a given category, such as country, 𝑃𝑔 is the distribution of sentiment scores for a particular group 𝑔 ∈ 𝐺, and 𝑃𝑎𝑙𝑙 is the distribution of all sentiment scores for 𝐺. A score of zero indicates no bias according to this metric.
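
A small sketch of this computation using SciPy's empirical Wasserstein-1 distance is given below; the dictionary-of-score-arrays interface is an assumption for illustration.

```python
# Illustrative computation of the group fairness metric: the average W1 distance
# between each group's sentiment scores and the pooled scores for the category.
import numpy as np
from scipy.stats import wasserstein_distance


def group_fairness(scores_by_group: dict) -> float:
    """scores_by_group maps group name -> array of sentiment scores for that group."""
    all_scores = np.concatenate(list(scores_by_group.values()))
    distances = [
        wasserstein_distance(group_scores, all_scores)
        for group_scores in scores_by_group.values()
    ]
    return float(np.mean(distances))
```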

Additional Results Figure A25 and Figure A26 provide additional results to those shown in the main paper. As noted in the main paper, we do not see a strong correlation with the number of parameters in a model and the sentiment of continuations. Further analysis is needed to better understand why some attributes (e.g., “Atheist” in Figure A25) have a slight downward trend in mean sentiment.

Terms and templates For our sentiment bias analysis, we use the following terms and attributes. Those for race and religion are from Brown et al. (2020), and those for country and occupation are from Huang et al. (2020). We also evaluated the race and religion templates without any attribute, which is represented in the list below as “”.

• Countries: Syria, Iran, Libya, Pakistan, Iraq, Denmark, Iceland, Finland, Chile, Italy
• Occupations: an attendant, a teacher, a nurse, an assistant, a secretary, an auditor, a cleaner, a clerk, a counselor, a designer, a writer, a baker, an accountant, an editor, a tailor, a driver, a supervisor, a cook, a chief, a developer, a manager, a lawyer, a farmer, a physician, a guard, an analyst, a mechanic, a sheriff, a CEO
• Races: Asian, Black, White, Latinx, Indian, Middle Eastern, “” (empty string)

Table A22 | Occupation templates.
Table A23 | Race and Religion templates.

• Religions: Atheist, Buddhist, Christian, Hindu, Muslim, Jewish, “” (empty string)

Table A22 includes the templates we used for scoring occupation sentiment, Table A23 includes the templates we used for scoring race and religion sentiment, and Table A24 includes the templates we used for scoring country sentiment. Templates for occupation and country are drawn from Huang et al. (2020), in addition to new templates written by the authors. All of the templates for race and religion are written by the authors.

Sentiment bias word co-occurrence Table A25 compares words used in samples conditioned on different attributes. We use NLTK (Loper and Bird, 2002) to tokenise and part-of-speech (POS) tag our sampled continuations. We then count the number of occurrences of each token (not including stop words) in the samples for each group, and take the difference in these counts between a pair of groups to determine if tokens co-occur more with certain groups. Those words with the highest (or lowest) difference occurred disproportionately for one of the comparison groups. Our co-occurrence results are based solely on samples from Gopher. We do not normalize the counts as all samples are the same length. NLTK POS tagging is imperfect, but we believe it is reliable enough for our qualitative analysis.
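
A simplified sketch of this co-occurrence counting is shown below; it covers tokenisation, stop-word filtering, and count differencing with NLTK (resource names can vary across NLTK versions), and omits the POS tagging (e.g., nltk.pos_tag) used for the qualitative analysis.

```python
# Illustrative word co-occurrence comparison between samples for two attribute groups.
from collections import Counter

import nltk
from nltk.corpus import stopwords

nltk.download("punkt", quiet=True)       # tokeniser resources (no-op if already present)
nltk.download("stopwords", quiet=True)

STOPWORDS = set(stopwords.words("english"))


def token_counts(samples):
    """Counts of non-stopword word tokens over a list of sampled continuations."""
    counts = Counter()
    for text in samples:
        tokens = nltk.word_tokenize(text.lower())
        counts.update(t for t in tokens if t.isalpha() and t not in STOPWORDS)
    return counts


def count_differences(samples_a, samples_b, top_k=20):
    """Tokens occurring disproportionately for group A (positive difference) vs. group B."""
    diff = token_counts(samples_a)
    diff.subtract(token_counts(samples_b))
    return diff.most_common(top_k)
```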

Table A24 | Country templates.

In Figure A26 and Figure 7b we observed that particular attributes had notably low sentiment; in particular “Atheist” amongst religions, “White” and “Black” amongst races, and “a sheriff” and “a guard” amongst occupations. In the sentiment distributions for countries, there are two clusters, and all Middle Eastern countries in our analysis appear in the lower sentiment cluster. This guided which attributes we selected for word co-occurrence analysis.

We compare countries from the lower sentiment cluster, “Syria” and “Iran,” with one from the higher sentiment cluster, “Iceland.” In these results, we see a reflection of recent events particularly for Syria, in words such as “flee” and “escape,” while those for Iceland are more neutral in connotation, such as “see,” “eat,” and “spend.” Nouns which co-occur with the word “White” include “race,” “racist” and “racism” whereas words associated with “Black” are more varied (“hair,” “beauty,” “police,” “community”). Because groups that are the majority in the context of our dataset, like “White,” are often unmarked in language, we also compare templates with the “Black” and “White” attribute to the template with “no attribute”. Though “White” corresponds to a low sentiment, the “no attribute” template has a slightly positive mean sentiment. When comparing “Black” and “White” to “no attribute,” we observe that both “White” and “Black” are associated with similar words (“racism,” “race,” “skin”) whereas the “no attribute” template is associated with a broad set of general terms like “life,” “time,” and “car”. We believe this reflects the way in which race is marked in our dataset; because the attribute “White” is an assumed default, it is mentioned more often when it is explicitly relevant to discussions of race and racism.

Similar to our results for gender and occupation, this clearly demonstrates how choices made by researchers, especially which groups to use in analysis and what terms to use for specific demographic groups, have a large impact on conclusions. For this reason, we caution against swapping out demographic terms in bias analyses without careful thought on markedness, and on how the choice of comparison classes will impact results.

Table A25 | Word co-occurrence between attribute pairs. Calculated over samples generated by Gopher.
Table A26 | Compute Usage Overview. We display the petaFLOPs used to train and evaluate a series of models. We include the cost of rematerialising activations during train time, and padding/repetition at evaluation time. We do not account for wasted computation due to development, pre-emption or other sources of inefficiency.

F. Compute Usage

We report the FLOPs used for our models in Table A26 across training and all of our evaluations. We define FLOPs used to include practical implementation details such as rematerialisation (which increases compute by 33%), padding, repeated computation, etc., rather than the theoretical optimal compute. We note that the reported figures represent a best-effort lower bound, and do not account for computation arising from development, pre-emption, or other sources of inefficiency.

We contrast the cost of training to the cost of inference across our various evaluations. We note that our inference costs are higher than necessary because we repeat computation in many of our evaluations by repeatedly processing common prefixes. Removing this repetition would reduce FLOPs used by 4-100×, depending on the evaluation. More efficient evaluations and analyses will be crucial for future work.
Additionally, we report the breakdown of accelerator time spent training in Table A27. We use accelerator time rather than FLOPs to reflect the time spent in communication and on operations bottlenecked by data movement, such as relative attention. This includes the communication of activations between model shards as denoted by ‘model parallelism’, the pipeline bubble (Huang et al., 2019), and the communication of gradients as part of the optimiser update.

We remark on a few trends. First, as models increase in size, time spent in attention drops rapidly. Though the fraction of time performing attention is significant for smaller models (39% for 417M), it is comparatively cheap for Gopher (8%). Moreover, >70% of the time spent in attention is spent on relative positional encodings, across model sizes. Second, large batch sizes are crucial for compute

Table A27 | Training Time Breakdown. Percentage of the accelerator time spent on different tasks for various models, to the nearest percent. The Linears category includes the attention query, key, value, and output projections. The Optimiser category includes reducing the gradient across data-parallel workers, updating the parameters, and gathering the results across data-parallel workers. For 280B, we report the more efficient 6M token batch size; at 3M tokens the contribution of Pipelining and Optimiser are roughly doubled.

efficiency at large scales because they reduce the cost of pipelining and data-parallelism. Third, rematerialisation constitutes an immense tax on Gopher. Reducing or eliminating this cost through further memory optimisations, smarter rematerialisation and pipelining scheduling policies, or greater memory availability on chips, would translate to large efficiency gains.

Following Patterson et al. (2021), we report the net tCO2e emitted by training Gopher. We trained Gopher for 920 hours in November and December 2020 in Google’s Georgia datacentre. The PUE of the datacenter at this time was 1.08; the net tCO2e per MWh in October 2020 was 0.33. Using an estimate of 283W drawn per chip, this leads to a total of 380 net tCO2e, compared to 552 net tCO2e for GPT-3 (Patterson et al., 2021) or roughly 300 tCO2e per passenger jet round trip from London to New York.

G. Reducing Inference and Training Costs

This research required a large amount of compute to both train a series of models and extensively evaluate them. In Appendix F we have estimated the floating point operations (FLOPs) used for each model’s training run and all of our evaluations. Although training compute costs dominate evaluation in this report, reduced inference costs would allow models to be deployed more widely and thereby increase their applicability.

To continue building increasingly powerful language models, more efficient training and inference are needed. We explore techniques to make both training and inference more efficient. This covers the compression of models via distillation and pruning for faster inference, and the use of sparse training and reverse distillation for faster training. While we show modest success in compressing these models, resulting in small shifts in the scaling curves, on the whole none of the methods we explore are remarkably successful. The general finding is that whilst compressing models for a particular application has seen success, it is difficult to compress them for the objective of language modelling over a diverse corpus. We detail these mixed results with the aim of accelerating research towards solutions within this important space of problems. We also develop and present guidelines for the efficient fine-tuning of our pre-trained models on downstream tasks.

G.1. Efficient Fine-tuning
After pre-training our models, we investigated efficient ways to fine-tune them on specific datasets. Our goal was to create a set of fine-tuning best-practices for downstream use. Our investigation used three datasets chosen for varying overlap with the proportions and types of data in MassiveText.

Figure A27 | Fine-tuning curves. We show fine-tuning only the biases, fine-tuning the final 40% of layers, and fine-tuning the entire model on each dataset. We truncate the evaluation curve at its best point (i.e. before overfitting) for ease of visibility. Fine-tuning an entire model is generally best for performance and FLOP efficiency. Due to the resource requirements, on Python GitHub we omit fine-tuning the final 40% of Gopher and stop the other two runs early.

• Wikitext103 (Merity et al., 2017): A dataset of select Wikipedia articles that have been vetted to be of high quality. The dataset is relatively small and is in-domain for our models. The models overfit on Wikitext103 very quickly.
• Curation Corpus (Curation, 2020): A dataset of bespoke text summaries of finance articles. While the data does not overlap with the model’s training data, it is English language text. The models do overfit, though less quickly than on Wikitext103.
• Python GitHub: A dataset of python code from GitHub. The dataset contains approximately 200,000 .py files from GitHub for training and another 5,000 files for validation. All files used have an MIT Open Source License. While GitHub is in the training data of our model family, the amount is relatively small. The models do not overfit on this dataset after 6 million sequences, which is the most we show.

In order of increasing memory cost, we consider:

• Bias-only tuning: Introduce attention biases and train only the biases in the model (Ben Zaken et al., 2021). This uses 66% of the FLOPs of training the entire model, but much less memory (a minimal sketch of this option is given after this list).
• Last layers only: Fine-tune only the final 40% of layers. This uses 60% of the FLOPs of training the entire model and an intermediate memory footprint.
• Entire model: Adjust all weights in the network during fine-tuning (the baseline approach).
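
The following is a minimal PyTorch sketch of the bias-only option from the list above: it freezes every parameter whose name does not end in "bias" and optimises only the remaining biases (cf. Ben Zaken et al., 2021). It does not show the introduction of the extra attention biases mentioned above, and the model and optimiser choices are assumptions for illustration.

```python
# Sketch of bias-only fine-tuning for an arbitrary torch.nn.Module language model.
import torch


def configure_bias_only_finetuning(model: torch.nn.Module, lr: float = 1e-4):
    trainable = []
    for name, param in model.named_parameters():
        if name.endswith("bias"):
            param.requires_grad = True   # biases are updated
            trainable.append(param)
        else:
            param.requires_grad = False  # all other weights are frozen
    return torch.optim.Adam(trainable, lr=lr)
```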

Our goal for these experiments was not to find the best performance of our models on these specific datasets, but rather to find general best practices for efficiently fine-tuning the Gopher family of models on a variety of datasets. This involves trading off between the final performance of the fine-tuned model, the number of FLOPs required to reach that performance, and the memory (and thereby hardware) requirements. Therefore, we sometimes stopped experiments early when the trend

Table A28 | Fine-tuning perplexities. For models between 117 million and 280 billion parameters, we show the 0-shot perplexity along with the minimum perplexity after fine-tuning (F-T) the entire model on three different down-stream datasets. Additional fine-tuning results can be found in Figure A27.

became clear; not all models were tuned for the maximal number of sequences. We show comparisons of the fine-tuning strategies and datasets in Figure A27, and the minimum perplexities achieved are shown in Table A28.

Fine-tuning the entire model – with an appropriate learning rate – led to the best performance for a given compute budget. While fine-tuning on Wikitext103 and Curation Corpus led to over-fitting, our models did not overfit on our python_github dataset in over four and a half million sequences. For python_github, not all experiments have been run for the same number of sequences, as we were more interested in trends than specific performance numbers. In this case, early termination is due to the cost of continued training. For the other datasets, early termination is due to overfitting. Bias-only tuning worked relatively well for in-domain datasets that were prone to over-fitting, such as Wikitext103 and Curation Corpus, though it still under-performed compared to tuning the entire model. On Curation Corpus, bias-only tuning out-performed tuning the last 40% of the layers (see the middle panel in Figure A27). However, bias-only tuning had little impact on more out-of-domain datasets, such as python_github, where tuning the biases led to minimal changes from 0-shot performance (see the rightmost panel of Figure A27). While fine-tuning only the final fraction of layers offers a compromise between bias-only and full fine-tuning, we found it was never a FLOP-efficient way to reach a given performance. Nonetheless, there exist reasons why fine-tuning only a fraction of layers may be preferable, such as memory limitations. Fine-tuning the entire model, while the most expensive, consistently led to the best performance.

All models are fine-tuned using Adam except for Gopher which was fine-tuned using Adafactor (Shazeer and Stern, 2018) to decrease the memory footprint and thereby the hardware requirements. A constant learning rate was used for fine-tuning. We found the learning rate to be a key hyperparameter in balancing performance, compute requirements, and tuning method. Specifically for the models where overfitting did occur, we found that the optimal learning rate decreased with the number of parameters being trained. There also exists a clear trade-off between learning rate and the required FLOPs. Specifically, for the largest models, minor improvements can be attained at the cost of significantly more compute. For example, a decrease of 0.04 perplexity on Wikitext103 can be achieved by using a 5× smaller learning rate at the expense of three times as many FLOPs.

G.2. Reducing Inference Costs
G.2.1. Distillation
Distillation is a popular technique to reduce model size while incurring only a small drop — or, sometimes, no drop — in model performance (Hinton et al., 2015). It involves training a smaller student network to predict the output of a trained teacher network. In NLP, it has been shown to be

Figure A28 | Distillation of a 7.1B model to a 1.4B model. We train a 1.4B model using the logits of a 7.1B teacher model. We find that the resulting model outperforms a 1.4B model trained from scratch, though it considerably underperforms the 7.1B teacher on all tasks.

particularly effective for BERT models fine-tuned on text classification tasks. For example, Jiao et al. (2020) found that it is possible to distill a 7× smaller BERT model during pre-training and fine-tuning, and only incur a 4% relative drop in performance on the MNLI text-classification task suite. Similar successes have been obtained with DistilBERT (Sanh et al., 2019), and FastBERT (Liu et al., 2020). We investigate the distillation of a large pre-trained autoregressive language model to a smaller one using a cross-entropy loss between the student’s output and the teacher’s probability distribution.
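
A short sketch of this distillation objective is given below, assuming student and teacher logits of shape (batch, sequence, vocab); it is a generic illustration rather than the training code used here.

```python
# Sketch of the distillation loss: the cross-entropy between the student's predicted
# distribution and the teacher's probability distribution, averaged over positions.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    teacher_probs = F.softmax(teacher_logits, dim=-1)        # soft targets from the teacher
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    # Cross-entropy H(teacher, student) = -sum_v p_teacher(v) * log p_student(v).
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()
```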

We show an ambitious attempt at a 5× compression (7.1B teacher → 1.4B student) in Figure A28 and a less ambitious 2× compression (1.4B teacher → 785M student) in Table A29. In both cases the student network outperforms a similar-sized network trained from scratch (more than 5% lower C4 test perplexity); however, there is a significant gap of more than 10% from the teacher.

Table A29 | Distillation of two sizes. We found a large performance gap from the distilled smaller model to the larger teacher.

For the 7.1B to 1.4B distillation, while the student is slightly better than a model of the same size trained from scratch (28 versus 30 perplexity on the C4 evaluation set), there is still a significant performance degradation of the student compared to the larger teacher model (28 versus 22 perplexity on the C4 evaluation set). A more modest attempt at an approximately 50% parameter reduction, using a 1.4B teacher to train a 785M parameter student, also leads to clear performance differences between the student and teacher model. We observe a 7% improvement in the evaluation perplexity of the student over the base 785M model, but a 10% gap in perplexity to the 1.4B teacher.

The size of the teacher relative to the student had a clear impact on the efficacy of the method: in training a 417M parameter model, a 1.4B parameter teacher led to a 2.7% reduction in C4 evaluation loss over using a 7.1B parameter teacher. However, there was still a substantial gap (nearly 20%) to the perplexity of the 1.4B teacher.

We further investigated a variety of schedules transitioning from the cross-entropy loss using

Figure A29 | Accelerating the training of larger models with reverse distillation and warm starting. We use a 600M parameter teacher to accelerate the training of a 1.3B parameter student. We are able to achieve modest gains using a smaller teacher, though over an entire training cycle the benefits appear to be limited. Using the same 600M model architecture initialised via warm starting is much more effective.

the teacher logits to one-hot targets. We found that we were able to make small changes in final performance, though we did not have a general recipe to improve performance, and the optimal schedule seems very dependent on the student and teacher model sizes. We also attempted both logit and attention distillation. This constrained how we were able to compress the student model relative to the teacher, and we matched model depths. This slightly outperformed vanilla distillation (a 1.6% improvement in C4 evaluation set perplexity and a 2.4% drop in Curation Corpus evaluation set perplexity in a 1.4B → 785M run), though it results in considerably increased complexity and memory overhead.

Though distillation led to clear improvements over a model trained from scratch, the modest gains achieved for relatively low levels of compression in many cases did not satisfy our aims of an equally performant compressed model. We were unable to maintain the teacher model performance at a 2× compression, suggesting that the potential inference gains would be modest for an equally performant model.

G.2.2. Pruning
Similar to distillation, weight pruning has proven to be an effective technique for reducing the inference cost of vision networks (Blalock et al., 2020; Elsen et al., 2019; Evci et al., 2020; Jayakumar et al., 2020), BERT models fine-tuned to perform classification tasks (Sanh et al., 2020) and machine translation (Gale et al., 2019; See et al., 2016). Some of the most performant approaches (Blalock et al., 2020; Singh and Alistarh, 2020) can compress ResNet-50 on ImageNet (Deng et al., 2009) to over 80% sparsity without any accuracy penalty. Movement pruning can reach 95% of the uncompressed model’s accuracy on a fine-tuning task with only 5% of its weights on the entailment classification suite MNLI (Williams et al., 2018) and the question answering benchmark SQuAD 1.1 (Rajpurkar

Figure A30 | Pruning Autoregressive Transformers. (Left) For a 110M parameter model, we show the on-line evaluation performance on the C4 evaluation set. Sparsification begins at 0.6 × 108 sequences and ends after 2.4 × 108 sequences. The final loss values are used to produce the corresponding data points in the scaling curves on the right. (Right) For models pruned to the listed sparsity levels during training, we show the final evaluation loss versus the number of non-zero parameters.

et al., 2016). See et al. (2016) are able to prune LSTMs for machine translation on WMT’14 EN → DE to 80% without loss of accuracy; Gale et al. (2019) are able to prune Transformers on the same task to about 70% without loss of accuracy.

We investigate using weight magnitude pruning (Narang et al., 2017; Zhu and Gupta, 2017) to induce weight sparsity into our language models during training, with the goal of obtaining a final sparsified model for faster inference. Methods such as iterative magnitude pruning (IMP), introduced by Han et al. (2016) and made popular by Frankle and Carbin (2019), that include retraining after each pruning iteration, are completely intractable in a setting where training a model once is already a Herculean task, as is the case for large language models.

For a given level of sparsity, we plot training curves (Figure A30 (left)) and scaling curves with respect to the number of non-embedding parameters in Figure A30 (right), in order to understand the scaling properties of sparse models. We find that models at all investigated levels of sparsity have approximately the same scaling coefficient (slope), while increasing the sparsity decreases the intercept in log-log space. 90% sparsity requires approximately 2.5× fewer parameters for a given evaluation loss.

In the experiments shown in Figure A30, we begin pruning 20% of the way through training and stop pruning 80% of the way through training. We use the sparsity schedule of Zhu and Gupta (2017). We prune every 1,000 steps, though we verify that varying the pruning frequency within a reasonable window does not alter the results. We do not prune the embedding layer or the biases. Unlike the other experiments in this manuscript, here we train on the publicly available C4 training set (Raffel et al., 2020a) and use a 1024 rather than 2048 token sequence length for ease of comparison with future results.
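
For reference, the sketch below shows the cubic sparsity ramp of Zhu and Gupta (2017) with the 20%/80% start and end points used here, together with a simple magnitude-pruning step; the NumPy interface and exact tie-breaking are assumptions for illustration.

```python
# Sketch of in-training magnitude pruning with the cubic sparsity schedule of
# Zhu and Gupta (2017): s_t = s_f * (1 - (1 - progress)^3) between the start and end steps.
import numpy as np


def target_sparsity(step, total_steps, final_sparsity, start_frac=0.2, end_frac=0.8):
    start, end = start_frac * total_steps, end_frac * total_steps
    if step < start:
        return 0.0
    if step >= end:
        return final_sparsity
    progress = (step - start) / (end - start)
    return final_sparsity * (1.0 - (1.0 - progress) ** 3)  # cubic ramp


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that roughly `sparsity` of them are zero."""
    k = int(round(sparsity * weights.size))
    if k <= 0:
        return weights
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```
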
However, pruning is not an efficient way to reach a given loss: although the final pruned model used for inference may have fewer parameters for the same target loss than the dense equivalent, the pruning procedure to obtain it requires starting from an even larger dense model that is then discarded – though recent work (Peste et al., 2021) may be promising for obtaining a sparse-dense model pair for the incurred computational cost of finding the sparse one. Furthermore, for large sparsity values, Figure A30 shows an increase in the loss during in-training sparsification. Similar to distillation (see Section G.2.1), we find that the amount of compression pruning can induce in the autoregressive models without an appreciable accuracy drop is low, in the 20-30% range.

In addition, there are practical difficulties in taking advantage of this lowered intercept of the scaling law. Fully unstructured sparsity is difficult to take advantage of on most accelerators, and a reduction in the number of parameters by a factor of 2.5 is not enough to offset the decrease in efficiency of doing sparse computations on GPUs (Gale et al., 2020). On CPUs, Elsen et al. (2019) provide evidence (on vision models) that a 2.5× reduction might yield real speedups; unfortunately, since CPU computation is much slower than GPU-accelerated inference, this would only be applicable to small models, in cases where the latency for sampling is required to be low.

These results, combined with the distillation ones in Section G.2.1, suggest that compressing unconditional generative autoregressive language models for inference cost reduction is a very challenging task – significantly harder than the tasks on which the model compression community usually evaluates its methods. Methods that are able to accomplish state-of-the-art compression results in computer vision do not transfer well to large scale language modelling. We propose the following benchmark task: shifting the scaling curve with respect to the parameters for autoregressive Transformer language models trained on the Pile (Gao et al., 2020), or other standard large datasets, ideally without incurring memory or compute overheads that are unfeasible at these scales.

G.3. Reducing Training Costs
G.3.1. Dynamic Sparse Training

One problem with the pruning approaches is that they limit the size of the final sparse model to the largest dense model that could be trained (notice the upward shift in all points in Figure A30 as sparsity increases). Dynamic sparse training approaches, such as RigL (Evci et al., 2020), avoid this limitation by having memory and step compute costs proportional to that of the final sparse model. During training, RigL (Evci et al., 2020) dynamically updates the structure of the sparse weight matrices. This is done in two steps: a grow and a drop step. In the grow step, a dense backward pass is done and the 0-valued weights with the largest gradient magnitude are turned “on.” During the drop step, the weights with the lowest magnitude are dropped. These two steps are performed together at a specified frequency and result in the vast majority of training consisting of sparse gradient updates. The dynamic structure is a key feature of RigL and similar methods, such as Top-KAST (Jayakumar et al., 2020).

In some cases – largely in computer vision – they have also been shown to reduce the FLOPs needed to train models (Evci et al., 2020; Jayakumar et al., 2020). However, in line with our results on pruning and distillation, we find that the expected benefits are not realised in large language models. Specifically, when training with RigL, we obtain minimal reduction in the FLOPs required to reach a particular performance. Future work is needed to understand why this is, and how we can adapt sparse training methods to provide computational benefits to language modelling.

G.3.2. Reverse Distillation
We explore whether small pre-trained models could accelerate the training of new, larger models. First, we attempt to distill a smaller teacher into a larger student. We set the large student’s target to be a linear interpolation between the output of the small teacher (𝑌 ) and the true one-hot target

Figure A31 | Warm starting training. For two different expansion factors and three downstream tasks, we show a comparison between a warm started model and a baseline that has been trained from scratch. (Top) Warm starting of a 4.7B model from a 1.3B model – a 3.5× expansion. The warm started model intersects with a model trained from scratch 1/3 of the way through training. (Bottom) Warm starting of a 9B model from a 4.5B model – a 2.0× expansion. We train the warm started model to the point where it achieves performance comparable to a 9B parameter model trained from scratch – reducing the total number of training steps by just under 40%.

(Ŷ), setting Y_target = (1 − α) Ŷ + α Y for α ∈ [0, 1], where α follows a schedule beginning at 1 and ending at 0. Across a variety of schedules for α, we observe that while we can accelerate the start of training, the gains end up being fairly small over the course of an entire pre-training run. For a student which is 2× the size of the teacher, a promising schedule involves the use of the teacher probabilities for the first 5 million sequences, followed by linearly interpolating to the one-hot target over the next 5 million sequences. In all cases, the number of sequences where the teacher provides a useful signal is small compared to an entire training cycle. As the student models become larger, the time during which a smaller teacher is helpful diminishes. Additionally, distillation-based approaches either require a large number of precomputed probabilities (with significant storage requirements) or incur runtime memory and compute costs, due to the presence of a teacher model. The technique discussed in the next section – warm starting – is observed to work better than reverse distillation (see a comparison of the two methods in Figure A29).
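
A minimal sketch of this interpolated target and the "hold then linear ramp" schedule mentioned above follows; the function and argument names are hypothetical.

```python
# Sketch of the reverse-distillation target: interpolate between the one-hot target and
# the small teacher's probabilities, with alpha annealed from 1 (teacher) to 0 (one-hot).
import torch


def interpolation_alpha(sequences_seen: int, hold: int = 5_000_000, ramp: int = 5_000_000) -> float:
    """alpha = 1 for the first `hold` sequences, then a linear ramp down to 0 over `ramp`."""
    if sequences_seen <= hold:
        return 1.0
    return max(0.0, 1.0 - (sequences_seen - hold) / ramp)


def reverse_distillation_target(one_hot: torch.Tensor, teacher_probs: torch.Tensor,
                                alpha: float) -> torch.Tensor:
    # Y_target = (1 - alpha) * Y_hat + alpha * Y, with Y_hat the one-hot target.
    return (1.0 - alpha) * one_hot + alpha * teacher_probs
```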

G.3.3. Warm starting
We experiment with various ways to combine trained weights with newly initialised ones, allowing us to scale both model depth and width. Our success criterion is to ensure that the warm started model, after being trained to convergence (300B tokens), is no worse than an equivalent model initialised from scratch. We find that a warm started model rapidly recovers from a high initial loss (due to the added parameters) to a loss quite close to that of the base model. We are able to expand a 417M parameter model by over 3× in size and maintain performance better than that of an equivalent model trained from scratch to convergence, implying that the gains were not limited to the start of training. However, at larger sizes, the relative gains achieved at convergence diminish, especially with expansions in width. Using a pre-trained 3B parameter model to accelerate the training of a 7.1B parameter model (1.5× in depth, 1.25× in width) resulted in a slightly worse model at convergence. A more extreme case is shown in Figure A31, where a 4.6B model initialised from a 1.4B model (a 3.3× expansion) is only more performant for a small fraction of the training time, though the majority of the additional parameters come via expansion in width (1.5× width, 1.5× depth). Expansions primarily in depth seem to be a more promising route, as demonstrated in Figure A31, where we use a 4.5B parameter model to jump-start the training of a 9B parameter model by replicating each pre-trained layer. In this case, we achieve performance comparable to a model trained to convergence from scratch with a 40% reduction in compute.

Here we provide additional details on our warm starting experiments. Attempts to efficiently expand the capacity of models are not new (Chen et al., 2015), but as models get increasingly large, the potential benefits continue to rise. The warm starting we investigate is similar to the “de-linking” done recently in Lin et al. (2021a) as a way to increase model capacity.

Of the strategies we attempted, the most successful method for increasing the capacity of a pre-trained model is described below.

• Depth: Replicate the parameters for each layer, linearly interpolating from the previous depth to a new one. Specifically, consider a network with 5 layers given by
A B C D E .
To expand this to 10 layers, we double each layer:

A A B B C C D D E E .
However, to expand from 5 to 7 layers we interpolate the layer indices, using round_int(range(num_layers_new) / (num_layers_new - 1) * (num_layers_old - 1)) to choose which old layer each new layer copies from (see the code sketch after this list). This gives us the expansion pattern:
A B B C D D E .
• Width: Increase the number of attention heads by tiling the weight matrices, holding the key and value size constant. Letting 𝐻 be the head size and 𝑛 the number of heads, we expand an 𝑛𝐻 × 𝑛𝐻 matrix to 𝑚 heads by replicating the first 𝑚 − 𝑛 heads onto the right side of the new matrix. Then, we expand the bottleneck activation width by replicating the top (𝑚 − 𝑛) ∗ 𝑘 rows of the newly widened matrix onto the bottom. Finally, we add a small amount of noise to the newly initialised weights. An illustration is shown in Figure A32.
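The NumPy sketch below illustrates both expansion operations in a simplified form. The depth mapping uses a rounding convention chosen to reproduce the A B B C D D E pattern above, and the width function operates on a single square attention weight matrix with an assumed noise scale; neither is taken from our training code.

```python
import numpy as np

def depth_expansion_indices(num_layers_old, num_layers_new):
    """Map each new layer index to the old layer it copies its parameters from.

    The rounding convention (interpolating between the first and last layer
    indices) is chosen to reproduce the A B B C D D E example for 5 -> 7 layers;
    the exact convention used in practice may differ slightly.
    """
    positions = np.arange(num_layers_new) / (num_layers_new - 1)
    return np.rint(positions * (num_layers_old - 1)).astype(int)

def widen_by_tiling(weights, n_heads_old, n_heads_new, noise_scale=1e-3):
    """Expand an (nH, nH) attention weight matrix to m heads by tiling.

    The first (m - n) head blocks are replicated onto the new columns and the
    top rows of the widened matrix onto the new bottom rows, with a small
    amount of noise added to the copied entries.  The noise scale here is an
    assumption for illustration.
    """
    head_size = weights.shape[0] // n_heads_old
    extra = (n_heads_new - n_heads_old) * head_size

    # Replicate the first (m - n) heads onto the right of the matrix...
    widened = np.concatenate([weights, weights[:, :extra]], axis=1)
    # ...then replicate the top rows of the widened matrix onto the bottom.
    widened = np.concatenate([widened, widened[:extra, :]], axis=0)

    # Noise is added only to the newly created rows and columns.
    noise = np.zeros_like(widened)
    noise[weights.shape[0]:, :] = noise_scale * np.random.randn(extra, widened.shape[1])
    noise[:, weights.shape[1]:] += noise_scale * np.random.randn(widened.shape[0], extra)
    return widened + noise

print(depth_expansion_indices(5, 7))  # -> [0 1 1 2 3 3 4], i.e. A B B C D D E
```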

In all cases, we re-initialise the optimiser state and begin training normally. We found that applying the same tiling/replication procedure to the Adam optimiser state does not improve performance, and we therefore omit it.

G.3.4. Alternative Warm Starting Methods
We investigate a few other warm starting methods that we do not find to perform as well as replicating layers in depth and tiling in width:

• freshly initialising all new weights;
• drawing from the weight distributions of existing weights – especially when adding width to our models;
• initialising the new weights to very small values such that the behaviour of the original model is nearly preserved.

All of the above methods clearly under-perform a model trained from scratch. Analysing the weight matrices after training, we find that the model does not successfully integrate the newly initialised weights with the pre-existing structure.

Figure A32 | Schematic for warm starting with increased width. We find that tiling the weight matrices performs best among the various ways we tried to add width to a model. This is likely because it preserves the banded structure that emerges in the attention weight matrices during training.

The tiling approach that we use has two advantages. Firstly, the weights naturally follow the magnitude distribution of the original model. Secondly, the structure of the weight matrices is preserved by construction. Adding a small amount of noise to the tiled weights leads to slightly improved performance over pure tiling.

G.4. Future Work for Efficient Training
The need for more efficient methods to enable the training of better language models remains, as none of the techniques detailed above yields entirely satisfactory results. Some of the investigated methods do not yield improvements, while others yield minor gains at the expense of considerable code and/or operational complexity. Further promising directions include architecture search (So et al., 2019, 2021), mixture-of-experts style models (Fedus et al., 2021; Kim et al., 2021; Lewis et al., 2021; Roller et al., 2021b), quantization (Zafrir et al., 2019), hardware-accelerated sparsity (Mishra et al., 2021) and semi-parametric approaches (Borgeaud et al., 2021; Guu et al., 2020; Khandelwal et al., 2020; Perez et al., 2019).

H. Dialogue-Prompted Gopher Details

H.1. Construction
The Dialogue-Prompted Gopher model is constructed from the raw Gopher language model via a conversational prompt (Table A30) and a template that processes inputs and outputs in a uniform conversational format. We rely on the fact that Gopher almost always continues in the same conversational format when prompted with a transcript of alternating “User:” and “Gopher:” turns, as described below.

To send a user’s message to Gopher, we append the string “User: {message}” to the dialogue history. To elicit a response from Gopher, we append “\n\nGopher: ” to the history and then sample conditioned on the entire history, using nucleus sampling with 𝑝 = 0.8 (Holtzman et al., 2019). We truncate the sample when either Gopher generates the string “\n\nUser: ” (indicating it has finished its ‘turn’) or we hit a maximum length.
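A minimal sketch of this turn handling is shown below; `sample_from_model` stands in for nucleus sampling from the language model and is an illustrative name, not part of our codebase.

```python
def dialogue_turn(history, user_message, sample_from_model, max_tokens=100):
    """Append a user turn, sample a Gopher response, and truncate it.

    `sample_from_model` is a stand-in for nucleus sampling (p = 0.8) from the
    language model: it takes a prompt string and a token budget and returns
    sampled text.  The name and signature are illustrative only.
    """
    # Send the user's message and cue the model for its reply.
    history = history + "\n\nUser: " + user_message + "\n\nGopher: "

    # Sample conditioned on the entire history (prompt plus dialogue so far).
    sample = sample_from_model(history, max_tokens=max_tokens)

    # Truncate when the model starts a new "User:" turn, i.e. it has finished
    # its own turn; otherwise the token budget acts as the maximum length.
    end = sample.find("\n\nUser: ")
    if end != -1:
        sample = sample[:end]

    return history + sample, sample  # updated history and the visible reply
```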

In Table A30 we include the complete prompt used to condition Gopher towards dialogue. With our SentencePiece tokenizer, this prompt consumes roughly 800 tokens of the 2048-token context Gopher was trained with. In practice this leaves plenty of room for subsequent dialogue.

Table A30 | The Gopher prompt. Here, we hand-author desirable responses for both parties.

H.2. Dialogue Dataset Filtering
We construct a dataset of two-person dialogue by taking MassiveWeb and applying a filtering heuristic based on a common written dialogue format (“interview transcript” style).

Concretely, we find all sets of consecutive paragraphs (blocks of text separated by two newlines) at least 6 paragraphs long, where every paragraph has a prefix ending in a separator (e.g., “Gopher: ”, “Dr Smith – ”, or “Q. ”). The even-indexed paragraphs must all share one prefix and the odd-indexed paragraphs another, and the two prefixes must differ (in other words, the conversation must be strictly back-and-forth between two individuals). This procedure reliably yields high-quality dialogue.
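A simplified sketch of this heuristic is shown below; the prefix regular expression and the windowing logic are assumptions intended to capture the examples above rather than our production filter.

```python
import re

# A paragraph prefix: a short speaker label ending in a separator such as
# ": ", " – " or ". ".  The exact pattern is a simplifying assumption.
PREFIX_RE = re.compile(r"^(.{1,40}?[:\-–.])\s")

def paragraph_prefix(paragraph):
    match = PREFIX_RE.match(paragraph)
    return match.group(1) if match else None

def is_two_person_dialogue(paragraphs, min_paragraphs=6):
    """A strict back-and-forth: even turns share one prefix, odd turns another."""
    if len(paragraphs) < min_paragraphs:
        return False
    prefixes = [paragraph_prefix(p) for p in paragraphs]
    if any(p is None for p in prefixes):
        return False
    even, odd = prefixes[0], prefixes[1]
    if even == odd:
        return False
    return all(p == (even if i % 2 == 0 else odd) for i, p in enumerate(prefixes))

def extract_dialogues(document, min_paragraphs=6):
    """Yield maximal runs of consecutive paragraphs that pass the heuristic."""
    paragraphs = document.split("\n\n")
    i = 0
    while i < len(paragraphs):
        best, j = None, i + min_paragraphs
        while j <= len(paragraphs) and is_two_person_dialogue(paragraphs[i:j], min_paragraphs):
            best, j = paragraphs[i:j], j + 1
        if best is not None:
            yield best
            i += len(best)
        else:
            i += 1
```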

H.3. Comparison Methodology
We discuss the methodology used to compare Dialogue-Tuned Gopher, a version of Gopher fine-tuned with supervised learning on a dialogue subset of MassiveWeb, and Dialogue-Prompted Gopher.

We instructed participants to express a preference between the two models, which jointly engage in a single dialogue. At each turn, the participant is shown a response from Dialogue-Prompted Gopher and a response from Dialogue-Tuned Gopher, and selects the one they prefer. The dialogue then continues with the response of either the prompted or the tuned model, independent of the participant’s choice; we call this setting the move selector.
When the move selector is set to prompted, the response is always taken from the prompted model. In theory this gives the prompted model an advantage, as the dialogue remains closer to its own distribution of conversation. We compare the models under both move selector settings and find no statistically significant difference in preference between the two, as shown in Table A31.

H.4. RTP in a Dialogue Setting
To evaluate Dialogue-Prompted Gopher we obtain a set of questions based on the RealToxicityPrompts (RTP) dataset, selecting entries where the prompt and continuation contain a ‘?’. We remove the continuation after the ‘?’ and sample 500 questions from each of the toxicity buckets [0.0, 0.25), [0.25, 0.5), [0.5, 0.75), [0.75, 1.0], according to Perspective API scores, resulting in 2000 questions in total. Next we feed each RTP question as the User’s utterance to the Dialogue-Prompted Gopher models and sample 25 continuations per question (up to 100 tokens). Aggregate results for these continuations are presented in Figure 9.
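A sketch of this question-selection step is shown below; the field names of the RTP records, the truncation rule, and the random seed are assumptions for illustration.

```python
import random

TOXICITY_BUCKETS = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]

def select_rtp_questions(rtp_examples, per_bucket=500, seed=0):
    """Sample questions from RealToxicityPrompts, stratified by toxicity score.

    `rtp_examples` is assumed to be an iterable of dicts with `text` (prompt
    plus continuation) and `toxicity` (Perspective API score) fields; the field
    names and the truncation rule are illustrative assumptions.
    """
    rng = random.Random(seed)
    buckets = {bucket: [] for bucket in TOXICITY_BUCKETS}

    for example in rtp_examples:
        text, score = example["text"], example["toxicity"]
        if "?" not in text:
            continue
        # Keep everything up to and including the first '?', dropping the rest.
        question = text[: text.index("?") + 1]
        for low, high in TOXICITY_BUCKETS:
            # The final bucket is closed at 1.0; the others are half-open.
            if low <= score < high or (high == 1.0 and score == 1.0):
                buckets[(low, high)].append(question)
                break

    # 500 questions per bucket gives 2000 questions in total.
    return {bucket: rng.sample(qs, per_bucket) for bucket, qs in buckets.items()}
```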

H.5. Selected Transcripts
The following transcripts exhibit some of the qualities and common failings of the model. Explanations and additional observations are contained in captions. All of these transcripts were collected via open-ended dialogue between Dialogue-Prompted Gopher and one of the authors. Some transcripts are truncated for brevity.

Table A32 | Answers to trivia questions are sometimes right – but the model is not looking anything up, despite statements here.
Table A33 | Factual recall can be impressive, but some simple questions confound the system.
Table A34 | Toxic questions are sometimes evaded. Note that this is not a robust property of the model; see the following example.
Table A35 | It is straightforward to get Gopher to generate toxic or harmful statements.
Table A36 | Responses can be false and nonsensical.
Table A37 | Reasoning failures are common in longer dialogues.
Table A38 | Sometimes the system will decline a reasonable user request.
Table A39 | Sometimes the system provides useful pointers but refrains from further detail.
Table A40 | Conversations can create the illusion of creativity.
Table A41 | Example of Semi-Factual Dialogue. All but one of the responses are technically correct in this example. The model is much more precise than in Table 7, which follows a similar script. However, the “It’s a bit of a trick question” response is misleading, since human gut bacteria are well studied.
Table A42 | Conversations can include a mixture of contextual conditioning and factual recall.