
On First-Order Meta-Learning Algorithms

Alex Nichol and Joshua Achiam and John Schulman

OpenAI

{alex, jachiam, joschu}@openai.com

Abstract

This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution. We analyze a family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates. This family includes and generalizes first-order MAML, an approximation to MAML obtained by ignoring second-order derivatives. It also includes Reptile, a new algorithm that we introduce here, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task. We expand on the results from Finn et al. showing that first-order meta-learning algorithms perform well on some well-established benchmarks for few-shot classification, and we provide theoretical analysis aimed at understanding why these algorithms work.

1 Introduction

While machine learning systems have surpassed humans at many tasks, they generally need far more data to reach the same level of performance. For example, Schmidt et al. [17, 15] showed that human subjects can recognize new object categories based on a few example images. Lake et al. [12] noted that on the Atari game of Frostbite, human novices were able to make significant progress on the game after 15 minutes, but double-dueling-DQN [19] required more than 1000 times more experience to attain the same score.

It is not completely fair to compare humans to algorithms learning from scratch, since humans enter the task with a large amount of prior knowledge, encoded in their brains and DNA. Rather than learning from scratch, they are fine-tuning and recombining a set of pre-existing skills. The work cited above, by Tenenbaum and collaborators, argues that humans’ fast-learning abilities can be explained as Bayesian inference, and that the key to developing algorithms with human-level learning speed is to make our algorithms more Bayesian. However, in practice, it is challenging to develop (from first principles) Bayesian machine learning algorithms that make use of deep neural networks and are computationally feasible.

Meta-learning has emerged recently as an approach for learning from small amounts of data. Rather than trying to emulate Bayesian inference (which may be computationally intractable), meta-learning seeks to directly optimize a fast-learning algorithm, using a dataset of tasks. Specifically, we assume access to a distribution over tasks, where each task is, for example, a classification problem. From this distribution, we sample a training set and a test set of tasks. Our algorithm is fed the training set, and it must produce an agent that has good average performance on the test set. Since each task corresponds to a learning problem, performing well on a task corresponds to learning quickly.

A variety of different approaches to meta-learning have been proposed, each with its own pros and cons. In one approach, the learning algorithm is encoded in the weights of a recurrent network, but gradient descent is not performed at test time. This approach was proposed by Hochreiter et al. [8], who used LSTMs for next-step prediction, and has been followed up by a burst of recent work, for example, by Santoro et al. [16] on few-shot classification and by Duan et al. [3] for the POMDP setting.

A second approach is to learn the initialization of a network, which is then fine-tuned at test time on the new task. A classic example of this approach is pre-training using a large dataset (such as ImageNet [2]) and fine-tuning on a smaller dataset (such as a dataset of different species of bird [20]). However, this classic pre-training approach has no guarantee of learning an initialization that is good for fine-tuning, and ad-hoc tricks are required for good performance. More recently, Finn et al. [4] proposed an algorithm called MAML, which directly optimizes performance with respect to this initialization—differentiating through the fine-tuning process. In this approach, the learner falls back on a sensible gradient-based learning algorithm even when it receives out-of-sample data, thus allowing it to generalize better than the RNN-based approaches [5]. On the other hand, since MAML needs to differentiate through the optimization process, it’s not a good match for problems where we need to perform a large number of gradient steps at test time. The authors also proposed a variant called first-order MAML (FOMAML), which is defined by ignoring the second derivative terms, avoiding this problem but at the expense of losing some gradient information. Surprisingly, though, they found that FOMAML worked nearly as well as MAML on the Mini-ImageNet dataset [18]. (This result was foreshadowed by prior work in meta-learning [1, 13] that ignored second derivatives when differentiating through gradient descent, without ill effect.) In this work, we expand on that insight and explore the potential of meta-learning algorithms based on first-order gradient information, motivated by the potential applicability to problems where it’s too cumbersome to apply techniques that rely on higher-order gradients (like full MAML).

We make the following contributions:

  • We point out that first-order MAML [4] is simpler to implement than was widely recognized prior to this article.
  • We introduce Reptile, an algorithm closely related to FOMAML, which is equally simple to implement. Reptile is so similar to joint training (i.e., training to minimize loss on the expectation over training tasks) that it is especially surprising that it works as a meta-learning algorithm. Unlike FOMAML, Reptile doesn’t need a training-test split for each task, which may make it a more natural choice in certain settings. It is also related to the older idea of fast weights / slow weights [7].
  • We provide a theoretical analysis that applies to both first-order MAML and Reptile, showing that they both optimize for within-task generalization.
  • On the basis of empirical evaluation on the Mini-ImageNet [18] and Omniglot [11] datasets, we provide some insights for best practices in implementation.

2 Meta-Learning an Initialization

We consider the optimization problem of MAML [4]: find an initial set of parameters, φ, such that for a randomly sampled task τ with corresponding loss L_τ, the learner will have low loss after k updates. That is:

    minimize_φ  E_τ [ L_τ ( U_τ^k (φ) ) ]                                   (1)

where U_τ^k is the operator that updates φ k times using data sampled from τ. In few-shot learning, U corresponds to performing gradient descent or Adam [10] on batches of data sampled from τ.

MAML solves a version of Equation (1) that makes one additional assumption: for a given task τ, the inner-loop optimization uses training samples A, whereas the loss is computed using test samples B. This way, MAML optimizes for generalization, akin to cross-validation. Omitting the superscript k, we notate this as

    minimize_φ  E_τ [ L_{τ,B} ( U_{τ,A} (φ) ) ]                             (2)

MAML works by optimizing this loss through stochastic gradient descent, i.e., computing

    g_MAML = ∂/∂φ L_{τ,B} ( U_{τ,A} (φ) )                                   (3)
           = U′_{τ,A}(φ) L′_{τ,B}(φ̃),   where φ̃ = U_{τ,A}(φ)                (4)

In Equation (4), U′_{τ,A}(φ) is the Jacobian matrix of the update operation U_{τ,A}. U_{τ,A} corresponds to adding a sequence of gradient vectors to the initial vector, i.e., U_{τ,A}(φ) = φ + g_1 + g_2 + · · · + g_k. (In Adam, the gradients are also rescaled elementwise, but that does not change the conclusions.) First-order MAML (FOMAML) treats these gradients as constants; thus, it replaces the Jacobian U′_{τ,A}(φ) by the identity operation. Hence, the gradient used by FOMAML in the outer-loop optimization is g_FOMAML = L′_{τ,B}(φ̃). Therefore, FOMAML can be implemented in a particularly simple way: (1) sample task τ; (2) apply the update operator, yielding φ̃ = U_{τ,A}(φ); (3) compute the gradient at φ̃, g_FOMAML = L′_{τ,B}(φ̃); and finally (4) plug g_FOMAML into the outer-loop optimizer.
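To make the four steps above concrete, here is a minimal NumPy sketch (not the paper's code); the toy linear-regression task and the names sample_task, inner_lr, and outer_lr are illustrative assumptions.

```python
# A minimal sketch of the four FOMAML steps listed above, using a toy
# linear-regression task; names and hyper-parameters are illustrative.
import numpy as np

def grad(params, X, y):
    # Gradient of mean squared error for a linear model y ≈ X @ params.
    return 2.0 * X.T @ (X @ params - y) / len(y)

def sample_task(dim=3, n=20, rng=np.random.default_rng()):
    w = rng.normal(size=dim)                     # each task is a random linear function
    X = rng.normal(size=(n, dim))
    y = X @ w
    return X[:10], y[:10], X[10:], y[10:]        # training samples A / test samples B

def fomaml_step(phi, task, k=5, inner_lr=0.01, outer_lr=0.1):
    X_a, y_a, X_b, y_b = task                    # (1) sampled task tau
    params = phi.copy()
    for _ in range(k):                           # (2) apply the update operator: phi_tilde = U_{tau,A}(phi)
        params -= inner_lr * grad(params, X_a, y_a)
    g_fomaml = grad(params, X_b, y_b)            # (3) gradient of L_{tau,B} at phi_tilde
    return phi - outer_lr * g_fomaml             # (4) plug g_FOMAML into the outer-loop optimizer (plain SGD here)

phi = np.zeros(3)
for _ in range(1000):
    phi = fomaml_step(phi, sample_task())
```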

3 Reptile

In this section, we describe a new first-order gradient-based meta-learning algorithm called Reptile. Like MAML, Reptile learns an initialization for the parameters of a neural network model, such that when we optimize these parameters at test time, learning is fast—i.e., the model generalizes from a small number of examples from the test task. The Reptile algorithm is as follows:

    Algorithm 1: Reptile (serial version)
    Initialize φ, the vector of initial parameters
    for iteration = 1, 2, . . . do
        Sample task τ, corresponding to loss L_τ
        Compute φ̃ = U_τ^k(φ), denoting k steps of SGD or Adam on L_τ
        Update φ ← φ + ε (φ̃ − φ)
    end for

In the last step, instead of simply updating φ in the direction φ̃ − φ, we can treat (φ − φ̃) as a gradient and plug it into an adaptive algorithm such as Adam [10]. (Actually, as we will discuss in Section 5.1, it is most natural to define the Reptile gradient as (φ − φ̃)/α, where α is the stepsize used by the SGD operation.) We can also define a parallel or batch version of the algorithm that evaluates on n tasks each iteration and updates the initialization to

    φ ← φ + ε (1/n) Σ_{i=1}^{n} (φ̃_i − φ)

where φ̃_i = U_{τ_i}^k(φ), the updated parameters obtained by training on the i-th task.
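For concreteness, here is a minimal NumPy sketch of the serial Reptile loop under the same assumptions as the FOMAML sketch above; sample_task, inner_lr, and epsilon are illustrative names, not the paper's code.

```python
# A minimal sketch of serial Reptile on a toy linear-regression task.
import numpy as np

def grad(params, X, y):
    # Gradient of mean squared error for a linear model y ≈ X @ params.
    return 2.0 * X.T @ (X @ params - y) / len(y)

def sample_task(dim=3, n=20, rng=np.random.default_rng()):
    w = rng.normal(size=dim)                  # each task is a random linear function
    X = rng.normal(size=(n, dim))
    return X, X @ w

def reptile(phi, n_iters=1000, k=5, inner_lr=0.02, epsilon=0.1):
    for _ in range(n_iters):
        X, y = sample_task()                  # sample task tau
        params = phi.copy()
        for _ in range(k):                    # phi_tilde = U_tau^k(phi): k steps of SGD on L_tau
            params -= inner_lr * grad(params, X, y)
        phi = phi + epsilon * (params - phi)  # move the initialization toward the trained weights
        # Batched variant: average (params_i - phi) over n tasks before updating phi.
    return phi

phi = reptile(np.zeros(3))
```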

This algorithm looks remarkably similar to joint training on the expected loss E_τ [L_τ]. Indeed, if we define U to be a single step of gradient descent (k = 1), then this algorithm corresponds to stochastic gradient descent on the expected loss:

    g_Reptile,k=1 = E_τ [ φ − U_τ(φ) ] / α = E_τ [ ∇_φ L_τ(φ) ]

However, if we perform multiple gradient updates in the partial minimization (k > 1), then the expected update E_τ [U_τ^k(φ)] does not correspond to taking a gradient step on the expected loss E_τ [L_τ]. Instead, the update includes important terms coming from the second and higher derivatives of L_τ, as we will analyze in Section 5.1. Hence, Reptile converges to a solution that’s very different from the minimizer of the expected loss E_τ [L_τ].

Other than the stepsize parameter ε and task sampling, the batched version of Reptile is the same as the SimuParallelSGD algorithm [21]. SimuParallelSGD is a method for communication-efficient distributed optimization, where workers perform gradient updates locally and infrequently average their parameters, rather than the standard approach of averaging gradients.

4 Case Study: One-Dimensional Sine Wave Regression

As a simple case study, let’s consider the 1D sine wave regression problem, which is slightly modified from Finn et al. [4]. This problem is instructive since by design, joint training can’t learn a very useful initialization; however, meta-learning methods can.

  • The task τ = (a, b) is defined by the amplitude a and phase b of a sine wave function f_τ(x) = a sin(x + b). The task distribution is defined by sampling a ∼ U([0.1, 5.0]) and b ∼ U([0, 2π]).
  • Sample p points x_1, x_2, . . . , x_p ∼ U([−5, 5]).
  • Learner sees (x_1, y_1), (x_2, y_2), . . . , (x_p, y_p) and predicts the whole function f(x).
  • Loss is the ℓ2 error on the whole interval [−5, 5]:  L_τ(f) = ∫_{−5}^{5} ‖ f(x) − f_τ(x) ‖² dx

We calculate this integral using 50 equally-spaced points x.
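A small sketch of this task setup (a hypothetical sample_task / task_loss helper pair, not the paper's code) might look like the following.

```python
# Sketch of the sine-wave regression task: amplitude a, phase b, p sampled
# points, and l2 loss on [-5, 5] approximated at 50 equally spaced points.
import numpy as np

def sample_task(rng):
    a = rng.uniform(0.1, 5.0)                    # amplitude
    b = rng.uniform(0.0, 2 * np.pi)              # phase
    return lambda x: a * np.sin(x + b)

def sample_points(rng, f, p=10):
    x = rng.uniform(-5.0, 5.0, size=p)
    return x, f(x)

def task_loss(pred_fn, f, grid=np.linspace(-5.0, 5.0, 50)):
    # l2 error over the whole interval, computed on the 50-point grid
    return np.mean((pred_fn(grid) - f(grid)) ** 2)

rng = np.random.default_rng(0)
f = sample_task(rng)
x, y = sample_points(rng, f)
print(task_loss(lambda x: np.zeros_like(x), f))  # loss of the zero function f(x) = 0
```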

First note that the average function is zero everywhere, i.e., Eτ [fτ (x)] = 0, due to the random phase b. Therefore, it is useless to train on the expected loss Eτ [Lτ ], as this loss is minimized by the zero function f (x) = 0.

On the other hand, MAML and Reptile give us an initialization that outputs approximately f (x) = 0 before training on a task τ , but the internal feature representations of the network are such that after training on the sampled datapoints (x1, y1), (x2, y2), . . . , (xp, yp), it closely approximates the target function fτ . This learning progress is shown in the figures below. Figure 1 shows that after Reptile training, the network can quickly converge to a sampled sine wave and infer the values away from the sampled points. As points of comparison, we also show the behaviors of MAML and a randomly-initialized network on the same task.

Figure 1: Demonstration of MAML and Reptile on a toy few-shot regression problem, where we train on 10 sampled points of a sine wave, performing 32 gradient steps on an MLP with layers 1 → 64 → 64 → 1.

5 Analysis

In this section, we provide two alternative explanations of why Reptile works.

5.1 Leading Order Expansion of the Update

Here, we will use a Taylor series expansion to approximate the update performed by Reptile and MAML. We will show that both algorithms contain the same leading-order terms: the first term minimizes the expected loss (joint training), the second and more interesting term maximizes within-task generalization. Specifically, it maximizes the inner product between the gradients on different minibatches from the same task. If gradients from different batches have positive inner product, then taking a gradient step on one batch improves performance on the other batch.
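To spell out the last claim (this one-line expansion is not in the original text): a step of size α on minibatch 1 changes the loss on minibatch 2 by, to first order,

    L_2(φ − α g_1) ≈ L_2(φ) − α g_1 · g_2

so when the inner product g_1 · g_2 is positive, the step taken on minibatch 1 also reduces the loss on minibatch 2.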

Unlike in the discussion and analysis of MAML, we won’t consider a training set and test set from each task; instead, we’ll just assume that each task gives us a sequence of k loss functions L_1, L_2, . . . , L_k; for example, classification loss on different minibatches. We will use the following definitions:

    g_i = L′_i(φ_i)              (gradient obtained during SGD)
    φ_{i+1} = φ_i − α g_i        (sequence of parameter vectors)
    ḡ_i = L′_i(φ_1)              (gradient at initial point)
    H̄_i = L′′_i(φ_1)             (Hessian at initial point)

For each of these definitions, i ∈ [1, k].

First, let’s calculate the SGD gradients to O(α²) as follows:

    g_i = L′_i(φ_i) = L′_i(φ_1) + L′′_i(φ_1)(φ_i − φ_1) + O(‖φ_i − φ_1‖²)
        = ḡ_i + H̄_i (φ_i − φ_1) + O(α²)
        = ḡ_i − α H̄_i Σ_{j=1}^{i−1} g_j + O(α²)        (using φ_i − φ_1 = −α Σ_{j=1}^{i−1} g_j)
        = ḡ_i − α H̄_i Σ_{j=1}^{i−1} ḡ_j + O(α²)        (using g_j = ḡ_j + O(α))

Next, we will approximate the MAML gradient. Define U_i as the operator that updates the parameter vector on minibatch i: U_i(φ) = φ − α L′_i(φ). Then

    g_MAML = ∂/∂φ_1 [ L_k(φ_k) ]
           = ∂/∂φ_1 [ L_k(U_{k−1}(U_{k−2}(· · · (U_1(φ_1))))) ]
           = U′_1(φ_1) · · · U′_{k−1}(φ_{k−1}) L′_k(φ_k)
           = ( Π_{i=1}^{k−1} (I − α L′′_i(φ_i)) ) g_k

Next, let’s expand to leading order:

    g_MAML = ( Π_{i=1}^{k−1} (I − α H̄_i) ) ( ḡ_k − α H̄_k Σ_{j=1}^{k−1} ḡ_j ) + O(α²)
           = ḡ_k − α Σ_{i=1}^{k−1} H̄_i ḡ_k − α H̄_k Σ_{j=1}^{k−1} ḡ_j + O(α²)

For simplicity of exposition, let’s consider the k = 2 case, and later we’ll provide the general formulas:

    g_MAML    = ḡ_2 − α H̄_1 ḡ_2 − α H̄_2 ḡ_1 + O(α²)
    g_FOMAML  = g_2 = ḡ_2 − α H̄_2 ḡ_1 + O(α²)
    g_Reptile = g_1 + g_2 = ḡ_1 + ḡ_2 − α H̄_2 ḡ_1 + O(α²)

As we will show in the next paragraph, terms like H̄_2 ḡ_1 serve to maximize the inner products between the gradients computed on different minibatches, while lone gradient terms like ḡ_1 take us to the minimum of the joint training problem.

When we take the expectation of g_FOMAML, g_Reptile, and g_MAML under minibatch sampling, we are left with only two kinds of terms, which we will call AvgGrad and AvgGradInner. In the equations below, E_{τ,1,2} [. . .] means that we are taking the expectation over the task τ and the two minibatches defining L_1 and L_2, respectively.

  • AvgGrad is defined as the gradient of the expected loss:

        AvgGrad = E_{τ,1} [ ḡ_1 ]

(−AvgGrad) is the direction that brings φ towards the minimum of the “joint training” problem, i.e., the expected loss over tasks.

  • The more interesting term is AvgGradInner, defined as follows:

        AvgGradInner = E_{τ,1,2} [ H̄_2 ḡ_1 ]
                     = E_{τ,1,2} [ H̄_1 ḡ_2 ]                       (interchanging minibatches 1 and 2)
                     = ½ E_{τ,1,2} [ H̄_2 ḡ_1 + H̄_1 ḡ_2 ]
                     = ½ E_{τ,1,2} [ ∂/∂φ_1 (ḡ_1 · ḡ_2) ]

Thus, (−AvgGradInner) is the direction that increases the inner product between gradients of different minibatches for a given task, improving generalization.

Recalling our gradient expressions, we get the following expressions for the meta-gradients, for SGD with k = 2:

    E [ g_MAML ]    = (1) AvgGrad − (2α) AvgGradInner + O(α²)
    E [ g_FOMAML ]  = (1) AvgGrad − (α) AvgGradInner + O(α²)
    E [ g_Reptile ] = (2) AvgGrad − (α) AvgGradInner + O(α²)

In practice, all three gradient expressions first bring us towards the minimum of the expected loss over tasks, then the higher-order AvgGradInner term enables fast learning by maximizing the inner product between gradients within a given task.

Finally, we can extend these calculations to the general k ≥ 2 case:

    E [ g_MAML ]    = (1) AvgGrad − (2(k−1)α) AvgGradInner + O(α²)
    E [ g_FOMAML ]  = (1) AvgGrad − ((k−1)α) AvgGradInner + O(α²)
    E [ g_Reptile ] = (k) AvgGrad − (½ k(k−1) α) AvgGradInner + O(α²)

As in the k = 2 case, the ratio of the coefficients of the AvgGradInner term and the AvgGrad term goes MAML > FOMAML > Reptile. However, in all cases, this ratio increases linearly with both the stepsize α and the number of iterations k. Note that the Taylor series approximation only holds for small αk.

Figure 2: The above illustration shows the sequence of iterates obtained by moving alternately towards two optimal solution manifolds W_1 and W_2 and converging to the point that minimizes the average squared distance. One might object to this picture on the grounds that we converge to the same point regardless of whether we perform one step or multiple steps of gradient descent. That statement is true; however, note that minimizing the expected distance objective E_τ [D(φ, W_τ)] is different from minimizing the expected loss objective E_τ [L_τ(f_φ)]. In particular, there is a high-dimensional manifold of minimizers of the expected loss L_τ (e.g., in the sine wave case, many neural network parameters give the zero function f(x) = 0), but the minimizer of the expected distance objective is typically a single point.

5.2 Finding a Point Near All Solution Manifolds

Here, we argue that Reptile converges towards a solution φ that is close (in Euclidean distance) to each task τ’s manifold of optimal solutions. This is an informal argument and should be taken much less seriously than the preceding Taylor series analysis.

Let φ denote the network initialization, and let W_τ denote the set of optimal parameters for task τ. We want to find φ such that the distance D(φ, W_τ) is small for all tasks:

    minimize_φ  E_τ [ ½ D(φ, W_τ)² ]

We will show that Reptile corresponds to performing SGD on that objective.

Given a non-pathological set S ⊂ ℝ^d, for almost all points φ ∈ ℝ^d the gradient of the squared distance D(φ, S)² is 2(φ − P_S(φ)), where P_S(φ) is the projection (closest point) of φ onto S. Thus,

    ∇_φ E_τ [ ½ D(φ, W_τ)² ] = E_τ [ ½ ∇_φ D(φ, W_τ)² ] = E_τ [ φ − P_{W_τ}(φ) ],   where P_{W_τ}(φ) = argmin_{p ∈ W_τ} D(p, φ)

Each iteration of Reptile corresponds to sampling a task τ and performing a stochastic gradient update:

    φ ← φ − ε ∇_φ ½ D(φ, W_τ)²
      = φ − ε (φ − P_{W_τ}(φ))
      = (1 − ε) φ + ε P_{W_τ}(φ)

In practice, we can’t exactly compute P_{W_τ}(φ), which is defined as a minimizer of L_τ. However, we can partially minimize this loss using gradient descent. Hence, in Reptile we replace P_{W_τ}(φ) by the result of running k steps of gradient descent on L_τ starting with initialization φ.
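As a toy illustration of this argument (and of Figure 2), the following NumPy sketch, which is not from the paper, alternately steps toward the projections onto two solution "manifolds" (two circles in ℝ²) with a linearly annealed step size, and converges to the point minimizing the average squared distance.

```python
# Alternating projection-style Reptile updates on two circular "solution
# manifolds" in R^2; converges near the minimizer of the average squared distance.
import numpy as np

def project_onto_circle(phi, center, radius):
    v = phi - center
    return center + radius * v / np.linalg.norm(v)   # closest point P_W(phi) on the circle

circles = [(np.array([-2.0, 0.0]), 1.0),             # W_1
           (np.array([ 2.0, 0.0]), 1.0)]             # W_2

phi = np.array([0.5, 3.0])
n_iters, eps0 = 2000, 0.5
for i in range(n_iters):
    center, radius = circles[i % 2]                  # "sample" a task
    proj = project_onto_circle(phi, center, radius)  # P_{W_tau}(phi)
    eps = eps0 * (1 - i / n_iters)                   # linearly annealed outer step size
    phi = phi + eps * (proj - phi)                   # SGD step on (1/2) D(phi, W_tau)^2

print(phi)   # close to (0, 0), the minimizer of the average squared distance to W_1 and W_2
```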

6 Experiments

6.1 Few-Shot Classification

We evaluate our method on two popular few-shot classification tasks: Omniglot [11] and Mini-ImageNet [18]. These datasets make it easy to compare our method to other few-shot learning approaches like MAML.

In few-shot classification tasks, we have a meta-dataset D containing many classes C, where each class is itself a set of example instances {c1, c2, …, cn}. If we are doing K-shot, N-way classification, then we sample tasks by selecting N classes from C and then selecting K + 1 examples for each class. We split these examples into a training set and a test set, where the test set contains a single example for each class. The model gets to see the entire training set, and then it must classify a randomly chosen sample from the test set. For example, if you trained a model for 5-shot, 5-way classification, then you would show it 25 examples (5 per class) and ask it to classify a 26th example. In addition to the above setup, we also experimented with the transductive setting, where the model classifies the entire test set at once. In our transductive experiments, information was shared between the test samples via batch normalization [9]. In our non-transductive experiments, batch normalization statistics were computed using all of the training samples and a single test sample.
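A sketch of this episode-sampling procedure is shown below; the dictionary-based meta-dataset and the function name sample_episode are illustrative assumptions, not the paper's code.

```python
# K-shot, N-way episode sampling: choose N classes, then K + 1 examples per
# class, holding out one example per class as the test set.
import random

def sample_episode(meta_dataset, n_way=5, k_shot=5, seed=None):
    rng = random.Random(seed)
    classes = rng.sample(sorted(meta_dataset), n_way)          # choose N classes
    train_set, test_set = [], []
    for label, cls in enumerate(classes):
        examples = rng.sample(meta_dataset[cls], k_shot + 1)   # K + 1 examples per class
        train_set += [(x, label) for x in examples[:k_shot]]   # K for the training set
        test_set.append((examples[k_shot], label))             # 1 held-out test example
    return train_set, test_set

# Usage with a toy meta-dataset of 20 classes, 30 instances each:
meta_dataset = {f"class_{i}": [f"img_{i}_{j}" for j in range(30)] for i in range(20)}
train_set, test_set = sample_episode(meta_dataset, n_way=5, k_shot=5, seed=0)
print(len(train_set), len(test_set))   # 25 training examples, 5 test examples
```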

We note that Finn et al. [4] use transduction for evaluating MAML.

For our experiments, we used the same CNN architectures and data preprocessing as Finn et al. [4]. We used the Adam optimizer [10] in the inner loop, and vanilla SGD in the outer loop, throughout our experiments. For Adam, we set β1 = 0 because we found that momentum reduced performance across the board. During training, we never reset or interpolated Adam’s rolling moment data; instead, we let it update automatically at every inner-loop training step. However, we backed up and reset the Adam statistics when evaluating on the test set, to avoid information leakage.

The results on Omniglot and Mini-ImageNet are shown in Tables 1 and 2. While MAML, FOMAML, and Reptile have very similar performance on all of these tasks, Reptile does slightly better than the alternatives on Mini-ImageNet and slightly worse on Omniglot. It also seems that transduction gives a performance boost in all cases, suggesting that further research should pay close attention to the use of batch normalization during testing.

Algorithm                       | 1-shot 5-way   | 5-shot 5-way
MAML + Transduction             | 48.70 ± 1.84%  | 63.11 ± 0.92%
1st-order MAML + Transduction   | 48.07 ± 1.75%  | 63.15 ± 0.91%
Reptile                         | 47.07 ± 0.26%  | 62.74 ± 0.37%
Reptile + Transduction          | 49.97 ± 0.32%  | 65.99 ± 0.58%
Table 1: Results on Mini-ImageNet. Both MAML and 1st-order MAML results are from [4].

Algorithm                       | 1-shot 5-way   | 5-shot 5-way   | 1-shot 20-way  | 5-shot 20-way
MAML + Transduction             | 98.7 ± 0.4%    | 99.9 ± 0.1%    | 95.8 ± 0.3%    | 98.9 ± 0.2%
1st-order MAML + Transduction   | 98.3 ± 0.5%    | 99.2 ± 0.2%    | 89.4 ± 0.5%    | 97.9 ± 0.1%
Reptile                         | 95.39 ± 0.09%  | 98.90 ± 0.10%  | 88.14 ± 0.15%  | 96.65 ± 0.33%
Reptile + Transduction          | 97.68 ± 0.04%  | 99.48 ± 0.06%  | 89.43 ± 0.14%  | 97.12 ± 0.32%
Table 2: Results on Omniglot. MAML results are from [4]. 1st-order MAML results were generated by the code for [4] with the same hyper-parameters as MAML.

6.2 Comparing Different Inner-Loop Gradient Combinations

For this experiment, we used four non-overlapping mini-batches in each inner loop, yielding gradients g_1, g_2, g_3, and g_4. We then compared learning performance when using different linear combinations of the g_i’s for the outer-loop update. Note that two-step Reptile corresponds to g_1 + g_2, and two-step FOMAML corresponds to g_2.
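The sketch below (again with a toy linear-regression task standing in for the Omniglot classifier; not the paper's code) shows how the per-minibatch gradients g_1, ..., g_4 are collected along the inner-loop trajectory and combined.

```python
# Collect inner-loop gradients g_1..g_4 along the SGD trajectory and form the
# outer-loop combinations compared in this section.
import numpy as np

def grad(params, X, y):
    # Gradient of mean squared error for a linear model y ≈ X @ params.
    return 2.0 * X.T @ (X @ params - y) / len(y)

def inner_loop_gradients(phi, batches, inner_lr=0.02):
    params, gs = phi.copy(), []
    for X, y in batches:            # four non-overlapping mini-batches
        g = grad(params, X, y)
        gs.append(g)
        params -= inner_lr * g      # take the SGD step before seeing the next batch
    return gs

rng = np.random.default_rng(0)
w_task = rng.normal(size=3)
batches = []
for _ in range(4):
    X = rng.normal(size=(8, 3))
    batches.append((X, X @ w_task))

g1, g2, g3, g4 = inner_loop_gradients(np.zeros(3), batches)
outer_updates = {
    "g1 (joint training)": g1,
    "g2 (two-step FOMAML)": g2,
    "g1 + g2 (two-step Reptile)": g1 + g2,
    "g1 + g2 + g3 + g4 (four-step Reptile)": g1 + g2 + g3 + g4,
}
```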

To make it easier to get an apples-to-apples comparison between different linear combinations, we simplified our experimental setup in several ways. First, we used vanilla SGD in the inner and outer loops. Second, we did not use meta-batches. Third, we restricted our experiments to 5-shot, 5-way Omniglot. With these simplifications, we did not have to worry as much about the effects of hyper-parameters or optimizers.

Figure 3 shows the learning curves for various inner-loop gradient combinations. For gradient combinations with more than one term, we ran both a sum and an average of the inner gradients to correct for the effective step size increase.

Figure 3: Different inner-loop gradient combinations on 5-shot 5-way Omniglot.

As expected, using only the first gradient g_1 is quite ineffective, since it amounts to optimizing the expected loss over all tasks. Surprisingly, two-step Reptile is noticeably worse than two-step FOMAML, which might be explained by the fact that two-step Reptile puts less weight on AvgGradInner relative to AvgGrad (see the general k ≥ 2 expressions in Section 5.1). Most importantly, though, all the methods improve as the number of mini-batches increases. This improvement is more significant when using a sum of all gradients (Reptile) rather than using just the final gradient (FOMAML). This also suggests that Reptile can benefit from taking many inner-loop steps, which is consistent with the optimal hyper-parameters found for Section 6.1.

Figure 4: The results of hyper-parameter sweeps on 5-shot 5-way Omniglot.

6.3 Overlap Between Inner-Loop Mini-Batches

Both Reptile and FOMAML use stochastic optimization in their inner loops. Small changes to this optimization procedure can lead to large changes in final performance. This section explores the sensitivity of Reptile and FOMAML to the inner-loop hyperparameters, and also shows that FOMAML’s performance significantly drops if mini-batches are selected the wrong way.

The experiments in this section look at the difference between shared-tail FOMAML, where the final inner-loop mini-batch comes from the same set of data as the earlier inner-loop batches, and separate-tail FOMAML, where the final mini-batch comes from a disjoint set of data. Viewing FOMAML as an approximation to MAML, separate-tail FOMAML can be seen as the more correct approach (and was used by Finn et al. [4]), since the training-time optimization resembles the test-time optimization (where the test set doesn’t overlap with the training set). Indeed, we find that separate-tail FOMAML is significantly better than shared-tail FOMAML. As we will show, shared-tail FOMAML degrades in performance when the data used to compute the meta-gradient (gFOMAML = gk) overlaps significantly with the earlier batches; however, Reptile and separate-tail FOMAML maintain performance and are not very sensitive to the inner-loop hyperparameters.

Figure 4a shows that when minibatches are selected by cycling through the training data (shared-tail, cycle), shared-tail FOMAML performs well up to four inner-loop iterations, but drops in performance starting at five iterations, where the final minibatch (used to compute gFOMAML = gk) overlaps with the earlier ones. When we use random sampling instead (shared-tail, replacement), shared-tail FOMAML degrades more gradually. We hypothesize that this is because some samples still appear in the final batch that were not in the previous batches. The effect is stochastic, so it makes sense that the curve is smoother.

Figure 4b shows a similar phenomenon, but here we fixed the inner-loop to four iterations and instead varied the batch size. For batch sizes greater than 25, the final inner-loop batch for shared-tail FOMAML necessarily contains samples from the previous batches. Similar to Figure 4a, here we observe that shared-tail FOMAML with random sampling degrades more gradually than shared-tail FOMAML with cycling.
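The following small simulation (not from the paper) counts how many samples in the final inner-loop mini-batch were already seen in earlier batches under the two shared-tail selection schemes; the 100-sample training set is an assumption for illustration.

```python
# Count overlap between the final inner-loop mini-batch and the earlier batches
# under cycling vs. independent random sampling of mini-batches.
import random

def final_batch_overlap(n_samples=100, batch_size=25, n_batches=5, mode="cycle", seed=0):
    rng = random.Random(seed)
    data = list(range(n_samples))
    rng.shuffle(data)
    if mode == "cycle":
        # Cycle through the shuffled data in order, wrapping around at the end.
        batches = [[data[(i * batch_size + j) % n_samples] for j in range(batch_size)]
                   for i in range(n_batches)]
    else:
        # Draw each mini-batch independently from the training data.
        batches = [rng.sample(data, batch_size) for _ in range(n_batches)]
    seen_earlier = set().union(*batches[:-1])
    return len(set(batches[-1]) & seen_earlier)

print(final_batch_overlap(mode="cycle"))        # 25: the fifth batch fully repeats earlier samples
print(final_batch_overlap(mode="replacement"))  # usually < 25: some samples are still new
```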

In both of these parameter sweeps, separate-tail FOMAML and Reptile do not degrade in performance as the number of inner-loop iterations or batch size changes.

There are several possible explanations for the above findings. For example, one might hypothesize that shared-tail FOMAML is only worse in these experiments because its effective step size is much lower than that of separate-tail FOMAML. However, Figure 4c suggests that this is not the case: performance was equally poor for every choice of step size in a thorough sweep. A different hypothesis is that shared-tail FOMAML performs poorly because, after a few inner-loop steps on a sample, the gradient of the loss for that sample does not contain very much useful information about the sample. In other words, the first few SGD steps might bring the model close to a local optimum, and then further SGD steps might simply bounce around this local optimum.

7 Discussion

Meta-learning algorithms that perform gradient descent at test time are appealing because of their simplicity and generalization properties [5]. The effectiveness of fine-tuning (e.g., from models trained on ImageNet [2]) gives us additional faith in these approaches. This paper proposed a new algorithm called Reptile, whose training process is only subtly different from joint training and only uses first-order gradient information (like first-order MAML).

We gave two theoretical explanations for why Reptile works. First, by approximating the update with a Taylor series, we showed that SGD automatically gives us the same kind of second-order term that MAML computes. This term adjusts the initial weights to maximize the dot product between the gradients of different minibatches on the same task—i.e., it encourages the gradients to generalize between minibatches of the same task. We also provided a second informal argument, which is that Reptile finds a point that is close (in Euclidean distance) to all of the optimal solution manifolds of the training tasks.

While this paper studies the meta-learning setting, the Taylor series analysis in Section 5.1 may have some bearing on stochastic gradient descent in general. It suggests that when doing stochastic gradient descent, we are automatically performing a MAML-like update that maximizes the generalization between different minibatches. This observation partly explains why fine-tuning (e.g., from ImageNet to a smaller dataset [20]) works well. This hypothesis would suggest that joint training plus fine-tuning will continue to be a strong baseline for meta-learning in various machine learning problems.

8 Future Work

We see several promising directions for future work:

  • Understanding to what extent SGD automatically optimizes for generalization, and whether this effect can be amplified in the non-meta-learning setting.
  • Applying Reptile in the reinforcement learning setting. So far, we have obtained negative results, since joint training is a strong baseline, so some modifications to Reptile might be necessary.
  • Exploring whether Reptile’s few-shot learning performance can be improved by deeper architectures for the classifier.
  • Exploring whether regularization can improve few-shot learning performance, as currently there is a large gap between training and testing error.
  • Evaluating Reptile on the task of few-shot density modeling [14].

References

  • [1]     Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989, 2016.
  • [2]    Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
  • [3]    Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
  • [4]    Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017.
  • [5]    Chelsea Finn and Sergey Levine. Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm. arXiv preprint arXiv:1710.11622, 2017.
  • [6]    Nikolaus Hansen. The CMA evolution strategy: a comparing review. In Towards a new evolutionary computation, pages 75–102. Springer, 2006.
  • [7]    Geoffrey E Hinton and David C Plaut. Using fast weights to deblur old memories. In Proceedings of the ninth annual conference of the Cognitive Science Society, pages 177–186, 1987.
  • [8]    Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
  • [9]    Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • [10]    Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
  • [11]     Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In Conference of the Cognitive Science Society (CogSci), 2011.
  • [12]    Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
  • [13]    Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), 2017.
  • [14]    Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, SM Eslami, Danilo Rezende, Oriol Vinyals, and Nando de Freitas. Few-shot autoregressive density estimation: Towards learning to learn distributions. arXiv preprint arXiv:1710.10304, 2017.
  • [15]    Ruslan Salakhutdinov, Joshua Tenenbaum, and Antonio Torralba. One-shot learning with a hierarchical nonparametric Bayesian model. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pages 195–206, 2012.
  • [16]    Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International conference on machine learning, pages 1842–1850, 2016.
  • [17]    Lauren A Schmidt. Meaning and compositionality as statistical induction of categories and constraints. PhD thesis, Massachusetts Institute of Technology, 2009.
  • [18]     Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016.
  • [19]     Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Van Hasselt, Marc Lanctot, and Nando De Freitas. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
  • [20]   Ning Zhang, Jeff Donahue, Ross Girshick, and Trevor Darrell. Part-based R-CNNs for fine-grained category detection. In European conference on computer vision, pages 834–849. Springer, 2014.
  • [21]     Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J Smola. Parallelized stochastic gradient descent. In Advances in neural information processing systems, pages 2595–2603, 2010.

A Hyper-parameters

For all experiments, we linearly annealed the outer step size to 0. We ran each experiment with three different random seeds, and computed the confidence intervals using the standard deviation across the runs.
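For concreteness, a minimal sketch of the linear annealing schedule (the exact functional form is our assumption; the text only states that the outer step size is linearly annealed to 0):

```python
# Linearly anneal the outer step size from its initial value to 0 over meta-training.
def outer_step_size(iteration, total_iterations, initial=1.0):
    return initial * (1.0 - iteration / total_iterations)

# e.g., with 100K outer iterations the step size decays from 1.0 toward 0.0:
print(outer_step_size(0, 100_000), outer_step_size(50_000, 100_000), outer_step_size(99_999, 100_000))
```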

Initially, we tried optimizing the Reptile hyper-parameters using CMA-ES [6]. However, we found that most hyper-parameters had little effect on the resulting performance. After seeing this result, we simplified all of the hyper-parameters and shared hyper-parameters between experiments when it made sense.

Table 3: Reptile hyper-parameters for the Omniglot comparison between all algorithms.

Parameter              | 5-way  | 20-way
Adam learning rate     | 0.001  | 0.0005
Inner batch size       | 10     | 20
Inner iterations       | 5      | 10
Training shots         | 10     | 10
Outer step size        | 1.0    | 1.0
Outer iterations       | 100K   | 200K
Meta-batch size        | 5      | 5
Eval. inner iterations | 50     | 50
Eval. inner batch      | 5      | 10

Table 4: Reptile hyper-parameters for the Mini-ImageNet comparison between all algorithms.

Parameter              | 1-shot | 5-shot
Adam learning rate     | 0.001  | 0.001
Inner batch size       | 10     | 10
Inner iterations       | 8      | 8
Training shots         | 15     | 15
Outer step size        | 1.0    | 1.0
Outer iterations       | 100K   | 100K
Meta-batch size        | 5      | 5
Eval. inner batch size | 5      | 15
Eval. inner iterations | 50     | 50

Table 5: Hyper-parameters for Section 6.2. All outer step sizes were linearly annealed to zero during training.

Parameter              | Value
Inner learning rate    | 3 × 10⁻³
Inner batch size       | 25
Outer step size        | 0.25
Outer iterations       | 40K
Eval. inner batch size | 25
Eval. inner iterations | 5

Table 6: Hyper-parameters for Section 6.3. All outer step sizes were linearly annealed to zero during training. Cells marked "varied" correspond to the parameter swept in that figure.

Parameter              | Figure 4b | Figure 4a | Figure 4c
Inner learning rate    | 3 × 10⁻³  | 3 × 10⁻³  | 3 × 10⁻³
Inner batch size       | varied    | 25        | 100
Inner iterations       | 4         | varied    | 4
Outer step size        | 1.0       | 1.0       | varied
Outer iterations       | 40K       | 40K       | 40K
Eval. inner batch size | 25        | 25        | 25
Eval. inner iterations | 5         | 5         | 5