{ "cells": [ { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "---\n", "description: Continual Learning Algorithms Prototyping Made Easy\n", "---\n", "# Training\n", "\n", "Welcome to the \"_Training_\" tutorial of the \"_From Zero to Hero_\" series. In this part we will present the functionalities offered by the `training` module." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "!pip install git+https://github.com/ContinualAI/avalanche.git" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## πŸ’ͺ The Training Module\n", "\n", "The `training` module in _Avalanche_ is designed with modularity in mind. Its main goals are to:\n", "\n", "1. provide a set of popular **continual learning baselines** that can be easily used to run experimental comparisons;\n", "2. provide simple abstractions to **create and run your own strategy** as efficiently and easy as possible starting from a couple of basic building blocks we already prepared for you.\n", "\n", "At the moment, the `training` module includes two main components:\n", "\n", "* **Strategies**: these are popular baselines already implemented for you which you can use for comparisons or as base classes to define a custom strategy.\n", "* **Plugins**: these are classes that allow to add some specific behaviour to your own strategy. The plugin system allows to define reusable components which can be easily combined together (e.g. a replay strategy, a regularization strategy). They are also used to automatically manage logging and evaluation.\n", "\n", "Keep in mind that Avalanche's components are mostly independent from each other. If you already have your own strategy which does not use Avalanche, you can use benchmarks and metrics without ever looking at Avalanche's strategies.\n", "\n", "## πŸ“ˆ How to Use Strategies & Plugins\n", "\n", "If you want to compare your strategy with other classic continual learning algorithm or baselines, in _Avalanche_ you can instantiate a strategy with a couple lines of code.\n", "\n", "### Strategy Instantiation\n", "Most strategies require only 3 mandatory arguments:\n", "- **model**: this must be a `torch.nn.Module`.\n", "- **optimizer**: `torch.optim.Optimizer` already initialized on your `model`.\n", "- **loss**: a loss function such as those in `torch.nn.functional`.\n", "\n", "Additional arguments are optional and allow you to customize training (batch size, epochs, ...) or strategy-specific parameters (buffer size, regularization strenght, ...)." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from torch.optim import SGD\n", "from torch.nn import CrossEntropyLoss\n", "from avalanche.models import SimpleMLP\n", "from avalanche.training.strategies import Naive, CWRStar, Replay, GDumb, Cumulative, LwF, GEM, AGEM, EWC\n", "\n", "model = SimpleMLP(num_classes=10)\n", "optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)\n", "criterion = CrossEntropyLoss()\n", "cl_strategy = Naive(\n", " model, optimizer, criterion, \n", " train_mb_size=100, train_epochs=4, eval_mb_size=100\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training & Evaluation\n", "\n", "Each strategy object offers two main methods: `train` and `eval`. 
Both of them, accept either a _single experience_(`Experience`) or a _list of them_, for maximum flexibility.\n", "\n", "We can train the model continually by iterating over the `train_stream` provided by the scenario." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Starting experiment...\n", "Start of experience: 0\n", "Current Classes: [5, 6]\n", "-- >> Start of training phase << --\n", "-- Starting training on experience 0 (Task 0) from train stream --\n", " 6%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 7/114 [00:00<00:06, 16.87it/s]" ] }, { "name": "stderr", "output_type": "stream", "text": [ "C:\\Users\\w-32\\Anaconda3\\envs\\avalanche-env\\lib\\site-packages\\torch\\autograd\\__init__.py:130: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at ..\\c10\\cuda\\CUDAFunctions.cpp:100.)\n", " Variable._execution_engine.run_backward(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 114/114 [00:03<00:00, 30.21it/s]\n", "Epoch 0 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.3895\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9004\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 114/114 [00:03<00:00, 32.00it/s]\n", "Epoch 1 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.1044\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9668\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 114/114 [00:03<00:00, 31.91it/s]\n", "Epoch 2 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0866\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9717\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 114/114 [00:03<00:00, 30.46it/s]\n", "Epoch 3 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0735\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9766\n", "-- >> End of training phase << --\n", "Training completed\n", "Computing accuracy on the whole test set\n", "-- >> Start of eval phase << --\n", "-- Starting eval on experience 0 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:00<00:00, 37.18it/s]\n", "> Eval on experience 0 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp000 = 0.0650\n", 
"\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.9773\n", "-- Starting eval on experience 1 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:00<00:00, 35.77it/s]\n", "> Eval on experience 1 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp001 = 7.7244\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.0000\n", "-- Starting eval on experience 2 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 33.00it/s]\n", "> Eval on experience 2 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp002 = 9.9629\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0000\n", "-- Starting eval on experience 3 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 34.43it/s]\n", "> Eval on experience 3 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp003 = 10.0202\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0000\n", "-- Starting eval on experience 4 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 34.71it/s]\n", "> Eval on experience 4 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp004 = 8.2904\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0000\n", "-- >> End of eval phase << --\n", "\tLoss_Stream/eval_phase/test_stream = 7.3221\n", "\tTop1_Acc_Stream/eval_phase/test_stream = 0.1808\n", "Start of experience: 1\n", "Current Classes: [1, 2]\n", "-- >> Start of training phase << --\n", "-- Starting training on experience 1 (Task 0) from train stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 127/127 [00:04<00:00, 29.55it/s]\n", "Epoch 0 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.6087\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.8744\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 127/127 [00:04<00:00, 28.15it/s]\n", "Epoch 1 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0657\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9810\n", 
"100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 127/127 [00:04<00:00, 28.82it/s]\n", "Epoch 2 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0500\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9857\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 127/127 [00:03<00:00, 32.60it/s]\n", "Epoch 3 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0456\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9865\n", "-- >> End of training phase << --\n", "Training completed\n", "Computing accuracy on the whole test set\n", "-- >> Start of eval phase << --\n", "-- Starting eval on experience 0 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:00<00:00, 39.83it/s]\n", "> Eval on experience 0 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp000 = 7.2488\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.0000\n", "-- Starting eval on experience 1 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:00<00:00, 39.71it/s]\n", "> Eval on experience 1 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp001 = 0.0314\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.9917\n", "-- Starting eval on experience 2 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 39.92it/s]\n", "> Eval on experience 2 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp002 = 9.9184\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0000\n", "-- Starting eval on experience 3 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 41.02it/s]\n", "> Eval on experience 3 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp003 = 8.9548\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0000\n", "-- Starting eval on experience 4 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 39.33it/s]\n", "> 
Eval on experience 4 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp004 = 7.9574\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0000\n", "-- >> End of eval phase << --\n", "\tLoss_Stream/eval_phase/test_stream = 6.6933\n", "\tTop1_Acc_Stream/eval_phase/test_stream = 0.2149\n", "Start of experience: 2\n", "Current Classes: [0, 8]\n", "-- >> Start of training phase << --\n", "-- Starting training on experience 2 (Task 0) from train stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 118/118 [00:03<00:00, 33.42it/s]\n", "Epoch 0 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.7654\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.8548\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 118/118 [00:03<00:00, 33.74it/s]\n", "Epoch 1 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0589\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9827\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 118/118 [00:03<00:00, 33.07it/s]\n", "Epoch 2 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0458\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9861\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 118/118 [00:03<00:00, 30.55it/s]\n", "Epoch 3 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0403\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9885\n", "-- >> End of training phase << --\n", "Training completed\n", "Computing accuracy on the whole test set\n", "-- >> Start of eval phase << --\n", "-- Starting eval on experience 0 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:00<00:00, 37.40it/s]\n", "> Eval on experience 0 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp000 = 6.4470\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.0000\n", "-- Starting eval on experience 1 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:00<00:00, 38.13it/s]\n", "> Eval on experience 1 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp001 = 6.1095\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.0000\n", "-- Starting eval on experience 2 (Task 0) from test stream --\n" ] }, { "name": "stdout", 
"output_type": "stream", "text": [ "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 39.37it/s]\n", "> Eval on experience 2 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0302\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.9918\n", "-- Starting eval on experience 3 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 39.85it/s]\n", "> Eval on experience 3 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp003 = 9.1634\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0000\n", "-- Starting eval on experience 4 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 41.02it/s]\n", "> Eval on experience 4 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp004 = 7.6975\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0000\n", "-- >> End of eval phase << --\n", "\tLoss_Stream/eval_phase/test_stream = 5.9198\n", "\tTop1_Acc_Stream/eval_phase/test_stream = 0.1938\n", "Start of experience: 3\n", "Current Classes: [9, 3]\n", "-- >> Start of training phase << --\n", "-- Starting training on experience 3 (Task 0) from train stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 121/121 [00:03<00:00, 33.38it/s]\n", "Epoch 0 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.8818\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.8196\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 121/121 [00:03<00:00, 32.57it/s]\n", "Epoch 1 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.1196\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9648\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 121/121 [00:03<00:00, 34.24it/s]\n", "Epoch 2 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0959\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9704\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 121/121 [00:03<00:00, 33.98it/s]\n", "Epoch 3 ended.\n", 
"\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0842\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9731\n", "-- >> End of training phase << --\n", "Training completed\n", "Computing accuracy on the whole test set\n", "-- >> Start of eval phase << --\n", "-- Starting eval on experience 0 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:00<00:00, 39.91it/s]\n", "> Eval on experience 0 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp000 = 7.2507\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.0000\n", "-- Starting eval on experience 1 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:00<00:00, 40.37it/s]\n", "> Eval on experience 1 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp001 = 6.4701\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.0000\n", "-- Starting eval on experience 2 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 40.49it/s]\n", "> Eval on experience 2 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp002 = 7.0259\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0010\n", "-- Starting eval on experience 3 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 41.34it/s]\n", "> Eval on experience 3 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0645\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.9772\n", "-- Starting eval on experience 4 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 39.55it/s]\n", "> Eval on experience 4 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp004 = 9.6361\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0000\n", "-- >> End of eval phase << --\n", "\tLoss_Stream/eval_phase/test_stream = 6.0662\n", "\tTop1_Acc_Stream/eval_phase/test_stream = 0.1975\n", "Start of experience: 4\n", "Current Classes: [4, 7]\n", "-- >> Start of training phase << --\n", "-- Starting training on experience 4 (Task 0) from train stream --\n", 
"100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 122/122 [00:03<00:00, 33.41it/s]\n", "Epoch 0 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 1.0003\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.8052\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 122/122 [00:03<00:00, 31.35it/s]\n", "Epoch 1 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0959\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9725\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 122/122 [00:03<00:00, 35.41it/s]\n", "Epoch 2 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0703\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9800\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 122/122 [00:03<00:00, 32.85it/s]\n", "Epoch 3 ended.\n", "\tLoss_Epoch/train_phase/train_stream/Task000 = 0.0579\n", "\tTop1_Acc_Epoch/train_phase/train_stream/Task000 = 0.9830\n", "-- >> End of training phase << --\n", "Training completed\n", "Computing accuracy on the whole test set\n", "-- >> Start of eval phase << --\n", "-- Starting eval on experience 0 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:00<00:00, 39.50it/s]\n", "> Eval on experience 0 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp000 = 7.6022\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp000 = 0.0000\n", "-- Starting eval on experience 1 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:00<00:00, 40.52it/s]\n", "> Eval on experience 1 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp001 = 5.8017\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp001 = 0.0000\n", "-- Starting eval on experience 2 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 33.56it/s]\n", "> Eval on experience 2 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp002 = 6.5952\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp002 = 0.0020\n", "-- Starting eval on experience 3 (Task 0) from 
test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 32.92it/s]\n", "> Eval on experience 3 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp003 = 6.5851\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp003 = 0.0411\n", "-- Starting eval on experience 4 (Task 0) from test stream --\n", "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 33.33it/s]\n", "> Eval on experience 4 (Task 0) from test stream ended.\n", "\tLoss_Exp/eval_phase/test_stream/Task000/Exp004 = 0.0413\n", "\tTop1_Acc_Exp/eval_phase/test_stream/Task000/Exp004 = 0.9886\n", "-- >> End of eval phase << --\n", "\tLoss_Stream/eval_phase/test_stream = 5.2902\n", "\tTop1_Acc_Stream/eval_phase/test_stream = 0.2074\n" ] } ], "source": [ "from avalanche.benchmarks.classic import SplitMNIST\n", "\n", "# scenario\n", "scenario = SplitMNIST(n_experiences=5, seed=1)\n", "\n", "# TRAINING LOOP\n", "print('Starting experiment...')\n", "results = []\n", "for experience in scenario.train_stream:\n", "    print(\"Start of experience: \", experience.current_experience)\n", "    print(\"Current Classes: \", experience.classes_in_this_experience)\n", "\n", "    cl_strategy.train(experience)\n", "    print('Training completed')\n", "\n", "    print('Computing accuracy on the whole test set')\n", "    results.append(cl_strategy.eval(scenario.test_stream))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding Plugins\n", "\n", "We noticed that many continual learning strategies follow roughly the same training/evaluation loops and always implement the same boilerplate code. So, it seems natural to define most strategies by specializing the few methods that need to be changed. Most strategies only _augment_ the naive strategy with additional behavior, without changing the basic training and evaluation loops. These strategies can be easily implemented with a couple of methods.\n", "\n", "_Avalanche_'s plugins allow you to augment a strategy with additional behavior. Currently, most continual learning strategies are also implemented as plugins, which makes them easy to combine. For example, it is extremely easy to create a hybrid strategy that combines replay and EWC by passing the appropriate `plugins` list to the `BaseStrategy`:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from avalanche.training.strategies import BaseStrategy\n", "from avalanche.training.plugins import ReplayPlugin, EWCPlugin\n", "\n", "replay = ReplayPlugin(mem_size=100)\n", "ewc = EWCPlugin(ewc_lambda=0.001)\n", "strategy = BaseStrategy(\n", "    model, optimizer, criterion,\n", "    plugins=[replay, ewc])" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## πŸ“ Create your Strategy\n", "\n", "In _Avalanche_ you can build your own strategy in 2 main ways:\n", "\n", "1. **Plugins**: Most strategies can be defined by _augmenting_ the basic training and evaluation loops. 
The easiest way to define a custom strategy, such as a regularization or replay strategy, is to implement it as a custom plugin. This is the suggested approach for most use cases.\n", "2. **Subclassing**: In _Avalanche_, continual learning strategies inherit from the `BaseStrategy`, which provides generic training and evaluation loops. You can safely override most methods to customize your strategy. However, there are some caveats to discuss (see later) and in general this approach is more difficult than using plugins.\n", "\n", "Keep in mind that if you already have a continual learning strategy that does not use _Avalanche_, you can always use `benchmarks` and `evaluation` without using _Avalanche_'s strategies!\n", "\n", "### Avalanche Strategies - Under the Hood\n", "\n", "As we already mentioned, _Avalanche_ strategies inherit from `BaseStrategy`. This base class provides:\n", "\n", "1. Basic _Training_ and _Evaluation_ loops which define a naive strategy.\n", "2. _Callback_ points, which can be used to augment the strategy's behavior.\n", "3. A set of variables representing the state of the loops (current batch, predictions, ...) which allows plugins and child classes to easily manipulate the state of the training loop.\n", "\n", "The training loop has the following structure:\n", "```text\n", "train\n", "    before_training\n", "    before_training_exp\n", "    adapt_train_dataset\n", "    make_train_dataloader\n", "    before_training_epoch\n", "    before_training_iteration\n", "    before_forward\n", "    after_forward\n", "    before_backward\n", "    after_backward\n", "    after_training_iteration\n", "    before_update\n", "    after_update\n", "    after_training_epoch\n", "    after_training_exp\n", "    after_training\n", "```\n", "\n", "The evaluation loop is similar:\n", "```text\n", "eval\n", "    before_eval\n", "    adapt_eval_dataset\n", "    make_eval_dataloader\n", "    before_eval_exp\n", "    eval_epoch\n", "    before_eval_iteration\n", "    before_eval_forward\n", "    after_eval_forward\n", "    after_eval_iteration\n", "    after_eval_exp\n", "    after_eval\n", "```\n", "\n", "### Custom Plugin\n", "Plugins provide a simple solution to define a new strategy by augmenting the behavior of another strategy (typically a naive strategy). This approach reduces overhead and code duplication, **improving code readability and prototyping speed**.\n", "\n", "Creating a plugin is straightforward. You create a class which inherits from `StrategyPlugin` and implements the callbacks that you need. The exact callbacks to use depend on your strategy. For example, the following replay plugin uses `after_training_exp` to update the buffer after each training experience, and `adapt_train_dataset` to concatenate the buffer's data with the current experience:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from avalanche.training.plugins import StrategyPlugin\n", "from torch.utils.data import ConcatDataset, random_split\n", "\n", "class ReplayPlugin(StrategyPlugin):\n", "    \"\"\"\n", "    Experience replay plugin.\n", "\n", "    Handles an external memory filled with randomly selected\n", "    patterns and implements the \"adapt_train_dataset\" callback to add them to\n", "    the training set.\n", "\n", "    The :mem_size: attribute controls the number of patterns to be stored in\n", "    the external memory. In multitask scenarios, mem_size is the memory size\n", "    for each task. We assume the training set contains at least :mem_size: data\n", "    points.\n", "    \"\"\"\n", "\n", "    def __init__(self, mem_size=200):\n", "        super().__init__()\n", "\n", "        self.mem_size = mem_size\n", "        self.ext_mem = {}  # a Dict mapping task ids to memory datasets\n", "        self.rm_add = None\n", "\n", "    def adapt_train_dataset(self, strategy, **kwargs):\n", "        \"\"\"\n", "        Expands the current training set with datapoints from\n", "        the external memory before training.\n", "        \"\"\"\n", "        curr_data = strategy.experience.dataset\n", "\n", "        # Additional set of the current batch to be concatenated to the ext.\n", "        # memory at the end of the training\n", "        self.rm_add = None\n", "\n", "        # how many patterns to save for next iter\n", "        h = min(self.mem_size // (strategy.training_exp_counter + 1),\n", "                len(curr_data))\n", "\n", "        # We recover it using the random_split method and getting rid of the\n", "        # second split.\n", "        self.rm_add, _ = random_split(\n", "            curr_data, [h, len(curr_data) - h]\n", "        )\n", "\n", "        if strategy.training_exp_counter > 0:\n", "            # We update the train_dataset concatenating the external memory.\n", "            # We assume the user will shuffle the data when creating the data\n", "            # loader.\n", "            for mem_task_id in self.ext_mem.keys():\n", "                mem_data = self.ext_mem[mem_task_id]\n", "                if mem_task_id in strategy.adapted_dataset:\n", "                    cat_data = ConcatDataset([curr_data, mem_data])\n", "                    strategy.adapted_dataset[mem_task_id] = cat_data\n", "                else:\n", "                    strategy.adapted_dataset[mem_task_id] = mem_data\n", "\n", "    def after_training_exp(self, strategy, **kwargs):\n", "        \"\"\" After training we update the external memory with the patterns of\n", "        the current training batch/task. \"\"\"\n", "        curr_task_id = strategy.experience.task_label\n", "\n", "        # replace patterns in random memory\n", "        ext_mem = self.ext_mem\n", "        if curr_task_id not in ext_mem:\n", "            ext_mem[curr_task_id] = self.rm_add\n", "        else:\n", "            rem_len = len(ext_mem[curr_task_id]) - len(self.rm_add)\n", "            _, saved_part = random_split(ext_mem[curr_task_id],\n", "                                         [len(self.rm_add), rem_len])\n", "            ext_mem[curr_task_id] = ConcatDataset([saved_part, self.rm_add])\n", "        self.ext_mem = ext_mem" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Check `StrategyPlugin`'s documentation for a complete list of the available callbacks." ] },
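{ "cell_type": "markdown", "metadata": {}, "source": [ "Once defined, a custom plugin is used exactly like the built-in ones: you pass an instance of it in the `plugins` list of a strategy. Below is a minimal sketch of how the `ReplayPlugin` defined above could be attached to a strategy, assuming the earlier cells of this notebook (which define `model`, `optimizer`, `criterion` and import `BaseStrategy`) have been run:\n", "\n", "```python\n", "# Attach the custom plugin to a strategy; it can be freely combined\n", "# with other plugins (e.g. the EWCPlugin used earlier).\n", "my_replay = ReplayPlugin(mem_size=200)\n", "replay_strategy = BaseStrategy(\n", "    model, optimizer, criterion,\n", "    plugins=[my_replay])\n", "```" ] },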
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Custom Strategy\n", "\n", "You can always define a custom strategy by overriding `BaseStrategy` methods.\n", "However, there is an important caveat to keep in mind. If you override a method, you must remember to call all the callback handlers at the appropriate points. For example, `train` calls `before_training` and `after_training` before and after the training loops, respectively. If your strategy does not call them, plugins may not work as expected. The easiest way to avoid mistakes is to start from the `BaseStrategy` method that you want to override and modify it to your own needs without removing the callback handling.\n", "\n", "There is only a single plugin that is always used by default, the `EvaluationPlugin` (see the `evaluation` tutorial). This means that if you break callbacks you must log metrics by yourself. This is totally possible but requires some manual work to update, log, and reset each metric, which is done automatically for you by the `BaseStrategy`.\n",
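"\n", "If you need extra behaviour around an entire phase, a safe pattern is to wrap the base implementation rather than rewrite it, so that every callback keeps firing. The class below is only a sketch, not part of _Avalanche_, and the parameter names are illustrative:\n", "\n", "```python\n", "class VerboseStrategy(BaseStrategy):\n", "    def train(self, experiences, **kwargs):\n", "        # Delegate to BaseStrategy.train, which takes care of calling\n", "        # before_training, after_training and all the other callbacks.\n", "        print(\"Training phase starting\")\n", "        res = super().train(experiences, **kwargs)\n", "        print(\"Training phase finished\")\n", "        return res\n", "```\n",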
"\n", "`BaseStrategy` provides the global state of the loop in the strategy's attributes, which you can safely use when you override a method. As an example, the `Cumulative` strategy trains a model continually on the union of all the experiences encountered so far. To achieve this, the cumulative strategy overrides `adapt_train_dataset` and updates `self.adapted_dataset` by concatenating all the previous experiences with the current one." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from torch.utils.data import ConcatDataset\n", "\n", "class Cumulative(BaseStrategy):\n", "    def __init__(self, *args, **kwargs):\n", "        super().__init__(*args, **kwargs)\n", "        self.dataset = {}  # cumulative dataset\n", "\n", "    def adapt_train_dataset(self, **kwargs):\n", "        super().adapt_train_dataset(**kwargs)\n", "        curr_task_id = self.experience.task_label\n", "        curr_data = self.experience.dataset\n", "        if curr_task_id in self.dataset:\n", "            cat_data = ConcatDataset([self.dataset[curr_task_id],\n", "                                      curr_data])\n", "            self.dataset[curr_task_id] = cat_data\n", "        else:\n", "            self.dataset[curr_task_id] = curr_data\n", "        self.adapted_dataset = self.dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Easy, isn't it? :-\\)\n", "\n", "In general, we recommend _implementing a strategy via plugins_, if possible. This approach is the easiest to use and requires minimal knowledge of the `BaseStrategy`. It also allows other people to use your plugin and facilitates interoperability among different strategies.\n", "\n", "For example, replay strategies can be implemented either as subclasses of `BaseStrategy` or as plugins. However, creating a plugin is better because it allows us to use the replay strategy in conjunction with other strategies." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This completes the \"_Training_\" chapter for the \"_From Zero to Hero_\" series. We hope you enjoyed it!\n", "\n", "## 🀝 Run it on Google Colab\n", "\n", "You can run _this chapter_ and play with it on Google Colaboratory: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ContinualAI/colab/blob/master/notebooks/avalanche/3.-training.ipynb)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.2" } }, "nbformat": 4, "nbformat_minor": 2 }