{ "cells": [ { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "---\n", "description: Automatic Evaluation with Pre-implemented Metrics\n", "---\n", "\n", "# Evaluation\n", "\n", "Welcome to the \"_Evaluation_\" tutorial of the \"_From Zero to Hero_\" series. In this part we will present the functionalities offered by the `evaluation` module." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "!pip install git+https://github.com/ContinualAI/avalanche.git" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 📈 The Evaluation Module\n", "\n", "\n", "\n", "The `evaluation` module is quite straightforward: it offers all the basic functionalities to evaluate and keep track of a continual learning experiment.\n", "\n", "This is mostly done through the **Metrics**: a set of classes which implement the main continual learning metrics computation like A_ccuracy_, F_orgetting_, M_emory Usage_, R_unning Times_, etc. At the moment, in _Avalanche_ we offer a number of pre-implemented metrics you can use for your own experiments. We made sure to include all the major accuracy-based metrics but also the ones related to computation and memory.\n", "\n", "Each metric comes with a standalone class and a set of plugin classes aimed at emitting metric values on specific moments during training and evaluation.\n", "\n", "#### Standalone metric\n", "\n", "As an example, the standalone `Accuracy` class can be used to monitor the average accuracy over a stream of `` pairs. The class provides an `update` method to update the current average accuracy, a `result` method to print the current average accuracy and a `reset` method to set the current average accuracy to zero. The call to `result`does not change the metric state." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Initial Accuracy: 0.0\n", "Average Accuracy: 0.5\n", "Average Accuracy: 0.75\n", "After reset: 0.0\n" ] } ], "source": [ "import torch\n", "from avalanche.evaluation.metrics import Accuracy\n", "\n", "# create an instance of the standalone Accuracy metric\n", "# initial accuracy is 0\n", "acc_metric = Accuracy()\n", "print(\"Initial Accuracy: \", acc_metric.result()) # output 0\n", "\n", "# two consecutive metric updates\n", "real_y = torch.tensor([1, 2]).long()\n", "predicted_y = torch.tensor([1, 0]).float()\n", "acc_metric.update(real_y, predicted_y)\n", "acc = acc_metric.result()\n", "print(\"Average Accuracy: \", acc) # output 0.5\n", "predicted_y = torch.tensor([1,2]).float()\n", "acc_metric.update(real_y, predicted_y)\n", "acc = acc_metric.result()\n", "print(\"Average Accuracy: \", acc) # output 0.75\n", "\n", "# reset accuracy to 0\n", "acc_metric.reset()\n", "print(\"After reset: \", acc_metric.result()) # output 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Plugin metric\n", "\n", "If you want to integrate the available metrics automatically in the training and evaluation flow, you can use plugion metrics, like `EpochAccuracy` which logs the accuracy after each training epoch, or `ExperienceAccuracy` which logs the accuracy after each evaluation experience. Each of these metrics emits a **curve** composed by its values at different points in time \\(e.g. on different training epochs\\). 
In order to simplify the use of these metrics, we provide utility functions with which you can create different plugin metrics in one shot. The results of these functions can be passed as parameters directly to the `EvaluationPlugin` \\(see below\\).\n", "\n", "{% hint style=\"info\" %}\n", "We recommend using the helper functions when creating plugin metrics.\n", "{% endhint %}" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from avalanche.evaluation.metrics import accuracy_metrics, \\\n", "    loss_metrics, forgetting_metrics, bwt_metrics, \\\n", "    confusion_matrix_metrics, cpu_usage_metrics, \\\n", "    disk_usage_metrics, gpu_usage_metrics, MAC_metrics, \\\n", "    ram_usage_metrics, timing_metrics\n", "\n", "# you may pass the result to the EvaluationPlugin\n", "metrics = accuracy_metrics(epoch=True, experience=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The metrics available in the current _Avalanche_ release are:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from avalanche.evaluation.metrics import Accuracy, \\\n", "    MinibatchAccuracy, EpochAccuracy, RunningEpochAccuracy, \\\n", "    ExperienceAccuracy, StreamAccuracy, \\\n", "    Loss, MinibatchLoss, EpochLoss, RunningEpochLoss, \\\n", "    ExperienceLoss, StreamLoss, \\\n", "    Forgetting, ExperienceForgetting, StreamForgetting, \\\n", "    BWT, ExperienceBWT, StreamBWT, \\\n", "    ConfusionMatrix, StreamConfusionMatrix, WandBStreamConfusionMatrix, \\\n", "    CPUUsage, MinibatchCPUUsage, EpochCPUUsage, RunningEpochCPUUsage, \\\n", "    ExperienceCPUUsage, StreamCPUUsage, \\\n", "    DiskUsage, MinibatchDiskUsage, EpochDiskUsage, \\\n", "    ExperienceDiskUsage, StreamDiskUsage, \\\n", "    MaxGPU, MinibatchMaxGPU, EpochMaxGPU, ExperienceMaxGPU, \\\n", "    StreamMaxGPU, \\\n", "    MAC, MinibatchMAC, EpochMAC, ExperienceMAC, \\\n", "    MaxRAM, MinibatchMaxRAM, EpochMaxRAM, ExperienceMaxRAM, \\\n", "    StreamMaxRAM, \\\n", "    ElapsedTime, MinibatchTime, EpochTime, RunningEpochTime, \\\n", "    ExperienceTime, StreamTime" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 📐 Evaluation Plugin\n", "\n", "The **Evaluation Plugin** is the object in charge of configuring and controlling the evaluation procedure. This object can be passed to a strategy as a \"special\" plugin through the `evaluator` attribute.\n", "\n", "The Evaluation Plugin accepts as inputs the plugin metrics you want to track. In addition, you can add one or more loggers to print the metrics in different ways \\(on file, on standard output, on Tensorboard...\\)."
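, "\n", "\n", "As a minimal sketch, the snippet below attaches accuracy metrics and an interactive logger to an `EvaluationPlugin`. It assumes the `loggers` argument and the `InteractiveLogger` class from `avalanche.logging`, which prints metric values to the standard output:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from avalanche.evaluation.metrics import accuracy_metrics\n", "from avalanche.logging import InteractiveLogger  # assumed location of the logger\n", "from avalanche.training.plugins import EvaluationPlugin\n", "\n", "# a minimal evaluation plugin: accuracy after each evaluation experience\n", "# and on the whole stream, printed to stdout by the interactive logger\n", "simple_eval_plugin = EvaluationPlugin(\n", "    accuracy_metrics(experience=True, stream=True),\n", "    loggers=[InteractiveLogger()])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The complete example below creates an evaluation plugin with several metrics, attaches it to a `Naive` strategy through the `evaluator` argument and runs a full training and evaluation loop."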
] }, { "cell_type": "code", "execution_count": 16, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Starting experiment...\n", "Start of experience: 0\n", "Current Classes: [1, 9]\n", "Training completed\n", "Computing accuracy on the whole test set\n", "Start of experience: 1\n", "Current Classes: [0, 8]\n", "Training completed\n", "Computing accuracy on the whole test set\n", "Start of experience: 2\n", "Current Classes: [3, 6]\n", "Training completed\n", "Computing accuracy on the whole test set\n", "Start of experience: 3\n", "Current Classes: [2, 7]\n", "Training completed\n", "Computing accuracy on the whole test set\n", "Start of experience: 4\n", "Current Classes: [4, 5]\n", "Training completed\n", "Computing accuracy on the whole test set\n" ] } ], "source": [ "from torch.nn import CrossEntropyLoss\n", "from torch.optim import SGD\n", "from avalanche.benchmarks.classic import SplitMNIST\n", "from avalanche.evaluation.metrics import forgetting_metrics, \\\n", "accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \\\n", "confusion_matrix_metrics, disk_usage_metrics\n", "from avalanche.models import SimpleMLP\n", "from avalanche.training.plugins import EvaluationPlugin\n", "from avalanche.training.strategies import Naive\n", "\n", "scenario = SplitMNIST(n_experiences=5)\n", "\n", "# MODEL CREATION\n", "model = SimpleMLP(num_classes=scenario.n_classes)\n", "\n", "# DEFINE THE EVALUATION PLUGIN\n", "# The evaluation plugin manages the metrics computation.\n", "# It takes as argument a list of metrics, collectes their results and returns\n", "# them to the strategy it is attached to.\n", "\n", "eval_plugin = EvaluationPlugin(\n", " accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),\n", " loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),\n", " timing_metrics(epoch=True),\n", " forgetting_metrics(experience=True, stream=True),\n", " cpu_usage_metrics(experience=True),\n", " confusion_matrix_metrics(num_classes=scenario.n_classes, save_image=False, stream=True),\n", " disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True)\n", ")\n", "\n", "# CREATE THE STRATEGY INSTANCE (NAIVE)\n", "cl_strategy = Naive(\n", " model, SGD(model.parameters(), lr=0.001, momentum=0.9),\n", " CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,\n", " evaluator=eval_plugin)\n", "\n", "# TRAINING LOOP\n", "print('Starting experiment...')\n", "results = []\n", "for experience in scenario.train_stream:\n", " print(\"Start of experience: \", experience.current_experience)\n", " print(\"Current Classes: \", experience.classes_in_this_experience)\n", "\n", " # train returns a dictionary which contains all the metric values\n", " res = cl_strategy.train(experience)\n", " print('Training completed')\n", "\n", " print('Computing accuracy on the whole test set')\n", " # test also returns a dictionary which contains all the metric values\n", " results.append(cl_strategy.eval(scenario.test_stream))" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Implement your own metric\n", "\n", "To implement a **standalone metric**, you have to subclass `Metric` class." 
] }, { "cell_type": "code", "execution_count": 17, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from avalanche.evaluation import Metric\n", "\n", "\n", "# a standalone metric implementation\n", "class MyStandaloneMetric(Metric[float]):\n", " \"\"\"\n", " This metric will return a `float` value\n", " \"\"\"\n", " def __init__(self):\n", " \"\"\"\n", " Initialize your metric here\n", " \"\"\"\n", " super().__init__()\n", " pass\n", "\n", " def update(self):\n", " \"\"\"\n", " Update metric value here\n", " \"\"\"\n", " pass\n", "\n", " def result(self) -> float:\n", " \"\"\"\n", " Emit the metric result here\n", " \"\"\"\n", " return 0\n", "\n", " def reset(self):\n", " \"\"\"\n", " Reset your metric here\n", " \"\"\"\n", " pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " To implement a **plugin metric** you have to subclass `MetricPlugin` class" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from avalanche.evaluation import PluginMetric\n", "from avalanche.evaluation.metrics import Accuracy\n", "\n", "\n", "class MyPluginMetric(PluginMetric[float]):\n", " \"\"\"\n", " This metric will return a `float` value after\n", " each training epoch\n", " \"\"\"\n", "\n", " def __init__(self):\n", " \"\"\"\n", " Initialize the metric\n", " \"\"\"\n", " super().__init__()\n", "\n", " self._accuracy_metric = Accuracy()\n", " # current x values for the metric curve\n", " self.x_coord = 0\n", "\n", " def reset(self) -> None:\n", " \"\"\"\n", " Reset the metric\n", " \"\"\"\n", " self._accuracy_metric.reset()\n", "\n", " def result(self) -> float:\n", " \"\"\"\n", " Emit the result\n", " \"\"\"\n", " return self._accuracy_metric.result()\n", "\n", " def after_training_iteration(self, strategy: 'PluggableStrategy') -> None:\n", " \"\"\"\n", " Update the accuracy metric with the current\n", " predictions and targets\n", " \"\"\"\n", " self._accuracy_metric.update(strategy.mb_y,\n", " strategy.logits)\n", "\n", " def before_training_epoch(self, strategy: 'PluggableStrategy') -> None:\n", " \"\"\"\n", " Reset the accuracy before the epoch begins\n", " \"\"\"\n", " self.reset()\n", "\n", " def after_training_epoch(self, strategy: 'PluggableStrategy'):\n", " \"\"\"\n", " Emit the result\n", " \"\"\"\n", " value = self._accuracy_metric.result()\n", " self.x_coord += 1 # increment x value\n", " return [MetricValue(self, 'metric_full_name', value,\n", " self.x_coord)]\n", "\n", " def __str__(self):\n", " \"\"\"\n", " Here you can specify the name of your metric\n", " \"\"\"\n", " return \"Top1_Acc_Epoch\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Accessing metric values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to access all the metrics computed during training and evaluation, you have to make sure that `collect_all=True` is set when creating the `EvaluationPlugin` (default option is `True`). This option maintains an updated version of all metric results in the plugin, which can be retrieved by calling `evaluation_plugin.get_all_metrics()`. You can call this methods whenever you need the metrics. \n", "\n", "The result is a dictionary with full metric names as keys and a tuple of two lists as values. The first list stores all the `x` values recorded for that metric. Each `x` value represents the time step at which the corresponding metric value has been computed. 
The second list stores the metric values associated with the corresponding `x` value. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "eval_plugin = EvaluationPlugin(\n", "    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),\n", "    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),\n", "    forgetting_metrics(experience=True, stream=True),\n", "    timing_metrics(epoch=True),\n", "    cpu_usage_metrics(experience=True),\n", "    confusion_matrix_metrics(num_classes=scenario.n_classes, save_image=False, stream=True),\n", "    disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),\n", "    collect_all=True  # this is the default value anyway\n", ")\n", "\n", "# since no training or evaluation has been performed, this will return an empty dict\n", "metric_dict = eval_plugin.get_all_metrics()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, the `train` and `eval` methods of every strategy return a dictionary storing, for each metric, the last value recorded for that metric. You can use these dictionaries to incrementally accumulate metrics." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This completes the \"_Evaluation_\" tutorial for the \"_From Zero to Hero_\" series. We hope you enjoyed it!\n", "\n", "## 🤝 Run it on Google Colab\n", "\n", "You can run _this chapter_ and play with it on Google Colaboratory: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ContinualAI/colab/blob/master/notebooks/avalanche/4.-evaluation.ipynb)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "nbformat": 4, "nbformat_minor": 2 }