{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Anomaly Detection in Camera Trap Images - Implementation\n", "This notebook is an index for the implementation part of my bachelor thesis 'Anomaly Detection in Camera Trap Images'." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Approach 1: Lapse Frame Differencing\n", "**Note:** Before this approach can be used on a new session, lapse maps must be generated for that session using *scan_sessions.ipynb*.\n", "\n", " - *[approach1a_basic_frame_differencing.ipynb](approach1a_basic_frame_differencing.ipynb)*: Implementation.\n", " - *[approach1b_histograms.ipynb](approach1b_histograms.ipynb)*: Discarded variant that compares Lapse and Motion images via histogram distributions.\n", "\n", "## Approach 2: Median Frame Differencing\n", " - *[approach2_background_estimation.ipynb](approach2_background_estimation.ipynb)*: Implementation.\n", "\n", "## Approach 3: Bag of Visual Words\n", "### Notebooks\n", " - *[approach3_local_features.ipynb](approach3_local_features.ipynb)*: Visualizations and evaluation of single trainings.\n", " - *[approach3_boxplot.ipynb](approach3_boxplot.ipynb)*: Boxplot comparing multiple vocabularies generated from random prototypes.\n", "\n", "### Scripts\n", " - *[train_bow.py](train_bow.py)*: Training of the BoW model.\n", " - *[eval_bow.py](eval_bow.py)*: Evaluation of the trained BoW model.\n", "\n", "## Approach 4: Autoencoder\n", "### Notebooks\n", " - *[approach4_autoencoder.ipynb](approach4_autoencoder.ipynb)*: Visualizations and evaluation of single trainings.\n", " - *[approach4_boxplot.ipynb](approach4_boxplot.ipynb)*: Boxplot comparing different trainings.\n", "\n", "### Scripts\n", " - *[train_autoencoder.py](train_autoencoder.py)*: Training of the autoencoder.\n", " - *[eval_autoencoder.py](eval_autoencoder.py)*: Evaluation of the trained autoencoder.\n", "\n", "## Helpers\n", " - *[analyze_dataset.ipynb](analyze_dataset.ipynb)*: Dataset statistics and duplicate check.\n", " - *[analyze_labels.ipynb](analyze_labels.ipynb)*: Annotation statistics (number of normal/anomalous motion samples).\n", " - *[check_csv.ipynb](check_csv.ipynb)*: Loads annotations from *Kadaverbilder_leer.csv*.\n", " - *[generate_lapseless_session.ipynb](generate_lapseless_session.ipynb)*: Generates a session with artificial lapse data from a lapseless session (e.g., Fox_03 -> GFox_03).\n", " - *[quick_label.py](quick_label.py)*: Minimal quick-labeling script using OpenCV.\n", " - *[read_csv_annotations.ipynb](read_csv_annotations.ipynb)*: Loads annotations from *observations.csv* and *media.csv*.\n", " - *[resize_session.ipynb](resize_session.ipynb)*: Session preprocessing (crops and resizes images).\n", " - *[scan_sessions.ipynb](scan_sessions.ipynb)*: Creates lapse maps (mappings between lapse images and their EXIF dates) and reports inconsistencies in sessions.\n", "\n", "## Early experiments\n", " - *[deprecated/experiments.ipynb](deprecated/experiments.ipynb)*: Early experiments with lapse images and frame differencing." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Copyright © 2023 Felix Kleinsteuber and Computer Vision Group, Friedrich Schiller University Jena" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.10.4 ('pytorch-gpu')", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.10.4" }, "orig_nbformat": 4, "vscode": { "interpreter": { "hash": "17cd5c528a3345b75540c61f907eece919c031d57a2ca1e5653325af249173c9" } } }, "nbformat": 4, "nbformat_minor": 2 }