Felix Kleinsteuber 3 years ago
parent
commit
f3916a4103
80 changed files with 194 additions and 78 deletions
  1. 34 12
      approach1a_basic_frame_differencing.ipynb
  2. 8 11
      approach1b_histograms.ipynb
  3. 10 9
      approach2_background_estimation.ipynb
  4. 5 13
      approach3_local_features.ipynb
  5. 77 0
      day_vs_night.ipynb
  6. 10 4
      eval_bow.py
  7. BIN
      plots/approach1a/roc_curves/Beaver_01.png
  8. BIN
      plots/approach1a/roc_curves/Beaver_01_absmean.pdf
  9. BIN
      plots/approach1a/roc_curves/Beaver_01_absmean.png
  10. BIN
      plots/approach1a/roc_curves/Beaver_01_absmean_sigma2.pdf
  11. BIN
      plots/approach1a/roc_curves/Beaver_01_absmean_sigma2.png
  12. BIN
      plots/approach1a/roc_curves/Beaver_01_absmean_sigma4.pdf
  13. BIN
      plots/approach1a/roc_curves/Beaver_01_absmean_sigma4.png
  14. BIN
      plots/approach1a/roc_curves/Beaver_01_absstd.pdf
  15. BIN
      plots/approach1a/roc_curves/Beaver_01_absstd.png
  16. BIN
      plots/approach1a/roc_curves/Beaver_01_absvar_sigma2.pdf
  17. BIN
      plots/approach1a/roc_curves/Beaver_01_absvar_sigma2.png
  18. BIN
      plots/approach1a/roc_curves/Beaver_01_absvar_sigma4.pdf
  19. BIN
      plots/approach1a/roc_curves/Beaver_01_absvar_sigma4.png
  20. BIN
      plots/approach1a/roc_curves/Beaver_01_sigma2.png
  21. BIN
      plots/approach1a/roc_curves/Beaver_01_sigma4.png
  22. BIN
      plots/approach1a/roc_curves/Beaver_01_sqmean.pdf
  23. BIN
      plots/approach1a/roc_curves/Beaver_01_sqmean.png
  24. BIN
      plots/approach1a/roc_curves/Beaver_01_sqmean_sigma2.pdf
  25. BIN
      plots/approach1a/roc_curves/Beaver_01_sqmean_sigma2.png
  26. BIN
      plots/approach1a/roc_curves/Beaver_01_sqmean_sigma4.pdf
  27. BIN
      plots/approach1a/roc_curves/Beaver_01_sqmean_sigma4.png
  28. BIN
      plots/approach1a/roc_curves/Beaver_01_sqvar.pdf
  29. BIN
      plots/approach1a/roc_curves/Beaver_01_sqvar.png
  30. BIN
      plots/approach1a/roc_curves/Beaver_01_sqvar_sigma2.pdf
  31. BIN
      plots/approach1a/roc_curves/Beaver_01_sqvar_sigma2.png
  32. BIN
      plots/approach1a/roc_curves/Beaver_01_sqvar_sigma4.pdf
  33. BIN
      plots/approach1a/roc_curves/Beaver_01_sqvar_sigma4.png
  34. BIN
      plots/approach1a/roc_curves/Marten_01_absmean.pdf
  35. BIN
      plots/approach1a/roc_curves/Marten_01_absmean.png
  36. BIN
      plots/approach1a/roc_curves/Marten_01_absmean_sigma2.pdf
  37. BIN
      plots/approach1a/roc_curves/Marten_01_absmean_sigma2.png
  38. BIN
      plots/approach1a/roc_curves/Marten_01_absmean_sigma4.pdf
  39. BIN
      plots/approach1a/roc_curves/Marten_01_absmean_sigma4.png
  40. BIN
      plots/approach1a/roc_curves/Marten_01_absvar.pdf
  41. BIN
      plots/approach1a/roc_curves/Marten_01_absvar.png
  42. BIN
      plots/approach1a/roc_curves/Marten_01_absvar_sigma2.pdf
  43. BIN
      plots/approach1a/roc_curves/Marten_01_absvar_sigma2.png
  44. BIN
      plots/approach1a/roc_curves/Marten_01_absvar_sigma4.pdf
  45. BIN
      plots/approach1a/roc_curves/Marten_01_absvar_sigma4.png
  46. BIN
      plots/approach1a/roc_curves/Marten_01_sqmean.pdf
  47. BIN
      plots/approach1a/roc_curves/Marten_01_sqmean.png
  48. BIN
      plots/approach1a/roc_curves/Marten_01_sqmean_sigma2.pdf
  49. BIN
      plots/approach1a/roc_curves/Marten_01_sqmean_sigma2.png
  50. BIN
      plots/approach1a/roc_curves/Marten_01_sqmean_sigma4.pdf
  51. BIN
      plots/approach1a/roc_curves/Marten_01_sqmean_sigma4.png
  52. BIN
      plots/approach1a/roc_curves/Marten_01_sqvar.pdf
  53. BIN
      plots/approach1a/roc_curves/Marten_01_sqvar.png
  54. BIN
      plots/approach1a/roc_curves/Marten_01_sqvar_sigma2.pdf
  55. BIN
      plots/approach1a/roc_curves/Marten_01_sqvar_sigma2.png
  56. BIN
      plots/approach1a/roc_curves/Marten_01_sqvar_sigma4.pdf
  57. BIN
      plots/approach1a/roc_curves/Marten_01_sqvar_sigma4.png
  58. BIN
      plots/approach1b/roc_curves/Beaver_01_pmean.pdf
  59. BIN
      plots/approach1b/roc_curves/Beaver_01_pmean.png
  60. BIN
      plots/approach2/roc_curves/Beaver_01_sqmean.pdf
  61. BIN
      plots/approach2/roc_curves/Beaver_01_sqmean.png
  62. BIN
      plots/approach2/roc_curves/Beaver_01_sqmean_sigma2.pdf
  63. BIN
      plots/approach2/roc_curves/Beaver_01_sqmean_sigma2.png
  64. BIN
      plots/approach2/roc_curves/Beaver_01_sqmean_sigma4.pdf
  65. BIN
      plots/approach2/roc_curves/Beaver_01_sqmean_sigma4.png
  66. BIN
      plots/approach2/roc_curves/Beaver_01_sqmean_sigma6.pdf
  67. BIN
      plots/approach2/roc_curves/Beaver_01_sqmean_sigma6.png
  68. BIN
      plots/approach2/roc_curves/Beaver_01_sqvar.pdf
  69. BIN
      plots/approach2/roc_curves/Beaver_01_sqvar.png
  70. BIN
      plots/approach2/roc_curves/Beaver_01_sqvar_sigma2.pdf
  71. BIN
      plots/approach2/roc_curves/Beaver_01_sqvar_sigma2.png
  72. BIN
      plots/approach2/roc_curves/Beaver_01_sqvar_sigma4.pdf
  73. BIN
      plots/approach2/roc_curves/Beaver_01_sqvar_sigma4.png
  74. BIN
      plots/approach2/roc_curves/Beaver_01_sqvar_sigma6.pdf
  75. BIN
      plots/approach2/roc_curves/Beaver_01_sqvar_sigma6.png
  76. 3 1
      py/ImageUtils.py
  77. 10 7
      py/LocalFeatures.py
  78. 11 3
      py/PlotUtils.py
  79. 20 13
      results.ipynb
  80. 6 5
      train_bow.py

File diff suppressed because it is too large
+ 34 - 12
approach1a_basic_frame_differencing.ipynb


File diff suppressed because it is too large
+ 8 - 11
approach1b_histograms.ipynb


File diff suppressed because it is too large
+ 10 - 9
approach2_background_estimation.ipynb


File diff suppressed because it is too large
+ 5 - 13
approach3_local_features.ipynb


+ 77 - 0
day_vs_night.ipynb

@@ -0,0 +1,77 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from skimage import io\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "img = io.imread(\"sample.jpg\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "99.88014299849979"
+      ]
+     },
+     "execution_count": 16,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.mean([abs(img[:,:,0] - img[:,:,1]), abs(img[:,:,1] - img[:,:,2]), abs(img[:,:,2] - img[:,:,0])])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3.10.4 ('pytorch-gpu')",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.4"
+  },
+  "orig_nbformat": 4,
+  "vscode": {
+   "interpreter": {
+    "hash": "17cd5c528a3345b75540c61f907eece919c031d57a2ca1e5653325af249173c9"
+   }
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

+ 10 - 4
eval_bow.py

@@ -1,3 +1,8 @@
+# Approach 3: Local features
+# This script is used for calculating BOW features of Motion images
+# using a BOW vocabulary.
+# See train_bow.py for training.
+
 import argparse
 import os
 import numpy as np
@@ -12,6 +17,7 @@ def main():
     parser.add_argument("session_name", type=str, help="Name of the session to use for Lapse images (e.g. marten_01)")
     parser.add_argument("--clusters", type=int, help="Number of clusters / BOW vocabulary size", default=1024)
     parser.add_argument("--step_size", type=int, help="DSIFT keypoint step size. Smaller step size = more keypoints.", default=30)
+    parser.add_argument("--keypoint_size", type=int, help="DSIFT keypoint size. Should be >= step_size.", default=60)

     args = parser.parse_args()

@@ -21,9 +27,9 @@ def main():

     # Lapse DSIFT descriptors

-    dictionary_file = os.path.join(save_dir, f"bow_dict_{args.step_size}_{args.clusters}.npy")
-    train_feat_file = os.path.join(save_dir, f"bow_train_{args.step_size}_{args.clusters}.npy")
-    eval_file = os.path.join(save_dir, f"bow_eval_{args.step_size}_{args.clusters}.csv")
+    dictionary_file = os.path.join(save_dir, f"bow_dict_{args.step_size}_{args.keypoint_size}_{args.clusters}.npy")
+    train_feat_file = os.path.join(save_dir, f"bow_train_{args.step_size}_{args.keypoint_size}_{args.clusters}.npy")
+    eval_file = os.path.join(save_dir, f"bow_eval_{args.step_size}_{args.keypoint_size}_{args.clusters}.csv")

     if not os.path.isfile(dictionary_file):
         print(f"ERROR: BOW dictionary missing! ({dictionary_file})")
@@ -42,7 +48,7 @@ def main():

         print("Evaluating...")
         with open(eval_file, "a+") as f:
-            for filename, feat in generate_bow_features(list(session.generate_motion_images()), dictionary, kp_step=args.step_size):
+            for filename, feat in generate_bow_features(list(session.generate_motion_images()), dictionary, kp_step=args.step_size, kp_size=args.keypoint_size):
                 y = clf.decision_function(feat)[0]
                 f.write(f"(unknown),{y}\n")
                 f.flush()
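The cache-file naming scheme introduced here can be sketched in isolation; `bow_cache_paths` and the `session_out` directory are hypothetical names for illustration, but the f-string patterns follow the diff above.

```python
import os

def bow_cache_paths(save_dir, step_size=30, keypoint_size=60, clusters=1024):
    """Build the cache-file paths used after this commit: keypoint_size is
    now part of every filename, so runs with different keypoint sizes no
    longer overwrite each other's cached dictionaries and features."""
    stem = f"{step_size}_{keypoint_size}_{clusters}"
    return {
        "dictionary": os.path.join(save_dir, f"bow_dict_{stem}.npy"),
        "train_features": os.path.join(save_dir, f"bow_train_{stem}.npy"),
        "eval": os.path.join(save_dir, f"bow_eval_{stem}.csv"),
    }

paths = bow_cache_paths("session_out")
```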

BIN
plots/approach1a/roc_curves/Beaver_01_sigma4.pdf → plots/approach1a/roc_curves/Beaver_01_sqmean.pdf


BIN
plots/approach1a/roc_curves/Beaver_01.pdf → plots/approach1a/roc_curves/Marten_01_absmean.pdf


BIN
plots/approach1a/roc_curves/Beaver_01_sigma2.pdf → plots/approach1a/roc_curves/Marten_01_sqmean.pdf



+ 3 - 1
py/ImageUtils.py

@@ -1,5 +1,6 @@
 from datetime import datetime
 from PIL import Image
+import numpy as np
 import matplotlib.pyplot as plt

 def get_image_date(img_path: str) -> datetime:
@@ -37,4 +38,5 @@ def display_images(images: list, titles: list, colorbar=False, size=(8, 5), row_
     plt.tight_layout()
     plt.show()

-
+def is_daytime(img, threshold=50) -> bool:
+    return np.mean([abs(img[:,:,0] - img[:,:,1]), abs(img[:,:,1] - img[:,:,2]), abs(img[:,:,2] - img[:,:,0])]) > threshold
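The new `is_daytime` helper exploits the fact that night (infrared) frames are near-grayscale while daytime RGB frames differ between channels. One subtlety: on `uint8` arrays the channel subtractions wrap around, so a sketch reproducing the heuristic casts to a signed type first (the cast is my addition, not part of the commit):

```python
import numpy as np

def is_daytime_safe(img, threshold=50):
    """Same channel-difference heuristic as is_daytime in py/ImageUtils.py:
    RGB daytime frames differ between channels, IR night frames barely do.
    Casting to int16 avoids uint8 wraparound in the subtractions."""
    r, g, b = (img[:, :, i].astype(np.int16) for i in range(3))
    score = np.mean([np.abs(r - g), np.abs(g - b), np.abs(b - r)])
    return score > threshold

# synthetic check: a gray (night-like) frame vs. a strongly tinted one
night = np.full((4, 4, 3), 80, dtype=np.uint8)   # identical channels
day = np.zeros((4, 4, 3), dtype=np.uint8)
day[:, :, 0] = 200                               # red-dominated frame
```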

+ 10 - 7
py/LocalFeatures.py

@@ -7,13 +7,14 @@ from tqdm import tqdm

 from py.Session import SessionImage

-def dense_keypoints(img, step=30):
+def dense_keypoints(img, step=30, size=60):
     """Generates a list of densely sampled keypoints on img. The keypoints are arranged tightly
     next to each other without spacing. The group of all keypoints is centered in the image.

     Args:
         img (_type_): Image to sample from. (only the shape is relevant)
-        step (int, optional): Vertical and horizontal step size between and size of keypoints. Defaults to 30.
+        step (int, optional): Vertical and horizontal step size between keypoints. Defaults to 30.
+        size (int, optional): Size of keypoints. Defaults to 60.

     Returns:
         list[cv.KeyPoint]: List of keypoints
@@ -21,16 +22,17 @@ def dense_keypoints(img, step=30):
     # calculate offset to center keypoints
     off = ((img.shape[0] % step) // 2, (img.shape[1] % step) // 2)
     border_dist = (step + 1) // 2
-    return [cv.KeyPoint(x, y, step) for y in range(border_dist + off[0], img.shape[0] - border_dist, step) 
+    return [cv.KeyPoint(x, y, size) for y in range(border_dist + off[0], img.shape[0] - border_dist, step) 
                                     for x in range(border_dist + off[1], img.shape[1] - border_dist, step)]


-def extract_descriptors(images: list[SessionImage], kp_step: int = 30):
+def extract_descriptors(images: list[SessionImage], kp_step: int = 30, kp_size: int = 60):
     """Extracts DSIFT descriptors from the provided images and returns them in a single array.

     Args:
         images (list[SessionImage]): List of images to read and compute descriptors from.
         kp_step (int, optional): Keypoint step size, see dense_keypoints. Defaults to 30.
+        kp_size (int, optional): Keypoint size, see dense_keypoints. Defaults to 60.

     Returns:
         np.array, shape=(len(images)*keypoints_per_image, 128): DSIFT descriptors.
@@ -40,7 +42,7 @@ def extract_descriptors(images: list[SessionImage], kp_step: int = 30):
     output_kp = False
     for image in tqdm(images):
         img = image.read_opencv(gray=True)
-        kp = dense_keypoints(img, kp_step)
+        kp = dense_keypoints(img, kp_step, kp_size)
         # output number of keypoints once
         if not output_kp:
             print(f"{len(kp)} keypoints per image.")
@@ -66,7 +68,7 @@ def generate_dictionary_from_descriptors(dscs, dictionary_size: int):
     dictionary = BOW.cluster()
     return dictionary

-def generate_bow_features(images: list[SessionImage], dictionary, kp_step: int = 30):
+def generate_bow_features(images: list[SessionImage], dictionary, kp_step: int = 30, kp_size: int = 60):
     """Calculates the BOW features for the provided images using dictionary.
     Yields a feature vector for every image.

@@ -74,6 +76,7 @@ def generate_bow_features(images: list[SessionImage], dictionary, kp_step: int =
         images (list[SessionImage]): List of images to read and compute feature vectors from.
         dictionary (np.array, shape=(-1, 128)): BOW dictionary.
         kp_step (int, optional): Keypoint step size, see dense_keypoints. Must be identical to the step size used for vocabulary generation. Defaults to 30.
+        kp_size (int, optional): Keypoint size, see dense_keypoints. Must be identical to the size used for vocabulary generation. Defaults to 60.

     Yields:
         (str, np.array of shape=(dictionary.shape[0])): (filename, feature vector)
@@ -85,6 +88,6 @@ def generate_bow_features(images: list[SessionImage], dictionary, kp_step: int =
     
     for image in tqdm(images):
         img = image.read_opencv(gray=True)
-        kp = dense_keypoints(img, kp_step)
+        kp = dense_keypoints(img, kp_step, kp_size)
         feat = bow_extractor.compute(img, kp)
         yield image.filename, feat
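To see how the now-decoupled `step` and `size` interact, the keypoint grid can be sketched without OpenCV; `dense_grid` is a hypothetical stand-in that mirrors `dense_keypoints` but emits plain `(x, y, size)` tuples instead of `cv.KeyPoint` objects:

```python
def dense_grid(shape, step=30, size=60):
    """Centered dense grid, mirroring dense_keypoints above: 'step' sets the
    spacing between keypoints, 'size' their diameter. With size > step (the
    new defaults 30/60), neighboring keypoints overlap."""
    off = ((shape[0] % step) // 2, (shape[1] % step) // 2)
    border_dist = (step + 1) // 2
    return [(x, y, size)
            for y in range(border_dist + off[0], shape[0] - border_dist, step)
            for x in range(border_dist + off[1], shape[1] - border_dist, step)]

# a 90x120 image yields a 2x3 grid of keypoints starting at (15, 15)
kps = dense_grid((90, 120), step=30, size=60)
```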

+ 11 - 3
py/PlotUtils.py

@@ -1,7 +1,7 @@
 import matplotlib.pyplot as plt
 from sklearn.metrics import roc_curve, auc

-def plot_roc_curve(test_labels: list, test_df: list, title: str, figsize=(8, 8), savefile = None):
+def plot_roc_curve(test_labels: list, test_df: list, title: str, figsize=(8, 8), savefile = None, show: bool = True):
     fpr, tpr, thresholds = roc_curve(test_labels, test_df)
     auc_score = auc(fpr, tpr)

@@ -17,5 +17,13 @@ def plot_roc_curve(test_labels: list, test_df: list, title: str, figsize=(8, 8),
     if savefile is not None:
         plt.savefig(f"{savefile}.png", bbox_inches="tight")
         plt.savefig(f"{savefile}.pdf", bbox_inches="tight")
-    plt.show()
-    return fpr, tpr, thresholds, auc_score
+    if show:
+        plt.show()
+    return fpr, tpr, thresholds, auc_score
+
+def get_percentiles(fpr, tpr, thresholds, percentiles=[0.9, 0.95, 0.98, 0.99]):
+    for percentile in percentiles:
+        for i, tp in enumerate(tpr):
+            if tp >= percentile:
+                print(f"{percentile} percentile : TPR = {tp:.4f}, FPR = {fpr[i]:.4f} <-> TNR = {(1 - fpr[i]):.4f} @ thresh {thresholds[i]}")
+                break
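The operating points `get_percentiles` prints can also be computed programmatically; `tnr_at_tpr` is a hypothetical variant of the same walk over the ROC curve that returns the first point whose TPR reaches a target instead of printing it:

```python
def tnr_at_tpr(fpr, tpr, thresholds, target=0.9):
    """Walk the ROC curve (arrays as returned by sklearn.metrics.roc_curve)
    to the first index where TPR reaches the target, and report the TNR
    (= 1 - FPR) and decision threshold there. Returns None if never reached."""
    for i, tp in enumerate(tpr):
        if tp >= target:
            return 1 - fpr[i], thresholds[i]
    return None

# toy ROC points; roc_curve would normally produce these arrays
fpr = [0.0, 0.1, 0.3, 1.0]
tpr = [0.0, 0.8, 0.95, 1.0]
thresholds = [1.9, 0.9, 0.5, 0.1]
result = tnr_at_tpr(fpr, tpr, thresholds, target=0.9)
```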

+ 20 - 13
results.ipynb

@@ -13,14 +13,17 @@
   "source": [
    "## Beaver_01\n",
    "\n",
-    "| Approach | Configuration | Best AUC |\n",
-    "| --- | --- | ---: |\n",
-    "| 1a - Basic Frame Differencing | - | 0.7335 |\n",
-    "| | $\\sigma=2$ | 0.8658 |\n",
-    "| | $\\sigma=4$ | 0.8747 |\n",
-    "| 2 - Background Estimation | No Lapse | 0.7524 |\n",
-    "| 3 - BOW | $k=2048, kp=30$ | 0.7741 |\n",
-    "| 4 - Autoencoder | Deep +Noise +Sparse KDE | 0.9209 |"
+    "| Approach | Configuration | Best AUC | TNR @TPR>0.9 | TNR @TPR>0.99 |\n",
+    "| --- | --- | ---: | ---: | ---: |\n",
+    "| 1a - Basic Frame Differencing | abs var | 0.7415 | | |\n",
+    "| | $\\sigma=2$, sq var | 0.8986 | | |\n",
+    "| | $\\sigma=4$, sq var | 0.9156 | | |\n",
+    "| 1b - Histogram Comparison | p-mean | 0.6707 | | |\n",
+    "| 2 - Background Estimation | no lapse, sq var | 0.7897 | | |\n",
+    "| | $\\sigma=2$, no lapse, sq var | 0.8735 | | |\n",
+    "| | $\\sigma=4$, no lapse, sq var | 0.8776 | | |\n",
+    "| 3 - BOW | $k=2048, kp=30$ | 0.7741 | 0.4976 | 0.0564 |\n",
+    "| 4 - Autoencoder | Deep +Noise +Sparse KDE | 0.9209 | | |"
   ]
  },
  {
@@ -29,12 +32,16 @@
   "source": [
    "\n",
    "## Marten_01\n",
+    "partially annotated!\n",
    "\n",
-    "| Approach | Configuration | Best AUC |\n",
-    "| --- | --- | ---: |\n",
-    "| 2 - Background Estimation | No Lapse | 0.5832 |\n",
-    "| 3 - BOW | $k=1024, kp=30$ | 0.7099 |\n",
-    "| 4 - Autoencoder | Deep +Noise +Sparse KDE | 1.0000 |"
+    "| Approach | Configuration | Best AUC | TNR @TPR>0.9 | TNR @TPR>0.99 |\n",
+    "| --- | --- | ---: | ---: | ---: |\n",
+    "| 1a - Basic Frame Differencing | sq var | 0.9854 | | |\n",
+    "| | $\\sigma = 2$, sq var | 1.0000 | 1.0000 | 1.0000 |\n",
+    "| | $\\sigma = 4$, sq var | 1.0000 | 1.0000 | 1.0000 |\n",
+    "| 2 - Background Estimation | No Lapse | 0.5832 | | |\n",
+    "| 3 - BOW | $k=1024, kp=30$ | 0.7099 | | |\n",
+    "| 4 - Autoencoder | Deep +Noise +Sparse KDE | 1.0000 | | |"
   ]
  },
  {

+ 6 - 5
train_bow.py

@@ -16,6 +16,7 @@ def main():
     parser.add_argument("session_name", type=str, help="Name of the session to use for Lapse images (e.g. marten_01)")
     parser.add_argument("--clusters", type=int, help="Number of clusters / BOW vocabulary size", default=1024)
     parser.add_argument("--step_size", type=int, help="DSIFT keypoint step size. Smaller step size = more keypoints.", default=30)
+    parser.add_argument("--keypoint_size", type=int, help="DSIFT keypoint size. Should be >= step_size.", default=60)

     args = parser.parse_args()

@@ -25,9 +26,9 @@ def main():

     # Lapse DSIFT descriptors

-    lapse_dscs_file = os.path.join(save_dir, f"lapse_dscs_{args.step_size}.npy")
-    dictionary_file = os.path.join(save_dir, f"bow_dict_{args.step_size}_{args.clusters}.npy")
-    train_feat_file = os.path.join(save_dir, f"bow_train_{args.step_size}_{args.clusters}.npy")
+    lapse_dscs_file = os.path.join(save_dir, f"lapse_dscs_{args.step_size}_{args.keypoint_size}.npy")
+    dictionary_file = os.path.join(save_dir, f"bow_dict_{args.step_size}_{args.keypoint_size}_{args.clusters}.npy")
+    train_feat_file = os.path.join(save_dir, f"bow_train_{args.step_size}_{args.keypoint_size}_{args.clusters}.npy")

     if os.path.isfile(lapse_dscs_file):
         if os.path.isfile(dictionary_file):
@@ -39,7 +40,7 @@ def main():
     else:
         # Step 1 - extract dense SIFT descriptors
         print("Extracting lapse descriptors...")
-        lapse_dscs = extract_descriptors(list(session.generate_lapse_images()), kp_step=args.step_size)
+        lapse_dscs = extract_descriptors(list(session.generate_lapse_images()), kp_step=args.step_size, kp_size=args.keypoint_size)
         os.makedirs(save_dir, exist_ok=True)
         np.save(lapse_dscs_file, lapse_dscs)

@@ -61,7 +62,7 @@ def main():
     else:
         # Step 3 - calculate training data (BOW features of Lapse images)
         print(f"Extracting BOW features from Lapse images...")
-        features = [feat for _, feat in generate_bow_features(list(session.generate_lapse_images()), dictionary, kp_step=args.step_size)]
+        features = [feat for _, feat in generate_bow_features(list(session.generate_lapse_images()), dictionary, kp_step=args.step_size, kp_size=args.keypoint_size)]
         np.save(train_feat_file, features)
     
     print("Complete!")

Some files were not shown because too many files have changed in this diff