
updated READMEs

Dimitri Korsch, 1 year ago
commit 3323fb5640

2 changed files with 14 additions and 5 deletions:
  1. README.md (+9 −0)
  2. cvdatasets/dataset/README.md (+5 −5)

+ 9 - 0
README.md

@@ -5,6 +5,15 @@
 pip install cvdatasets
 ```
 
+As a small extra, this package can also be used to quickly resize images:
+```bash
+python -m cvdatasets.resize <src folder> <dest folder> --size 600
+python -m cvdatasets.resize <src folder> <dest folder> --size 600 --fit_short
+```
+The first line resizes all images in `<src folder>` so that the longer side is `600px` and stores the results in `<dest folder>`.
+The second line does the same, except that the shorter side is scaled to `600px`.
+
+
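For illustration, the difference between the default and `--fit_short` behavior can be sketched with a small helper (the `fit_size` function below is hypothetical, not part of cvdatasets):

```python
def fit_size(size, target, fit_short=False):
    """Return a new (width, height), preserving aspect ratio, scaled so
    that the longer side (or, with fit_short=True, the shorter side)
    equals `target`."""
    w, h = size
    side = min(w, h) if fit_short else max(w, h)
    scale = target / side
    return (round(w * scale), round(h * scale))

# A 1200x800 image: the default fits the longer side to 600 ...
print(fit_size((1200, 800), 600))                  # (600, 400)
# ... while fit_short fits the shorter side to 600.
print(fit_size((1200, 800), 600, fit_short=True))  # (900, 600)
```

Fitting the shorter side produces larger images overall, which is the usual choice when a fixed-size crop is taken afterwards.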
 ## Motivation
 We want to follow the interface of custom [PyTorch datasets](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html#creating-a-custom-dataset-for-your-files) (originally presented by [Chainer](https://docs.chainer.org/en/latest/reference/generated/chainer.dataset.DatasetMixin.html#chainer.dataset.DatasetMixin)):
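That interface boils down to a sized, integer-indexable object; a minimal sketch in plain Python (no torch dependency; the class and its contents are illustrative only):

```python
class SquaresDataset:
    """Minimal object following the custom-dataset protocol:
    __len__ plus integer-index __getitem__, as in
    torch.utils.data.Dataset or chainer's DatasetMixin."""

    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        # Raising IndexError lets plain `for x in ds` loops terminate.
        if not 0 <= i < self.n:
            raise IndexError(i)
        return i * i

ds = SquaresDataset(4)
print(len(ds), list(ds))  # 4 [0, 1, 4, 9]
```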
 

+ 5 - 5
cvdatasets/dataset/README.md

@@ -5,13 +5,13 @@ Consequently, to create a `Dataset` instance an [`Annotation`](../annotations.py
 
 
 ```python
-# replace NAB_Annotations with CUB_Annotations to load CUB200-2011 annotations
-from cvdatasets import NAB_Annotations, Dataset
+from cvdatasets import FileListAnnotations
+from cvdatasets import Dataset
 
-annot = NAB_Annotations("path/to/nab/folder")
+annot = FileListAnnotations("path/to/annotation_folder")
 
-train_data = Dataset(uuids=annot.train_uuids, annotations=annot)
-test_data = Dataset(uuids=annot.test_uuids, annotations=annot)
+train_data = annot.new_dataset(dataset_cls=Dataset, subset="train")
+test_data = annot.new_dataset(dataset_cls=Dataset, subset="test")
 ```
 
 ## Working with part and bounding box annotations