# mgozzi-augment
## Note for SageMaker
Paths can be tricky to find on SageMaker because the terminal does not report the same paths as a script running in a notebook. For example, when configuring the custom_config.yaml file under /opt/ml/, the terminal reported the training-image location as /home/sagemaker-user/mgozzi-augment/deepsea-yolov5/opt/ml/input/data/images/train, but that path yielded a file-not-found error when the associated command was run from the notebook. The correct path turned out to be /root/mgozzi-augment/deepsea-yolov5/opt/ml/input/data/images/train. In other words, the notebook runs as the root user, so the sagemaker-user directory is omitted from the path.
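A quick way to confirm this from inside the notebook is to print the effective user and home directory and resolve the dataset path against them. This is a minimal sketch assuming only the repo layout described above and the Python standard library:

```python
import getpass
from pathlib import Path

# Show which user and home directory the notebook kernel actually uses.
# On SageMaker this typically prints "root" / "/root", while the terminal
# reports "sagemaker-user" / "/home/sagemaker-user".
print(getpass.getuser())
print(Path.home())

# Resolving the dataset path against the kernel's home directory keeps the
# notebook working regardless of which user it runs as.
train_images = Path.home() / "mgozzi-augment/deepsea-yolov5/opt/ml/input/data/images/train"
print(train_images, train_images.exists())
```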
## Most Important Files
Three files are of primary concern when configuring the YOLOv5 system:

- the yolov5*.yaml model file under yolov5/models/,
- the hyp.scratch-low.yaml file under /deepsea-yolov5/opt/ml/input/data/ in our dataset configuration, and
- the custom_config.yaml file at /deepsea-yolov5/opt/ml/custom_config.yaml.

The yolov5*.yaml model file stores the anchor values under its anchors section; in this study, yolov5s.yaml was modified to hold our custom anchor values. The hyp.scratch-low.yaml file is the hyperparameter configuration file that controls how the training input data is manipulated. It contains transforms such as gamma, hue/saturation/value, scale, and more, all of which are meant to be tuned to our specific dataset. Its contents are replaced upon completing a full evolution cycle: the most effective hyperparameters are automatically saved (at least for the commands we've implemented) under a WandB folder as hyp_evolve.yaml, which records which evolution was the fittest, the metrics of the best training run, and the total number of evolutions tested.
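As a rough sketch, both files can also be edited programmatically from a notebook using PyYAML (a dependency YOLOv5 already requires). The paths below are assumptions based on the layout described above, and the anchor triples shown are YOLOv5's stock defaults rather than our evolved values; note also that round-tripping through yaml.safe_dump drops any comments in the original files:

```python
import yaml
from pathlib import Path

root = Path.home() / "mgozzi-augment/deepsea-yolov5"

# Swap the anchors in the yolov5s model definition for custom values.
# The triples below are YOLOv5's stock defaults; replace them with the
# anchors computed for our dataset. Adjust the path if yolov5 is
# checked out elsewhere in the repo.
model_cfg = root / "yolov5/models/yolov5s.yaml"
model = yaml.safe_load(model_cfg.read_text())
model["anchors"] = [
    [10, 13, 16, 30, 33, 23],       # P3/8  stride, small objects
    [30, 61, 62, 45, 59, 119],      # P4/16 stride, medium objects
    [116, 90, 156, 198, 373, 326],  # P5/32 stride, large objects
]
model_cfg.write_text(yaml.safe_dump(model, sort_keys=False))

# Tune a few augmentation hyperparameters in hyp.scratch-low.yaml.
# Key names (hsv_h, hsv_s, hsv_v, scale, ...) follow the stock
# YOLOv5 hyp files.
hyp_path = root / "opt/ml/input/data/hyp.scratch-low.yaml"
hyp = yaml.safe_load(hyp_path.read_text())
hyp["hsv_h"], hyp["hsv_s"], hyp["hsv_v"] = 0.015, 0.7, 0.4  # hue/sat/value jitter
hyp["scale"] = 0.5  # image scale augmentation gain
hyp_path.write_text(yaml.safe_dump(hyp, sort_keys=False))
```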