https://github.com/deeplearnphysics/larcv_training_utils
Repo containing training tools (IO, batch processing, etc) for larcv datasets. Meant to be forked.
- Host: GitHub
- URL: https://github.com/deeplearnphysics/larcv_training_utils
- Owner: DeepLearnPhysics
- Created: 2019-06-04T18:53:05.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2019-06-05T16:53:02.000Z (almost 7 years ago)
- Last Synced: 2025-03-26T18:40:13.438Z (about 1 year ago)
- Language: Python
- Size: 44.9 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Larcv Training Utils
This repository contains core skeleton code to get a network up and running with larcv. It is designed for larcv3.
The `master` branch holds core function placeholders and the IO implementation, while the `torch` and `tf` branches fill in more functions for PyTorch and TensorFlow. The intention is to fork this repo for your own purposes and fill in your own model and details, without having to worry about opening and closing IO utilities, starting data loading, keeping IO in sync, etc.
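As a rough illustration of this fill-in-the-placeholders pattern, a fork might override a skeleton base class along these lines. The class and method names below are hypothetical sketches, not the repo's actual API:

```python
# Hypothetical sketch of the fork-and-fill-in pattern described above;
# the names here are illustrative, not this repo's actual classes.

class TrainerSkeleton:
    """Base class that manages the IO lifecycle; forks supply the model."""

    def initialize_io(self):
        # Placeholder: a real implementation would open larcv IO
        # and start data-loading threads here.
        self.io_ready = True

    def train_step(self, batch):
        # Placeholder: forks implement the forward/backward pass.
        raise NotImplementedError

    def finalize_io(self):
        # Placeholder: close IO cleanly.
        self.io_ready = False


class MyTorchTrainer(TrainerSkeleton):
    """A fork fills in model-specific logic, inheriting the IO handling."""

    def train_step(self, batch):
        return sum(batch)  # stand-in for computing a loss


trainer = MyTorchTrainer()
trainer.initialize_io()
loss = trainer.train_step([1.0, 2.0, 3.0])
trainer.finalize_io()
```

The point of the pattern is that the IO bookkeeping lives in the base class, so a fork only touches `train_step`-style hooks.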
This repo also contains tools for distributed learning, including epoch-based learning rate schedules and allreduce of metrics across ranks. Horovod and mpi4py are the communication tools.
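For example, an epoch-based learning rate schedule of the kind mentioned above can be written as a simple step decay. The decay factor and interval below are illustrative choices, not the repo's actual values:

```python
def step_decay_lr(base_lr, epoch, decay_factor=0.1, epochs_per_decay=10):
    """Decay the learning rate by `decay_factor` every `epochs_per_decay` epochs.

    Illustrative sketch; the repo's actual schedule may differ.
    """
    return base_lr * (decay_factor ** (epoch // epochs_per_decay))


# In a distributed run, every rank computes the same schedule from the
# shared epoch counter, so learning rates stay in sync with no extra
# communication (metrics, by contrast, need an allreduce to be averaged).
print(step_decay_lr(0.01, epoch=0))   # → 0.01
```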
The `master` branch can run IO tests on files, so if you only want to perform IO benchmarks on a larcv dataset, fork this repository and stay on the `master` branch. You can then run IO tests even in distributed mode.
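The core of such an IO benchmark is timing batch reads and reporting throughput. A generic sketch follows, with a dummy reader standing in for a real larcv dataset (the function names are illustrative, not this repo's API):

```python
import time

def benchmark_io(read_batch, n_batches, batch_size):
    """Time `n_batches` calls to `read_batch` and return images/second."""
    start = time.perf_counter()
    for _ in range(n_batches):
        read_batch(batch_size)
    elapsed = time.perf_counter() - start
    return (n_batches * batch_size) / elapsed


def dummy_read(batch_size):
    # Stand-in for reading a batch of images from a larcv file.
    return [bytes(64) for _ in range(batch_size)]


rate = benchmark_io(dummy_read, n_batches=100, batch_size=32)
print(f"{rate:.0f} images/sec")
```

In a distributed run, each rank would time its own reads and the per-rank rates could be summed (e.g. with an allreduce) to get aggregate throughput.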