Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Semantic Segmentation with Transformers on 3D Medical Images
- Host: GitHub
- URL: https://github.com/drkostas/3d-semantic-segmentation
- Owner: drkostas
- License: apache-2.0
- Created: 2022-05-09T22:31:25.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2022-12-30T18:07:23.000Z (almost 2 years ago)
- Last Synced: 2024-10-12T08:32:32.097Z (about 1 month ago)
- Topics: deep-learning, pytorch, segfomer, semantic-segmentation
- Language: Jupyter Notebook
- Homepage:
- Size: 42.5 MB
- Stars: 62
- Watchers: 8
- Forks: 7
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# COSC525: Final Project: Semantic Segmentation with Transformers on 3D Medical Images
[![GitHub license](https://img.shields.io/badge/license-Apache-blue.svg)](https://github.com/drkostas/COSC525-Project1/blob/master/LICENSE)

## Table of Contents
+ [About](#about)
+ [Getting Started](#getting_started)
+ [Prerequisites](#prerequisites)
+ [Installing the requirements](#installing)
+ [Running the code](#run_locally)
+ [Execution Options](#execution_options)
+ [main.py](#src_main)
+ [Todo](#todo)
+ [License](#license)

## About <a name = "about"></a>

Final Project for the Deep Learning course (COSC 525). It involves the development of a semantic
segmentation model with transformers on 3D medical images.

The main code is located in the [main.py](main.py) file. All the remaining code is located in
the [src folder](src).

## Getting Started <a name = "getting_started"></a>

These instructions will get you a copy of the project up and running on your local machine.

### Prerequisites <a name = "prerequisites"></a>

You need to have a machine with Python > 3.6 and any Bash-based shell (e.g. zsh) installed.
```ShellSession
$ python3.9 -V
Python 3.9.1
$ echo $SHELL
/usr/bin/zsh
```

### Installing the requirements <a name = "installing"></a>
All the installation steps are handled by the [Makefile](Makefile). You can use either conda or
venv by setting the flag `env=`. To load an env file, use the flag `env_file=`.

Before installing everything, make any changes needed in the [settings.ini](settings.ini) file.
Then, to create a conda environment, install the requirements, set up the library, and run the
tests, execute the following command:

```ShellSession
$ make install
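# A hedged variation (assumption: the Makefile accepts the `env` and
# `env_file` flags described above; the flag values are illustrative,
# e.g. switching to venv and loading a local .env file):
$ make install env=venv env_file=.env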
```

### Running the code <a name = "run_locally"></a>

In order to run the code, you only need to change the yml file (if needed), and then either run
the file directly or invoke its console script.

First, make sure you are in the correct virtual environment:
```ShellSession
$ conda activate cosc525_finalproject
$ which python
/home//anaconda3/envs/src/bin/python
```

Now, in order to run the code, you can call [main.py](main.py) directly.

```ShellSession
$ python main.py -h
usage: main.py -d DATASET -n NETWORK -c CONFIG_FILE [-l LOG] [-h]

Project 1 for the Deep Learning class (COSC 525). Involves the development of a FeedForward Neural Network.
Required Arguments:
  -d DATASET, --dataset DATASET
                        The datasets to train the network on. Options (defined in yml): [and, xor, class_example]
  -n NETWORK, --network NETWORK
                        The network configuration to use. Options (defined in yml): [1x1_net, 2x1_net, 2x2_net]
  -c CONFIG_FILE, --config-file CONFIG_FILE
                        The path to the yaml configuration file.

Optional Arguments:
  -l LOG, --log LOG     Name of the output log file
  -h, --help            Show this help message and exit
```

## TODO <a name = "todo"></a>

Read the [TODO](TODO.md) to see the current task list.
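As a hedged illustration of the CLI shown in the help output above, the parser can be sketched with `argparse` (the option names come from the help text; the program name, group titles, and example values are assumptions, not the repository's actual code):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Reconstruct the CLI from the help output above.

    This is a sketch for illustration, not the repository's actual
    implementation.
    """
    parser = argparse.ArgumentParser(prog="main.py", add_help=False)
    required = parser.add_argument_group("Required Arguments")
    required.add_argument("-d", "--dataset", required=True,
                          help="The dataset to train the network on")
    required.add_argument("-n", "--network", required=True,
                          help="The network configuration to use")
    required.add_argument("-c", "--config-file", required=True,
                          help="The path to the yaml configuration file")
    optional = parser.add_argument_group("Optional Arguments")
    optional.add_argument("-l", "--log", help="Name of the output log file")
    optional.add_argument("-h", "--help", action="help",
                          help="Show this help message and exit")
    return parser


# Example: parse a typical invocation (dataset/network names are taken
# from the options listed in the help text; the config path is assumed).
args = build_parser().parse_args(
    ["-d", "xor", "-n", "2x2_net", "-c", "confs/config.yml"])
print(args.dataset, args.network, args.config_file)
# → xor 2x2_net confs/config.yml
```

Note that `--config-file` is exposed as `args.config_file`, since `argparse` derives the attribute name from the long option by replacing dashes with underscores.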
## License <a name = "license"></a>

This project is licensed under the Apache License - see the [LICENSE](LICENSE) file for details.