https://github.com/hsm207/rossmann-fastai
My experiments on the rossmann dataset
- Host: GitHub
- URL: https://github.com/hsm207/rossmann-fastai
- Owner: hsm207
- License: gpl-3.0
- Created: 2019-06-01T13:22:26.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2019-06-01T22:04:44.000Z (over 6 years ago)
- Last Synced: 2025-01-14T11:16:27.849Z (9 months ago)
- Topics: deep-learning, fastai, pytorch, tabular-data
- Language: Jupyter Notebook
- Size: 218 KB
- Stars: 0
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Introduction
This repository contains my experiments on the [Rossmann Store Sales](https://www.kaggle.com/c/rossmann-store-sales/data) dataset
using the [fastai](https://github.com/fastai/fastai) library. The goal is to investigate whether there are any general techniques (e.g. preprocessing steps, feature engineering, etc.)
that can help deep learning models on tabular data converge faster and/or reach higher accuracy.

# Usage
1. Install Docker
2. Navigate to this project's root directory
3. Build and run this project's Dockerfile by executing:
```bash
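# Build the image, then start a GPU-enabled container serving Jupyter on port 8888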
docker build -t fastai . && \
docker run --runtime=nvidia \
--name fastai \
-p 8888:8888 \
--dns 8.8.8.8 \
fastai jupyter notebook --ip 0.0.0.0 --allow-root
```
This will start a Jupyter server on port 8888.
4. Open a web browser and navigate to the URL displayed in the previous step.
5. Navigate to the [notebooks](/notebooks) folder. This folder contains:
* [01_rossmann_data_clean](/notebooks/01_rossman_data_clean.ipynb): Download the training and test sets
* [02_EDA](/notebooks/02_EDA.ipynb): Some exploratory data analysis on the dataset
* [03_modelling](/notebooks/03_modelling.ipynb): Models/techniques I've tried so far (a minimal baseline sketch follows this list)
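
To give a flavour of the kind of setup explored in the modelling notebook, here is a minimal sketch using the fastai v1 tabular API (the library version this repository was built against). It assumes the Kaggle `train.csv` is in the working directory; the column choices, validation split, and hyperparameters are illustrative, not a reproduction of the notebook.

```python
import pandas as pd
from fastai.tabular import *  # fastai v1 tabular API

# Load the Kaggle Rossmann training data (downloaded by 01_rossmann_data_clean).
df = pd.read_csv('train.csv', parse_dates=['Date'], low_memory=False)

# Drop days where the store was closed so that log(Sales) is well defined.
df = df[df['Sales'] != 0].reset_index(drop=True)

# Feature engineering: expand the Date column into Year, Month, Week, Day, ...
add_datepart(df, 'Date', drop=True)

# Illustrative choice of categorical and continuous variables.
cat_names = ['Store', 'DayOfWeek', 'Promo', 'StateHoliday', 'SchoolHoliday',
             'Year', 'Month', 'Week', 'Day']
cont_names = ['Dayofyear']

# Standard fastai preprocessing: impute missing values, encode categories,
# and normalize continuous columns.
procs = [FillMissing, Categorify, Normalize]

# Hold out the last 10% of rows as a simple validation split (illustration only).
valid_idx = range(len(df) - len(df) // 10, len(df))

data = (TabularList.from_df(df, cat_names=cat_names, cont_names=cont_names, procs=procs)
        .split_by_idx(valid_idx)
        .label_from_df(cols='Sales', label_cls=FloatList, log=True)  # model log(Sales)
        .databunch(bs=512))

# Entity embeddings for the categorical variables + a two-layer fully connected net.
learn = tabular_learner(data, layers=[1000, 500], ps=[0.001, 0.01],
                        emb_drop=0.04, metrics=exp_rmspe)
learn.fit_one_cycle(5, 1e-3)
```

fastai's `tabular_learner` builds an entity embedding for each categorical variable and feeds the embeddings, together with the normalized continuous features, through a fully connected network; the preprocessing steps and training schedule around that baseline are the things the experiments vary.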
# Contributing
Feel free to open a pull request with any questions, comments, or feedback.