# Parallel Computing with Python: Sorting Algorithms & Deep Learning

![Parallel Computing](https://img.shields.io/badge/Parallel%20Computing-Python-blue.svg)
![Releases](https://img.shields.io/badge/Releases-v1.0.0-orange.svg)

## Overview

This repository contains coursework focused on parallel computing in Python. It explores parallel sorting algorithms and a deep neural network for MNIST digit classification, comparing performance across multithreading, multiprocessing, and GPU acceleration.

### Table of Contents

- [Features](#features)
- [Technologies Used](#technologies-used)
- [Installation](#installation)
- [Usage](#usage)
- [Examples](#examples)
- [Performance Comparison](#performance-comparison)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)

## Features

- Implemented parallel sorting algorithms.
- Developed a deep learning model for MNIST digit classification.
- Utilized multiprocessing for CPU-bound tasks (see the sketch after this list).
- Employed GPU acceleration for deep learning.
- Performance comparison of various algorithms.
- Well-structured code with comments for easy understanding.
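
As a rough illustration of the multiprocessing approach (a minimal sketch, not the repository's exact code; the function names are hypothetical), a CPU-bound function can be spread across cores with `multiprocessing.Pool`:

```python
from multiprocessing import Pool

def cpu_bound_task(n):
    """Stand-in for CPU-heavy work such as sorting a large chunk."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000] * 8
    with Pool(processes=4) as pool:                 # 4 worker processes
        results = pool.map(cpu_bound_task, inputs)  # executed in parallel
    print(results[:2])
```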

## Technologies Used

- **Python**: The primary programming language for this project.
- **NumPy**: For numerical computations.
- **Pandas**: For data manipulation and analysis.
- **TensorFlow/Keras**: For building and training deep learning models.
- **Matplotlib**: For data visualization.
- **Multiprocessing**: To execute tasks in parallel on the CPU.
- **CUDA**: For GPU acceleration.

## Installation

To set up this project on your local machine, follow these steps:

1. Clone the repository:
```bash
git clone https://github.com/masterx35/Parallel-Computing-
```

2. Navigate to the project directory:
```bash
cd Parallel-Computing-
```

3. Install the required packages:
```bash
pip install -r requirements.txt
```

4. If you plan to use GPU acceleration, ensure you have the appropriate drivers and libraries installed.
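
If TensorFlow is the backend (as listed under Technologies Used), you can confirm the GPU is visible before training; this uses the standard TensorFlow device-listing API:

```python
import tensorflow as tf

# An empty list means TensorFlow cannot see a GPU and training will fall
# back to the CPU; check your CUDA/cuDNN installation in that case.
print("GPUs available:", tf.config.list_physical_devices("GPU"))
```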

## Usage

After installation, you can run the sorting algorithms and deep learning models. Here are some commands to get you started:

### Running Parallel Sorting Algorithms

To execute the parallel sorting algorithms, use:
```bash
python parallel_sorting.py
```

### Training the Deep Learning Model

To train the MNIST model, run:
```bash
python train_mnist.py
```

You can adjust parameters in the scripts to experiment with different configurations.
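
For example, tunable values typically sit near the top of a training script; the names below are hypothetical and may differ from those in `train_mnist.py`:

```python
# Hypothetical tunable constants -- edit and re-run to experiment.
EPOCHS = 10           # passes over the training set
BATCH_SIZE = 128      # samples per gradient update
LEARNING_RATE = 1e-3  # step size for the optimizer
HIDDEN_UNITS = 256    # width of the fully connected layers
```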

## Examples

### Parallel Sorting Algorithms

The repository covers several sorting algorithms, including:

- **Merge Sort**: A divide-and-conquer algorithm that splits the array into halves, sorts each half, and merges them back together (a parallel sketch follows this list).
- **Quick Sort**: An efficient sorting algorithm that selects a 'pivot' element and partitions the remaining elements into two sub-arrays according to whether they are smaller or larger than the pivot.
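
As a sketch of how merge sort parallelizes (the actual `parallel_sorting.py` may differ in detail), the input can be split into chunks that worker processes sort independently, with the parent merging the results:

```python
from multiprocessing import Pool
from heapq import merge  # linear-time merge of sorted iterables
import random

def merge_sort(arr):
    """Plain sequential merge sort."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return list(merge(merge_sort(arr[:mid]), merge_sort(arr[mid:])))

def parallel_merge_sort(arr, n_workers=4):
    """Sort chunks in worker processes, then merge in the parent."""
    size = (len(arr) + n_workers - 1) // n_workers
    chunks = [arr[i:i + size] for i in range(0, len(arr), size)]
    with Pool(n_workers) as pool:
        sorted_chunks = pool.map(merge_sort, chunks)  # parallel phase
    result = []
    for chunk in sorted_chunks:
        result = list(merge(result, chunk))           # sequential merge
    return result

if __name__ == "__main__":
    data = [random.randint(0, 10_000) for _ in range(100_000)]
    assert parallel_merge_sort(data) == sorted(data)
```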

### Deep Learning with MNIST

The MNIST dataset consists of 70,000 images of handwritten digits (60,000 for training, 10,000 for testing). The model architecture, sketched in code after this list, includes:

- **Input Layer**: Accepts 28x28 pixel images.
- **Hidden Layers**: Fully connected layers with ReLU activation.
- **Output Layer**: Softmax activation for digit classification.
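
A minimal Keras version of that architecture might look like the following; the layer widths here are assumptions, and the actual script may differ:

```python
import tensorflow as tf
from tensorflow import keras

# Minimal sketch of the described architecture; widths are assumptions.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 input images
    keras.layers.Dense(256, activation="relu"),    # hidden layers
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # one class per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```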

To visualize the training process, the script generates graphs showing loss and accuracy over epochs.
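
The plotting itself takes only a few lines of Matplotlib. Assuming `history` is the object returned by `model.fit(...)` with a validation split, a sketch:

```python
import matplotlib.pyplot as plt

# Assumes: history = model.fit(x_train, y_train,
#                              validation_split=0.1, epochs=EPOCHS)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history["loss"], label="train loss")
ax1.plot(history.history["val_loss"], label="val loss")
ax1.set_xlabel("epoch")
ax1.legend()
ax2.plot(history.history["accuracy"], label="train acc")
ax2.plot(history.history["val_accuracy"], label="val acc")
ax2.set_xlabel("epoch")
ax2.legend()
plt.tight_layout()
plt.show()
```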

## Performance Comparison

This section provides insights into the performance of different algorithms. We compared:

- **Execution Time**: Measured in seconds for various input sizes.
- **Accuracy**: Evaluated on the MNIST dataset.

Results indicate that the parallel algorithms reduce execution time substantially compared to their sequential counterparts on large inputs, where the speedup outweighs the overhead of spawning worker processes. The deep learning model achieved an accuracy of over 98% on the test set.
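
Timings like these can be reproduced with a simple harness; the sketch below benchmarks the built-in `sorted` and leaves a commented slot for a parallel sort such as the `parallel_merge_sort` sketch under Examples:

```python
import random
import time

def benchmark(sort_fn, data, label):
    """Time one sorting function on a copy of the data."""
    start = time.perf_counter()
    sort_fn(list(data))
    print(f"  {label}: {time.perf_counter() - start:.3f} s")

if __name__ == "__main__":
    for n in (10_000, 100_000, 1_000_000):  # various input sizes
        data = [random.random() for _ in range(n)]
        print(f"n = {n}")
        benchmark(sorted, data, "sequential")
        # benchmark(parallel_merge_sort, data, "parallel")  # see Examples
```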

## Contributing

Contributions are welcome! If you want to add features or fix bugs, please follow these steps:

1. Fork the repository.
2. Create a new branch:
```bash
git checkout -b feature/YourFeature
```
3. Make your changes and commit them:
```bash
git commit -m "Add your feature"
```
4. Push to the branch:
```bash
git push origin feature/YourFeature
```
5. Create a pull request.

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

## Contact

For any questions or suggestions, feel free to reach out:

- GitHub: [masterx35](https://github.com/masterx35)
- Email: masterx35@example.com

Check the [Releases](https://github.com/masterx35/Parallel-Computing-/releases) section for downloadable files and further updates.