https://github.com/starmanfrommars/parallel-computing
This repo contains all codes of Parallel Computing Lab for Semester 7
- Host: GitHub
- URL: https://github.com/starmanfrommars/parallel-computing
- Owner: starmanfrommars
- Created: 2025-09-16T06:04:59.000Z (6 months ago)
- Default Branch: master
- Last Pushed: 2025-09-16T06:33:41.000Z (6 months ago)
- Last Synced: 2025-09-16T08:32:53.202Z (6 months ago)
- Topics: c, linux, omp-parallel, parallel-computing
- Language: C
- Homepage:
- Size: 3.91 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# PARALLEL-COMPUTING-LAB
- Course Code : BCS702
- Credits : 4
## Course outcomes:
* Explain the need for parallel programming.
* Demonstrate parallelism in MIMD systems.
* Apply the MPI library to parallelize code to solve a given problem.
* Apply OpenMP pragmas and directives to parallelize code to solve a given problem.
* Design a CUDA program for a given problem.
## Lab Experiments
1. Write an OpenMP program that divides the iterations into chunks of 2 iterations each (OMP_SCHEDULE=static,2). Its input should be the number of iterations, and its output should be which iterations of a parallelized for loop are executed by which thread. For example, if there are two threads and four iterations, the output might be the following:
   - a. Thread 0 : Iterations 0 -- 1
   - b. Thread 1 : Iterations 2 -- 3
2. Write an OpenMP program to sort an array of n elements using both sequential and parallel mergesort (using sections). Record the difference in execution time.
3. Write an OpenMP program to calculate n Fibonacci numbers using tasks.
4. Write an OpenMP program to find the prime numbers from 1 to n using the parallel for directive. Record both serial and parallel execution times.
5. Write an MPI program to demonstrate MPI_Send and MPI_Recv.
6. Write an MPI program to demonstrate deadlock using point-to-point communication, and its avoidance by altering the call sequence.
7. Write an MPI program to demonstrate the broadcast operation.
8. Write an MPI program to demonstrate MPI_Scatter and MPI_Gather.
9. Write an MPI program to demonstrate MPI_Reduce and MPI_Allreduce (MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD).
[The above experiments are listed in order of lab execution date.]
## Experiment Dates
| Experiment No. | Date |
| --- | --- |
| Lab 1 | 12-09-2025 |
| Lab 2 | 16-09-2025 |
## Command for Execution
```
gcc FileName.c -fopenmp
```
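The gcc line above covers the OpenMP programs. The MPI experiments (5-9) are typically built with the MPI wrapper compiler and launched with `mpirun`; the file name and process count below are placeholders:

```shell
# Compile an MPI program (mpicc wraps gcc with the MPI include/library paths)
mpicc FileName.c -o FileName
# Launch it with 4 processes (pick any count your machine supports)
mpirun -np 4 ./FileName
```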