https://github.com/gyakobo/multi-threading
This project was made to showcase a sample example of multi-threading in the C programming language.
- Host: GitHub
- URL: https://github.com/gyakobo/multi-threading
- Owner: Gyakobo
- License: mit
- Created: 2024-06-04T00:03:38.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-12-05T17:43:45.000Z (about 1 year ago)
- Last Synced: 2024-12-05T18:34:07.356Z (about 1 year ago)
- Topics: c, function-approximation, integrals, integration, multithreading, number-pi, parallel-computing
- Language: C
- Homepage:
- Size: 138 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Multi-Threading or Parallelism




Author: [Andrew Gyakobo](https://github.com/Gyakobo)

This project was made to showcase a sample example of multi-threading in the C programming language. To be more exact, in this project we'll be trying to approximate the value of $\pi$.
## Introduction
Multi-threading is a programming concept where multiple threads are spawned by a process to execute tasks concurrently. Each thread runs independently but shares the process's resources like memory and file handles. Multi-threading can lead to more efficient use of resources, faster execution of tasks, and improved performance in multi-core systems.
### Key Concepts
1. Thread: A lightweight process or the smallest unit of execution within a process.
1. Concurrency vs. Parallelism: Concurrency means multiple threads make progress at the same time, while parallelism means multiple threads run simultaneously on different cores.
1. Synchronization: Mechanism to control the access of multiple threads to shared resources.
1. Thread Safety: Ensuring that shared data is accessed by only one thread at a time.
## Methodology
We'll be utilizing the function $\dfrac{4}{1 + x^2}$, whose definite integral over $[0, 1]$ is exactly $\pi$ (since $\int \frac{4}{1+x^2}\,dx = 4\arctan x$, and $4\arctan 1 = \pi$). Thus we'll be numerically approximating the following formula:
$$
\int_{0}^{1} \dfrac{4}{1 + x^2}\, dx = \pi
$$
>[!NOTE]
>The graph below showcases the integrated function.

There is, of course, a minor issue with this calculation. As the $dx$ component gets ever smaller, the integration gets more precise, so it becomes a priority to make $dx$ as small as possible. This, however, comes at a cost: with a decreasing $dx$ there are more facets of the function to compute, and the integration becomes more expensive.
Here is a more specific example of the aforementioned computation; this isn't a plot of the exact function we're integrating, but it still communicates the same idea:

As you can witness, the integration is just a summation of all the rectangles tucked under the function. This is roughly what is being calculated:

$f(x_{i})$ - the function $4/(1 + x_{i}^2)$ evaluated at the sample point of rectangle $i$

$\Delta x$ - the chosen width of each individual rectangle, $1/n$ for $n$ rectangles

$$
\displaystyle\sum\limits_{i=0}^{n-1} f(x_{i}) \Delta x \approx \pi
$$
From here we can distinctly see that the smaller the $\Delta x$, the more rectangular areas we have to compute and add up. This proves to be a challenge, because more rectangles mean more computation, and we know it is essential to have an enormous number of said shapes.
Henceforth, a viable solution to process as many rectangles as possible is to use parallelism and multi-core processing with the C library `omp.h` (OpenMP).
## Code snippets
* From the get-go the code greets us with two include statements:
```c
#include <stdio.h>
#include <omp.h>
```
* Furthermore, we define the `const int num_steps` *(the quantity of rectangles, the areas of which shall be summed)* and then the `double step` *(the dx, or the width of each rectangle)*.
* Now, entering the main scope of our program, we initialize the multi-threading aspect using `#pragma omp parallel`, where each thread runs concurrently with the others and calculates its partial area `local_area`.
```c
#pragma omp parallel
{
    int id = omp_get_thread_num();  // this thread's index
    int n = omp_get_num_threads();  // total number of threads
    int i;
    double x, local_area = 0;
    // Each thread sums every n-th rectangle, starting at its id.
    for (i = id; i < num_steps; i += n) {
        x = (i + 0.5) * step;
        local_area += 4.0 / (1.0 + x * x) * step;
    }
    // Only one thread at a time may add into the shared total.
    #pragma omp critical
    area += local_area;
}
```
>[!IMPORTANT]
>Before running this program, make sure you understand the limits of your machine; very large values of `num_steps` can take a long time to compute.
## The OpenMP (Open Multi-Processing) Library
Just as a side note, the **OpenMP** library comprises the following parts. Also, feel free to download, edit, commit, and leave feedback on the project.
### Compiler Directives
```c
#pragma omp parallel
#pragma omp critical
#pragma omp barrier
#pragma omp master
```
### Functions
```c
#include <omp.h>

int omp_get_thread_num(void);
int omp_get_num_threads(void);
```
### Compiling and Linking
```bash
gcc -fopenmp # C compiler
g++ -fopenmp # C++ compiler
```
### Environment Variables
```bash
export OMP_NUM_THREADS=8
export OMP_NESTED=TRUE
```
## License
MIT