Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ctoic/mpi
MPI Basics
- Host: GitHub
- URL: https://github.com/ctoic/mpi
- Owner: Ctoic
- Created: 2023-11-02T19:50:00.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2023-11-02T20:05:59.000Z (about 1 year ago)
- Last Synced: 2023-11-02T21:23:44.959Z (about 1 year ago)
- Language: C
- Homepage:
- Size: 11.7 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: readme.md
README
# Introduction to Parallel and Distributed Computing
- Parallel and distributed computing is a way to solve a large problem by dividing it into smaller parts and solving each of those parts simultaneously.
- Parallel computing has been around for decades, but it has gained popularity in recent years due to the emergence of Big Data, Artificial Intelligence, and Data Science.
- Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism.
- Parallelism has long been employed in high-performance computing, but it is gaining broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
- Parallel computing is closely related to concurrent computing; they are frequently used together and often conflated, though the two are distinct: it is possible to have parallelism without concurrency (such as bit-level parallelism), and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU). In parallel computing, a computational task is typically broken down into several, often many, very similar subtasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.

## How to Program Distributed Computers?
- Message Passing Based Programming Model: MPI
- Shared Memory Based Programming Model: OpenMP (see the shared-memory sketch after this list)
- Hybrid Programming Model: MPI + OpenMP
- GPU Programming Model: CUDA
- Cloud Computing: AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud, Alibaba Cloud, etc.
- Big Data Frameworks: Hadoop, Spark, etc.
- Quantum Computing: IBM Q, D-Wave, etc.
- Quantum Machine Learning: Qiskit, PennyLane, etc.
- Quantum Programming Languages: Q#
- Quantum Programming Frameworks, Libraries, and Tools: Qiskit, PennyLane, etc.
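For contrast with MPI's message passing, below is a minimal sketch of the shared-memory model using OpenMP, as referenced in the list above. The array size and values are arbitrary illustrations, not from this repository; it assumes a compiler with OpenMP support (e.g. `gcc -fopenmp sum.c -o sum`).

```c
/* Shared-memory sketch: all threads see the same array `a`, and the
   reduction clause merges each thread's partial sum into `sum`. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    enum { N = 1000000 };
    static double a[N];   /* shared by all threads */
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* Each thread sums a chunk of the shared array; OpenMP combines
       the per-thread partial sums at the end of the loop. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```

Note how no data is explicitly moved between threads: unlike message passing, communication happens implicitly through the shared address space.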
### Introduction to MPI
- MPI stands for Message Passing Interface. It is a specification for the developers and users of message-passing libraries. By itself, it is NOT a library, but rather the specification of what such a library should be.
- MPI is a library specification for the message-passing paradigm. The goal of the Message Passing Interface is to provide a widely used standard for writing message-passing programs, and the interface attempts to be both efficient and flexible.
- MPI primarily addresses the message-passing parallel programming model: data is moved from the address space of one process to that of another process through cooperative operations on each process.
- MPI is a language-independent communications protocol used to program parallel computers. It was developed in the early 1990s by the MPI Forum, a broadly based committee of parallel computer vendors, library writers, users, and applications specialists, which defined a standard interface for message-passing libraries.
- The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran 77 or C. There are several well-tested and efficient implementations of MPI, many of which are open source or in the public domain, and these implementations are portable across a wide variety of distributed-memory parallel architectures.
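As a concrete sketch of the cooperative operations described above, the minimal C program below starts the MPI runtime, reports each process's rank, and moves one integer from the address space of rank 0 to that of rank 1 with a matching MPI_Send/MPI_Recv pair. The file name, payload, and process count are arbitrary choices for illustration; it assumes any standard MPI implementation (e.g. MPICH or Open MPI) and its usual `mpicc`/`mpirun` wrappers.

```c
/* Build: mpicc hello.c -o hello
   Run:   mpirun -np 2 ./hello */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id (rank)     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */

    printf("Hello from rank %d of %d\n", rank, size);

    if (size >= 2) {
        int payload = 42;                   /* arbitrary example value */
        if (rank == 0) {
            /* Sender side of the cooperative pair: data leaves
               rank 0's address space. */
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int received;
            /* Receiver side: blocks until the matching send arrives. */
            MPI_Recv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", received);
        }
    }

    MPI_Finalize();                         /* shut the runtime down        */
    return 0;
}
```

MPI_COMM_WORLD is the communicator containing every process started by `mpirun`; rank and size identify each process within it, which is how the same source file can behave differently on each process.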