{"id":37643362,"url":"https://github.com/charmplusplus/charm","last_synced_at":"2026-01-16T11:22:11.724Z","repository":{"id":37978924,"uuid":"182860956","full_name":"charmplusplus/charm","owner":"charmplusplus","description":"The Charm++ parallel programming system. Visit https://charmplusplus.org/ for more information.","archived":false,"fork":false,"pushed_at":"2026-01-12T03:12:19.000Z","size":265198,"stargazers_count":228,"open_issues_count":608,"forks_count":57,"subscribers_count":20,"default_branch":"main","last_synced_at":"2026-01-12T05:51:53.157Z","etag":null,"topics":["asynchronous-tasks","cpp","hpc","parallel-computing","runtime"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/charmplusplus.png","metadata":{"files":{"readme":"README.ampi","changelog":"CHANGES","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":".zenodo.json","notice":"NOTICE","maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2019-04-22T20:14:58.000Z","updated_at":"2025-12-17T22:33:38.000Z","dependencies_parsed_at":"2023-02-18T17:16:24.711Z","dependency_job_id":"7649a6a0-ec3f-407c-9641-21437d3a4cbb","html_url":"https://github.com/charmplusplus/charm","commit_stats":null,"previous_names":["charmplusplus/charm","uiuc-ppl/charm"],"tags_count":79,"template":false,"template_full_name":null,"purl":"pkg:github/charmplusplus/charm","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charmplusplus%2Fcharm","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charmplusplus%2Fcharm/tags",
"releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charmplusplus%2Fcharm/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charmplusplus%2Fcharm/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/charmplusplus","download_url":"https://codeload.github.com/charmplusplus/charm/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/charmplusplus%2Fcharm/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28478220,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-16T06:30:42.265Z","status":"ssl_error","status_checked_at":"2026-01-16T06:30:16.248Z","response_time":107,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["asynchronous-tasks","cpp","hpc","parallel-computing","runtime"],"created_at":"2026-01-16T11:22:11.075Z","updated_at":"2026-01-16T11:22:11.715Z","avatar_url":"https://github.com/charmplusplus.png","language":"C++","funding_links":[],"categories":[],"sub_categories":[],"readme":"\nAdaptive MPI (AMPI)\n-------------------\nAMPI is an implementation of the MPI standard written on top of Charm++, meant\nto give MPI applications access to high-level, application-independent features\nsuch as overdecomposition (processor virtualization), dynamic load balancing,\nautomatic fault tolerance, and overlap of computation and 
communication. For\nmore information on all topics related to AMPI, consult the AMPI manual here:\n\n    http://charm.cs.illinois.edu/manuals/html/ampi/manual.html\n\n\nBuilding AMPI\n-------------\nAMPI has its own target in the build system. You can run the top-level\nbuild script interactively using \"./build\", or you can specify your\narchitecture, operating system, compilers, and other options directly.\nFor example:\n\n    ./build AMPI netlrts-linux-x86_64 gfortran gcc --with-production\n\n\nCompiling and Linking AMPI Programs\n-----------------------------------\nAMPI source files can be compiled and linked with the wrappers found\nin bin/, such as ampicc, ampicxx, ampif77, and ampif90:\n\n    ampif90 pgm.f90 -o pgm\n\nFor consistency with other MPI implementations, these wrappers are also\nprovided using their standard names with the suffix \".ampi\".\nAdditionally, the \"bin/ampi\" subdirectory contains the wrappers with\ntheir standard names, for simplicity of overriding the default system MPI\nvia the $PATH environment variable.\n\n\nRunning AMPI Programs\n---------------------\nAMPI programs can be run with charmrun like any other Charm++ program. 
In\naddition to the number of processes, specified with \"+p n\", AMPI programs\nalso take the total number of virtual processors (VPs) to run with as \"+vp n\".\nFor example, to run an AMPI program 'pgm' on 4 processors using 32 ranks, do:\n\n    ./charmrun +p 4 ./pgm +vp 32\n\nTo run with dynamic load balancing, add \"+balancer \u003cLB\u003e\":\n\n    ./charmrun +p 4 ./pgm +vp 32 +balancer RefineLB\n\n\nPorting to AMPI\n---------------\nGlobal and static variables are unsafe for use in virtualized AMPI programs.\nThis is because globals are defined at the process level, and AMPI ranks are\nimplemented as user-level threads, which may share a process with other ranks.\nTherefore, to run with more than 1 VP per processor, all globals and statics\nthat are non-readonly and whose value does not depend on rank must be modified\nto use local storage. Consult the AMPI manual for more information on global\nvariable privatization and automated approaches to privatization.\n\nAMPI programs must have the following main function signatures, so that AMPI\ncan bootstrap before invoking the user's main function:\n    * C/C++ programs should use \"int main(int argc, char **argv)\"\n    * Fortran programs must use \"Subroutine MPI_Main\" instead of\n      \"Program Main\"\n\n\nIncompatibilities and Extensions\n--------------------------------\nAMPI has some known flaws and incompatibilities with other MPI implementations:\n    * RMA routines do not support derived datatypes.\n    * Not all collectives are supported on intercommunicators.\n    * No support for MPI_Pack_external, MPI_Pack_external_size, MPI_Unpack_external.\n\nAMPI also has extensions to the MPI standard to enable use of the high-level\nfeatures provided by the Charm++ adaptive runtime system. 
All extensions are\nprefixed with AMPI_:\n    * AMPI_Migrate tells the runtime system that the application has reached a\n      point at which the runtime system may serialize and migrate ranks.\n      It is used for dynamic load balancing and fault tolerance. See the AMPI\n      manual for more information on how to use it.\n    * AMPI_Register_pup is used to register PUP routines and user data.\n    * AMPI_Get_pup_data returns a pointer to user data managed by the runtime.\n    * AMPI_Load_set_value sets the calling rank's load to the given user value.\n    * AMPI_Load_start_measure starts load balance information collection.\n    * AMPI_Load_stop_measure stops load balance information collection.\n    * AMPI_Load_reset_measure clears the load balance database.\n    * AMPI_Migrate_to_pe migrates the calling rank to the given PE.\n    * AMPI_Set_migratable sets the migratability of the calling rank.\n    * AMPI_Command_argument_count returns the number of command line arguments\n      given to a Fortran AMPI program excluding charmrun and AMPI parameters.\n    * AMPI_Get_command_argument returns an argument from the command line\n      to a Fortran AMPI program.\n\nMPI-IO support is available via our port of the ROMIO library. However:\n    * ROMIO is not built by default because the current port is\n      incompatible with GCC 14 and later. Add --with-romio to your\n      build line to enable MPI-IO support via ROMIO.\n\nNote that AMPI defines a preprocessor symbol \"AMPI\" so that user codes can\ncheck for AMPI's presence at compile time using \"#ifdef AMPI\".\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcharmplusplus%2Fcharm","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcharmplusplus%2Fcharm","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcharmplusplus%2Fcharm/lists"}