Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/PrincetonUniversity/hpc_beginning_workshop
- Host: GitHub
- URL: https://github.com/PrincetonUniversity/hpc_beginning_workshop
- Owner: PrincetonUniversity
- License: MIT
- Created: 2017-04-11T15:23:59.000Z (over 7 years ago)
- Default Branch: main
- Last Pushed: 2024-05-15T21:11:18.000Z (6 months ago)
- Last Synced: 2024-07-04T02:20:24.809Z (4 months ago)
- Language: Shell
- Size: 26.8 MB
- Stars: 134
- Watchers: 8
- Forks: 42
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Getting Started with the Research Computing Clusters - Running Example Jobs on the HPC Clusters
## About
This resource provides example jobs that can be run on Princeton University's Research Computing clusters.

## Useful links
* [Getting Started with HPC at Princeton](https://researchcomputing.princeton.edu/getting-started) - Landing page for getting started with our clusters.
* [Research Computing KnowledgeBase](https://researchcomputing.princeton.edu/support/knowledge-base) - Search all of our help articles for using the clusters at Princeton.
* [Research Computing FAQ](https://researchcomputing.princeton.edu/support/faq) - View answers to frequently asked questions.
* [Research Computing Support Page](https://researchcomputing.princeton.edu/support) - Where to go if you need more help.

# Running Example Jobs on the HPC Clusters
The sample jobs in this repository are written with Nobel, Adroit, Della, Stellar, and Tiger in mind. You will probably need to use different environment modules on Traverse (see the module sketch below).
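
As a quick illustration of how to check which software modules a cluster provides, the commands below use the standard `module` tool available on the clusters; the `anaconda3` name and version are only examples and the exact choices differ from cluster to cluster:

```
# see every environment module available on this cluster
module avail

# narrow the listing to a specific package (names/versions vary by cluster)
module avail anaconda3

# load one module into the current shell session
module load anaconda3/2024.2   # hypothetical version; pick one shown by "module avail"

# confirm what is currently loaded
module list
```
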
Follow the directions below to begin running simple jobs on Adroit.
After SSHing to Adroit, the first step is to `cd` (change directory) to your directory on `/scratch/network/`. We do this because `/scratch/network` is a much faster filesystem with more space than `/home`.

```
ssh <YourNetID>@adroit.princeton.edu  # VPN required from off-campus
cd /scratch/network/<YourNetID>       # on Tiger, Della or Stellar replace /scratch/network/ with /scratch/gpfs/
git clone https://github.com/PrincetonUniversity/hpc_beginning_workshop
cd hpc_beginning_workshop
```

Then choose an example and follow the directions. For instance:
```
cd python/cpu
cat README.md
```

For more detailed instructions on running an example Python or R job, see the [First Slurm Job](https://researchcomputing.princeton.edu/get-started/guide-princeton-clusters/3-first-slurm-job) section of the [Guide to the Princeton Clusters](https://researchcomputing.princeton.edu/get-started/guide-princeton-clusters).
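
As a rough sketch of what such a job looks like, the script below follows the general Slurm pattern used by the examples in this repository; the job name, module version, and `myscript.py` are placeholders, so defer to the actual files in, e.g., `python/cpu`:

```
#!/bin/bash
#SBATCH --job-name=py-example    # short name for the job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks
#SBATCH --cpus-per-task=1        # cpu-cores per task
#SBATCH --mem-per-cpu=4G         # memory per cpu-core
#SBATCH --time=00:05:00          # total run time limit (HH:MM:SS)

module purge
module load anaconda3/2024.2     # hypothetical version; pick one shown by "module avail"

python myscript.py               # placeholder for the example's Python script
```

A script like this would be submitted with `sbatch job.slurm` and monitored with `squeue -u $USER`.
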
## Where to store your files
To familiarize yourself with the cluster's file systems and where to store your files, review our [Data Storage](https://researchcomputing.princeton.edu/support/knowledge-base/data-storage) page.
**IMPORTANT**: *You should run your jobs out of /scratch/network on Adroit and /scratch/gpfs on the other clusters. These filesystems are very fast and provide vast amounts of storage. Do not run jobs out of /tigress or /projects. These filesystems are slow and should only be used for backing up the files that you produce on /scratch/gpfs or /scratch/network. Your /home directory on all clusters is small and should only be used for storing source code, executables, Conda environments, and small data sets.*
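
To make that workflow concrete, here is a minimal sketch of running a job from /scratch/gpfs and then backing the results up to /tigress; the NetID, project, and results directory names are placeholders, and the exact paths depend on your account and allocation:

```
# run the job from the fast scratch filesystem
cd /scratch/gpfs/<YourNetID>/my_project
sbatch job.slurm

# once the job has finished, copy the results to the slower, backed-up filesystem
rsync -av results/ /tigress/<YourNetID>/my_project/results/
```
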