Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Last synced: 18 days ago
- Host: GitHub
- URL: https://github.com/adamcanray/private-ipfs-worker
- Owner: adamcanray
- Created: 2024-02-15T05:13:50.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-02-28T06:03:05.000Z (10 months ago)
- Last Synced: 2024-02-29T04:39:25.860Z (10 months ago)
- Language: Shell
- Size: 7.81 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# IPFS Worker Node
This repository contains the IPFS worker node for a private IPFS cluster.
## Run
1. Build the image
```bash
docker build . -t ipfs_worker --no-cache
```
> Add a `--platform` flag (ex: `--platform linux/arm64/v8`) if you want to build the image for a specific platform.
2. Run the container
```bash
docker run --name <worker_name> -e MANAGER_IPFS_ID=<manager_ipfs_id> -e MANAGER_IP_ADDRESS=<manager_ip_address> -d ipfs_worker
```
> We can get the `MANAGER_IPFS_ID` and `MANAGER_IP_ADDRESS` from the manager node:
```bash
docker exec -it <manager_container> /bin/sh
ipfs id      # prints the manager's peer ID (MANAGER_IPFS_ID)
ifconfig     # shows the manager's IP address (MANAGER_IP_ADDRESS)
```
> To expose ports, add for example `-p 4001:4001 -p 5001:5001 -p 8080:8080` to the `docker run` command. Note that the IPFS config does not expose the API server (`:5001`) to the public.
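The run step above can be sketched end-to-end. Everything below is a hypothetical sketch: the worker name, peer ID, and IP address are placeholders, not real values; read the real ones from the manager node as shown above.

```bash
# Sketch only: assemble the `docker run` command for a worker node.
# WORKER_NAME, MANAGER_IPFS_ID, and MANAGER_IP_ADDRESS are placeholders.
WORKER_NAME="ipfs_worker_1"
MANAGER_IPFS_ID="PLACEHOLDER_PEER_ID"     # from `ipfs id` on the manager
MANAGER_IP_ADDRESS="172.17.0.2"           # from `ifconfig` on the manager

run_cmd="docker run --name ${WORKER_NAME} \
  -e MANAGER_IPFS_ID=${MANAGER_IPFS_ID} \
  -e MANAGER_IP_ADDRESS=${MANAGER_IP_ADDRESS} \
  -p 4001:4001 -p 8080:8080 \
  -d ipfs_worker"

echo "$run_cmd"   # inspect the command, then run it with: eval "$run_cmd"
```

Only the swarm port (`4001`) and gateway (`8080`) are mapped in this sketch; the API port (`5001`) is left unmapped, matching the note above.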
## Addition
### About the project
This project runs well on Alpine Linux with an ARM base (`linux/arm64/v8`). If you run it on a different distribution, you may need to adjust some scripts or service configs (ex: `services/ipfs`, etc).
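Before adjusting anything for another distribution, it can help to confirm which OS/architecture the built image actually targets. A minimal sketch using `docker image inspect` (the image name comes from the Run section; the command is only assembled as a string here, run it where Docker is available):

```bash
# Sketch: build the inspect command that prints the image's os/arch.
IMAGE="ipfs_worker"
inspect_cmd="docker image inspect ${IMAGE} --format '{{.Os}}/{{.Architecture}}'"
echo "$inspect_cmd"
```

For the image described here, running the command should report `linux/arm64`.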
### Project Debugging
If the ipfs daemon is not running, we can start it manually on each node with the `rc-service ipfs start` command. First, exec into the container:
```bash
docker exec -it <worker_container> /bin/sh
```
Then start the ipfs daemon:
```bash
rc-status -a                    # list services and their current state
rc-service ipfs start           # try to start the ipfs service
touch /run/openrc/softlevel     # OpenRC inside a container needs this file
rc-service ipfs restart         # restart now that the softlevel file exists
rc-update add ipfs default      # start ipfs automatically at the default runlevel
sleep 1
rc-status -a                    # confirm the ipfs service is started
```

## Notes
- In the [previous project](https://github.com/adamcanray/Private-IPFS-Cluster-Data-Replication), the manager and worker were in one code base and ran with Docker Compose to simplify automation. Since the worker project is no longer in the same code base, we have to run it in separate steps (ex: run the manager node manually, then run the worker nodes and point them to the manager manually); see the [Run](#run) section above.
- The `swarm.key` should be kept confidential (it is generated once on the manager).
- Repeat step 2 of the [Run](#run) section above to run additional worker nodes.
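Since the `swarm.key` is generated once on the manager, each new worker node needs a copy of it. A minimal sketch, assuming a manager container named `ipfs_manager` and the repo path `/root/.ipfs` (both hypothetical; adjust to this image's actual container name and layout):

```bash
# Sketch only: assemble the command that copies swarm.key out of the manager.
# Container name and repo path are assumptions, not taken from this repo.
MANAGER_CONTAINER="ipfs_manager"
KEY_PATH="/root/.ipfs/swarm.key"
copy_cmd="docker cp ${MANAGER_CONTAINER}:${KEY_PATH} ./swarm.key"
echo "$copy_cmd"
```

Keep the copied file private (e.g. `chmod 600 swarm.key`): anyone holding the key can join the private network.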