{"id":18762636,"url":"https://github.com/deephyper/nasbigdata","last_synced_at":"2026-04-04T00:03:10.666Z","repository":{"id":104962479,"uuid":"279793726","full_name":"deephyper/NASBigData","owner":"deephyper","description":"Neural architecture search for big data problems","archived":false,"fork":false,"pushed_at":"2021-11-08T16:24:31.000Z","size":1263,"stargazers_count":4,"open_issues_count":1,"forks_count":2,"subscribers_count":0,"default_branch":"master","last_synced_at":"2025-06-21T04:47:25.663Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-2-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/deephyper.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2020-07-15T07:10:42.000Z","updated_at":"2022-12-08T22:11:17.000Z","dependencies_parsed_at":"2023-11-30T16:15:10.155Z","dependency_job_id":null,"html_url":"https://github.com/deephyper/NASBigData","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/deephyper/NASBigData","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deephyper%2FNASBigData","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deephyper%2FNASBigData/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deephyper%2FNASBigData/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deephyper%2FNASBigData/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/deephyper","download_url":"https://codeload.github.com/deephyper/
NASBigData/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deephyper%2FNASBigData/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":273381915,"owners_count":25095330,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-03T02:00:09.631Z","response_time":76,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-07T18:22:33.038Z","updated_at":"2026-04-04T00:03:10.650Z","avatar_url":"https://github.com/deephyper.png","language":"Jupyter Notebook","readme":"# AgEBO-Tabular\n\n[![DOI](https://zenodo.org/badge/279793726.svg)](https://zenodo.org/badge/latestdoi/279793726)\n\nThe code is available at [NASBigData Github repo](https://github.com/deephyper/NASBigData).\n\nAging Evolution with Bayesian Optimization (AgEBO) is a nested-distributed algorithm to generate better neural architectures. 
AgEBO's advantages are:\n\n- the parallel evaluation of neural networks on computing resources (e.g., cores, GPUs, nodes).\n- the parallel training of each evaluated neural network using data parallelism (Horovod).\n- the joint optimization of hyperparameters and neural architectures, which enables automatic adaptation of the data-parallelism settings to avoid a loss of accuracy.\n\nThis repo contains the experimental materials linked to the implementation of the AgEBO algorithm in DeepHyper's repo.\nThe version of DeepHyper used is: [e8e07e2db54dceed83b626104b66a07509a95a8c](https://github.com/deephyper/deephyper/commit/e8e07e2db54dceed83b626104b66a07509a95a8c)\n\n## Environment information\n\nThe experiments were executed on the [ThetaGPU](https://www.alcf.anl.gov/alcf-resources/theta) supercomputer.\n\n* OS Login Node: Ubuntu 18.04.5 LTS (GNU/Linux 4.15.0-112-generic x86_64)\n* OS Compute Node: NVIDIA DGX Server Version 4.99.9 (GNU/Linux 5.3.0-62-generic x86_64)\n* Python: Miniconda Python 3.8\n\nFor more information about the environment, refer to `infos-sc21.txt`, which was generated with the provided SC [Author-Kit](https://github.com/SC-Tech-Program/Author-Kit).\n\n## Installation\n\nInstall Miniconda: [conda.io](https://docs.conda.io/en/latest/miniconda.html). Then create a Python environment:\n\n```console\nconda create -n dh-env python=3.8\n```\n\nThen install DeepHyper. For the detailed installation process of DeepHyper, follow the instructions at: [deephyper.readthedocs.io](https://deephyper.readthedocs.io/). 
We propose the following commands:\n\n```console\nconda activate dh-env\nconda install gxx_linux-64 gcc_linux-64 -y\ngit clone https://github.com/deephyper/deephyper.git\ncd deephyper/\ngit checkout e8e07e2db54dceed83b626104b66a07509a95a8c\npip install -e .\npip install \"ray[default]\"\n```\n\nFinally, install the NASBigData package:\n\n```console\ncd ..\ngit clone https://github.com/deephyper/NASBigData.git\ncd NASBigData/\npip install -e .\n```\n\n## Download and Generate datasets from ECP-Candle\n\nInstall the following dependencies:\n\n```console\npip install numba\npip install astropy\npip install patsy\npip install statsmodels\n```\n\nFor the Combo dataset run:\n\n```console\ncd NASBigData/nas_big_data/combo/\nsh download_data.sh\n```\n\nFor the Attn dataset run:\n\n```console\ncd NASBigData/nas_big_data/attn/\nsh download_data.sh\n```\n\n## How it works\n\nThe AgEBO algorithm (Aging Evolution with Bayesian Optimization) was directly added to the DeepHyper project and can be found [here](https://github.com/deephyper/deephyper/blob/e8e07e2db54dceed83b626104b66a07509a95a8c/deephyper/search/nas/agebo.py#L90).\n\nTo submit and run an experiment on the ThetaGPU system, the following command is used:\n\n```console\ndeephyper ray-submit nas agebo -w combo_2gpu_8_agebo_sync -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh --n-jobs 16\n```\n\nwhere\n\n* `-w` denotes the name of the experiment.\n* `-n` denotes the number of nodes requested.\n* `-t` denotes the allocation time (minutes) requested.\n* `-A` denotes the project's name at the ALCF.\n* `-q` denotes the queue's name.\n* `--problem` is the Python import path to the Problem definition (which defines the hyperparameter and neural architecture search spaces, the loss to optimize, etc.).\n* `--run` is the Python import path to the run function 
(which evaluates each configuration sampled by the search).\n* `--max-evals` denotes the maximum number of evaluations to perform (often set to a high value so that the search uses the whole allocation time).\n* `--num-cpus-per-task` the number of cores used by each evaluation.\n* `--num-gpus-per-task` the number of GPUs used by each evaluation.\n* `-as` the absolute path to the activation script `SetUpEnv.sh` (used to initialize the proper environment on compute nodes when the allocation starts).\n* `--n-jobs` the number of processes that the surrogate model of the Bayesian optimizer can use.\n\nThe `deephyper ray-submit ...` command will create a directory named after `-w` and automatically generate a submission script for Cobalt (the scheduler at the ALCF). The generated submission script is composed of the following parts.\n\nThe initialization of the environment:\n\n```bash\n#!/bin/bash -x\n#COBALT -A datascience\n#COBALT -n 8\n#COBALT -q full-node\n#COBALT -t 180\n\nmkdir infos \u0026\u0026 cd infos\n\nACTIVATE_PYTHON_ENV=\"/lus/grand/projects/datascience/regele/thetagpu/agebo/SetUpEnv.sh\"\necho \"Script to activate Python env: $ACTIVATE_PYTHON_ENV\"\nsource $ACTIVATE_PYTHON_ENV\n```\n\nThe initialization of the Ray cluster:\n\n```bash\n# USER CONFIGURATION\nCPUS_PER_NODE=8\nGPUS_PER_NODE=8\n\n# Script to launch Ray cluster\n# Getting the node names\nmapfile -t nodes_array -d $'\\n' \u003c $COBALT_NODEFILE\n\nhead_node=${nodes_array[0]}\nhead_node_ip=$(dig $head_node a +short | awk 'FNR==2')\n\n# if we detect a space character in the head node IP, we'll\n# convert it to an ipv4 address. This step is optional.\nif [[ \"$head_node_ip\" == *\" \"* ]]; then\nIFS=' ' read -ra ADDR \u003c\u003c\u003c\"$head_node_ip\"\nif [[ ${#ADDR[0]} -gt 16 ]]; then\n  head_node_ip=${ADDR[1]}\nelse\n  head_node_ip=${ADDR[0]}\nfi\necho \"IPV6 address detected. 
We split the IPV4 address as $head_node_ip\"\nfi\n\n# Starting the Ray Head Node\nport=6379\nip_head=$head_node_ip:$port\nexport ip_head\necho \"IP Head: $ip_head\"\n\necho \"Starting HEAD at $head_node\"\nssh -tt $head_node_ip \"source $ACTIVATE_PYTHON_ENV; \\\n    ray start --head --node-ip-address=$head_node_ip --port=$port \\\n    --num-cpus $CPUS_PER_NODE --num-gpus $GPUS_PER_NODE --block\" \u0026\n\n# optional, though may be useful in certain versions of Ray \u003c 1.0.\nsleep 10\n\n# number of nodes other than the head node\nworker_num=$((${#nodes_array[*]} - 1))\necho \"$worker_num workers\"\n\nfor ((i = 1; i \u003c= worker_num; i++)); do\n    node_i=${nodes_array[$i]}\n    node_i_ip=$(dig $node_i a +short | awk 'FNR==1')\n    echo \"Starting WORKER $i at $node_i with ip=$node_i_ip\"\n    ssh -tt $node_i_ip \"source $ACTIVATE_PYTHON_ENV; \\\n        ray start --address $ip_head \\\n        --num-cpus $CPUS_PER_NODE --num-gpus $GPUS_PER_NODE --block\" \u0026\n    sleep 5\ndone\n```\n\nThe DeepHyper command to start the search:\n\n```bash\ndeephyper nas agebo --evaluator ray --ray-address auto \\\n    --problem nas_big_data.combo.problem_agebo.Problem \\\n    --run deephyper.nas.run.tf_distributed.run \\\n    --max-evals 10000 \\\n    --num-cpus-per-task 2 \\\n    --num-gpus-per-task 2 \\\n    --n-jobs=16\n```\n\n## Commands to reproduce\n\nAll the commands can be found in the [NASBigData repo](https://github.com/deephyper/NASBigData).\n\nThe experiments are named `{dataset}_{x}gpu_{y}_{z}_{other}` where\n\n* `dataset` is the name of the corresponding dataset (e.g., combo or attn).\n* `x` is the number of GPUs used for each trained neural network (e.g., 1, 2, 4, 8).\n* `y` is the number of nodes used for the allocation (e.g., 1, 2, 4, 8, 16).\n* `z` is the name of the algorithm (e.g., age, agebo).\n* `other` denotes other keywords used to differentiate some experiments (e.g., kappa value).\n\nWe give the full set of commands used to run our 
experiments.\n\n### Combo dataset\n\n* combo_1gpu_8_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_1gpu_8_age -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.alpha.run --max-evals 10000 --num-cpus-per-task 1 --num-gpus-per-task 1 -as ../SetUpEnv.sh\n```\n\n* combo_2gpu_8_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_2gpu_8_age -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh\n```\n\n* combo_8gpu_8_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_8gpu_8_age -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 8 --num-gpus-per-task 8 -as ../SetUpEnv.sh\n```\n\n* combo_8gpu_8_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_8gpu_8_agebo -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 8 --num-gpus-per-task 8 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_2gpu_8_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_2gpu_8_agebo -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_1gpu_2_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_1gpu_2_age -n 2 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.alpha.run --max-evals 10000 --num-cpus-per-task 1 --num-gpus-per-task 1 -as ../SetUpEnv.sh\n```\n\n* combo_2gpu_4_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_2gpu_4_age -n 4 -t 180 -A 
datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh\n```\n\n* combo_4gpu_8_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_4gpu_8_age -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 4 --num-gpus-per-task 4 -as ../SetUpEnv.sh\n```\n\n* combo_8gpu_16_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_8gpu_16_age -n 16 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 8 --num-gpus-per-task 8 -as ../SetUpEnv.sh\n```\n\n* combo_1gpu_2_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_1gpu_2_agebo -n 2 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.alpha.run --max-evals 10000 --num-cpus-per-task 1 --num-gpus-per-task 1 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_2gpu_4_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_2gpu_4_agebo -n 4 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_4gpu_8_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_4gpu_8_agebo -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 4 --num-gpus-per-task 4 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_8gpu_16_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_8gpu_16_agebo -n 16 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run 
deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 8 --num-gpus-per-task 8 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_4gpu_8_agebo_1_96\n\n```console\ndeephyper ray-submit nas agebo -w combo_4gpu_8_agebo_1_96 -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 4 --num-gpus-per-task 4 -as ../SetUpEnv.sh --n-jobs 16 --kappa 1.96\n```\n\n* combo_4gpu_8_agebo_19_6\n\n```console\ndeephyper ray-submit nas agebo -w combo_4gpu_8_agebo_19_6 -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 4 --num-gpus-per-task 4 -as ../SetUpEnv.sh --n-jobs 16 --kappa 19.6\n```\n\n* combo_1gpu_8_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_1gpu_8_agebo -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.alpha.run --max-evals 10000 --num-cpus-per-task 1 --num-gpus-per-task 1 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_4gpu_8_ambsmixed\n\n```console\ndeephyper ray-submit nas ambsmixed -w combo_4gpu_8_ambsmixed -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 4 --num-gpus-per-task 4 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_4gpu_8_regevomixed\n\n```console\ndeephyper ray-submit nas regevomixed -w combo_4gpu_8_regevomixed -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 4 --num-gpus-per-task 4 -as ../SetUpEnv.sh\n```\n\n* combo_2gpu_1_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_2gpu_1_age -n 1 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run 
deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh\n```\n\n* combo_2gpu_2_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_2gpu_2_age -n 2 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh\n```\n\n* combo_2gpu_16_age\n\n```console\ndeephyper ray-submit nas regevo -w combo_2gpu_16_age -n 16 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_ae.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh\n```\n\n* combo_2gpu_1_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_2gpu_1_agebo -n 1 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_2gpu_2_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_2gpu_2_agebo -n 2 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* combo_2gpu_16_agebo\n\n```console\ndeephyper ray-submit nas agebo -w combo_2gpu_16_agebo -n 16 -t 180 -A datascience -q full-node --problem nas_big_data.combo.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n### Attn dataset\n\n* attn_1gpu_8_age\n\n```console\ndeephyper ray-submit nas regevo -w attn_1gpu_8_age -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.attn.problem_ae.Problem --run deephyper.nas.run.alpha.run --max-evals 10000 --num-cpus-per-task 1 
--num-gpus-per-task 1 -as ../SetUpEnv.sh\n```\n\n* attn_1gpu_8_agebo\n\n```console\ndeephyper ray-submit nas agebo -w attn_1gpu_8_agebo -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.attn.problem_agebo.Problem --run deephyper.nas.run.alpha.run --max-evals 10000 --num-cpus-per-task 1 --num-gpus-per-task 1 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* attn_2gpu_8_agebo\n\n```console\ndeephyper ray-submit nas agebo -w attn_2gpu_8_agebo -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.attn.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 2 --num-gpus-per-task 2 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* attn_4gpu_8_agebo\n\n```console\ndeephyper ray-submit nas agebo -w attn_4gpu_8_agebo -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.attn.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 4 --num-gpus-per-task 4 -as ../SetUpEnv.sh --n-jobs 16\n```\n\n* attn_8gpu_8_agebo\n\n```console\ndeephyper ray-submit nas agebo -w attn_8gpu_8_agebo -n 8 -t 180 -A datascience -q full-node --problem nas_big_data.attn.problem_agebo.Problem --run deephyper.nas.run.tf_distributed.run --max-evals 10000 --num-cpus-per-task 8 --num-gpus-per-task 8 -as ../SetUpEnv.sh --n-jobs 16\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeephyper%2Fnasbigdata","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeephyper%2Fnasbigdata","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeephyper%2Fnasbigdata/lists"}