{"id":13564974,"url":"https://github.com/bigscience-workshop/bigscience","last_synced_at":"2025-05-16T05:06:08.651Z","repository":{"id":36993023,"uuid":"365552545","full_name":"bigscience-workshop/bigscience","owner":"bigscience-workshop","description":"Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data.","archived":false,"fork":false,"pushed_at":"2024-07-29T17:13:38.000Z","size":3414,"stargazers_count":990,"open_issues_count":21,"forks_count":100,"subscribers_count":37,"default_branch":"master","last_synced_at":"2025-04-08T15:06:34.405Z","etag":null,"topics":["machine-learning","models","nlp","training"],"latest_commit_sha":null,"homepage":"","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bigscience-workshop.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-05-08T15:46:42.000Z","updated_at":"2025-04-04T02:13:27.000Z","dependencies_parsed_at":"2024-08-01T13:21:29.447Z","dependency_job_id":"1d92c66c-3927-4d0b-b515-43f1de8e5ee9","html_url":"https://github.com/bigscience-workshop/bigscience","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigscience-workshop%2Fbigscience","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigscience-workshop%2Fbigscience/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigscience-workshop%2Fbigscience/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigscience-workshop%2Fbigscience/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bigscience-workshop","download_url":"https://codeload.github.com/bigscience-workshop/bigscience/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254471061,"owners_count":22076585,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["machine-learning","models","nlp","training"],"created_at":"2024-08-01T13:01:38.739Z","updated_at":"2025-05-16T05:06:05.066Z","avatar_url":"https://github.com/bigscience-workshop.png","language":"Shell","readme":"# bigscience\n\n[Research workshop on large language models - The Summer of Language Models 21](https://bigscience.huggingface.co/)\n\nAt the moment we have 2 code repos:\n\n1. https://github.com/bigscience-workshop/Megatron-DeepSpeed - this is our flagship code base\n2. https://github.com/bigscience-workshop/bigscience - (this repo) for everything else - docs, experiments, etc.\n\nCurrently, the most active segments of this repo are:\n\n- [JZ](./jz/) - Lots of information about our work environment which helps evaluate, plan and get things done\n- [Experiments](./experiments) - many experiments are being done. Documentation, result tables, scripts and logs are all there\n- [Datasets info](./data/)\n- [Train](./train) - all the information about the current trainings (see below for the most important ones)\n\nWe have READMEs for specific aspects, such as:\n- [hub integration](./tools/README.md)\n\n\n## Trainings\n\nWhile we keep detailed chronicles of experiments and findings for some of the main trainings, here is a doc that contains a summary of the most important findings: [Lessons learned](train/lessons-learned.md)\n\n\n### Train 1 - 13B - unmodified Megatron gpt2 - baseline\n\n* [the full spec and discussions](./train/tr1-13B-base)\n* [the training script](./train/tr1-13B-base/tr1-13B-round1.slurm)\n* checkpoints and logs:\n   - [tensorboard](https://huggingface.co/bigscience/tr1-13B-tensorboard/tensorboard)\n   - [logs](https://huggingface.co/bigscience/tr1-13B-logs/)\n* [chronicles](./train/tr1-13B-base/chronicles.md)\n\nYou can watch the training logs live by running this `tail -f` like script over remote log file that gets synced to the hub once an hour:\n```\nperl -e '$u=shift; $b=0; while(1){($e)=qx[curl -sI $u]=~/content-length: (\\d+)/; \\\nprint qx[curl -sr $b-$e -L $u] if $e\u003e$b; $b=$e; sleep 300}' \\\nhttps://huggingface.co/bigscience/tr1-13B-logs/resolve/main/main_log.txt\n\n```\n\n### Train 3\n\nArchitecture and scaling baseline runs: no fancy tricks, just GPT2. Here are links to the respective tensorboards:\n\n| Size                \t| 1B3 \t| 760M \t| 350M \t| 125M \t|\n|---------------------\t|-----\t|------\t|------\t|------\t|\n| C4 + low warmup     \t| [a](https://huggingface.co/bigscience/tr3-1B3-modeling-baseline-tensorboard)   \t| [b](https://huggingface.co/bigscience/tr3b-760M-modeling-baseline-tensorboard)    \t| [c](https://huggingface.co/bigscience/tr3c-350M-modeling-baseline-tensorboard)    \t|      \t|\n| OSCAR + low warmup  \t| [f](https://huggingface.co/bigscience/tr3f-1B3-diagnostic2-low-warmup-oscar-tensorboard)   \t|      \t|      \t|      \t|\n| C4 + high warmup    \t| [e](https://huggingface.co/bigscience/tr3e-1B3-diagnostic1-warmup-c4-tensorboard)   \t|      \t|      \t|      \t|\n| OSCAR + high warmup \t| **[d (current baseline)](https://huggingface.co/bigscience/tr3d-1B3-more-warmup-tensorboard)**   \t| [g](https://huggingface.co/bigscience/tr3g-760M-v2-tensorboard)    \t| [h](https://huggingface.co/bigscience/tr3h-350M-v2-tensorboard)    \t| [i](https://huggingface.co/bigscience/tr3i-125M-v2-tensorboard)    \t|\n| Pile + high warmup  \t| [m](https://huggingface.co/bigscience/tr3m-1B3-pile-tensorboard)   \t| [j](https://huggingface.co/bigscience/tr3j-760M-pile-tensorboard)    \t| [k](https://huggingface.co/bigscience/tr3k-350M-pile-tensorboard)    \t| [l](https://huggingface.co/bigscience/tr3l-125M-pile-tensorboard)    \t|\n\n\n### Train 8\n\n104B - unmodified Megatron gpt2 - with extra-wide hidden size to learn how to deal with training instabilities\n\n* [the full spec and discussions](./train/tr8-104B-wide)\n* [the training script](./train/tr8-104B-wide/tr8-104B.slurm)\n* checkpoints and logs:\n   - [tensorboard](https://huggingface.co/bigscience/tr8-104B-logs/tensorboard)\n   - [logs](https://huggingface.co/bigscience/tr8-104B-logs/tree/main/logs)\n* [chronicles](./train/tr8-104B-wide/chronicles.md)\n\nYou can watch the training logs live by running this `tail -f` like script over remote log file that gets synced to the hub once an hour:\n```\nperl -e '$u=shift; $b=0; while(1){($e)=qx[curl -sI $u]=~/content-length: (\\d+)/; \\\nprint qx[curl -sr $b-$e -L $u] if $e\u003e$b; $b=$e; sleep 300}' \\\nhttps://cdn-lfs.huggingface.co/bigscience/tr8-104B-logs/b2cc478d5ae7c9ec937ea2db1d2fe09de593fa2ec38c171d6cc5dca094cd79f9\n```\n\n### Train 11\n\n**This is the current main training**\n\ntr11-176B-ml\n\n* [the full spec and discussions](./train/tr11-176B-ml/)\n* [the training script](./train/tr11-176B-ml/tr11-176B-ml.slurm)\n* checkpoints and logs:\n   - [tensorboard](https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard)\n   - [logs](https://huggingface.co/bigscience/tr11-176B-ml-logs/tree/main/logs/main)\n* [chronicles-prequel](./train/tr11-176B-ml/chronicles-prequel.md)\n* [chronicles](./train/tr11-176B-ml/chronicles.md)\n\nYou can watch the training logs live by running this `tail -f` like script over remote log file that gets synced to the hub once an hour:\n```\nperl -e '$u=shift; $b=0; while(1){($e)=qx[curl -LsI $u]=~/2 200.*?content-length: (\\d+)/s; \\\nprint qx[curl -Lsr $b-$e $u] if $e\u003e$b; $b=$e; sleep 300}' \\\nhttps://huggingface.co/bigscience/tr11-176B-ml-logs/resolve/main/logs/main/main_log.txt\n```\n","funding_links":[],"categories":["Shell"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbigscience-workshop%2Fbigscience","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbigscience-workshop%2Fbigscience","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbigscience-workshop%2Fbigscience/lists"}