{"id":19066202,"url":"https://github.com/epfml/dynamic-sparse-flash-attention","last_synced_at":"2025-10-13T08:38:41.388Z","repository":{"id":168908958,"uuid":"644728151","full_name":"epfml/dynamic-sparse-flash-attention","owner":"epfml","description":null,"archived":false,"fork":false,"pushed_at":"2023-06-02T12:28:57.000Z","size":181,"stargazers_count":143,"open_issues_count":2,"forks_count":6,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-04-28T12:38:49.594Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/epfml.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2023-05-24T06:16:16.000Z","updated_at":"2025-04-11T08:12:48.000Z","dependencies_parsed_at":null,"dependency_job_id":"a07aed9e-9872-4d97-8bf6-1c13853607a0","html_url":"https://github.com/epfml/dynamic-sparse-flash-attention","commit_stats":null,"previous_names":["epfml/dynamic-sparse-flash-attention"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/epfml/dynamic-sparse-flash-attention","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fdynamic-sparse-flash-attention","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fdynamic-sparse-flash-attention/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fdynamic-sparse-flash-attention/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fdynamic-sparse-flash-attention/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/epfml","download_url":"https://codeload.github.com/epfml/dynamic-sparse-flash-attention/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fdynamic-sparse-flash-attention/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279014320,"owners_count":26085492,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-13T02:00:06.723Z","response_time":61,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-09T00:55:27.253Z","updated_at":"2025-10-13T08:38:41.335Z","avatar_url":"https://github.com/epfml.png","language":"Jupyter Notebook","readme":"# Dynamic Sparse FlashAttention\n\nCode to reproduce results for the paper \"Faster Causal Attention Over Large Sequences Through Sparse Flash Attention\"\n\n# Setup\n\nTo install the required 
# Reproducing our LM experiments on OpenWebText2

**GPU requirements:** Preferably, you need at least one A100. Some of our experiments use data parallelism with up to 3 A100s. You should have no problem running these experiments on any GPU that supports `bfloat16`, although you may have to adjust the model parameters to fit the available memory.

Go to the `openwebtext2-experiments` folder and run the `script/train-LMs.sh` script (a command sketch is given at the end of this README).

# Reproducing our runtime results

**GPU requirements:** We used one A100.

For the Hash-sparse and QK-sparse results, go to the `runtime-experiments` folder and check the `timeperf-hash-and-qk-sparse.ipynb` notebook (see the same sketch at the end of this README).

# Reproducing our Reformer results

Coming soon.
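For convenience, here is a minimal sketch of how the two reproduction steps above might be launched from the repository root. The folder, script, and notebook names are taken from the sections above; the invocation details (running the shell script with `bash`, opening the notebook through Jupyter) are assumptions, and the training script may expect arguments or environment variables not shown here.

```bash
# Sketch only: paths come from the sections above; invocation details are assumptions.

# LM experiments on OpenWebText2 (assumes the script runs without extra arguments)
cd openwebtext2-experiments
bash script/train-LMs.sh
cd ..

# Runtime results: open the Hash-sparse / QK-sparse timing notebook in Jupyter
jupyter notebook runtime-experiments/timeperf-hash-and-qk-sparse.ipynb
```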