{"id":23092036,"url":"https://github.com/thiswillbeyourgithub/beta-variational-autoencoder","last_synced_at":"2025-10-14T00:38:18.954Z","repository":{"id":210920106,"uuid":"727320532","full_name":"thiswillbeyourgithub/Beta-Variational-Autoencoder","owner":"thiswillbeyourgithub","description":"Simple implementation of a Beta VAE by GPT-4","archived":false,"fork":false,"pushed_at":"2025-01-11T16:51:31.000Z","size":352,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-10-14T00:38:17.817Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/thiswillbeyourgithub.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-12-04T16:19:49.000Z","updated_at":"2025-01-11T16:51:34.000Z","dependencies_parsed_at":"2023-12-05T15:55:59.796Z","dependency_job_id":"adeb4ef4-3ad2-4057-927e-b208aa3fa9e9","html_url":"https://github.com/thiswillbeyourgithub/Beta-Variational-Autoencoder","commit_stats":{"total_commits":66,"total_committers":1,"mean_commits":66.0,"dds":0.0,"last_synced_commit":"a1a65a74c9dd340b3108d5bdfc3afc1c970d3a1c"},"previous_names":["thiswillbeyourgithub/beta-variational-autoencoder"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/thiswillbeyourgithub/Beta-Variational-Autoencoder","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/thiswillbeyourgithub%2FBeta-Variational-Autoencoder","tags_url":"https://repos.ecosyste.ms/api/v
1/hosts/GitHub/repositories/thiswillbeyourgithub%2FBeta-Variational-Autoencoder/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/thiswillbeyourgithub%2FBeta-Variational-Autoencoder/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/thiswillbeyourgithub%2FBeta-Variational-Autoencoder/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/thiswillbeyourgithub","download_url":"https://codeload.github.com/thiswillbeyourgithub/Beta-Variational-Autoencoder/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/thiswillbeyourgithub%2FBeta-Variational-Autoencoder/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279017357,"owners_count":26086052,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-13T02:00:06.723Z","response_time":61,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-12-16T21:26:53.330Z","updated_at":"2025-10-14T00:38:18.936Z","avatar_url":"https://github.com/thiswillbeyourgithub.png","language":"Python","readme":"# Beta-Variational-Autoencoder\nA simple Beta-VAE with a scikit-learn-style API, written mostly by prompting GPT-4.\nWith a single argument, it can instead act as a regular (non-variational) autoencoder.\nThe [VeLO optimizer](https://github.com/janEbert/PyTorch-VeLO) can be used (apparently 
only on CPU and not on CUDA?)\n\nI made this because I couldn't find an appropriate implementation in Python / PyTorch and needed one for another project: [QuestEA](https://github.com/thiswillbeyourgithub/QuestEA)\n\n# Example result on handwritten digits:\n![](./Demo.png)\n\n## Notes\n* There are two compression layers, and the decompression is symmetrical.\n* A wrapper called `OptimizedBVAE` can be used to run a grid search over the `hidden_dim` parameter and then return the best model after further training.\n* This side quest was done hastily, so there may be major mistakes and unoptimized code. If you spot any, please notify me by creating an issue!\n\nThe default optimizer is AdamW, but `VeLO` can be used from [this repo](https://github.com/janEbert/PyTorch-VeLO).\n\n## Usage\n```python\nfrom bvae import ReducedBVAE\nmodel = ReducedBVAE(\n    input_dim,\n    z_dim,  # number of latent dimensions (the lowest layer)\n    hidden_dim,  # number of neurons in the second compression layer\n    dataset_size,\n    lr=1e-3,\n    epochs=1000,\n    beta=1.0,\n    weight_decay=0.01,\n    use_VeLO=False,\n    use_scheduler=True,\n)\nmodel.prepare_dataset(\n    dataset=dataset,\n    val_ratio=0.2,\n    batch_size=500,\n)\nmodel.train_bvae(\n    patience=100,\n)\nprojection = model.transform(dataset)\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fthiswillbeyourgithub%2Fbeta-variational-autoencoder","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fthiswillbeyourgithub%2Fbeta-variational-autoencoder","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fthiswillbeyourgithub%2Fbeta-variational-autoencoder/lists"}