{"id":30663361,"url":"https://github.com/policyengine/l0","last_synced_at":"2026-01-20T17:35:23.787Z","repository":{"id":309133423,"uuid":"1035253253","full_name":"PolicyEngine/L0","owner":"PolicyEngine","description":"A package that implements the L0 penalty per Louizos, Welling, \u0026 Kingma (2017) https://arxiv.org/abs/1712.01312","archived":false,"fork":false,"pushed_at":"2025-08-25T14:25:23.000Z","size":1007,"stargazers_count":0,"open_issues_count":20,"forks_count":1,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-08-25T15:37:10.066Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/PolicyEngine.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-08-10T01:44:48.000Z","updated_at":"2025-08-18T09:15:38.000Z","dependencies_parsed_at":"2025-08-10T04:16:06.831Z","dependency_job_id":"84c9c863-07de-429f-a911-542022cb70b1","html_url":"https://github.com/PolicyEngine/L0","commit_stats":null,"previous_names":["policyengine/l0"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/PolicyEngine/L0","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PolicyEngine%2FL0","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PolicyEngine%2FL0/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PolicyEngine%2FL0/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PolicyEngine%2FL0/manifests","owner_
url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/PolicyEngine","download_url":"https://codeload.github.com/PolicyEngine/L0/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PolicyEngine%2FL0/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":273010994,"owners_count":25030369,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-31T02:00:09.071Z","response_time":79,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-08-31T17:10:25.576Z","updated_at":"2026-01-20T17:35:23.781Z","avatar_url":"https://github.com/PolicyEngine.png","language":"Python","readme":"# L0 Regularization\n\n[![PyPI version](https://badge.fury.io/py/l0-python.svg)](https://pypi.org/project/l0-python/)\n[![CI](https://github.com/PolicyEngine/L0/actions/workflows/push.yml/badge.svg)](https://github.com/PolicyEngine/L0/actions)\n\nA PyTorch implementation of L0 regularization based on [Louizos, Welling, \u0026 Kingma (2017)](https://arxiv.org/abs/1712.01312), designed for survey calibration and sparse regression.\n\n## Installation\n\n```bash\npip install l0-python\n```\n\nFor development:\n```bash\ngit clone https://github.com/PolicyEngine/L0.git\ncd L0\npip install -e .[dev]\n```\n\n## Our Approach to Test-Time Gates\n\nThe original Hard Concrete formulation uses temperature (β) during training to 
control the sharpness of stochastic gates. At test time, there's a design choice: whether to include temperature in the deterministic gate computation.\n\nWe include temperature at test time:\n```python\n# Our approach: include temperature\nz = sigmoid(log_alpha / beta) * (zeta - gamma) + gamma\n\n# Alternative: omit temperature\nz = sigmoid(log_alpha) * (zeta - gamma) + gamma\n```\n\nIncluding temperature produces sharper 0/1 decisions, which we find beneficial for achieving clean sparsity in our applications. See `examples/sparse_regression_demo.py` for a demonstration on a 4-variable regression problem.\n\n## Primary Use Case: Survey Calibration\n\nThis package was developed for PolicyEngine's survey calibration, where we select a sparse subset of survey households while matching population targets.\n\n```python\nimport numpy as np\nfrom scipy import sparse as sp\nfrom l0.calibration import SparseCalibrationWeights\n\n# Setup: Q targets, N households\nQ, N = 200, 10000\nM = sp.random(Q, N, density=0.3, format=\"csr\")  # Household characteristics\ny = np.random.uniform(1e6, 1e8, size=Q)          # Population targets\n\n# Initialize model\nmodel = SparseCalibrationWeights(\n    n_features=N,\n    beta=0.35,\n    gamma=-0.1,\n    zeta=1.1,\n    init_keep_prob=0.5,\n    init_weights=1.0,\n    log_weight_jitter_sd=0.05,\n    device=\"cuda\",\n)\n\n# Train with L0+L2 regularization\nmodel.fit(\n    M=M,\n    y=y,\n    lambda_l0=1e-6,\n    lambda_l2=1e-8,\n    lr=0.15,\n    epochs=2000,\n    loss_type=\"relative\",\n    verbose=True,\n)\n\n# Get results\nactive = model.get_active_weights()\nprint(f\"Selected {active['count']} of {N} households\")\nprint(f\"Sparsity: {model.get_sparsity():.1%}\")\n```\n\n### Key Features\n\n- **Non-negative weights**: Constrained via log-space parameterization\n- **L0 sparsity**: Directly minimizes the count of active weights\n- **Relative loss**: Scale-invariant for targets spanning orders of magnitude\n- **Group-wise averaging**: 
Balance loss across target groups with different sizes\n- **GPU support**: CUDA acceleration for large problems\n\n## Sparse Regression\n\nFor sparse linear regression with scipy sparse matrices:\n\n```python\nimport numpy as np\nfrom scipy import sparse as sp\nfrom l0.sparse import SparseL0Linear\n\n# Sparse design matrix\nX = sp.random(1000, 500, density=0.1, format=\"csr\")\ny = np.random.randn(1000)\n\nmodel = SparseL0Linear(n_features=500)\nmodel.fit(X, y, lambda_l0=0.001, epochs=1000)\n\n# Get sparse coefficients\ncoef = model.get_coefficients(threshold=0.01)\n```\n\n## Example: Variable Selection\n\nThe `examples/sparse_regression_demo.py` script demonstrates L0 regularization on a simple problem where the true coefficients are `[1, 0, -2, 0]`:\n\n```bash\npython examples/sparse_regression_demo.py\n```\n\nOutput:\n```\nTrue coefficients:        [ 1.  0. -2.  0.]\nRecovered coefficients:   [ 1.039  0.    -2.069 -0.   ]\nGates:                    [1. 0. 1. 0.]\n```\n\nThe model correctly identifies that only variables 1 and 3 contribute to the outcome.\n\n## Testing\n\n```bash\npytest tests/ -v --cov=l0\n```\n\n## Citation\n\n```bibtex\n@article{louizos2017learning,\n  title={Learning Sparse Neural Networks through L0 Regularization},\n  author={Louizos, Christos and Welling, Max and Kingma, Diederik P},\n  journal={arXiv preprint arXiv:1712.01312},\n  year={2017}\n}\n```\n\n## License\n\nMIT License - see [LICENSE](LICENSE) for details.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpolicyengine%2Fl0","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpolicyengine%2Fl0","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpolicyengine%2Fl0/lists"}