{"id":48115685,"url":"https://github.com/lunal-dev/home","last_synced_at":"2026-04-04T16:15:15.684Z","repository":{"id":311376142,"uuid":"1040957973","full_name":"lunal-dev/home","owner":"lunal-dev","description":"Lunal is the AI confidential compute platform. We run your AI workloads (inference, training, agents) inside hardware-encrypted environments called Trusted Execution Environments (TEEs). Your data and code stay private while being processed. Your code can't be tampered with. You can cryptographically verify both claims without trusting us.","archived":false,"fork":false,"pushed_at":"2026-03-31T21:30:48.000Z","size":3678,"stargazers_count":76,"open_issues_count":1,"forks_count":4,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-01T00:38:41.180Z","etag":null,"topics":["confidential-computing","cryptography","privacy","security","tee","trusted","trusted-computing","verifiability","zero-knowledge"],"latest_commit_sha":null,"homepage":"https://lunal.dev","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/lunal-dev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"docs/security.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-08-19T19:03:31.000Z","updated_at":"2026-03-31T21:30:53.000Z","dependencies_parsed_at":"2025-08-24T12:16:51.897Z","dependency_job_id":"d9194b8f-a2c6-4492-ba77-fe93b51d64b0","html_url":"https://github.com/lunal-dev/home","commit_stats":null,"previous_names":["lunal-dev/home"],"tags_count":0,"template":false,"template_
full_name":null,"purl":"pkg:github/lunal-dev/home","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lunal-dev%2Fhome","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lunal-dev%2Fhome/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lunal-dev%2Fhome/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lunal-dev%2Fhome/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/lunal-dev","download_url":"https://codeload.github.com/lunal-dev/home/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lunal-dev%2Fhome/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31405699,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-04T10:20:44.708Z","status":"ssl_error","status_checked_at":"2026-04-04T10:20:06.846Z","response_time":60,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["confidential-computing","cryptography","privacy","security","tee","trusted","trusted-computing","verifiability","zero-knowledge"],"created_at":"2026-04-04T16:15:15.463Z","updated_at":"2026-04-04T16:15:15.644Z","avatar_url":"https://github.com/lunal-dev.png","language":"TypeScript","readme":"\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"./assets/logo.png\" alt=\"Conf AI Logo\" width=\"200\" 
height=\"200\"\u003e\n\u003c/div\u003e\n\n\u003cbr\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003cnav\u003e\n    \u003ca href=\"/README.md\"\u003eHome\u003c/a\u003e\u0026nbsp;\u0026nbsp;\n    \u003ca href=\"/components.md\"\u003eComponents\u003c/a\u003e\u0026nbsp;\u0026nbsp;\n    \u003ca href=\"/enterprise.md\"\u003eEnterprise\u003c/a\u003e\u0026nbsp;\u0026nbsp;\n    \u003ca href=\"/docs/\"\u003eDocs\u003c/a\u003e\u0026nbsp;\u0026nbsp;\n    \u003ca href=\"/blog/\"\u003eBlog\u003c/a\u003e\u0026nbsp;\u0026nbsp;\n    \u003ca href=\"/careers/\"\u003eCareers\u003c/a\u003e\u0026nbsp;\u0026nbsp;\n    \u003ca href=\"/team.md\"\u003eTeam\u003c/a\u003e\n  \u003c/nav\u003e\n\u003c/div\u003e\n\n# Confidential AI\n\nConfidential AI (Conf AI) is the AI confidential compute platform. We run your AI workloads (inference, training, agents) inside hardware-encrypted environments called Trusted Execution Environments (TEEs). Your data and code stay private while being processed. Your code can't be tampered with. You can cryptographically verify both claims without trusting us.\n\nYou deploy your code as-is. You get end-to-end privacy, enhanced security, and full verifiability with negligible performance overhead.\n\n[Say hi](mailto:founders@confidential.ai). See [enterprise](/enterprise.md) for licensed deployments, [components](/components.md) for our stack breakdown, or the [docs](/docs/) for technical depth.\n\n## Example Use Cases\n\n* You are an **AI inference provider** who needs to guarantee data privacy during inference. You use Conf AI to offer an end-to-end private inference product where customer data is never visible to you or your infrastructure.\n* You are an **AI lab** that needs to train on highly sensitive data and prove to customers exactly what data was used during training. 
You use Conf AI to set up fully confidential training workloads, enabling customers to cryptographically verify training data provenance.\n* You are an **AI lab** that needs to protect proprietary weights from extraction during inference or fine-tuning. You use Conf AI to ensure weights never leave hardware-enforced secure enclaves.\n* You are an **inference provider** serving third-party models, and regulators require proof that the audited model is what's actually running in production. You use Conf AI to provide verifiable attestation of model integrity.\n* You are an **AI agent company** whose agents handle credentials and API keys that must never be exposed in plaintext. You use Conf AI to enforce hardware-level isolation so secrets never exist outside the TEE.\n* You are building **multi-agent systems** in which agents need to verify each other's identity and code before establishing trust. You use Conf AI to provide cryptographic attestation between agents.\n\n## What We Solve\n\nConfidential computing protects data while it's being processed, not just at rest or in transit. The core technology is Trusted Execution Environments (TEEs), a hardware feature built into modern CPUs and GPUs. TEEs turn existing VMs into fully encrypted, hardened, tamper-proof compute environments.\n\nA TEE by itself is just a primitive. Running production workloads inside TEEs and scaling them is a serious engineering challenge. You have to solve attestation, key management, build verifiability, networking, autoscaling, and logging, among other problems.\n\nConf AI solves all of these. We built a set of independent components that each address a specific problem. Use them all together or integrate individual pieces into your existing stack. 
Conf AI is confidential computing that just works, without building the infrastructure yourself.\n\nSee the complete [component catalogue](/components.md).\n\n## How To Use Conf AI\n\n### Enterprise / Licensed\n\n**For AI labs, infrastructure providers, and large organizations with existing hardware/infrastructure.**\n\nConf AI's software stack deploys on your infrastructure. Components are modular: use the full platform or integrate specific pieces into your existing architecture.\n\nStart with a pilot to map components onto your stack. Components work end-to-end or individually. On-prem, bare metal, all major clouds.\n\nWe explain in depth how we work with enterprises [here](/enterprise.md), or [contact us](mailto:founders@confidential.ai).\n\n### Hosted Platform\n\n**For teams that want to run workloads privately without managing TEE infrastructure.**\n\nBring your workload: inference, training, fine-tuning, any application. Conf AI runs it on TEE-backed infrastructure. You get an endpoint with attestation built in.\n\nNo code changes required. Your existing applications, containers, and models work as-is. The full platform is included: attestation, key management, autoscaling, private networking, CI/CD, encrypted logging. Global deployment.\n\n[Contact us](mailto:founders@confidential.ai).\n\n### AI Agents\n\n**For teams building AI agents that need access to credentials, tools, and external services.**\n\nAgents run inside TEEs with hardware-enforced credential isolation. Tokens and API keys never exist in plaintext outside the TEE. Multi-agent systems verify each other through attestation. Each agent proves what code it's running before others trust it.\n\n[Contact us](mailto:founders@confidential.ai).\n\n## Get Started\n\n[Say hi](mailto:founders@confidential.ai). 
See [enterprise](/enterprise.md) for licensed deployments, [components](/components.md) for our stack breakdown, or the [docs](/docs/) for technical depth.","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flunal-dev%2Fhome","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Flunal-dev%2Fhome","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flunal-dev%2Fhome/lists"}