{"id":18064353,"url":"https://github.com/natanielf/lecsum","last_synced_at":"2025-07-27T04:38:31.922Z","repository":{"id":260363642,"uuid":"868211637","full_name":"natanielf/lecsum","owner":"natanielf","description":"Automatically transcribe and summarize lecture recordings completely on-device using AI","archived":false,"fork":false,"pushed_at":"2025-02-05T05:49:39.000Z","size":14,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-02-05T06:31:04.720Z","etag":null,"topics":["ollama","ollama-python","whisper","whisper-ai"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/natanielf.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-10-05T19:04:10.000Z","updated_at":"2025-02-05T05:49:42.000Z","dependencies_parsed_at":"2024-12-18T20:32:02.727Z","dependency_job_id":"58efc016-64cf-4110-8af4-a66da8628980","html_url":"https://github.com/natanielf/lecsum","commit_stats":null,"previous_names":["natanielf/lecsum"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/natanielf%2Flecsum","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/natanielf%2Flecsum/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/natanielf%2Flecsum/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/natanielf%2Flecsum/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/natanielf","download_url":"https://codeload.github.com/natanielf/lecsum/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247345842,"owners_count":20924102,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ollama","ollama-python","whisper","whisper-ai"],"created_at":"2024-10-31T06:05:40.714Z","updated_at":"2025-04-05T14:13:31.077Z","avatar_url":"https://github.com/natanielf.png","language":"Python","readme":"# lecsum\n\nAutomatically transcribe and summarize lecture recordings completely on-device using AI.\n\n## Environment Setup\n\nInstall [Ollama](https://ollama.com/download).\n\nCreate a virtual Python environment:\n\n```sh\npython3 -m venv .venv\n```\n\nActivate the virtual environment:\n\n```sh\nsource .venv/bin/activate\n```\n\nInstall dependencies:\n\n```sh\npip install -r requirements.txt\n```\n\n## Configuration (optional)\n\nEdit `lecsum.yaml`:\n\n| **Field**       | **Default Value** | **Possible Values**                                                                    | **Description**                                                  |\n| --------------- | ----------------- | -------------------------------------------------------------------------------------- | ---------------------------------------------------------------- |\n| `whisper_model` | \"base.en\"         | [Whisper model name](https://github.com/openai/whisper#available-models-and-languages) | Specifies which Whisper model to use for transcription           |\n| `ollama_model`  | \"llama3.1:8b\"     | [Ollama model name](https://ollama.com/library)                                        | Specifies which Ollama model to use for summarization            |\n| `prompt`        | \"Summarize: \"     | Any string                                                                             | Instructs the large language model during the summarization step |\n\n## Usage\n\nRun the Ollama server:\n\n```sh\nollama serve\n```\n\n### Command-line\n\nIn a new terminal, run:\n\n```sh\n./lecsum.py -c [CONFIG_FILE] [AUDIO_FILE]\n```\n\nUse any file format supported by [Whisper](https://platform.openai.com/docs/guides/speech-to-text) (`mp3`, `mp4`, `wav`, `webm`, etc.).\n\n### Server\n\nTo start the `lecsum` server in a development environment, run:\n\n```sh\nfastapi dev server.py\n```\n\n### Testing\n\nAutomated testing is performed using the `pytest` framework:\n\n```sh\npytest\n```\n\n## References\n\n- https://pyyaml.org/wiki/PyYAMLDocumentation\n- https://github.com/openai/whisper\n- https://github.com/ollama/ollama-python\n- https://fastapi.tiangolo.com\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnatanielf%2Flecsum","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnatanielf%2Flecsum","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnatanielf%2Flecsum/lists"}