Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lepisma/emacs-speech-input
Set of packages for speech and voice inputs in Emacs
- Host: GitHub
- URL: https://github.com/lepisma/emacs-speech-input
- Owner: lepisma
- License: gpl-3.0
- Created: 2019-10-03T18:11:41.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2024-09-17T07:40:36.000Z (5 months ago)
- Last Synced: 2024-09-17T10:14:49.658Z (5 months ago)
- Topics: emacs, speech-api, speech-recognition
- Language: C
- Homepage:
- Size: 6.46 MB
- Stars: 27
- Watchers: 5
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.org
- License: LICENSE
README
#+TITLE: Emacs Speech Input
Set of packages for speech and voice inputs in Emacs. Use cases are explained
below along with the packages that provide them:

** Dictation with Real-Time Editing
~esi-dictate~ aims to help you input text faster[fn::Needs more work and empirical validation.]
than plain voice input with post-edits, or typing. It allows blended workflows
of voice and keyboard input, assisted by LLMs for real-time edits.

At its most basic, it's a dictation tool that allows real-time edits. Most of
the content gets inserted in the buffer via voice at a /voice cursor/. Utterances
are automatically inferred as carrying edit suggestions, in which case the
current /voice context/ (displayed differently) is edited via an LLM.

While working, the keyboard cursor is kept free so you can keep making finer
edits independent of the dictation work.

Some more notes on the design are in my blog post [[https://lepisma.xyz/2024/09/12/emacs-dictation-mode/index.html][here]].
*** Usage
Presently this system uses Deepgram via a Python script, ~dg.py~, that you need
to put somewhere in your ~PATH~. Python >=3.10 is required.
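Since Emacs resolves subprocess programs through ~exec-path~ and the ~PATH~
environment variable, you can also make ~dg.py~ visible from your init file. A
minimal sketch, assuming =~/bin= as the install location (my choice, not a
requirement of the package):

#+begin_src emacs-lisp
;; Make the directory holding dg.py visible to Emacs' subprocesses;
;; "~/bin" is only an example location.
(let ((dir (expand-file-name "~/bin")))
  (add-to-list 'exec-path dir)
  (setenv "PATH" (concat dir path-separator (getenv "PATH"))))
#+end_src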
Afterwards, you need to set the following configuration:

#+begin_src emacs-lisp
(use-package esi-dictate
  :vc (:fetcher github :repo lepisma/emacs-speech-input)
  :custom
  (esi-dictate-dg-api-key "")
  (esi-dictate-llm-provider )
  ;; Here is an example configuration for using OpenAI's gpt-4o-mini:
  ;; (esi-dictate-llm-provider (make-llm-openai :key "" :chat-model "gpt-4o-mini"))
  :bind (:map esi-dictate-mode-map
              ("C-g" . esi-dictate-stop))
  :config
  (setq llm-warn-on-nonfree nil)
  :hook (esi-dictate-speech-final . esi-dictate-fix-context))
#+end_src

Start dictation mode using ~esi-dictate-start~. Use ~esi-dictate-stop~ to exit
dictation mode. ~esi-dictate-fix-context~ is hooked to the utterance end and
does general fixes without asking for explicit commands. If you mark a region
and call ~esi-dictate-move-here~, the voice context and cursor will move to the
region bounds. If you just need to move the voice cursor, call the move-here
function after moving point to the appropriate position.
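If you use dictation often, you may want a quicker entry point than ~M-x~. A
minimal sketch; the key choice is my own and not something the package binds:

#+begin_src emacs-lisp
;; Hypothetical convenience binding; the key is my choice, not a
;; package default.
(global-set-key (kbd "C-c d") #'esi-dictate-start)

;; Typical flow for repositioning: move point to where the next text
;; should go, then M-x esi-dictate-move-here to bring the voice cursor
;; along.
#+end_src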
As of now this uses non-free models for both transcription (Deepgram) and edits
(GPT-4o-mini). This will change in the future.

*** Customization
To improve LLM performance, you can customize ~esi-dictate-llm-prompt~ and add
few-shot examples in ~esi-dictate-fix-examples~.

For visual improvements, customize ~esi-dictate-cursor~, ~esi-dictate-cursor-face~,
~esi-dictate-intermittent-face~, and ~esi-dictate-context-face~.
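The faces are ordinary Emacs faces, so standard face customization applies. The
attributes below are my own illustration, not the package defaults; check each
variable's docstring (~C-h v~) for the exact expected values, especially the
format of ~esi-dictate-fix-examples~:

#+begin_src emacs-lisp
;; Illustrative appearance tweaks only.
(with-eval-after-load 'esi-dictate
  ;; Make the editable voice context stand out a bit more.
  (set-face-attribute 'esi-dictate-context-face nil :underline t)
  (set-face-attribute 'esi-dictate-intermittent-face nil :slant 'italic))
#+end_src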
** Flight-mode Recording
This is available as a dynamic module (~esi-core~) which needs a bit of cleanup
before I document its usage.