https://github.com/googlecreativelab/obvi

A Polymer 3+ webcomponent / button for doing speech recognition

Topics: automatic-speech-recognition, button, polymer, polymer2, speech-recognition, webcomponent

[![Published on webcomponents.org](https://img.shields.io/badge/webcomponents.org-published-blue.svg)](https://www.webcomponents.org/element/googlecreativelab/obvi)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

# OBVI
**O**ne **B**utton for **V**oice **I**nput is a customizable [webcomponent](https://developer.mozilla.org/en-US/docs/Web/Web_Components) built with [Polymer 3+](https://www.polymer-project.org/) that makes it easy to include speech recognition in your web-based projects. It uses the [Speech Recognition](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition) API, and for unsupported browsers it falls back to a client-side [Google Cloud Speech API](https://cloud.google.com/speech/) solution.

![example](https://storage.googleapis.com/readme-assets/voice-button.gif)
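
To see which path the button will take in a given browser, you can feature-detect the Web Speech API yourself. This check is illustrative and not part of obvi's API:

```
// Illustrative only: detect whether this browser exposes the Web Speech API.
// Without it, obvi relies on the client-side Cloud Speech API fallback (which needs an API key).
var NativeSpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;

if (NativeSpeechRecognition) {
  console.log('Web Speech API is available in this browser.');
} else {
  console.log('No Web Speech API; the Cloud Speech API fallback will be used.');
}
```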

## Run example

With npm installed, in the root of this repo:

```
npm install
npm start
```

## Setting up your project

As of Polymer 3, all dependencies are managed through NPM and module script tags. You can simply add obvi to your project using:

```
npm install --save obvi-component
```

Then include the component in your page and wire it up. The markup below is a sketch (the polyfill loader and the attributes shown are illustrative); the script expects a `voice-button` element and a `transcription` container like this:

```
<html>
  <head>
    <!-- Web Components polyfills (only needed for older browsers) -->
    <script src="./node_modules/@webcomponents/webcomponentsjs/webcomponents-loader.js"></script>
  </head>
  <body>
    <voice-button cloud-speech-api-key="YOUR_API_KEY" autodetect></voice-button>
    <div id="transcription"></div>

    <script type="module">
      import './node_modules/obvi/voice-button.js';

      var voiceEl = document.querySelector('voice-button'),
          transcriptionEl = document.getElementById('transcription');

      // can check the supported flag, and do something if it's disabled / not supported
      console.log('does this browser support WebRTC?', voiceEl.supported);

      voiceEl.addEventListener('mousedown', function(event){
        transcriptionEl.innerHTML = '';
      });

      var transcription = '';
      voiceEl.addEventListener('onSpeech', function(event){
        transcription = event.detail.speechResult;
        transcriptionEl.innerHTML = transcription;
        console.log('Speech response: ', event.detail.speechResult);
        transcriptionEl.classList.add('interim');
        if(event.detail.isFinal){
          transcriptionEl.classList.remove('interim');
        }
      });

      voiceEl.addEventListener('onStateChange', function(event){
        console.log('state:', event.detail.newValue);
      });
    </script>
  </body>
</html>
```

*Note: You must run your app from a web server for the module imports and the Web Components polyfills to work properly; opening the page directly from the filesystem won't work.*

*Also Note: If your app is running from SSL (https://), the microphone access permission will be persistent. That is, users won't have to grant/deny access every time.*
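
If you want to check the current microphone permission state yourself, browsers that support the standard Permissions API let you query it. This is a plain browser API and not part of obvi; not every browser allows querying the `microphone` permission, hence the catch:

```
// Sketch using the standard Permissions API (not part of obvi).
if (navigator.permissions && navigator.permissions.query) {
  navigator.permissions.query({ name: 'microphone' })
    .then(function (status) {
      console.log('microphone permission:', status.state); // 'granted', 'denied' or 'prompt'
    })
    .catch(function () {
      console.log('microphone permission state cannot be queried in this browser');
    });
}
```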

### For a single-build with one bundled file:

Static hosting services like GitHub Pages and Firebase Hosting don't support serving different files to different user agents. If you're hosting your application on one of these services, you'll need to serve a single bundled build, for example with a classic script tag referencing the bundled file:

```
<script src="./node_modules/obvi/dist/voice-button.js"></script>
```

or as a module import:

```
import './node_modules/obvi/dist/voice-button.js'
```

You can also customize the `polymer build` command in `package.json` and create your own build file to further suit your needs.
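
For reference, a minimal sketch of what such a script entry could look like in `package.json` (the actual scripts in this repo may differ):

```
{
  "scripts": {
    "build": "polymer build"
  }
}
```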

## Usage

Basic usage is a single `voice-button` element, for example:

`<voice-button cloud-speech-api-key="YOUR_API_KEY"></voice-button>`

### Options

When options are set as attributes, the camelCase property names below map to kebab-case attributes (e.g. `cloudSpeechApiKey` becomes `cloud-speech-api-key`), as is standard for Polymer elements; the examples use the attribute form.

| Name | Description | Type | Default | Options / Examples |
| --- | --- | --- | --- | --- |
| **cloudSpeechApiKey** | The Cloud Speech API is the fallback when the Web Speech API isn't available. Provide this key to cover more browsers. | String | *null* | `<voice-button cloud-speech-api-key="YOUR_API_KEY"></voice-button>` |
| **flat** | Whether or not to include the shadow. | Boolean | *false* | `<voice-button flat></voice-button>` |
| **autodetect** | By default, the user needs to press & hold to capture speech. If set to true, the button auto-detects silence to finish capturing speech. | Boolean | *false* | `<voice-button autodetect></voice-button>` |
| **language** | Language for the [SpeechRecognition](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition) interface. If not set, defaults to the user agent's language setting. [See here](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition/lang) for more info. | String | *'en-US'* | `<voice-button language="en-GB"></voice-button>` |
| **disabled** | Disables the button from being pressed and capturing speech. | Boolean | *false* | `<voice-button disabled></voice-button>` |
| **keyboardTrigger** | How the keyboard will trigger the button. | String | *'space-bar'* | `space-bar`, `all-keys`, `none` |
| **clickForPermission** | If set to true, the browser microphone permission is only requested when the button is clicked (instead of immediately). | Boolean | *false* | `<voice-button click-for-permission></voice-button>` |
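
The options can also be set from JavaScript as properties on the element. A minimal sketch, assuming the page already contains a `<voice-button>` and using the property names from the table above:

```
import './node_modules/obvi/voice-button.js';

var voiceEl = document.querySelector('voice-button');

voiceEl.language = 'en-GB';          // override the user agent's language
voiceEl.autodetect = true;           // finish capturing automatically on silence
voiceEl.keyboardTrigger = 'none';    // don't trigger the button from the keyboard
voiceEl.clickForPermission = true;   // only request the microphone when clicked
```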