meSpeak.js

Text-To-Speech on the Web

About

meSpeak.js (modularly enhanced speak.js) is a 100% client-side JavaScript text-to-speech library based on the speak.js project (see below).
meSpeak.js adds support for WebKit and Safari and introduces loadable voice modules. It also removes the need for an embedding HTML element.
Separating the library code from config-data and voice definitions should help future optimizations of the core part of speak.js.
All separated data has been converted from the original binary files to base64-encoded strings, which saves some bandwidth compared to JS arrays of raw 8-bit data.
Browser requirements: Firefox, Chrome/Opera, WebKit, and Safari (MSIE 11 is expected to be compliant).

Important Changes:

v.1.1 adds support for the Web Audio API (AudioContext), which is now the preferred option for playback, with the HTMLAudioElement as a fallback.
Thanks to the new playback method, meSpeak.js was tested successfully with iOS/Safari (iOS 6).
Also, starting with v.1.1 there is an option to export the raw speech data rather than playing the sound (see: options, "rawdata").

v.1.2 adds volume control and the capacity to play back cached streams generated using the "rawdata" option.

v.1.5 adds an optional callback-argument to the methods meSpeak.speak() and meSpeak.play().

meSpeak.js 2011-2013 by Norbert Landsteiner, mass:werk – media environments; http://www.masswerk.at/mespeak/

Usage

meSpeak.loadConfig("mespeak_config.json");
meSpeak.loadVoice('en-us.json');

meSpeak.speak('hello world');
meSpeak.speak('hello world', { option1: value1, option2: value2 .. });
meSpeak.speak('hello world', { option1: value1, option2: value2 .. }, callback);

options:
* amplitude: How loud the voice will be (default: 100)
* pitch: The voice pitch (default: 50)
* speed: The speed at which to talk (words per minute) (default: 175)
* voice: Which voice to use (default: last voice loaded or defaultVoice, see below)
* wordgap: Additional gap between words in 10 ms units (default: 0)
* volume: Volume relative to the global volume (number, 0..1, default: 1)
  Note: the relative volume has no effect on the export using option 'rawdata'.
* rawdata: Do not play, return data only. The type of the returned data is derived from the value (case-insensitive) of 'rawdata':
  - 'base64': returns a base64-encoded string.
  - 'mime': returns a base64-encoded data-url (including the MIME-header). (Synonyms: 'data-url', 'data-uri', 'dataurl', 'datauri'.)
  - 'array': returns a plain Array object with uint 8-bit data.
  - default (any other value): returns the generated wav-file as an ArrayBuffer (8-bit unsigned).
  Note: The value of 'rawdata' must evaluate to boolean 'true' in order to be recognized.

callback: An optional callback function to be called after the sound output has ended. The callback will be called with a single boolean argument indicating success.

if (meSpeak.isVoiceLoaded('de')) meSpeak.setDefaultVoice('de');
// note: the default voice is always the last voice loaded

meSpeak.loadVoice('fr.json', userCallback);
// userCallback is an optional callback-handler. The callback will receive two arguments:
// * a boolean flag for success
// * either the id of the voice, or a reason for errors ('network error', 'data error', 'file error')

alert(meSpeak.getDefaultVoice()); // 'fr'

if (meSpeak.isConfigLoaded()) meSpeak.speak('Configuration data has been loaded.');
// note: any calls to speak() will be deferred, if no valid config-data has been loaded yet.

meSpeak.setVolume(0.5);
// note: sets the global playback-volume; any sounds currently playing will be updated immediately
// with respect to their relative volume (if specified).

alert(meSpeak.getVolume()); // 0.5

var browserCanPlayWavFiles = meSpeak.canPlay(); // test for compatibility

// export speech-data as a stream (no playback):
var myUint8Array = meSpeak.speak('hello world', { 'rawdata': true });      // typed array
var base64String = meSpeak.speak('hello world', { 'rawdata': 'base64' });
var myDataUrl = meSpeak.speak('hello world', { 'rawdata': 'data-url' });
var myArray = meSpeak.speak('hello world', { 'rawdata': 'array' });        // simple array

// playing cached streams (any of the export formats):
meSpeak.play( stream [, relativeVolume [, callback]] );

var stream1 = meSpeak.speak('hello world', { 'rawdata': true });
var stream2 = meSpeak.speak('hello again', { 'rawdata': true });
var stream3 = meSpeak.speak('hello yet again', { 'rawdata': 'data-url' });

meSpeak.play(stream1);       // using global volume
meSpeak.play(stream2, 0.75); // 75% of global volume
meSpeak.play(stream3);       // v.1.4.2: play data-urls or base64-encoded strings

Optional arguments to meSpeak.play():
* volume: Volume relative to the global volume (number, 0..1, default: 1)
* callback: A callback function to be called after the sound output has ended. The callback will be called with a single boolean argument indicating success. (See also: meSpeak.speak().)
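
As a small usage sketch (not part of the original examples, and assuming config and voice data have been loaded as above), the callback argument of meSpeak.speak() can be used to chain utterances so that each sentence starts only after the previous one has finished:

// speak an array of sentences strictly one after another
function speakInSequence(sentences) {
  if (!sentences.length) return;
  meSpeak.speak(sentences[0], {}, function(success) {
    // the callback receives a single boolean flag indicating success
    if (success) speakInSequence(sentences.slice(1));
  });
}

speakInSequence(['Hello world.', 'This is the second sentence.']);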

Note on export formats, ArrayBuffer (typed array, default) vs. simple array:
The ArrayBuffer (8-bit unsigned) provides a stream ready to be played by the Web Audio API (as a value for a BufferSourceNode), while the plain array (JavaScript Array object) may be best for export (e.g. sending the data to Flash via Flash's ExternalInterface). The default raw format (ArrayBuffer) is the preferred format for caching streams to be played later by meSpeak via meSpeak.play(), since it incurs the least processing overhead.
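
If you prefer to feed a cached stream to the Web Audio API yourself instead of calling meSpeak.play(), a sketch like the following should work (the AudioContext handling is standard Web Audio code and not part of the meSpeak API; older WebKit builds may require the webkitAudioContext constructor and noteOn() instead of start()):

var ctx = new (window.AudioContext || window.webkitAudioContext)();

// default raw format: wav-file as an ArrayBuffer (8-bit unsigned)
var wavBuffer = meSpeak.speak('hello world', { rawdata: true });

ctx.decodeAudioData(wavBuffer, function(audioBuffer) {
  var source = ctx.createBufferSource(); // BufferSourceNode
  source.buffer = audioBuffer;
  source.connect(ctx.destination);
  source.start(0);
});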

Note on iOS Limitations

iOS (currently supported only using Safari) provides a single audio-slot, playing only one sound at a time.
Thus, any concurrent calls to meSpeak.speak() or meSpeak.play() will stop any other sound playing.
Further, iOS reserves volume control to the user exclusively. Any attempt to change the volume by a script will remain without effect.
Please note that you still need a user-interaction at the very beginning of the chain of events in order to have a sound played by iOS.
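
A minimal sketch of such a chain of events (the button id is just an example):

// iOS will only start audio output in direct response to a user action,
// so trigger the first call to meSpeak.speak() from an event handler:
document.getElementById('speak-button').addEventListener('click', function() {
  meSpeak.speak('hello world');
}, false);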

Voices Currently Available

JSON File Formats

1) Config-data: "mespeak_config.json":
The config-file includes all data to configure the tone (e.g.: male or female) of the electronic voice.

{ "config": "<base64-encoded octet stream>", "phontab": "<base64-encoded octet stream>", "phonindex": "<base64-encoded octet stream>", "phondata": "<base64-encoded octet stream>", "intonations": "<base64-encoded octet stream>" }

Finally, the JSON object may include an optional voice object (see below), which will be set up together with the config-data:

{ ... "voice": { <voice-data> } }

2) Voice-data: "voice.json":
A voice-file includes the ids of the voice and of the dictionary used by this voice, plus the binary data of these two files.

{ "voice_id": "<voice-identifier>", "dict_id": "<dict-identifier>", "dict": "<base64-encoded octet stream>", "voice": "<base64-encoded octet stream>" }

Alternatively, the value of "voice" may be a text string, if an additional property "voice_encoding": "text" is provided.
This should allow for quick changes and testing:

{ "voice_id": "<voice-identifier>", "dict_id": "<dict-identifier>", "dict": "<base64-encoded octet stream>", "voice": "<text-string>", "voice_encoding": "text" }

Both config-data and voice-data may be loaded and switched on the fly to (re-)configure meSpeak.js.
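
For example, two voices could be loaded and then selected per call via the "voice" option (a sketch; the file names and voice ids follow the examples above and depend on your actual setup):

meSpeak.loadConfig('mespeak_config.json');
meSpeak.loadVoice('en-us.json');
meSpeak.loadVoice('de.json', function(success, id) {
  if (success) {
    meSpeak.speak('hello world', { voice: 'en-us' }); // select a voice per call
    meSpeak.speak('Hallo Welt', { voice: 'de' });
  }
});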

For a guide to customizing languages and voices, see meSpeak – Voices & Languages.

Deferred Calls

If speak() is called before the configuration and/or voice data has been loaded, the call will be deferred and executed once setup has finished.
See this page for an example. You may reset the queue manually by calling

meSpeak.resetQueue();
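
A sketch of the deferred behaviour (file names as in the usage examples above):

// this call is queued, since no config/voice data is available yet ...
meSpeak.speak('I will be spoken as soon as the data has arrived.');

// ... and will be executed automatically once both files have been loaded:
meSpeak.loadConfig('mespeak_config.json');
meSpeak.loadVoice('en-us.json');

// to discard any queued calls instead, reset the queue manually:
// meSpeak.resetQueue();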

Amplitude and Volume

There are now two separate parameters or options to control the volume of the spoken text: amplitude and volume.
While amplitude affects the generation of the sound stream by the TTS-algorithm, volume controls the playback volume of the browser. By the use of volume you can cache a generated stream and still provide an individual volume level at playback time. Please note that there is a global volume (controlled by setVolume()) and an individual volume level relative to the global one. Both default to 1 (max volume).
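
A small sketch of the difference (the values are arbitrary):

// amplitude is baked into the generated stream by the TTS-engine:
var quietStream = meSpeak.speak('hello world', { amplitude: 50, rawdata: true });

// volume only affects playback, so the same cached stream can be played at different levels:
meSpeak.setVolume(0.8);          // global volume
meSpeak.play(quietStream);       // 100% of the global volume
meSpeak.play(quietStream, 0.25); // 25% of the global volume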

Notes on Chinese Languages and Voices

Please note that the Chinese voices only support Pinyin input (a phonetic transcript like "zhong1guo2" for 中 + 国, China) for "zh", and a simple one-to-one translation from single Simplified Chinese characters or Jyutping-romanised text for "zh-yue".

The eSpeak documentation provides the following notes:

*) zh (Mandarin Chinese):
This speaks Pinyin text and Chinese characters. There is only a simple one-to-one translation of Chinese characters to a single Pinyin pronunciation. There is no attempt yet at recognising different pronunciations of Chinese characters in context, or of recognising sequences of characters as "words". The eSpeak installation includes a basic set of Chinese characters. More are available in an additional data file for Mandarin Chinese at: http://espeak.sourceforge.net/data/.
**) zh-yue (Cantonese Chinese, Provisional):
Just a naive simple one-to-one translation from single Simplified Chinese characters to phonetic equivalents in Cantonese. There is limited attempt at disambiguation, grouping characters into words, or adjusting tones according to their surrounding syllables. This voice needs Chinese character to phonetic translation data, which is available as a separate download for Cantonese at: http://espeak.sourceforge.net/data/.
The voice can also read Jyutping romanised text.

For a simple zh-to-Pinyin translation in JavaScript see: http://www.masswerk.at/mespeak/zh-pinyin-translator.zip
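
A sketch of speaking Pinyin input with the "zh" voice (the file name is an assumption, analogous to the other voice files):

meSpeak.loadVoice('zh.json', function(success, id) {
  if (success) meSpeak.speak('zhong1guo2', { voice: 'zh' }); // Pinyin for 中国 (China)
});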

Flash-Fallback for Wave Files

(m)eSpeak internally produces wav files, which are then played. Internet Explorer 10 supports typed arrays (which are required for the binary logic), but does not provide native playback of wav files. To provide compatibility for this browser, you could try the experimental meSpeak Flash Fallback.
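
A sketch of such a feature test; loadFlashFallback() is a hypothetical placeholder for whatever mechanism you use to embed the Flash fallback:

if (meSpeak.canPlay()) {
  meSpeak.speak('hello world');
} else {
  // e.g. MSIE 10: no native wav playback,
  // so hand the generated stream over to a Flash-based player instead
  loadFlashFallback();
}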

Source

Download (all code under GPL): mespeak.zip
(v.1.5, last update: 2013-09-24 12:40 GMT)

v.1.5
Added an optional callback to meSpeak.speak() and meSpeak.play().
Added some clean-up code to prevent any memory leaks with some implementations of the Web Audio API.
Removed any references to "window" in favor of "self".
v.1.4.4
Cleaned up a bit of the Emscripten-generated code, changed wording in this page.
v.1.4.3
Better handling for base64-imports when using the HTMLAudioElement for playback with meSpeak.play(). (Less overhead.)
v.1.4.2
Added base64 or data-url as import-format for meSpeak.play().
v.1.4.1
Added a guide to voices and languages and an experimental Flash-fallback for MSIE10. No changes to the meSpeak-code.
v.1.4
Added an option to export data as a plain array.
v.1.3.1
Fixed a bug in the decoding of text-formatted voice data.
v.1.3
Added alternative text format for voices.
v.1.2
Added volume control and capability to play back exported audio-streams.
v.1.1
Added support for the Web Audio API (AudioContext), which is now the preferred method to play the generated sound. Browsers lacking support for the Web Audio API will use the HTMLAudioElement for playback. (v.1.1 was successfully tested to play on iOS 6/Safari.) Also added an option to export the raw data in various formats.
v.1.04
Demo-page: Auto-speak will now be triggered only if a URL-parameter "auto" is set to "true" or "1".
(This additional parameter should inhibit any repeated attempts to play in case the script fails and the demo form is sent via GET-parameters.)
v.1.03
Added an instant link for auto-speak to this demo-page.
v.1.02
Added Chinese voice-data (zh, zh-yue) by popular request.
v.1.01
Added an onload-callback to the assignment of the generated audio-data-URL. This should add compatibility with newer versions of WebKit and Chrome.
v.1.0
Initial upload.

About speak.js

speak.js is 100% client-side JavaScript. "speak.js" is a port of eSpeak, an open-source speech synthesizer, which was compiled from C++ to JavaScript using Emscripten.
The project page and source code for this demo can be found here.

Browser requirements: Firefox, Chrome/Opera, WebKit, and Safari (see the About section above).

Note that recent versions of these browsers are needed in most cases.