A multilingual multispeaker expressive speech synthesis framework

ERISHA

ERISHA is a multilingual multispeaker expressive speech synthesis framework. It can transfer expressivity to the voice of a speaker for whom no expressive speech corpus is available. The term ERISHA means speech in Sanskrit. The framework includes various deep learning architectures for building the prosody encoder, such as Global Style Token (GST), Variational Autoencoder (VAE), Gaussian Mixture Variational Autoencoder (GMVAE), and X-vectors.
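
To illustrate the style-token idea behind GST-based prosody encoders, below is a minimal PyTorch sketch of a style token layer. The class name, dimensions, and parameters are assumptions for illustration only and do not reflect ERISHA's actual modules or API.

```python
# Minimal sketch of a Global Style Token (GST) style layer, assuming PyTorch.
# Names and dimensions are illustrative, not ERISHA's actual implementation.
import torch
import torch.nn as nn


class StyleTokenLayer(nn.Module):
    """Attend over a bank of learnable style tokens with a reference embedding."""

    def __init__(self, ref_dim=128, num_tokens=10, token_dim=256, num_heads=4):
        super().__init__()
        # Bank of learnable style tokens shared across all utterances.
        self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim // num_heads))
        self.attention = nn.MultiheadAttention(
            embed_dim=token_dim, num_heads=num_heads,
            kdim=token_dim // num_heads, vdim=token_dim // num_heads,
            batch_first=True)
        self.query_proj = nn.Linear(ref_dim, token_dim)

    def forward(self, ref_embedding):
        # ref_embedding: (batch, ref_dim) summary of a reference utterance,
        # e.g. the output of a reference encoder over a mel spectrogram.
        query = self.query_proj(ref_embedding).unsqueeze(1)      # (batch, 1, token_dim)
        keys = torch.tanh(self.tokens).unsqueeze(0).expand(
            ref_embedding.size(0), -1, -1)                       # (batch, num_tokens, token_dim // num_heads)
        style_embedding, _ = self.attention(query, keys, keys)   # (batch, 1, token_dim)
        return style_embedding.squeeze(1)                        # conditioning vector for the decoder


if __name__ == "__main__":
    layer = StyleTokenLayer()
    ref = torch.randn(2, 128)   # stand-in for a reference-encoder output
    print(layer(ref).shape)     # torch.Size([2, 256])
```

At inference time, a style embedding produced this way (or picked directly from the token bank) can condition the decoder, which is what allows expressivity learned from one speaker's data to be applied to another speaker's voice.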

Currently, the library is in an early stage of development and will be updated frequently in the coming days.

Stay tuned for more updates; we are open to collaboration!

Available recipes

Available features