E3S Web of Conferences (Jan 2023)

TEXT2AV – Automated Text to Audio and Video Conversion

  • Sanjeeva Polepaka,
  • Balasri Nitin Reddy Vanipenta,
  • Indraj Goud Jagirdar,
  • Guru Prasad Aavula,
  • Pathani Ashish

DOI
https://doi.org/10.1051/e3sconf/202343001027
Journal volume & issue
Vol. 430
p. 01027

Abstract

The paper aims to develop a machine learning-based system that automatically converts text to audio or text to video, as per the user's request. Reading a large body of text is difficult for anyone; the text-to-speech (TTS) model makes it easier by converting the text into audio in many languages, delivered through a lip-synced avatar to make the interaction more attractive and human-like. The TTS model is built on the Waveform Recurrent Neural Network (WaveRNN), an auto-regressive model that predicts future samples from present ones. For video, the system identifies keywords in the input text and uses diffusion models to generate high-quality video content, with a GAN (Generative Adversarial Network) employed in video generation. Frame interpolation is used to generate intermediate frames between two adjacent frames, producing slow-motion video. WebVid-20M, ImageNet, and Hugging Face datasets are used for text-to-video, while the LibriTTS corpus and a lip-sync dataset are used for text-to-audio. The system provides a user-friendly, automated platform that takes text as input and quickly and efficiently produces either high-quality audio or high-resolution video.
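The frame-interpolation step described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes frames are NumPy arrays of equal shape and uses simple linear blending to insert intermediate frames between adjacent pairs, whereas production interpolators typically estimate motion with a learned model.

```python
# Minimal sketch of frame interpolation for slow motion (assumption: frames
# are uint8 NumPy arrays of identical shape). Linear blending only illustrates
# the idea of inserting frames between two adjacent frames; it is not the
# paper's method.
import numpy as np

def interpolate_frames(frame_a: np.ndarray, frame_b: np.ndarray, n_mid: int = 1):
    """Return n_mid blended frames between two adjacent frames."""
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    mids = []
    for i in range(1, n_mid + 1):
        t = i / (n_mid + 1)  # blend weight between 0 and 1
        mids.append(((1.0 - t) * a + t * b).astype(np.uint8))
    return mids

def slow_motion(frames, n_mid: int = 1):
    """Insert n_mid interpolated frames between every adjacent pair of frames."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        out.extend(interpolate_frames(a, b, n_mid))
    out.append(frames[-1])
    return out
```

With n_mid = 1, a clip played back at its original frame rate contains twice as many frames and therefore appears at half speed, which is the slow-motion effect the abstract refers to.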