Neural Style Transfer on Audio Signals


Mixing two audio signals to create new music using Neural Style Transfer.

Project status: Under Development

Artificial Intelligence


Overview / Usage

New music is created by applying the style of one audio signal to the content of another using the neural style transfer algorithm. This gives better insight into how neural style transfer works, how it can be applied to different kinds of signals, and how new audio signals can be synthesized.

Methodology / Approach

First, preprocessing is done: a Fast Fourier Transform is applied to both audio signals so that we move from the time domain to the frequency domain, where the audio is better represented by its frequencies. The style of the style audio is then extracted using Gram matrices, which capture the correlations between the different activations of the network and thereby encode the style. We minimize the style loss between the Gram matrix of the style audio and that of the content audio, so that the content audio takes on the style of the style audio. A shallow network gives better results here than deeper networks. Through this, the content audio is stylized and new music is produced. Finally, post-processing is done: an inverse Fourier transform converts the result back into the time domain so that the audio is audible again.
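The sketch below illustrates this pipeline with PyTorch and librosa: a short-time Fourier transform for preprocessing, a shallow 1-D convolutional layer with fixed random weights as the feature extractor, a Gram-matrix style loss optimized with Adam, and Griffin-Lim to recover phase and return to the time domain. The file names, network width, learning rate, and iteration count are illustrative assumptions, not the repository's exact settings.

import librosa
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_FFT = 2048  # illustrative FFT size

def load_spectrogram(path):
    """Load audio and return its log-magnitude spectrogram as a tensor."""
    y, sr = librosa.load(path)              # time domain
    S = librosa.stft(y, n_fft=N_FFT)        # frequency domain
    mag = np.log1p(np.abs(S))               # keep magnitude only
    return torch.from_numpy(mag).float(), sr

def gram_matrix(features):
    """Correlations between channel activations; this captures the style."""
    # features: (channels, time)
    return features @ features.t() / features.numel()

class ShallowNet(nn.Module):
    """A shallow 1-D conv net; frequency bins are treated as input channels."""
    def __init__(self, in_channels, out_channels=4096, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

content, sr = load_spectrogram("content.wav")   # placeholder file names
style, _ = load_spectrogram("style.wav")

net = ShallowNet(in_channels=content.shape[0])
for p in net.parameters():
    p.requires_grad_(False)                     # fixed random features

# Target style representation from the style audio.
style_gram = gram_matrix(net(style.unsqueeze(0)).squeeze(0))

# Optimize the content spectrogram so its Gram matrix matches the style's.
output = content.clone().unsqueeze(0).requires_grad_(True)
optimizer = optim.Adam([output], lr=0.01)
for step in range(1000):
    optimizer.zero_grad()
    out_gram = gram_matrix(net(output).squeeze(0))
    loss = nn.functional.mse_loss(out_gram, style_gram)
    loss.backward()
    optimizer.step()

# Post-processing: invert the spectrogram back to the time domain.
mag = np.expm1(output.detach().squeeze(0).numpy())
audio = librosa.griffinlim(mag, n_fft=N_FFT)    # audio can now be saved/played

Because only the Gram matrices are compared, the style and content clips do not need to have the same length, and keeping the random convolution fixed means only the output spectrogram is updated during optimization.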

Technologies Used

Python is used with the deep learning framework PyTorch. Other libraries used are librosa, NumPy, and Matplotlib.

Repository

https://github.com/alishdipani/Neural-Style-Transfer-Audio
