Music Generation using Deep Learning
Animesh Sharma
Mesra, Jharkhand
A music generation project in which LSTM and ANN models were used to predict the next notes of a song from the previous notes and recreate a piece of music.
Project status: Under Development
Intel Technologies
- AI DevCloud / Xeon
- Intel Opt ML/DL Framework
- Intel Python
Overview / Usage
Being a music freak, I was very intrigued by how deep learning and music can be connected. In this project, my main goal was to recreate a piece of music by predicting each next note from the previous notes, and to observe whether a machine can learn to play music. It seems it can!
Methodology / Approach
I used Python with Keras running on a TensorFlow backend to develop the model. I used pandas to create a dataframe and then implemented a function to get the data into the format the model expects: three consecutive samples as the input and the fourth sample as the label. For MP3 files, I used the pydub library to convert them to WAV format so that I could read them with SciPy's 'read' function.

I compared the results of ANN and LSTM models; since each input consisted of only three samples, the results were good in both cases. I then used SciPy's 'write' function to create the generated music file. I first experimented with the tanh and sigmoid activations, but they gave me distorted output. Plain ReLU was not viable either, because the audio samples also take negative values, which ReLU clips to zero. So I used LeakyReLU, which passes a small fraction of negative inputs through, and it gave me the best results without any distortion.
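The windowing step described above (three samples in, the fourth as the label) can be sketched as follows. This is a minimal illustration, not the project's actual code; the function name and file names are made up, and the pydub/SciPy conversion steps are shown only as comments:

```python
import numpy as np

# MP3 -> WAV conversion and reading, as described in the write-up
# (commented out because it needs pydub and an actual audio file):
# from pydub import AudioSegment
# from scipy.io import wavfile
# AudioSegment.from_mp3("song.mp3").export("song.wav", format="wav")
# rate, samples = wavfile.read("song.wav")

def make_windows(samples, window=3):
    """Slice a 1-D signal into (input, label) pairs: each row of X
    holds `window` consecutive samples, and y holds the sample that
    immediately follows that window."""
    X = np.array([samples[i:i + window] for i in range(len(samples) - window)])
    y = np.asarray(samples[window:])
    return X, y

# Tiny toy signal; note the negative values, which is why ReLU
# (which zeroes out negatives) was unsuitable here.
signal = np.array([0.1, -0.2, 0.3, 0.5, -0.4, 0.2])
X, y = make_windows(signal)
# X[0] is the first three samples, y[0] is the fourth
```

X and y in this shape can then be fed to either a dense (ANN) model directly, or to an LSTM after reshaping X to (n_windows, window, 1).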
Technologies Used
- Intel AI DevCloud
- Intel-optimized Python
- Intel-optimized TensorFlow
- Keras
- SciPy
- NumPy
- Pandas
- Pydub
- Jupyter Notebook