Google's Magenta developers have recently introduced Lo-Fi Player, a virtual room in the browser that can play musical beats from various instruments. Lo-Fi Player is a music-generation tool that lets users select and shape music of their choice. The developers said that if you have ever listened to the popular Lo-Fi hip hop streams while working and imagined you were the producer, this tool lets you create the music and the vibe yourself. They chose Lo-Fi hip hop because it is a popular genre with a simple musical structure.
How it works
In the Lo-Fi Player, users can create a custom music room where the music changes as they interact with the elements in it. Users can listen to the music and also share a room with others. Clicking the start button begins playback, and once the music has started the user can change it in real time, adjusting the tune, tempo, and bass by tinkering with the objects in the room (a minimal sketch of this pattern follows below). Beyond changing the music, there are other features, such as looking out the window: the view corresponds to the background sound in the track, and clicking the window changes both the visuals and the music. The main aim of the system is to replace the existing Lo-Fi hip hop streams with a prototype of an interactive music piece for the genre.
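As a rough illustration of the click-to-change-music pattern, here is a minimal TypeScript sketch using the open-source Magenta.js library. The element IDs, the short melody, and the tempo range are hypothetical stand-ins, and it assumes the player's setTempo method for live tempo changes; the Lo-Fi Player's actual wiring is more elaborate.

```typescript
import * as mm from '@magenta/music';

const player = new mm.Player();

// A short quantized melody to loop (hypothetical placeholder content).
const melody: mm.INoteSequence = {
  notes: [
    { pitch: 60, quantizedStartStep: 0, quantizedEndStep: 2 },
    { pitch: 62, quantizedStartStep: 2, quantizedEndStep: 4 },
    { pitch: 64, quantizedStartStep: 4, quantizedEndStep: 6 },
    { pitch: 65, quantizedStartStep: 6, quantizedEndStep: 8 },
  ],
  quantizationInfo: { stepsPerQuarter: 4 },
  totalQuantizedSteps: 8,
};

// The "start" button kicks off playback, mirroring the room's start button.
document.getElementById('start')?.addEventListener('click', () => {
  player.start(melody);
});

// Clicking a room object adjusts playback tempo in real time.
document.getElementById('tempo-knob')?.addEventListener('click', () => {
  player.setTempo(70 + Math.random() * 40); // pick a lo-fi-ish BPM
});
```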
The ML behind
The machine learning models come from Magenta.js, an open-source JavaScript API for running pre-trained Magenta models in the browser. It is built on TensorFlow.js, which provides fast, GPU-accelerated inference. The developers incorporated several music machine learning models built by the Magenta team to give users a novel and dynamic experience. At the center of the virtual music room is a TV representing MusicVAE, a machine learning model that lets users build palettes for blending and exploring musical scores. Clicking the TV creates new melodies by combining existing ones, and those melodies can also serve as primers for a recurrent neural network that generates further music.
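A minimal sketch of the blending idea with Magenta.js's MusicVAE: interpolate() morphs between two melodies, and the in-between sequences are the kind of "palette" the TV exposes. The checkpoint below is one of Magenta's publicly hosted 2-bar melody checkpoints (the URL may change over time), and the inputs are assumed to be quantized 2-bar melodies matching it.

```typescript
import * as mm from '@magenta/music';

const vae = new mm.MusicVAE(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small'
);

async function blendMelodies(a: mm.INoteSequence, b: mm.INoteSequence) {
  await vae.initialize();
  // interpolate() returns a set of sequences that morph from `a` to `b`;
  // the middle ones are the blends of the two input melodies.
  const numSteps = 5;
  const palette = await vae.interpolate([a, b], numSteps);
  return palette;
}
```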
Then there is a radio behind the TV representing MelodyRNN, a machine learning model that applies language modeling to melody generation using an LSTM. Clicking the radio in the virtual room generates new melodies, as sketched below.
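The language-modeling idea looks roughly like this in Magenta.js: the LSTM is fed a primer melody and predicts the note events that follow. This sketch assumes Magenta's publicly hosted basic_rnn melody checkpoint (the URL may change over time) and a quantized primer sequence; the step count and temperature are illustrative.

```typescript
import * as mm from '@magenta/music';

const rnn = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn'
);

async function newMelodyFrom(primer: mm.INoteSequence) {
  await rnn.initialize();
  // Melody generation as language modeling: the LSTM continues the
  // primer for 32 steps; temperature > 1 makes the output more varied.
  return rnn.continueSequence(primer, 32, 1.1);
}
```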
Sharing on YouTube
In the second phase, the developers are transforming the Lo-Fi Player into an interactive YouTube stream that will run for a few weeks. Instead of clicking elements in the room, viewers type commands into the live chat. These commands perform different tasks, such as changing a color or switching an instrument, and every time the beat loops the system randomly selects a comment from the live chat and applies its modification.
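The chat-driven loop can be pictured with a small sketch like the one below. The command names, the RoomAction shape, and the apply callback are all hypothetical, invented here to illustrate the pattern; the actual stream defines its own command vocabulary.

```typescript
// Hypothetical chat commands mapped to room modifications.
type RoomAction =
  | { kind: 'color'; value: string }
  | { kind: 'instrument'; value: string };

function parseChatCommand(message: string): RoomAction | null {
  const [cmd, ...args] = message.trim().split(/\s+/);
  switch (cmd.toLowerCase()) {
    case '!color':
      return args[0] ? { kind: 'color', value: args[0] } : null;
    case '!instrument':
      return args[0] ? { kind: 'instrument', value: args[0] } : null;
    default:
      return null; // not a recognized command
  }
}

// Every time the beat loops, pick one pending chat message at random
// and apply it, mirroring the behavior described for the stream.
function onBeatLoop(pending: string[], apply: (a: RoomAction) => void) {
  const actions = pending
    .map(parseChatCommand)
    .filter((a): a is RoomAction => a !== null);
  if (actions.length === 0) return;
  apply(actions[Math.floor(Math.random() * actions.length)]);
}
```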