
Stable Diffusion can create new music by generating spectrograms based on text

Since the early days of AI, scientists have tried to use it to generate new and interesting music. The team behind the Riffusion project has found a very original way to apply image-generating AI to music production. They fine-tuned the open Stable Diffusion model on spectrogram images, which depict the frequency content and amplitude of a sound wave over time, each paired with a text description. As a result, the model can generate new spectrograms from your text prompts, and those images can then be converted back into playable audio.
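To make that last step concrete, here is a minimal sketch of turning a spectrogram image back into a waveform with the Griffin-Lim phase-reconstruction algorithm from the librosa library. The file names, the pixel-to-magnitude mapping, and all parameter values are illustrative assumptions; Riffusion's actual conversion uses its own spectrogram parameters.

```python
# Minimal sketch (not Riffusion's exact pipeline): reconstruct audio from a
# grayscale spectrogram image whose brightness encodes amplitude, with
# frequency on the y-axis and time on the x-axis. All values are assumptions.
import numpy as np
import librosa
import soundfile as sf
from PIL import Image

# Load the image as a 2-D array of pixel intensities in [0, 255].
img = np.asarray(Image.open("spectrogram.png").convert("L"), dtype=np.float32)

# Map pixel brightness to spectral magnitude; a simple power curve is
# assumed here purely for illustration.
magnitude = (img / 255.0) ** 2.0 * 100.0

# Flip vertically so that row 0 holds the lowest frequency, as librosa expects.
magnitude = magnitude[::-1, :]

# The image stores no phase information, so Griffin-Lim iteratively estimates
# a plausible phase and inverts the short-time Fourier transform.
waveform = librosa.griffinlim(magnitude, n_iter=32, hop_length=512)

# Write the result at an assumed sample rate of 44.1 kHz.
sf.write("output.wav", waveform, samplerate=44100)
```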

AI can generate new music by modifying spectrograms in response to your requests.

As with image-to-image editing in Stable Diffusion, the method can also modify existing compositions and synthesize music from samples. You can combine different styles, make a smooth transition from one style to another, or alter an existing sound, for example to raise the volume of individual instruments, change the rhythm, or replace instruments; a sketch of this img2img workflow follows below.
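As an illustration, the hedged sketch below applies the standard img2img pipeline from the Hugging Face diffusers library to a spectrogram, using the publicly released riffusion/riffusion-model-v1 checkpoint. The input file name, prompt, and parameter values are assumptions for demonstration, not settings recommended by the project.

```python
# Hedged sketch: restyle an existing spectrogram with img2img so the prompt
# reshapes the sound while the clip's overall structure is preserved.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",  # public checkpoint on Hugging Face
    torch_dtype=torch.float16,
).to("cuda")

# Spectrogram of an existing clip (hypothetical file name).
init_spectrogram = Image.open("clip_spectrogram.png").convert("RGB")

# Lower strength keeps more of the original rhythm and instrumentation;
# higher strength lets the text prompt dominate. Values here are illustrative.
result = pipe(
    prompt="jazzy saxophone solo",
    image=init_spectrogram,
    strength=0.55,
    guidance_scale=7.0,
)
result.images[0].save("modified_spectrogram.png")

# The saved image can then be converted back to audio, e.g. with the
# Griffin-Lim sketch shown earlier.
```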

The approach is already showing a lot of promise for music generation. And since the Riffusion code is open source under the MIT license, anyone can use it to create their own music. On the project website, you can listen to samples of generated music.

