Replies: 3 comments
-
Hi there! Let me try to rephrase the question to make sure I am on the same page. The visual is not a spectrogram, is it? It is an activation matrix (frequency × time). You would like to condition audio synthesis on this information and, hopefully, get audio with a similar activation matrix. Do I understand that correctly?
-
The visual is not a spectrogram, is it? - that's right.
-
Ok, I see. This seems quite interesting. I think I saw something similar before: https://magenta.tensorflow.org/music-vae – although that is more of a MIDI player.

Regarding the Spectrogram VQGAN: I don't think this image (the activation matrix) is a good choice as an input here, because you would need to quantize (encode) it as a sequence of codes for the transformer to use as a prime, and that would require training another VQGAN just to reconstruct these activation matrices.

What you can do instead is assume that each time step has only one frequency. Check out the visual: most of the time there is only one activation per time step. With this, you can simply take the sequence of frequencies and train the transformer to generate audio given this list. Maybe you can also add a class (style: male/female) to this condition to stylize the output. For this idea, you will need:
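As a rough sketch of the "one frequency per time step" idea above: assuming the activation matrix arrives as a NumPy array of shape (frequency × time), you can collapse it to a 1-D token sequence by taking the strongest frequency bin at each step, and optionally prepend a style/class token. The function name, the style-token layout, and the toy matrix below are all illustrative assumptions, not part of SpecVQGAN itself.

```python
import numpy as np

def activations_to_prime(act, style_id=None):
    """Collapse a (n_freq_bins, n_time_steps) activation matrix into a
    1-D token sequence usable as a transformer prime.

    Assumes one dominant activation per time step, as the reply suggests.
    style_id is a hypothetical class token (e.g. male/female voice) that,
    when given, is simply prepended to the sequence.
    """
    # One token per time step: the index of the strongest frequency bin.
    freq_tokens = act.argmax(axis=0)  # shape: (n_time_steps,)
    if style_id is None:
        return freq_tokens
    # Illustrative vocabulary layout: the style token shares the same id
    # space as the frequency-bin tokens.
    return np.concatenate(([style_id], freq_tokens))

# Toy example: 4 frequency bins, 5 time steps.
act = np.zeros((4, 5))
act[2, :3] = 1.0   # bin 2 active for the first three steps
act[0, 3:] = 1.0   # bin 0 active afterwards
print(activations_to_prime(act))  # [2 2 2 0 0]
```

The resulting list of frequency indices (plus an optional class token) is what the transformer would be primed with when generating audio codes.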
-
ciaua/unagan#8
Is it possible? I want to take the above visual and mash it around (change the shapes) to create new vocals....
UPDATE
Basically, I think I want to condition the SpecVQGAN on these images (not on a video frame per se).