Hi,
I am trying to implement a VAE on audio and want to listen to the reconstructed audio via TensorBoard. The input and output of my network are spectrograms (computed with torchaudio.transforms.Spectrogram), so I assume I should use torchaudio.transforms.GriffinLim to get a listenable waveform.
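
Here is a minimal sketch of the setup (the VAE itself is omitted, and n_fft and the input file are just placeholders):

```python
import torchaudio

n_fft = 1024  # placeholder value
spectrogram = torchaudio.transforms.Spectrogram(n_fft=n_fft)  # power spectrogram (default power=2)
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=n_fft)   # inverts a power spectrogram

waveform, sample_rate = torchaudio.load("example.wav")  # placeholder file
spec = spectrogram(waveform)       # network input
# ... spec goes through the VAE; the reconstruction has the same shape ...
reconstructed = griffin_lim(spec)  # back to a waveform I can listen to
```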
However, when I send the output of torchaudio.transforms.GriffinLim to TensorBoard, I get the warning "audio amplitude out of range, auto clipped." and there is no "Audio" tab in TensorBoard.
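
The logging call is roughly this (a sketch, assuming torch.utils.tensorboard's SummaryWriter; the tag name and global_step are placeholders):

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
# reconstructed comes from GriffinLim above, shape (1, num_samples) for mono audio
writer.add_audio("reconstruction", reconstructed, global_step=0, sample_rate=sample_rate)
writer.close()
```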
It seems that GriffinLim outputs a waveform with values roughly between -2 and 2, but I do not know what range or format TensorBoard expects for audio, so I am not sure which transform I should apply after GriffinLim.
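
(I checked the range simply by printing the min/max of the GriffinLim output, something like:)

```python
print(reconstructed.min().item(), reconstructed.max().item())  # roughly -2 to 2 in my runs
```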
Does anyone have an idea?