Missing keys & unexpected keys in state_dict when loading pretrained model

Hey guys, hoping anyone can help.
I am using this guy's Real-Time Voice Cloning project to recreate my voice with TTS. He provides pretrained encoder models for this, and it should be as simple as feeding it a recording of my voice for it to train itself on.

I run into a problem at runtime, which is as follows (sorry it's a screenshot, I am not home to run this again to paste the code):

Does anybody have experience with this error? I can provide more detail as needed, I am pretty inexperienced with this stuff and am not sure where to even begin debugging this.

Thanks all!

Could you post the code which creates this issue?
Are you sure you are loading the right state_dict for the model?

Okay here’s the code I’m running that throws the error:

from encoder.params_model import model_embedding_size as speaker_embedding_size
from utils.argutils import print_args
from synthesizer.inference import Synthesizer
from encoder import inference as encoder
from vocoder import inference as vocoder
from pathlib import Path
import numpy as np
import librosa
import argparse
import torch
import sys


if __name__ == '__main__':
    ## Info & args
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )
    parser.add_argument("-e", "--enc_model_fpath", type=Path, 
                        default="encoder/saved_models/pretrained.pt",
                        help="Path to a saved encoder")
    parser.add_argument("-s", "--syn_model_dir", type=Path, 
                        default="synthesizer/saved_models/logs-pretrained/",
                        help="Directory containing the synthesizer model")
    parser.add_argument("-v", "--voc_model_fpath", type=Path, 
                        default="vocoder/saved_models/pretrained/pretrained.pt",
                        help="Path to a saved vocoder")
    parser.add_argument("--low_mem", action="store_true", help=\
        "If True, the memory used by the synthesizer will be freed after each use. Adds large "
        "overhead but allows to save some GPU memory for lower-end GPUs.")
    parser.add_argument("--no_sound", action="store_true", help=\
        "If True, audio won't be played.")
    args = parser.parse_args()
    print_args(args, parser)
    if not args.no_sound:
        import sounddevice as sd
        
    
    ## Print some environment information (for debugging purposes)
    print("Running a test of your configuration...\n")
    if not torch.cuda.is_available():
        print("Your PyTorch installation is not configured to use CUDA. If you have a GPU ready "
              "for deep learning, ensure that the drivers are properly installed, and that your "
              "CUDA version matches your PyTorch installation. CPU-only inference is currently "
              "not supported.", file=sys.stderr)
        quit(-1)
    device_id = torch.cuda.current_device()
    gpu_properties = torch.cuda.get_device_properties(device_id)
    print("Found %d GPUs available. Using GPU %d (%s) of compute capability %d.%d with "
          "%.1fGb total memory.\n" % 
          (torch.cuda.device_count(),
           device_id,
           gpu_properties.name,
           gpu_properties.major,
           gpu_properties.minor,
           gpu_properties.total_memory / 1e9))
    
    
    ## Load the models one by one.
    print("Preparing the encoder, the synthesizer and the vocoder...")
    encoder.load_model(args.enc_model_fpath)
    synthesizer = Synthesizer(args.syn_model_dir.joinpath("taco_pretrained"), low_mem=args.low_mem)
    vocoder.load_model(args.voc_model_fpath)
    
    
    ## Run a test
    print("Testing your configuration with small inputs.")
    # Forward an audio waveform of zeroes that lasts 1 second. Notice how we can get the encoder's
    # sampling rate, which may differ.
    # If you're unfamiliar with digital audio, know that it is encoded as an array of floats 
    # (or sometimes integers, but mostly floats in this project) ranging from -1 to 1.
    # The sampling rate is the number of values (samples) recorded per second, it is set to
    # 16000 for the encoder. Creating an array of length <sampling_rate> will always correspond 
    # to an audio of 1 second.
    print("\tTesting the encoder...")
    encoder.embed_utterance(np.zeros(encoder.sampling_rate))
    
    # Create a dummy embedding. You would normally use the embedding that encoder.embed_utterance
    # returns, but here we're going to make one ourselves just for the sake of showing that it's
    # possible.
    embed = np.random.rand(speaker_embedding_size)
    # Embeddings are L2-normalized (this isn't important here, but if you want to make your own 
    # embeddings it will be).
    embed /= np.linalg.norm(embed)
    # The synthesizer can handle multiple inputs with batching. Let's create another embedding to 
    # illustrate that
    embeds = [embed, np.zeros(speaker_embedding_size)]
    texts = ["test 1", "test 2"]
    print("\tTesting the synthesizer... (loading the model will output a lot of text)")
    mels = synthesizer.synthesize_spectrograms(texts, embeds)
    
    # The vocoder synthesizes one waveform at a time, but it's more efficient for long ones. We 
    # can concatenate the mel spectrograms to a single one.
    mel = np.concatenate(mels, axis=1)
    # The vocoder can take a callback function to display the generation. More on that later. For 
    # now we'll simply hide it like this:
    no_action = lambda *args: None
    print("\tTesting the vocoder...")
    # For the sake of making this test short, we'll pass a short target length. The target length 
    # is the length of the wav segments that are processed in parallel. E.g. for audio sampled 
    # at 16000 Hertz, a target length of 8000 means that the target audio will be cut in chunks of
    # 0.5 seconds which will all be generated together. The parameters here are absurdly short, and 
    # that has a detrimental effect on the quality of the audio. The default parameters are 
    # recommended in general.
    vocoder.infer_waveform(mel, target=200, overlap=50, progress_callback=no_action)
    
    print("All test passed! You can now synthesize speech.\n\n")
    
    
    ## Interactive speech generation
    print("This is a GUI-less example of interface to SV2TTS. The purpose of this script is to "
          "show how you can interface this project easily with your own. See the source code for "
          "an explanation of what is happening.\n")
    
    print("Interactive generation loop")
    num_generated = 0
    while True:
        try:
            # Get the reference audio filepath
            message = "Reference voice: enter an audio filepath of a voice to be cloned (mp3, " \
                      "wav, m4a, flac, ...):\n"
            in_fpath = Path(input(message).replace("\"", "").replace("\'", ""))
            
            
            ## Computing the embedding
            # First, we load the wav using the function that the speaker encoder provides. This is 
            # important: there is preprocessing that must be applied.
            
            # The following two methods are equivalent:
            # - Directly load from the filepath:
            preprocessed_wav = encoder.preprocess_wav(in_fpath)
            # - If the wav is already loaded:
            original_wav, sampling_rate = librosa.load(in_fpath)
            preprocessed_wav = encoder.preprocess_wav(original_wav, sampling_rate)
            print("Loaded file succesfully")
            
            # Then we derive the embedding. There are many functions and parameters that the 
            # speaker encoder interfaces. These are mostly for in-depth research. You will typically
            # only use this function (with its default parameters):
            embed = encoder.embed_utterance(preprocessed_wav)
            print("Created the embedding")
            
            
            ## Generating the spectrogram
            text = input("Write a sentence (+-20 words) to be synthesized:\n")
            
            # The synthesizer works in batch, so you need to put your data in a list or numpy array
            texts = [text]
            embeds = [embed]
            # If you know what the attention layer alignments are, you can retrieve them here by
            # passing return_alignments=True
            specs = synthesizer.synthesize_spectrograms(texts, embeds)
            spec = specs[0]
            print("Created the mel spectrogram")
            
            
            ## Generating the waveform
            print("Synthesizing the waveform:")
            # Synthesizing the waveform is fairly straightforward. Remember that the longer the
            # spectrogram, the more time-efficient the vocoder.
            generated_wav = vocoder.infer_waveform(spec)
            
            
            ## Post-generation
            # There's a bug with sounddevice that makes the audio cut one second earlier, so we
            # pad it.
            generated_wav = np.pad(generated_wav, (0, synthesizer.sample_rate), mode="constant")
            
            # Play the audio (non-blocking)
            if not args.no_sound:
                sd.stop()
                sd.play(generated_wav, synthesizer.sample_rate)
                
            # Save it on the disk
            fpath = "demo_output_%02d.wav" % num_generated
            print(generated_wav.dtype)
            librosa.output.write_wav(fpath, generated_wav.astype(np.float32), 
                                     synthesizer.sample_rate)
            num_generated += 1
            print("\nSaved output as %s\n\n" % fpath)
            
            
        except Exception as e:
            print("Caught exception: %s" % repr(e))
            print("Restarting\n")

Along with:

from encoder.params_data import *
from encoder.model import SpeakerEncoder
from encoder.audio import preprocess_wav   # We want to expose this function from here
from matplotlib import cm
from encoder import audio
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import torch

_model = None # type: SpeakerEncoder
_device = None # type: torch.device


def load_model(weights_fpath: Path, device=None):
    """
    Loads the model in memory. If this function is not explicitly called, it will be run on the 
    first call to embed_frames() with the default weights file.
    
    :param weights_fpath: the path to saved model weights.
    :param device: either a torch device or the name of a torch device (e.g. "cpu", "cuda"). The 
    model will be loaded and will run on this device. Outputs will however always be on the cpu. 
    If None, will default to your GPU if it's available, otherwise your CPU.
    """
    # TODO: I think the slow loading of the encoder might have something to do with the device it
    #   was saved on. Worth investigating.
    global _model, _device
    if device is None:
        _device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    elif isinstance(device, str):
        _device = torch.device(device)
    _model = SpeakerEncoder(_device, torch.device("cpu"))
    checkpoint = torch.load(weights_fpath)
    _model.load_state_dict(checkpoint["model_state"])
    _model.eval()
    print("Loaded encoder \"%s\" trained to step %d" % (weights_fpath.name, checkpoint["step"]))
    
    
def is_loaded():
    return _model is not None


def embed_frames_batch(frames_batch):
    """
    Computes embeddings for a batch of mel spectrograms.
    
    :param frames_batch: a batch of mel spectrograms as a numpy array of float32 of shape 
    (batch_size, n_frames, n_channels)
    :return: the embeddings as a numpy array of float32 of shape (batch_size, model_embedding_size)
    """
    if _model is None:
        raise Exception("Model was not loaded. Call load_model() before inference.")
    
    frames = torch.from_numpy(frames_batch).to(_device)
    embed = _model.forward(frames).detach().cpu().numpy()
    return embed


def compute_partial_slices(n_samples, partial_utterance_n_frames=partials_n_frames,
                           min_pad_coverage=0.75, overlap=0.5):
    """
    Computes where to split an utterance waveform and its corresponding mel spectrogram to obtain 
    partial utterances of <partial_utterance_n_frames> each. Both the waveform and the mel 
    spectrogram slices are returned, so as to make each partial utterance waveform correspond to 
    its spectrogram. This function assumes that the mel spectrogram parameters used are those 
    defined in params_data.py.
    
    The returned ranges may be indexing further than the length of the waveform. It is 
    recommended that you pad the waveform with zeros up to wave_slices[-1].stop.
    
    :param n_samples: the number of samples in the waveform
    :param partial_utterance_n_frames: the number of mel spectrogram frames in each partial 
    utterance
    :param min_pad_coverage: when reaching the last partial utterance, it may or may not have 
    enough frames. If at least <min_pad_coverage> of <partial_utterance_n_frames> are present, 
    then the last partial utterance will be considered, as if we padded the audio. Otherwise, 
    it will be discarded, as if we trimmed the audio. If there aren't enough frames for 1 partial 
    utterance, this parameter is ignored so that the function always returns at least 1 slice.
    :param overlap: by how much the partial utterance should overlap. If set to 0, the partial 
    utterances are entirely disjoint. 
    :return: the waveform slices and mel spectrogram slices as lists of array slices. Index 
    respectively the waveform and the mel spectrogram with these slices to obtain the partial 
    utterances.
    """
    assert 0 <= overlap < 1
    assert 0 < min_pad_coverage <= 1
    
    samples_per_frame = int((sampling_rate * mel_window_step / 1000))
    n_frames = int(np.ceil((n_samples + 1) / samples_per_frame))
    frame_step = max(int(np.round(partial_utterance_n_frames * (1 - overlap))), 1)

    # Compute the slices
    wav_slices, mel_slices = [], []
    steps = max(1, n_frames - partial_utterance_n_frames + frame_step + 1)
    for i in range(0, steps, frame_step):
        mel_range = np.array([i, i + partial_utterance_n_frames])
        wav_range = mel_range * samples_per_frame
        mel_slices.append(slice(*mel_range))
        wav_slices.append(slice(*wav_range))
        
    # Evaluate whether extra padding is warranted or not
    last_wav_range = wav_slices[-1]
    coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start)
    if coverage < min_pad_coverage and len(mel_slices) > 1:
        mel_slices = mel_slices[:-1]
        wav_slices = wav_slices[:-1]
    
    return wav_slices, mel_slices


def embed_utterance(wav, using_partials=True, return_partials=False, **kwargs):
    """
    Computes an embedding for a single utterance.
    
    # TODO: handle multiple wavs to benefit from batching on GPU
    :param wav: a preprocessed (see audio.py) utterance waveform as a numpy array of float32
    :param using_partials: if True, then the utterance is split in partial utterances of 
    <partial_utterance_n_frames> frames and the utterance embedding is computed from their 
    normalized average. If False, the utterance is instead computed from feeding the entire 
    spectrogram to the network.
    :param return_partials: if True, the partial embeddings will also be returned along with the 
    wav slices that correspond to the partial embeddings.
    :param kwargs: additional arguments to compute_partial_slices()
    :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). If 
    <return_partials> is True, the partial utterances as a numpy array of float32 of shape 
    (n_partials, model_embedding_size) and the wav partials as a list of slices will also be 
    returned. If <using_partials> is simultaneously set to False, both these values will be None 
    instead.
    """
    # Process the entire utterance if not using partials
    if not using_partials:
        frames = audio.wav_to_mel_spectrogram(wav)
        embed = embed_frames_batch(frames[None, ...])[0]
        if return_partials:
            return embed, None, None
        return embed
    
    # Compute where to split the utterance into partials and pad if necessary
    wave_slices, mel_slices = compute_partial_slices(len(wav), **kwargs)
    max_wave_length = wave_slices[-1].stop
    if max_wave_length >= len(wav):
        wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant")
    
    # Split the utterance into partials
    frames = audio.wav_to_mel_spectrogram(wav)
    frames_batch = np.array([frames[s] for s in mel_slices])
    partial_embeds = embed_frames_batch(frames_batch)
    
    # Compute the utterance embedding from the partial embeddings
    raw_embed = np.mean(partial_embeds, axis=0)
    embed = raw_embed / np.linalg.norm(raw_embed, 2)
    
    if return_partials:
        return embed, partial_embeds, wave_slices
    return embed


def embed_speaker(wavs, **kwargs):
    raise NotImplementedError()


def plot_embedding_as_heatmap(embed, ax=None, title="", shape=None, color_range=(0, 0.30)):
    if ax is None:
        ax = plt.gca()
    
    if shape is None:
        height = int(np.sqrt(len(embed)))
        shape = (height, -1)
    embed = embed.reshape(shape)
    
    cmap = cm.get_cmap()
    mappable = ax.imshow(embed, cmap=cmap)
    cbar = plt.colorbar(mappable, ax=ax, fraction=0.046, pad=0.04)
    cbar.set_clim(*color_range)
    
    ax.set_xticks([]), ax.set_yticks([])
    ax.set_title(title)

And the full error is:

Preparing the encoder, the synthesizer and the vocoder...
Traceback (most recent call last):
  File "demo_cli.py", line 61, in <module>
    encoder.load_model(args.enc_model_fpath)
  File "C:\Users\Marc\Desktop\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\encoder\inference.py", line 34, in load_model
    _model.load_state_dict(checkpoint["model_state"])
  File "C:\Users\Marc\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 845, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for SpeakerEncoder:
        Missing key(s) in state_dict: "similarity_weight", "similarity_bias", "lstm.weight_ih_l0", "lstm.weight_hh_l0", "lstm.bias_ih_l0", "lstm.bias_hh_l0", "lstm.weight_ih_l1", "lstm.weight_hh_l1", "lstm.bias_ih_l1", "lstm.bias_hh_l1", "lstm.weight_ih_l2", "lstm.weight_hh_l2", "lstm.bias_ih_l2", "lstm.bias_hh_l2", "linear.weight", "linear.bias".
        Unexpected key(s) in state_dict: "step", "upsample.resnet.conv_in.weight", "upsample.resnet.batch_norm.weight", "upsample.resnet.batch_norm.bias", "upsample.resnet.batch_norm.running_mean", "upsample.resnet.batch_norm.running_var", "upsample.resnet.batch_norm.num_batches_tracked", "upsample.resnet.layers.0.conv1.weight", "upsample.resnet.layers.0.conv2.weight", "upsample.resnet.layers.0.batch_norm1.weight", "upsample.resnet.layers.0.batch_norm1.bias", "upsample.resnet.layers.0.batch_norm1.running_mean", "upsample.resnet.layers.0.batch_norm1.running_var", "upsample.resnet.layers.0.batch_norm1.num_batches_tracked", "upsample.resnet.layers.0.batch_norm2.weight", "upsample.resnet.layers.0.batch_norm2.bias", "upsample.resnet.layers.0.batch_norm2.running_mean", "upsample.resnet.layers.0.batch_norm2.running_var", "upsample.resnet.layers.0.batch_norm2.num_batches_tracked", "upsample.resnet.layers.1.conv1.weight", "upsample.resnet.layers.1.conv2.weight", "upsample.resnet.layers.1.batch_norm1.weight", "upsample.resnet.layers.1.batch_norm1.bias", "upsample.resnet.layers.1.batch_norm1.running_mean", "upsample.resnet.layers.1.batch_norm1.running_var", "upsample.resnet.layers.1.batch_norm1.num_batches_tracked", "upsample.resnet.layers.1.batch_norm2.weight", "upsample.resnet.layers.1.batch_norm2.bias", "upsample.resnet.layers.1.batch_norm2.running_mean", "upsample.resnet.layers.1.batch_norm2.running_var", "upsample.resnet.layers.1.batch_norm2.num_batches_tracked", "upsample.resnet.layers.2.conv1.weight", "upsample.resnet.layers.2.conv2.weight", "upsample.resnet.layers.2.batch_norm1.weight", "upsample.resnet.layers.2.batch_norm1.bias", "upsample.resnet.layers.2.batch_norm1.running_mean", "upsample.resnet.layers.2.batch_norm1.running_var", "upsample.resnet.layers.2.batch_norm1.num_batches_tracked", "upsample.resnet.layers.2.batch_norm2.weight", "upsample.resnet.layers.2.batch_norm2.bias", "upsample.resnet.layers.2.batch_norm2.running_mean", "upsample.resnet.layers.2.batch_norm2.running_var", "upsample.resnet.layers.2.batch_norm2.num_batches_tracked", "upsample.resnet.layers.3.conv1.weight", "upsample.resnet.layers.3.conv2.weight", "upsample.resnet.layers.3.batch_norm1.weight", "upsample.resnet.layers.3.batch_norm1.bias", "upsample.resnet.layers.3.batch_norm1.running_mean", "upsample.resnet.layers.3.batch_norm1.running_var", "upsample.resnet.layers.3.batch_norm1.num_batches_tracked", "upsample.resnet.layers.3.batch_norm2.weight", "upsample.resnet.layers.3.batch_norm2.bias", "upsample.resnet.layers.3.batch_norm2.running_mean", "upsample.resnet.layers.3.batch_norm2.running_var", "upsample.resnet.layers.3.batch_norm2.num_batches_tracked", "upsample.resnet.layers.4.conv1.weight", "upsample.resnet.layers.4.conv2.weight", "upsample.resnet.layers.4.batch_norm1.weight", "upsample.resnet.layers.4.batch_norm1.bias", "upsample.resnet.layers.4.batch_norm1.running_mean", "upsample.resnet.layers.4.batch_norm1.running_var", "upsample.resnet.layers.4.batch_norm1.num_batches_tracked", "upsample.resnet.layers.4.batch_norm2.weight", "upsample.resnet.layers.4.batch_norm2.bias", "upsample.resnet.layers.4.batch_norm2.running_mean", "upsample.resnet.layers.4.batch_norm2.running_var", "upsample.resnet.layers.4.batch_norm2.num_batches_tracked", "upsample.resnet.layers.5.conv1.weight", "upsample.resnet.layers.5.conv2.weight", "upsample.resnet.layers.5.batch_norm1.weight", "upsample.resnet.layers.5.batch_norm1.bias", "upsample.resnet.layers.5.batch_norm1.running_mean", "upsample.resnet.layers.5.batch_norm1.running_var", 
"upsample.resnet.layers.5.batch_norm1.num_batches_tracked", "upsample.resnet.layers.5.batch_norm2.weight", "upsample.resnet.layers.5.batch_norm2.bias", "upsample.resnet.layers.5.batch_norm2.running_mean", "upsample.resnet.layers.5.batch_norm2.running_var", "upsample.resnet.layers.5.batch_norm2.num_batches_tracked", "upsample.resnet.layers.6.conv1.weight", "upsample.resnet.layers.6.conv2.weight", "upsample.resnet.layers.6.batch_norm1.weight", "upsample.resnet.layers.6.batch_norm1.bias", "upsample.resnet.layers.6.batch_norm1.running_mean", "upsample.resnet.layers.6.batch_norm1.running_var", "upsample.resnet.layers.6.batch_norm1.num_batches_tracked", "upsample.resnet.layers.6.batch_norm2.weight", "upsample.resnet.layers.6.batch_norm2.bias", "upsample.resnet.layers.6.batch_norm2.running_mean", "upsample.resnet.layers.6.batch_norm2.running_var", "upsample.resnet.layers.6.batch_norm2.num_batches_tracked", "upsample.resnet.layers.7.conv1.weight", "upsample.resnet.layers.7.conv2.weight", "upsample.resnet.layers.7.batch_norm1.weight", "upsample.resnet.layers.7.batch_norm1.bias", "upsample.resnet.layers.7.batch_norm1.running_mean", "upsample.resnet.layers.7.batch_norm1.running_var", "upsample.resnet.layers.7.batch_norm1.num_batches_tracked", "upsample.resnet.layers.7.batch_norm2.weight", "upsample.resnet.layers.7.batch_norm2.bias", "upsample.resnet.layers.7.batch_norm2.running_mean", "upsample.resnet.layers.7.batch_norm2.running_var", "upsample.resnet.layers.7.batch_norm2.num_batches_tracked", "upsample.resnet.layers.8.conv1.weight", "upsample.resnet.layers.8.conv2.weight", "upsample.resnet.layers.8.batch_norm1.weight", "upsample.resnet.layers.8.batch_norm1.bias", "upsample.resnet.layers.8.batch_norm1.running_mean", "upsample.resnet.layers.8.batch_norm1.running_var", "upsample.resnet.layers.8.batch_norm1.num_batches_tracked", "upsample.resnet.layers.8.batch_norm2.weight", "upsample.resnet.layers.8.batch_norm2.bias", "upsample.resnet.layers.8.batch_norm2.running_mean", "upsample.resnet.layers.8.batch_norm2.running_var", "upsample.resnet.layers.8.batch_norm2.num_batches_tracked", "upsample.resnet.layers.9.conv1.weight", "upsample.resnet.layers.9.conv2.weight", "upsample.resnet.layers.9.batch_norm1.weight", "upsample.resnet.layers.9.batch_norm1.bias", "upsample.resnet.layers.9.batch_norm1.running_mean", "upsample.resnet.layers.9.batch_norm1.running_var", "upsample.resnet.layers.9.batch_norm1.num_batches_tracked", "upsample.resnet.layers.9.batch_norm2.weight", "upsample.resnet.layers.9.batch_norm2.bias", "upsample.resnet.layers.9.batch_norm2.running_mean", "upsample.resnet.layers.9.batch_norm2.running_var", "upsample.resnet.layers.9.batch_norm2.num_batches_tracked", "upsample.resnet.conv_out.weight", "upsample.resnet.conv_out.bias", "upsample.up_layers.1.weight", "upsample.up_layers.3.weight", "upsample.up_layers.5.weight", "I.weight", "I.bias", "rnn1.weight_ih_l0", "rnn1.weight_hh_l0", "rnn1.bias_ih_l0", "rnn1.bias_hh_l0", "rnn2.weight_ih_l0", "rnn2.weight_hh_l0", "rnn2.bias_ih_l0", "rnn2.bias_hh_l0", "fc1.weight", "fc1.bias", "fc2.weight", "fc2.bias", "fc3.weight", "fc3.bias".
TerminateHostApis in
TerminateHostApis out


It looks like you are trying to load a state_dict from WaveRNN into an instance of SpeakerEncoder.

Make sure to pass the right checkpoint to the corresponding model.

While we’re on this topic, I also had issues with the encoder model. I was getting the following error when trying to load the correct model:

RuntimeError: Error(s) in loading state_dict for SpeakerEncoder:
	Unexpected key(s) in state_dict: "similarity_weight", "similarity_bias". 

In the end, it turned out I was using a different PyTorch version:

torch==1.5.1 doesn’t work
torch==1.5.0 does work