How to run inference on a new sample

Hello, I'm working on a clone of the following project (document classification) in a virtual machine:

I trained the model and created both the model files and the checkpoint files using the following Colab notebook (I only use DBpedia and Yahoo Answers):

I then took the model/checkpoint files generated from this Colab notebook for use in the project and loaded the model from the checkpoint (for example, with DBpedia):

dbpedia_ckp_path = "./checkpoint/dbpedia_current_checkpoint.pt"
dbpedia_PATH = "./best_model/dbpedia_best_model.pt"

dbpedia_model = VDCNN(n_classes=14, num_embedding=len("""abcdefghijklmnopqrstuvwxyz0123456789,;.!?:'"/\|_@#$%^&*~`±=<>(){}""") + 1, embedding_dim=16, depth=9, n_fc_neurons=2048, shortcut=False)

dbpedia_model.load_state_dict(torch.load(dbpedia_PATH, map_location=torch.device('cpu'))['state_dict'])

dbpedia_model.eval()

optimizer = torch.optim.Adam(dbpedia_model.parameters(), lr=0.001)
loaded_dbpedia_model, optimizer, start_epoch, valid_loss_min = load_ckp(dbpedia_ckp_path, dbpedia_model, optimizer)

print("model = ", loaded_dbpedia_model)
print("optimizer = ", optimizer)
print("start_epoch = ", start_epoch)
print("valid_loss_min = ", valid_loss_min)
print("valid_loss_min = {:.6f}".format(valid_loss_min))

loaded_dbpedia_model = loaded_dbpedia_model.to("cpu")
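For context, the load_ckp helper I call above looks roughly like this (a minimal sketch; the exact keys 'state_dict', 'optimizer', 'epoch', and 'valid_loss_min' are assumptions based on how the checkpoint was saved in the notebook):

```python
import torch

def load_ckp(checkpoint_path, model, optimizer):
    # Load a training checkpoint saved as a dict; the key names below are
    # assumed to match what the training notebook wrote out.
    checkpoint = torch.load(checkpoint_path, map_location=torch.device("cpu"))
    model.load_state_dict(checkpoint["state_dict"])
    optimizer.load_state_dict(checkpoint["optimizer"])
    return model, optimizer, checkpoint["epoch"], checkpoint["valid_loss_min"]
```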

Now I want to run inference on a new text sample and make a prediction, for example:

ex_text_str = "Brekke Church (Norwegian: Brekke kyrkje) is a parish church in Gulen Municipality in Sogn og Fjordane county, Norway. It is located in the village of Brekke. The church is part of the Brekke parish in the Nordhordland deanery in the Diocese of Bjørgvin. The white, wooden church, which has 390 seats, was consecrated on 19 November 1862 by the local Dean Thomas Erichsen. The architect Christian Henrik Grosch made the designs for the church, which is the third church on the site."

I've tried following this article to classify it (using its predict method):
link 3 in the picture

and using this chunk of code:

from torchtext.datasets import DBpedia
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer("basic_english")
dbpedia_train_iter = DBpedia(split='train')
dbpedia_tokens = yield_tokens(dbpedia_train_iter)
vocab = build_vocab_from_iterator(dbpedia_tokens, specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])
text_pipeline = lambda x: vocab(tokenizer(x))

But on one VM I get the following error: RuntimeError: Internal error: headers don't contain content-disposition.

On another VM, where that error does not occur, I get a different one:
index out of range in self (raised inside torch.embedding(...))
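For what it's worth, this second error is reproducible in isolation: the VDCNN embedding table only has len(alphabet) + 1 rows (matching the num_embedding argument above), while a word-level vocabulary from build_vocab_from_iterator produces ids far beyond that. A minimal sketch of the mismatch:

```python
import torch

# The embedding table is sized for the character alphabet plus one slot,
# exactly as in the VDCNN(...) call above.
alphabet = r"""abcdefghijklmnopqrstuvwxyz0123456789,;.!?:'"/\|_@#$%^&*~`±=<>(){}"""
emb = torch.nn.Embedding(num_embeddings=len(alphabet) + 1, embedding_dim=16)

emb(torch.tensor([0, len(alphabet)]))       # valid: indices < num_embeddings
try:
    # A word-level vocabulary id (tens of thousands for DBpedia) overflows it.
    emb(torch.tensor([len(alphabet) + 1]))
except IndexError as err:
    print(err)                              # "index out of range in self"
```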

So I tried following another article to classify the new sample for DBpedia:
link 4 in the picture
but it seems to be outdated, so I didn't make any progress there either.

In short, I'd like to write code that runs inference on a new text sample and prints its classification for the user, but none of my attempts so far have worked. If successful, the result for the sample above should be:
This text belongs to the NaturalPlace class
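What I have in mind is something like the following character-level pipeline (a minimal sketch, not verified against the notebook: the +1 index offset, dropping unknown characters, and max_length=1024 are all assumptions that would have to match the training pipeline exactly):

```python
import torch

# Map each character to its position in the training alphabet, offset by 1
# so that index 0 stays free for padding (assumed convention).
alphabet = r"""abcdefghijklmnopqrstuvwxyz0123456789,;.!?:'"/\|_@#$%^&*~`±=<>(){}"""
char_to_idx = {c: i + 1 for i, c in enumerate(alphabet)}

def text_pipeline(text, max_length=1024):
    # Lower-case, encode known characters, drop unknown ones, pad/truncate.
    ids = [char_to_idx[c] for c in text.lower() if c in char_to_idx]
    ids = ids[:max_length] + [0] * max(0, max_length - len(ids))
    return torch.tensor(ids).unsqueeze(0)   # shape: (1, max_length)

# Intended usage with the model loaded above (class-index-to-name mapping
# would still need to come from the dataset's label list):
# with torch.no_grad():
#     logits = loaded_dbpedia_model(text_pipeline(ex_text_str))
#     predicted_class = logits.argmax(1).item()
```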

Any help would be appreciated.

Regards, Harel.