build_vocab_from_iterator does not work in notebook

Hi. I was trying to run this notebook, but the following line times out:

vocab_transform[ln] = build_vocab_from_iterator(yield_tokens(train_iter, ln),
                                                min_freq=1,
                                                specials=special_symbols,
                                                special_first=True)

Specifically, it raises a TimeoutError: [Errno 110] Connection timed out, and the last line of the traceback is:
Exception: Could not get the file at http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz. [RequestException] None.

What can be done to circumvent this issue? Thanks in advance for any help you can provide.

I ran into the same problem today.
I tried to access the file link directly, but that fails too, saying "The requested URL could not be retrieved".
I guess the dataset server is down or their network is broken.

Hope it can recover soon… Or are there any alternatives?

Already tried archive.org, but sadly the files are unavailable there too

I am having exactly the same problem. I am new to NLP, so I am not sure what to do. Perhaps somebody knows another source of English/German (or other language) sentence pairs that we could use instead. I will check back later.

The maintainer of this GitHub file (PyTorch-NLP/multi30k.py at master · PetrochukM/PyTorch-NLP · GitHub) claims the following:

Status:
    Host ``www.quest.dcs.shef.ac.uk`` forgot to update their SSL
    certificate; therefore, this dataset does not download securely.
References:
    * http://www.statmt.org/wmt16/multimodal-task.html
    * http://shannon.cs.illinois.edu/DenotationGraph/

He seems to have written a workaround, but I have not managed to get it to work.

The page is not delivering any content, though.

Same problem… I tried to email the site owner, but got no response…

I have those files but don’t know where to put them to make them available for everyone.

Is it possible to put them on Dropbox or Google Drive and share them via a public link?

First author of the Multi30K dataset here :wave:.

I didn’t know these were being used in a PyTorch tutorial, so we are working on hosting these files elsewhere. Alternatively, if someone understands how the files are used by torchtext.datasets.Multi30k, would one solution be to re-route the data loading to the Multi30K GitHub repository?

1 Like

I'm a beginner, but I found the source code of torchtext.datasets.multi30k here. One could change the URLs and MD5 checksums to make it work.
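A minimal sketch of that re-pointing idea, using plain dicts as stand-ins for the module-level URL and MD5 mappings in torchtext.datasets.multi30k (the mirror URL and checksum below are placeholders I made up, not real values — they would have to be replaced with a working mirror and its actual checksum):

```python
# Sketch: re-point a dataset's recorded download URL before loading it.
# In torchtext this would target torchtext.datasets.multi30k.URL and .MD5;
# plain dicts stand in here so the idea can be shown without the library.

def repoint(url_map, md5_map, split, mirror_url, checksum):
    """Overwrite the recorded URL and checksum for one dataset split."""
    url_map[split] = mirror_url
    md5_map[split] = checksum

# Stand-ins for torchtext.datasets.multi30k.URL / .MD5:
URL = {"train": "http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz"}
MD5 = {"train": "<original checksum>"}

repoint(URL, MD5, "train",
        "https://example.com/mirror/training.tar.gz",  # hypothetical mirror
        "<checksum of the mirrored file>")             # placeholder

print(URL["train"])
```

With a real torchtext install (assuming a version where the multi30k module exposes URL and MD5 dicts keyed by split name), the same two assignments would be made directly on those module attributes before instantiating Multi30k.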

That is precisely what I was thinking, but I do not own that repo so I couldn't change it there.

Please note that I have been working on the following code:

http://nlp.seas.harvard.edu/annotated-transformer/

This code uses the same Multi30K dataset. I was able to get the code to work by using another data file. The basic idea is that the training, validation, and test sets are all lists of tuples, where each tuple is a sentence pair, one sentence per language. This insight is nice since it makes it easy to create any language pairing you would like. Here is my implementation in Colab along with lots of notes:
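To make that concrete, here is a toy sketch of the list-of-tuples layout and a frequency-based vocabulary built from one side of the pairs. This is plain Python with collections.Counter standing in for torchtext's build_vocab_from_iterator; the sentences, the build_vocab helper, and its parameters are all made up for illustration:

```python
from collections import Counter

# Each split is a list of (source, target) sentence pairs, e.g. German/English.
train_pairs = [
    ("ein mann faehrt fahrrad", "a man rides a bicycle"),
    ("eine frau liest ein buch", "a woman reads a book"),
]

def build_vocab(pairs, side, min_freq=1,
                specials=("<unk>", "<pad>", "<bos>", "<eos>")):
    """Count whitespace tokens on one side of the pairs and assign indices,
    placing the special symbols first (mirroring special_first=True)."""
    counts = Counter(tok for pair in pairs for tok in pair[side].split())
    vocab = {sym: i for i, sym in enumerate(specials)}
    for tok, freq in sorted(counts.items()):
        if freq >= min_freq and tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

de_vocab = build_vocab(train_pairs, side=0)
en_vocab = build_vocab(train_pairs, side=1)
print(len(en_vocab))  # 4 specials + 7 unique English tokens -> 11
```

Because the splits are just lists of tuples, swapping in any other parallel corpus is a matter of building the same structure from your own files.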

Hope this helps. Any comments are welcome.

The slightly different way the dataset is downloaded here works right now.

1 Like

Thank you for this link, Yaniel. This is a very nice, compact, and up-to-date implementation of a transformer using PyTorch!

-Alex