PyTorch in Google Cloud ML Engine

How do I set up PyTorch in Google Cloud ML Engine?
I tried to make a “” file and submit a job to ML Engine,
but it does not work.

This is the error message:

How can I use PyTorch in Google Cloud ML Engine?

As far as I know, Cloud ML Engine was developed to work specifically with TensorFlow. They support several runtimes for TF versions up to 1.4, as well as specifying a “custom” TF version.
I have never heard that this service is available for other frameworks; however, if you do manage to make it work, do let me know how. It is a nice service and it would be great to use it with PyTorch!
(However, you could always set up a VM on their Compute Engine with GPUs to use PyTorch.)

I found a solution for setting up PyTorch on google-cloud-ml.


You have to get a PyTorch .whl file and store it in a Google Cloud Storage bucket; you will then have a gs:// link to that file.


Which .whl file you need depends on your Python version and your CUDA version.
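As an illustration: the cp27-cp27mu part of the wheel filename below is a PEP 425 tag encoding the Python version and ABI the wheel was built for, so you can check which tag your own interpreter needs (a minimal sketch, not an official lookup tool):

```python
import sys

# Wheel filenames carry PEP 425 tags such as cp27-cp27mu or cp35-cp35m:
# "cp" + major + minor identifies the CPython version the wheel targets.
tag = "cp{0}{1}".format(*sys.version_info[:2])
print("your interpreter needs a wheel tagged", tag)
```

Pick the wheel whose cpXX tag matches this output, and whose CUDA suffix (if any) matches the CUDA version on the machine that will run the job.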


You write the command line that submits the job to ml-engine and sets up the google-cloud-ml configuration.
The related link is this: submit_job_to_ml-engine
You also write a file describing your package setup.
The related link is this: write_setup.py_file

These are my command code and my file.

“command code”

JOB_NAME="run_ml_engine_pytorch_test_$(date +%Y%m%d_%H%M%S)"
gcloud ml-engine jobs submit training $JOB_NAME \
    --job-dir $OUTPUT_PATH \
    --runtime-version 1.4 \
    --module-name models.pytorch_test \
    --package-path models/ \
    --packages gs://yourbucket/directory/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl \
    --region $REGION \
    -- \
    --verbosity DEBUG

from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['torchvision']

# name/version are placeholders; install_requires pulls in torchvision
setup(name='trainer', version='0.1',
      install_requires=REQUIRED_PACKAGES,
      packages=find_packages(),
      description='My pytorch trainer application package.')


If you have experience submitting jobs to ml-engine,
you probably already know the file structure of a submission package.

If not, follow the links above to learn how to pack the files.
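As a sketch of that structure (the directory and file names here are assumptions, chosen only to match the --module-name models.pytorch_test and --package-path models/ flags in the command above):

```shell
# Hypothetical minimal layout for an ml-engine submission package.
mkdir -p trainer_project/models
touch trainer_project/setup.py                # the setuptools file shown above
touch trainer_project/models/__init__.py      # makes models/ an importable package
touch trainer_project/models/pytorch_test.py  # entry point: models.pytorch_test
find trainer_project -type f | sort
```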


Can anyone confirm whether using PyTorch on google-cloud-ml can be done (easily)? Presumably one doesn’t have to pull the above tricks anymore since PyTorch is now on PyPI?

I’m specifically worried about data I/O and multiprocessing… we would basically want to use a folder dataset, but I’m not sure Google will support that.