PyTorch in Google Cloud ML Engine

How do I set up PyTorch in Google Cloud ML Engine?
I tried making a “setup.py” file and submitting a job to ML Engine,
but it does not work…!

This is the error message…


How can I use PyTorch in Google Cloud ML Engine?

Hi!!
As far as I know, Cloud ML was developed to work specifically with TensorFlow. It supports several runtimes for TF versions up to 1.4, as well as specifying a “custom” TF version.
I have never heard of this service supporting other frameworks; however, if you do manage to make it work, do let me know how. It is a nice service, and it would be great to use it with PyTorch!
(Alternatively, you could always set up a VM with Compute Engine and GPUs to use PyTorch.)

I found a solution for setting up PyTorch on Google Cloud ML.

First

You have to get a PyTorch .whl file and upload it to a Google Cloud Storage bucket.
You will then have a bucket link like:

gs://bucketname/directory/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl

Which .whl file you need depends on your Python version and CUDA version.
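As a quick sanity check, the tags in the wheel filename tell you which interpreter and platform it was built for (they follow the PEP 427 naming convention). A minimal sketch, using the example wheel from above:

```python
# Wheel filenames follow PEP 427: name-version-pythontag-abitag-platformtag.whl
wheel = "torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl"

name, version, py_tag, abi_tag, platform_tag = wheel[:-len(".whl")].split("-", 4)

# cp27 means CPython 2.7 -- ML Engine's 1.4 runtime ran Python 2.7,
# so a cp36 wheel would fail to install there.
print(name, version, py_tag, abi_tag, platform_tag)
```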

Second

Write the command line and a setup.py file, because you have to configure the Google Cloud ML job.
The related link is submit_job_to_ml-engine.
Write the setup.py file to describe your setup; the related link is write_setup.py_file.

These are my command and my setup.py file:

“command code”

JOB_NAME="run_ml_engine_pytorch_test_$(date +%Y%m%d_%H%M%S)"
REGION=us-central1
OUTPUT_PATH=gs://yourbucket
gcloud ml-engine jobs submit training $JOB_NAME \
    --job-dir $OUTPUT_PATH \
    --runtime-version 1.4 \
    --module-name models.pytorch_test \
    --package-path models/ \
    --packages gs://yourbucket/directory/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl \
    --region $REGION \
    -- \
    --verbosity DEBUG

“setup.py” code

from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = ['torchvision']
setup(
    name='trainer',
    version='0.1',
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    description='My pytorch trainer application package.'
)
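For reference, everything after the bare `--` in the gcloud command above (here `--verbosity DEBUG`) is forwarded to the trainer module itself, and ML Engine also re-passes `--job-dir` to it, so `models/pytorch_test.py` should parse those flags. A minimal sketch of the entry point (the module name comes from the command above; the argument handling is my assumption):

```python
# models/pytorch_test.py -- minimal entry-point sketch.
# ML Engine runs this as `python -m models.pytorch_test <user args>`.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # user flag forwarded after the bare `--` in the gcloud command
    parser.add_argument("--verbosity", default="INFO")
    # ML Engine passes --job-dir through to the module as well
    parser.add_argument("--job-dir", dest="job_dir", default=None)
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args()
    print("verbosity:", args.verbosity)
    # ... import torch and run the actual training here ...
```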

Third

If you have experience submitting jobs to ML Engine, you probably already know the file structure of an ML Engine package.

packaging_training_model
Follow the link above to learn how to pack the files.
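The layout the link describes boils down to a small tree. A sketch of the minimal structure implied by the `--module-name` and `--package-path` flags in the command above (built in a temp directory just to illustrate):

```python
import tempfile
from pathlib import Path

# Minimal layout for `--module-name models.pytorch_test`
# with `--package-path models/`:
#
#   setup.py
#   models/
#       __init__.py      <- makes `models` an importable package
#       pytorch_test.py  <- the module ML Engine runs
root = Path(tempfile.mkdtemp())
(root / "models").mkdir()
(root / "models" / "__init__.py").touch()
(root / "models" / "pytorch_test.py").touch()
(root / "setup.py").touch()

# `models.pytorch_test` only resolves if __init__.py exists
assert (root / "models" / "__init__.py").exists()
```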


Can anyone confirm whether PyTorch can be used (easily) on Google Cloud ML? Presumably one doesn’t have to pull the above tricks anymore, since PyTorch is now on PyPI?
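If torch installs from PyPI in that environment, then in principle the setup.py from earlier in the thread could just list it as an ordinary dependency instead of shipping a .whl via `--packages`. A sketch I have not verified on ML Engine itself:

```python
from setuptools import find_packages
from setuptools import setup

# With PyTorch on PyPI, torch can (in principle) be installed like any
# other dependency instead of uploading a .whl to a bucket.
REQUIRED_PACKAGES = ['torch', 'torchvision']

setup(
    name='trainer',
    version='0.1',
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    description='My PyTorch trainer application package.'
)
```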

I’m specifically worried about data I/O and multiprocessing… we would basically want to use a folder dataset, but I’m not sure whether Google will support that.
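One common workaround for the folder-dataset concern is to copy the data from GCS to local scratch disk at job start, then point `torchvision.datasets.ImageFolder` at the local copy. A sketch using the `gsutil` CLI (the bucket path is hypothetical, and I have not tested this on ML Engine):

```python
import subprocess
import tempfile


def gsutil_copy_cmd(gcs_prefix, local_dir):
    # -m enables parallel transfers; `cp -r` copies the whole tree.
    return ["gsutil", "-m", "cp", "-r", gcs_prefix, local_dir]


def download_dataset(gcs_prefix):
    """Copy a GCS prefix into a fresh local directory and return its path."""
    local_dir = tempfile.mkdtemp()
    subprocess.check_call(gsutil_copy_cmd(gcs_prefix, local_dir))
    return local_dir


# Usage sketch (hypothetical bucket; requires gsutil and torchvision):
#   data_dir = download_dataset("gs://yourbucket/dataset")
#   dataset = torchvision.datasets.ImageFolder(data_dir)
#   loader = torch.utils.data.DataLoader(dataset, num_workers=4)
```

Copying once up front sidesteps the question of whether DataLoader worker processes can read GCS directly: after the copy, multiprocessing I/O is plain local-disk reads.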