LibTorch on watchOS?

Hi all,

Does anyone have a guide/tutorial on how to get LibTorch (C++) installed and set up for a watchOS project?
I was hoping it would be as easy as installing a Pod like the iOS version… but I get

The platform of the target XXX WatchKit Extension (watchOS 6.1) is not compatible with LibTorch (1.4.0), which does not support watchOS.

I’ve also tried pulling the source and building using:


But I keep running into various build issues… and even then, I’m not totally sure how to get the built version into my Xcode project if I did manage to build it.

Has anyone out there successfully set up a watchOS project with LibTorch?

Thanks in advance!!

Hi @mapes911, we haven’t run any tests on watchOS yet, but you should be able to compile the code for that architecture. Can you double-check your build command? The one below works on my machine.
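(The exact command didn’t survive in this archive. For reference, the PyTorch v1.4.0 tree ships a `scripts/build_ios.sh` helper that drives the mobile cross-build; a sketch of an invocation along those lines — the specific variables here are assumptions, not the thread’s original command:)

```shell
# Sketch only: scripts/build_ios.sh exists in the v1.4.0 tree, but the
# variables below are assumptions, not the original command from this thread.
cd pytorch
IOS_ARCH=arm64 ./scripts/build_ios.sh
# Static libraries and headers land in build_ios/install/{lib,include}
```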

Hi @xta0,

Thanks! That did it. The build worked this time :slight_smile:

I’m running up against another issue now, though… not sure if it’s because I was messing around with Eigen libraries earlier or not… but I’m getting these errors after importing PyTorch into my Xcode project.

.../pytorch/include/caffe2/perfkernels/math.h:3:10: 'cstdint' file not found
.../WatchSimulator6.1.sdk/usr/include/dispatch/dispatch.h:25:10: Could not build module 'Darwin'
.../WatchSimulator6.1.sdk/System/Library/Frameworks/Foundation.framework/Headers/Foundation.h:6:10: Could not build module 'CoreFoundation'

I’ve uninstalled and re-installed Xcode as per some Stack Overflow posts I found with similar issues, with no luck.

Any ideas?

Thanks again for responding!

Scratch that, I found the problem. I had my Header Search Paths set to recursive and that caused my issues.

For anyone that finds this… here’s what I did to get this installed on watchOS.

  • cloned the pytorch repo
  • cd pytorch
  • git checkout v1.4.0
  • git submodule sync
  • git submodule update --init --recursive
  • add the files from build_ios/install/include/ and build_ios/install/lib/ to my Xcode project
  • in the Xcode target build settings, add the include directory to Header Search Paths, and make sure it’s non-recursive
  • in the Xcode target build settings, under Other Linker Flags, add -force_load with the path to each static library in the lib directory

That should do it!
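The steps above, collected as a script (the build step itself is my best reconstruction — `scripts/build_ios.sh` is what populates the build_ios/install/ directories referenced above):

```shell
# Pin PyTorch v1.4.0 with all submodules.
git clone https://github.com/pytorch/pytorch.git
cd pytorch
git checkout v1.4.0
git submodule sync
git submodule update --init --recursive

# Assumed build step: scripts/build_ios.sh populates
# build_ios/install/{include,lib}, the directories added to Xcode afterwards.
./scripts/build_ios.sh
```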

That’s awesome news. Thanks for sharing this.

@xta0 I’m afraid I spoke too soon. I thought my build was ok because it was just failing on one of my own bugs, but once I fixed that I’ve run into a couple of issues that I’m not sure how to solve.

I’m getting this error.

libtorch.a(SparseCPUType.cpp.o) does not contain bitcode. You must rebuild it with bitcode enabled (Xcode setting ENABLE_BITCODE), obtain an updated library from the vendor, or disable bitcode for this target. for architecture armv7k

There is no setting for watchOS to disable bitcode in the build like there is for iOS. Is there a way to build the PyTorch libs with bitcode enabled?

Another issue I was seeing was…

building for watchOS-arm64_32 but attempting to link with file built for watchOS-armv7k

I’m attempting to load onto a Series 4 watch… any ideas?

Thanks again for your help!!

@mapes911, below is the command that works on my machine. Feel free to try it out.
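(This command also didn’t make it into the archive. Judging from the lipo output later in the thread, it targeted the arm64_32 watch ABI; a hypothetical sketch — IOS_ARCH=arm64_32 is an assumption here, not a verified option of the v1.4.0 build script:)

```shell
# Hypothetical: target the 64_32 watch ABI via the iOS build script.
# IOS_ARCH=arm64_32 is an assumption, not a documented v1.4.0 option.
cd pytorch
IOS_ARCH=arm64_32 ./scripts/build_ios.sh
```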


As for the bitcode, you can use the commands below to verify:

#check the architecture
> lipo -i libcpuinfo.a
#Non-fat file: libcpuinfo.a is architecture: arm64_32

#verify bitcode
> otool -l libtorch_cpu.a | grep bitcode
# sectname __bitcode

Again, I haven’t run any tests on watchOS platforms, so I’m not sure whether they’ll work or not. But I’m curious to see your results. Please let me know if you have any questions. Thanks.


Feels like we’re getting closer!
I re-built using the command you sent.

The lipo command shows all the libraries built properly for arm64_32.
The otool command you gave is showing “-i functionality obsolete”, but the bitcode errors went away, so I’m assuming that worked OK.

After doing that I still got a bunch of errors, but I was able to get rid of most by doing:

  • Build Settings -> Valid Architectures: removed armv7k, leaving just arm64_32. Is this OK to do? Or will I run into issues later because I removed armv7k?

But… now my (hopefully) last hurdle is this error:

libpytorch_qnnpack.a(8x8-aarch64-neon.S.o), building for watchOS, but linking in object file ( …pytorch/lib//libpytorch_qnnpack.a(8x8-aarch64-neon.S.o)) built for macOS, for architecture arm64_32

So… I’m not sure how to resolve this one. Any ideas?

Thanks Tao!!

@mapes911 that’s because the QNNPACK kernel was written in assembly, which conflicts with Apple’s LLVM IR (bitcode). If your model is not quantized, you don’t have to include QNNPACK.

Go to the root CMakeLists.txt, search for set(USE_PYTORCH_QNNPACK ON), and turn it off.
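For example, either of these should work — the sed one-liner just flips the default in the file, while the cache override avoids editing it (both are sketches of the same flag change):

```shell
# Option 1: flip the default in the top-level CMakeLists.txt
# (macOS/BSD sed shown; on Linux, drop the '' after -i)
sed -i '' 's/set(USE_PYTORCH_QNNPACK ON/set(USE_PYTORCH_QNNPACK OFF/' CMakeLists.txt

# Option 2: override on the CMake command line instead of editing the file
cmake -DUSE_PYTORCH_QNNPACK=OFF ...
```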



Awesome. It’s compiling, linking, and loading on my watch now!! Thank you.

however, I am now getting this error when I attempt to load up a trained model.

libc++abi.dylib: terminating with uncaught exception of type torch::jit::script::ErrorReport:
Unknown builtin op: aten::mul.
Could not find any similar ops to aten::mul. This op may not exist or may not be currently supported in TorchScript

I’ve tried a couple of models, including one that I was able to load on my phone with LibTorch 1.4.0 using the Podfile install.

How would I look into what this aten::mul op does? Any idea if this has to do with the way I’ve built the libraries? I’ve found a few other forum posts with similar errors but no immediate solution.

Just in case, here’s the code that loads the model.

#include <torch/script.h>  // brings in torch::jit::load

torch::jit::script::Module module;
module = torch::jit::load(file_path);  // throws (e.g. torch::jit::script::ErrorReport) on failure

@mapes911 That’s good news to hear. aten::mul is a very basic op that should be registered by default. Did you apply -all_load to the static libraries? (-force_load on libtorch_cpu.a should also work.) Also, does that model work on your phone?

Yes! Thank you so much. It’s compiled/linked and I can now read in my model.

I appreciate all your help @xta0

@mapes911 No problem, were you able to run your model on your watch?

Yes and no :slight_smile:

I was able to load the model and run one of my data points through… but the second time through, my event handler crashes. I’m assuming this is me not really knowing C++ very well anymore… I’m probably doing something bad with memory or not setting up my tensor properly.

So… I’m not fully sure that the data is going through the model yet. Will try more when I get home tonight.


@xta0 Looks like I’m able to run data through my model just fine on my watch now. Thank you very much for your help.

Any idea when a watchOS version will be supported in the LibTorch Podfile? I’d love to be able to keep up with new versions without going through this pain.

Hey @mapes911 That’s good news to hear! Thanks for letting me know. We haven’t discussed supporting watchOS via CocoaPods yet. However, we did simplify the build script -

Hey, I’m facing the same issue. Where is the CMakeLists.txt file located?

Thanks for the reply. That’s the path of the PyTorch repo. How can changes there resolve the error in Xcode? Can you please elaborate on the process?

ld: warning: building for iOS, but linking in object file (/path-to-project/Pods/LibTorch/install/lib/libpytorch_qnnpack.a(8x8-aarch64-neon.S.o)) built for macOS
Undefined symbols for architecture arm64:
"_OBJC_CLASS_$_TorchModule", referenced from:
objc-class-ref in ViewController.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Hey there,

The CMakeLists.txt change I made was to turn off QNNPACK, which was causing issues for me when I loaded the model in my Xcode project.
I’m not sure what your issue is, but if you read through the thread, I had to go through several iterations (and issues) to arrive at a LibTorch build that let me get my trained model running in a watchOS project.

What is the process you are going through that is getting you to this error?
(I’m no expert in this, but @xta0’s comments really helped.)