PyTorch and Bazel


I’m creating PyTorch C++ extensions and building them with Bazel.

The documentation (source) explains how to build your extensions with either setuptools or JIT compilation. However, Bazel likes to take the building into its own hands.

I’d like to get some advice on how to proceed. I currently have two working solutions:

  1. Hacky solution
    Per extension, create a Bazel genrule that just invokes a Python build and sets the resulting .so file as the output artifact. This can then be loaded in the code. It leverages all the nice abstractions and build arguments that are set through the torch utilities (torch.utils.cpp_extension.*).

  2. Proper solution
    Create the library through Bazel’s cc_library. This is nice, but everything (arguments, flags, includes, directories, etc.) needs to be set manually.
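For concreteness, here is a minimal sketch of what the genrule in option 1 can look like. All target and file names are made up, and the setup.py is assumed to use torch.utils.cpp_extension (e.g. BuildExtension/CppExtension):

```python
# Hypothetical sketch of the "hacky" genrule (option 1). Target names, file
# names, and the copy step are illustrative; a small setup.py that uses
# torch.utils.cpp_extension does the actual compilation.
genrule(
    name = "my_op_build",
    srcs = glob(["my_op/*.cpp"]) + ["setup.py"],
    outs = ["my_op.so"],
    cmd = "python $(location setup.py) build_ext --inplace && cp my_op*.so $@",
)
```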

Is anyone already using PyTorch extensions with Bazel, or does anyone have any general advice here?


Just to help out some other people, here is the gist of it. The solution presupposes that you have already set up for Bazel (1) the Python headers, (2) the pip requirements, and (3) CUDA.
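For context on those prerequisites: the repository names used below (@pip_deps, @local_config_cuda) suggest a WORKSPACE roughly along these lines. This is only my guess at the shape; the rule names come from the (older) rules_python pip API, and @local_config_cuda is typically generated by TensorFlow’s cuda_configure repository rule:

```python
# Hypothetical WORKSPACE sketch -- the pip rules are the old rules_python API,
# and @local_config_cuda is usually produced by TensorFlow's cuda_configure.
load("@io_bazel_rules_python//python:pip.bzl", "pip_import", "pip_repositories")

pip_repositories()

pip_import(
    name = "pip_deps",                     # matches load("@pip_deps//:requirements.bzl", ...)
    requirements = "//:requirements.txt",  # must include torch
)

load("@pip_deps//:requirements.bzl", "pip_install")

pip_install()
```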

Create a .bzl file containing something like:

load("@local_config_cuda//cuda:build_defs.bzl", "if_cuda")
load("@local_config_cuda//cuda:build_defs.bzl", "cuda_default_copts")

load("@pip_deps//:requirements.bzl", "requirement")

def pytorch_cpp_extension(name, srcs=[], gpu_srcs=[], deps=[], copts=[], defines=[],
                          binary=True, linkopts=[]):
    """Create a pytorch cpp extension as a cpp and importable python library.

    All options defined below should stay close to the official torch cpp
    extension options as defined in `torch.utils.cpp_extension`.
    """
    name_so = name + ".so"
    torch_deps = [
        requirement("torch", target = "cpp"),
    ]
    cuda_deps = [
        # (CUDA dependencies; truncated in the original post)
    ]
    copts = copts + [
        "-DTORCH_EXTENSION_NAME=" + name,
    ]

    if gpu_srcs:
        native.cc_library(
            name = name_so + "_gpu",
            srcs = gpu_srcs,
            deps = deps + torch_deps + if_cuda(cuda_deps),
            copts = copts + cuda_default_copts(),
            defines = defines,
            linkopts = linkopts,
        )
        cuda_deps.extend([":" + name_so + "_gpu"])

    if binary:
        native.cc_binary(
            name = name_so,
            srcs = srcs,
            deps = deps + torch_deps + if_cuda(cuda_deps),
            linkshared = 1,
            copts = copts,
            defines = defines,
            linkopts = linkopts,
        )
    else:
        native.cc_library(
            name = name_so,
            srcs = srcs,
            deps = deps + torch_deps + if_cuda(cuda_deps),
            copts = copts,
            defines = defines,
            linkopts = linkopts,
        )

    native.py_library(
        name = name,
        data = [":" + name_so],
    )
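For reference, using the macro from a BUILD file would then look roughly like this (the load path and target names are assumptions, not from the original setup):

```python
# Hypothetical BUILD usage; the .bzl path and target/file names are made up.
load("//tools:pytorch_cpp_extension.bzl", "pytorch_cpp_extension")

pytorch_cpp_extension(
    name = "my_op",
    srcs = ["my_op.cpp"],
    gpu_srcs = ["my_op_kernel.cu"],
)
```

Because the macro passes -DTORCH_EXTENSION_NAME=my_op, the resulting my_op.so matches the module name the extension registers, and the py_library makes the shared object available as data to Python targets.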

And be sure you can actually require torch as a cpp target library, like so:

genrule(
    name = "include",
    srcs = [":extracted"],
    outs = ["include_dir"],  # output name is a guess; this line was lost in the paste
    cmd = "mkdir -p $@ && cp -a $</torch/lib/include/. $@",
)

# NOTE: Make sure this yields the same includes as `include_paths()`:
# See `torch.utils.cpp_extension.include_paths`.

cc_library(
    name = "cpp",
    hdrs = [":include"],
    visibility = ["//visibility:public"],
    includes = [
        # (include paths; truncated in the original post)
    ],
    deps = [
        # (dependencies; truncated in the original post)
    ],
)
I don’t understand where `:extracted` comes from.

Does anyone else have more context on this?

@TimZaman I’d be extremely interested in understanding your solution to this and helping write a post about this to help other people who might face this issue.