Error with PyTorch Mobile on the Vulkan backend during prediction

Hello,

We are trying to run a model on Android with the Vulkan backend.

However, we encounter the following error:

PlatformException (PlatformException(DemixingError, Could not run 'aten::reflection_pad1d.out' with arguments from the 'Vulkan' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::reflection_pad1d.out' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, Functionalize].

CPU: registered at /home/marc/pytorch/build_android_arm64-v8a/aten/src/ATen/RegisterCPU.cpp:20948 [kernel]
QuantizedCPU: registered at /home/marc/pytorch/build_android_arm64-v8a/aten/src/ATen/RegisterQuantizedCPU.cpp:1223 [kernel]
BackendSelect: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
ADInplaceOrView: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
AutogradXLA: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:51 [backend fallback]
AutogradLazy: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:55 [backend fallback]
AutogradXPU: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradMLC: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:59 [backend fallback]
AutogradHPU: fallthrough registered at /home/marc/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:68 [backend fallback]
Functionalize: registered at /home/marc/pytorch/build_android_arm64-v8a/aten/src/ATen/RegisterFunctionalization_1.cpp:5316 [kernel]

  
  Debug info for handle(s): debug_handles:{-1}, was not found.
  
Exception raised from reportError at /home/marc/pytorch/aten/src/ATen/core/dispatch/OperatorEntry.cpp:434 (most recent call first):
(no backtrace available), null, null))

We followed the "PyTorch Vulkan Backend User Workflow" tutorial (PyTorch Tutorials 1.10.1+cu102 documentation).
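For reference, the conversion step from that tutorial looks roughly like the sketch below. The model and file names are placeholders; the runnable sketch uses the default (CPU) backend, whereas the Vulkan workflow passes backend='vulkan' to optimize_for_mobile on a Vulkan-enabled build.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Placeholder model -- the real model uses reflection padding, which is
# the op that later fails to dispatch on the Vulkan backend.
model = torch.nn.Sequential(torch.nn.ReflectionPad1d(2), torch.nn.Conv1d(1, 1, 3))
model.eval()

scripted = torch.jit.script(model)

# The Vulkan tutorial calls optimize_for_mobile(scripted, backend='vulkan');
# that requires a PyTorch build with USE_VULKAN, so this sketch uses the
# default CPU backend instead.
optimized = optimize_for_mobile(scripted)

# Save in the lite-interpreter format consumed by pytorch_android
optimized._save_for_lite_interpreter("model.ptl")
```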

Does this mean our model can’t be used with Vulkan at this time?

Here is our app/build.gradle file:

def localProperties = new Properties()
def localPropertiesFile = rootProject.file('local.properties')
if (localPropertiesFile.exists()) {
    localPropertiesFile.withReader('UTF-8') { reader ->
        localProperties.load(reader)
    }
}

def flutterRoot = localProperties.getProperty('flutter.sdk')
if (flutterRoot == null) {
    throw new GradleException("Flutter SDK not found. Define location with flutter.sdk in the local.properties file.")
}

def flutterVersionCode = localProperties.getProperty('flutter.versionCode')
if (flutterVersionCode == null) {
    flutterVersionCode = '1'
}

def flutterVersionName = localProperties.getProperty('flutter.versionName')
if (flutterVersionName == null) {
    flutterVersionName = '1.0'
}

apply plugin: 'com.android.application'
apply from: "$flutterRoot/packages/flutter_tools/gradle/flutter.gradle"

android {
    compileSdkVersion flutter.compileSdkVersion

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }

    defaultConfig {
        applicationId "com.demixr.demixr_app"
        minSdkVersion 24
        targetSdkVersion flutter.targetSdkVersion
        versionCode flutterVersionCode.toInteger()
        versionName flutterVersionName
        externalNativeBuild {
            cmake {
                cppFlags ''
            }
        }
    }

    buildTypes {
        release {
            minifyEnabled false
            shrinkResources false
            signingConfig signingConfigs.debug
            applicationVariants.all { variant ->
                variant.outputs.all { output ->
                    project.ext { appName = 'demixr' }
                    outputFileName = "${appName}_${output.name}_${variant.versionName}.apk"
                }
            }
        }
    }
    externalNativeBuild {
        cmake {
            path file('src/main/cpp/CMakeLists.txt')
            version '3.10.2'
        }
    }
}

dependencies {
    implementation 'com.facebook.fbjni:fbjni-java-only:0.2.2'
    implementation 'com.facebook.soloader:nativeloader:0.10.3'

    implementation files('libs/pytorch_android-release.aar')
}

flutter {
    source '../..'
}

We are also not sure why we need to add implementation 'com.facebook.fbjni:fbjni-java-only:0.2.2' and implementation 'com.facebook.soloader:nativeloader:0.10.3', as we built PyTorch Android from source with the lite interpreter module.

Thanks in advance!

Hi Marc, the Vulkan GPU backend is still in beta. Unfortunately, we don’t support the ReflectionPad1d operator (aten::reflection_pad1d.out) yet. We’re still working on our operator coverage.

You can create a feature request for us to support the missing operator, or you may want to implement it yourself.
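Until the operator is covered, one possible workaround is to emulate 1-d reflection padding with slicing, flip, and cat. This is only a sketch: whether flip and cat themselves are covered by the Vulkan backend would still need checking on an actual Vulkan build.

```python
import torch

def reflection_pad1d(x: torch.Tensor, pad_left: int, pad_right: int) -> torch.Tensor:
    """Emulate aten::reflection_pad1d using slice/flip/cat.

    x has shape (N, C, W); as with the real op, each pad must be < W.
    """
    left = x[..., 1:pad_left + 1].flip(-1)       # mirror, without repeating the edge
    right = x[..., -pad_right - 1:-1].flip(-1)
    return torch.cat([left, x, right], dim=-1)

x = torch.arange(4.0).view(1, 1, 4)  # [[0, 1, 2, 3]]
out = reflection_pad1d(x, 2, 2)
# out is [[2, 1, 0, 1, 2, 3, 2, 1]], matching
# torch.nn.functional.pad(x, (2, 2), mode='reflect')
```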

cc: @ssjia

Hi @marc_d @beback4u, I’m facing a similar issue. In my case the error is occurring when trying to load the model:

Following ops cannot be found. Check fburl.com/missing_ops for the fix.{vulkan_prepack::conv2d_transpose_clamp_run, vulkan_prepack::conv2d_transpose_clamp_prepack, } ()
Exception raised from print_unsupported_ops_and_throw at /home/sergio/pytorch/torch/csrc/jit/mobile/parse_operators.cpp:65 (most recent call first):
(no backtrace available)

It may be worth mentioning that I needed to backport the model to a previous bytecode version using the _backport_for_mobile function.
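One thing worth checking in a backport scenario is the bytecode version of the saved model, since a mismatch between the model's version and the runtime can surface as load-time errors. A small sketch using the (private, hence underscore-prefixed) helper in torch.jit.mobile, on a toy model standing in for the real one:

```python
import torch
from torch.jit.mobile import _get_model_bytecode_version

# Minimal lite-interpreter model standing in for the real one.
model = torch.jit.script(torch.nn.Linear(4, 2))
model._save_for_lite_interpreter("linear.ptl")

version = _get_model_bytecode_version("linear.ptl")
print(version)  # bytecode version of the saved model

# To target an older runtime, _backport_for_mobile("linear.ptl",
# "linear_old.ptl", to_version) rewrites the file to that bytecode version.
```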

My environment for building PyTorch from source with the Vulkan backend was:

  • Ubuntu 20.04 LTS
  • Python 3.9.12
  • PyTorch 1.10.0a0+git71f889c
  • Vulkan 1.2.189.0
  • Android SDK 29.0.2
  • Android NDK 21.1.6352462

Any idea what might be causing the error?

Hi,
I have the exact same issue as @marc_d: aten::reflection_pad1d.out is also missing when using my model on the Vulkan backend, even with the latest PyTorch version, so I’m interested in any news about it.

Hi @beback4u @ssjia, just curious: I have the same issue (missing aten::reflection_pad1d.out) with the Vulkan backend on Android, but when I export the list of used ops with torch.jit.export_opnames(model), this op is not in the list. Am I missing something?
Thanks
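One thing to check: run export_opnames on the exact module being deployed, since optimize_for_mobile (especially with backend='vulkan') can rewrite the graph and change which operators appear; the dispatcher error also names an out-variant overload (aten::reflection_pad1d.out), which need not match the exported name string-for-string. A quick sketch on a toy model with reflection padding:

```python
import torch

# Toy model containing reflection padding, standing in for the real one.
model = torch.nn.Sequential(torch.nn.ReflectionPad1d(2), torch.nn.Conv1d(1, 1, 3))
scripted = torch.jit.script(model)

# Op names as exported from the scripted module, before mobile optimization.
ops = torch.jit.export_opnames(scripted)
print(ops)

# Worth comparing against the optimized module actually shipped, e.g.
# optimize_for_mobile(scripted, backend='vulkan') on a Vulkan-enabled build,
# since the rewrite passes can change the operator list.
```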