Backend-agnostic serialization / inference workflow

Is there currently a workflow for using torch.export to serialize an nn.Module into a single artifact that can be loaded and run for inference on any of the available ExecuTorch backends (as was possible with TorchScript)? Or do I need to produce one artifact per target backend?