How to get FP16 working with torch_tensorrt?

When trying to use torch.compile with the tensorrt backend, I get the following error:

    [2024-06-17 17:25:08,351][torch_tensorrt [TensorRT Conversion Context]][ERROR] - 4: [network.cpp::validate::3399] Error Code 4: Internal Error (fp16 precision has been set for a layer or layer output, but fp16 is not configured in the builder)

for this code:

    with torch.no_grad(), torch.cuda.amp.autocast(dtype=torch.float16, enabled=True):
        model = torch.compile(model, fullgraph=True, backend="tensorrt")
        outputs = model(inputs)

How can I configure the backend to allow FP16?

I came up with the following, which casts the model to FP16 and passes the allowed precisions to the backend via options:

    # Cast the model to FP16 and tell the Torch-TensorRT backend which precisions it may use.
    modelh = model.half()
    modelhctrt = torch.compile(
        modelh, backend="torch_tensorrt", dynamic=False,
        options={"truncate_long_and_double": True,
                 "enabled_precisions": {torch.float, torch.half},
                 "debug": False,
                 "optimization_level": 5})