Unpackbits from tensor

Hello people,

I am trying to compute the bit error rate as a metric. Currently I am using NumPy, more specifically np.unpackbits(x).

I am wondering if there is a similar operation in PyTorch so I can take advantage of the GPU.

    import numpy as np

    def __call__(self, output, target):
        # Move the tensors to the CPU and view them as bytes.
        t_target = target.detach().cpu().numpy().astype(np.uint8)
        t_output = output.detach().cpu().numpy().astype(np.uint8)

        # np.unpackbits already returns a flat 1-D array of bits,
        # so no reshape is needed here.
        target_bits = np.unpackbits(t_target)
        output_bits = np.unpackbits(t_output)

        # np.int was removed in NumPy 1.24; plain int works.
        absolute = np.abs(
            output_bits.astype(int) - target_bits.astype(int))

        # Total number of differing bits (divide by
        # target_bits.size to get the error *rate*).
        errors = np.sum(absolute)

        return errors
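
For reference, the closest I have come so far in pure PyTorch is the sketch below. It unpacks each byte MSB-first with a bitmask so the layout matches np.unpackbits; unpackbits_torch is just a name I made up, and I have not benchmarked it:

    import torch

    def unpackbits_torch(x):
        # Unpack a uint8 tensor into 0/1 bits, MSB first,
        # mirroring the layout of np.unpackbits.
        masks = torch.tensor([128, 64, 32, 16, 8, 4, 2, 1],
                             dtype=torch.uint8, device=x.device)
        bits = x.reshape(-1, 1).bitwise_and(masks).ne(0)
        return bits.to(torch.uint8).reshape(-1)

    # Hypothetical usage, staying on the GPU the whole time:
    # output_bits = unpackbits_torch(output.detach().to(torch.uint8))
    # target_bits = unpackbits_torch(target.detach().to(torch.uint8))
    # errors = (output_bits != target_bits).sum()

If there is a built-in op that does this, I would rather use that.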