Quantized GLU not implemented?

Hi there,
I’m currently working on quantizing the Facebook Research denoiser speech enhancement model to int8, here: denoiser/quantization/linear-q.ipynb at main · iliasslasri/denoiser · GitHub
and I could not find an implementation of the nn.GLU() activation among PyTorch's quantized modules. Does anyone know how this is usually handled, whether an implementation exists somewhere, or whether I should just write my own?

Thank you for the help guys.

I wrote a quick implementation. Please feel free to comment!

import torch
import torch.nn as nn

class QuantizedGLU(nn.Module):
    """GLU for statically quantized models.

    Quantized tensors do not support elementwise `*` directly, and a plain
    `.to(torch.int8)` cast would drop the scale/zero-point, so we dequantize,
    apply GLU in float, and requantize on the way out.
    """
    def __init__(self, dim=1):
        super().__init__()
        self.dim = dim  # dimension to split; adapt to your model
        self.dequant = torch.ao.quantization.DeQuantStub()
        self.quant = torch.ao.quantization.QuantStub()

    def forward(self, x) -> torch.Tensor:
        x = self.dequant(x)
        a, b = x.chunk(2, dim=self.dim)
        return self.quant(a * torch.sigmoid(b))
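For reference, a quick float-side sanity check that the chunk-and-sigmoid formula matches nn.GLU (the shapes here are just an illustrative (batch, channels, time) example, not taken from the denoiser model):

```python
import torch
import torch.nn as nn

# GLU splits the input in two halves along `dim` and computes
# a * sigmoid(b); this checks the manual formula against nn.GLU.
x = torch.randn(4, 8, 16)  # (batch, channels, time)
a, b = x.chunk(2, dim=1)
manual = a * torch.sigmoid(b)
reference = nn.GLU(dim=1)(x)
print(torch.allclose(manual, reference))  # expect True
```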