When you say "You could either use pure PyTorch methods", do you mean for the `# Filter` block, instead of using NumPy?

I guess I can do that for the first two lines, but how can I do that for the `signal.lfilter` function? Is there an equivalent in PyTorch?

**Edit:**

I replaced the lines:

```
tmp = 1 + np.exp(-outputs.detach().numpy() * sig)
dat = 2 / tmp - 1
```

by

```
tmp = 1 + torch.exp(torch.mul(outputs, -sig))
dat = torch.div(2, tmp) - 1
```

So I guess autograd will now take care of these operations automatically. I still need to find out how to compute the low-pass filter.
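Incidentally, those two lines can also be written with built-in ops. A quick sketch (with a made-up scalar `sig` and random `outputs`, not my real data) to check the equivalence and that autograd tracks the result:

```
import torch

# Hypothetical stand-ins for my real variables, just for the check:
sig = 3.0
outputs = torch.randn(8, requires_grad=True)

tmp = 1 + torch.exp(torch.mul(outputs, -sig))
dat = torch.div(2, tmp) - 1

# Same mapping via built-ins: 2*sigmoid(x) - 1 == tanh(x / 2)
dat_sigmoid = 2 * torch.sigmoid(outputs * sig) - 1
dat_tanh = torch.tanh(outputs * sig / 2)

print(torch.allclose(dat, dat_sigmoid, atol=1e-6))  # True
print(torch.allclose(dat, dat_tanh, atol=1e-6))     # True
print(dat.grad_fn is not None)  # True: autograd tracks the ops
```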

I see here that there exists a `torchaudio.functional.lfilter` function:

```
torchaudio.functional.lfilter(waveform: Tensor,
                              a_coeffs: Tensor,
                              b_coeffs: Tensor,
                              clamp: bool = True,
                              batching: bool = True) → Tensor
```

If I use it, will it be automatically managed by the autograd?
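(I suppose I can partly answer this myself: an op is recorded in the autograd graph when its output carries a `grad_fn`. A generic sketch with plain torch ops — the same check should apply to the output of `lfilter`:)

```
import torch

def tracked_by_autograd(out: torch.Tensor) -> bool:
    # An op is recorded in the graph when its output carries a grad_fn
    return out.grad_fn is not None

x = torch.randn(10, requires_grad=True)
print(tracked_by_autograd(torch.tanh(x)))   # True
print(tracked_by_autograd(x.detach() * 2))  # False: detach cuts the graph
```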

**Edit2:**

So I changed the filter block with:

```
# Filter
tmp = 1 + torch.exp(torch.mul(outputs, -sig))
dat = torch.div(2, tmp) - 1
outputs = functional.lfilter(dat, a_tensor, b_tensor)
```

where:

```
b, a = signal.iirfilter(order, cutoff / nyq, rs=att,
                        btype=btype, ftype=ftype)
a_tensor = torch.from_numpy(a).to(torch.float32)
b_tensor = torch.from_numpy(b).to(torch.float32)
```
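For reference, a self-contained version of this coefficient setup with made-up parameter values (my real `order`, `cutoff`, etc. differ):

```
import torch
from scipy import signal

# Made-up filter parameters, only to make the snippet runnable:
fs = 16000.0
nyq = fs / 2
order, cutoff, att = 4, 1000.0, 60
btype, ftype = "lowpass", "cheby2"

b, a = signal.iirfilter(order, cutoff / nyq, rs=att,
                        btype=btype, ftype=ftype)
a_tensor = torch.from_numpy(a).to(torch.float32)
b_tensor = torch.from_numpy(b).to(torch.float32)
print(a_tensor.shape, b_tensor.shape)  # both torch.Size([5]) (order + 1 taps)
```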

Now I get a new error message:

```
outputs = functional.lfilter(dat, a_tensor, b_tensor)
  File "C:\Users\me\AppData\Local\Programs\Spyder\pkgs\torchaudio\functional.py", line 594, in lfilter
    o0.addmv_(windowed_output_signal, a_coeffs_flipped, alpha=-1)
RuntimeError: Output 0 of UnbindBackward is a view and is being modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
```

What does this mean?
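While waiting for an answer, I tried to reduce it to a minimal sketch (toy tensors, nothing from my real model): an in-place write into one of the views returned by `unbind` seems to reproduce the same class of error:

```
import torch

t = torch.zeros(2, 3, requires_grad=True)
x = t * 1.0          # non-leaf tensor, tracked by autograd
a, b = x.unbind(0)   # unbind returns several views of x
try:
    a.add_(1.0)      # in-place write into one of those views
    inplace_ok = True
except RuntimeError as e:
    inplace_ok = False
    print(e)  # same "is a view and is being modified inplace" error
```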

**Edit3:**

As I want this processing block to remain unchanged during training, I tried wrapping it in a `torch.no_grad()` context:

```
# Filter
with torch.no_grad():
    tmp = 1 + torch.exp(torch.mul(outputs, -sig))
    dat = torch.div(2, tmp) - 1
    outputs = functional.lfilter(dat, a_tensor, b_tensor)
```

but this leads to the following error:

```
loss.backward()
File "C:\Users\me\AppData\Local\Programs\Spyder\pkgs\torch\tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\me\AppData\Local\Programs\Spyder\pkgs\torch\autograd\__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
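If I reduce this one as well (again with a toy tensor), it looks like anything computed under `no_grad` is cut from the graph, so the loss ends up with no `grad_fn`:

```
import torch

x = torch.randn(4, requires_grad=True)
with torch.no_grad():
    loss = (x ** 2).sum()
print(loss.requires_grad)  # False: no_grad cut the graph
try:
    loss.backward()
    backward_ok = True
except RuntimeError as e:
    backward_ok = False
    print(e)  # element 0 of tensors does not require grad ...
```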

I’m back to the beginning…