# How to max pool an input to a specific length

Suppose I have an input (a sentence) of varying length, e.g. a = (29 x 512), where 512 is the embedding dim.

How can I shrink it to a specific length, for example 4 tokens => (4 x 512)? I thought max pooling could do this, but I don't know how it compresses the input to a specific size. Other solutions are also welcome.

An `nn.AdaptiveMaxPool1d(4)` will split the sequence axis into 4 regions and take the max of each, giving 4 output values per channel. The alternative is `nn.AdaptiveAvgPool1d`.
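A minimal sketch for the shapes in the question (29 tokens, embedding dim 512, pooled down to 4 tokens). `AdaptiveMaxPool1d` pools over the last dimension, so the sequence axis has to be moved there first; the tensor names are made up for illustration:

```python
import torch
import torch.nn as nn

# hypothetical sentence: 29 tokens, embedding dim 512
a = torch.randn(29, 512)

pool = nn.AdaptiveMaxPool1d(4)

# move the token axis last: (batch=1, channels=512, length=29)
out = pool(a.T.unsqueeze(0))   # (1, 512, 4)
out = out.squeeze(0).T         # back to (4, 512)
print(out.shape)               # torch.Size([4, 512])
```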


Thanks. Do you know how it is applied along a dimension? I have `inputs_embeds` batched as (bs, tokens, dim). I guess I should apply it along the token dimension, right? Previously, I had the following:

```python
avg_input_embeds, _ = torch.max(inputs_embeds, 1)
```

I expect it to match a max pool with output size 1:

```python
avg_input_embeds2 = pool(inputs_embeds.permute(0, 2, 1)).permute(0, 2, 1)
```

Yet they are not equal; I guess I need some permute or something.

Update: yes, they are equal after an `unsqueeze`:

```python
pool(inputs_embeds.permute(0, 2, 1)).permute(0, 2, 1) == avg_input_embeds.unsqueeze(1)
tensor([[[True, True, True,  ..., True, True, True]],

        [[True, True, True,  ..., True, True, True]],

        [[True, True, True,  ..., True, True, True]],
```
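The equivalence above can be checked end to end in a self-contained sketch (small made-up shapes, with `torch.equal` instead of eyeballing the `True` tensor):

```python
import torch
import torch.nn as nn

# hypothetical batch: bs=3, 7 tokens, dim=5
inputs_embeds = torch.randn(3, 7, 5)

# reducing over the token dimension with torch.max ...
max_embeds, _ = torch.max(inputs_embeds, 1)  # (3, 5)

# ... matches an adaptive max pool with output size 1,
# once the token axis is moved to the last position
pool = nn.AdaptiveMaxPool1d(1)
pooled = pool(inputs_embeds.permute(0, 2, 1)).permute(0, 2, 1)  # (3, 1, 5)

assert torch.equal(pooled, max_embeds.unsqueeze(1))
```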

When you want to check whether a tensor is all `True`, you can use `torch.all()`:

```python
import torch

all_true = torch.ones(3, 4, 5, dtype=torch.bool)
# note: randint's high bound is exclusive, so use 2 to get a mix of 0s and 1s
mixed_true = torch.randint(0, 2, (3, 4, 5), dtype=torch.bool)
print(torch.all(all_true))  # tensor(True)
print(torch.all(mixed_true))
```