Better solution for torch.max_pool1d input/output size mismatch?

Hi, I have been struggling to get the protein-interaction prediction tool TagPPI to work on our cluster, and one of the recent errors I’ve “solved” has been this one:

RuntimeError: max_pool1d() Invalid computed output size: -21 # pytorch 2.0.0/python 3.10
# or, similarly, given by an older py3.6 environment I've also been testing:
RuntimeError: Given input size: (128x1x108). Calculated output size: (128x1x-21). Output size is too small
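
For reference, with the defaults padding=0 and dilation=1, MaxPool1d's output length is floor((L_in - kernel_size) / stride) + 1, so a kernel larger than the input length goes negative. A quick check of where the -21 comes from:

import math

def pool1d_out_len(l_in, kernel_size, stride, padding=0, dilation=1):
    # PyTorch's output-length formula for MaxPool1d with ceil_mode=False
    return math.floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1

print(pool1d_out_len(108, kernel_size=130, stride=1))  # -21, matching the error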

As far as my novice detective skills go, this seemed to be the culprit in one of the TagPPI scripts:

class ConvsLayer(torch.nn.Module):

    def __init__(self,emb_dim):
        super(ConvsLayer,self).__init__() 
        self.embedding_size = emb_dim
        self.conv1 = nn.Conv1d(in_channels=self.embedding_size,out_channels = 128, kernel_size = 3)
        self.mx1 = nn.MaxPool1d(3, stride=3)
        self.conv2 = nn.Conv1d(in_channels=128,out_channels = 128, kernel_size = 3)
        self.mx2 = nn.MaxPool1d(3, stride=3)
        self.conv3 = nn.Conv1d(in_channels=128,out_channels = 128, kernel_size = 3)
        self.mx3 = nn.MaxPool1d(130, stride=1)
# the torch error says the input here is 128x1x108, but this kernel size is 130, larger than the 108 positions available.
# adding some print statements helped me double-check this, showing:
self.mx1 is MaxPool1d(kernel_size=3, stride=3, padding=0, dilation=1, ceil_mode=False)
self.mx2 is MaxPool1d(kernel_size=3, stride=3, padding=0, dilation=1, ceil_mode=False)
self.mx3 is MaxPool1d(kernel_size=130, stride=1, padding=0, dilation=1, ceil_mode=False)

My “solution” was to go in and manually set 108 in mx3 as self.mx3 = nn.MaxPool1d(108, stride=1).
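
Tracing the sequence length through the three conv/pool stages reproduces that 108 exactly, if I assume pad_dmap pads my sequences to length 1000 (inferred from the error itself, not confirmed against the preprocessing code):

import math

def conv_len(l, k):      # Conv1d, stride 1, no padding
    return l - k + 1

def pool_len(l, k, s):   # MaxPool1d, no padding, ceil_mode=False
    return math.floor((l - k) / s) + 1

l = 1000                 # assumed padded sequence length
l = conv_len(l, 3)       # conv1 -> 998
l = pool_len(l, 3, 3)    # mx1   -> 332
l = conv_len(l, 3)       # conv2 -> 330
l = pool_len(l, 3, 3)    # mx2   -> 110
l = conv_len(l, 3)       # conv3 -> 108, the input length mx3 complains about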
What I am trying to do is recreate a model that these folks published (using data they have provided online), so messing with how model training works is not at all my preferred solution. Any thoughts on doing this better?

Thank you!

Full error, preceded by the output of a variety of print statements I added to check myself:

Running EPOCH 1
### these printouts relate to "my_train_and_validation.py", line 60 in the traceback
dgl.batch(G1) Graph(num_nodes=14930, num_edges=112248,
      ndata_schemes={'feat': Scheme(shape=(1024,), dtype=torch.float32)}
      edata_schemes={})
pad_dmap(dmap1) tensor([[[[-0.3469, -0.1030, -0.0718,  ..., -0.2345,  0.4426,  0.0797],
          [ 0.1085, -0.0611, -0.1024,  ..., -0.0500,  0.3607, -0.1045],
          [ 0.0805,  0.0769,  0.0290,  ..., -0.1969,  0.1243, -0.2154],
          ...,
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000]]],


        [[[-0.3403, -0.0963, -0.0831,  ..., -0.3054,  0.1340,  0.2470],
          [-0.0479,  0.5333,  0.0684,  ..., -0.1885,  0.0833,  0.4106],
          [-0.0664, -0.2176, -0.2110,  ...,  0.1280, -0.0693, -0.1546],
          ...,
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000]]],


        [[[-0.3445, -0.0879, -0.0829,  ..., -0.1769,  0.1397,  0.7050],
          [ 0.1603, -0.0887,  0.0640,  ..., -0.2235,  0.0660,  0.5303],
          [ 0.2122, -0.1481,  0.0569,  ..., -0.0799, -0.0210,  0.5258],
          ...,
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000]]],


        ...,


        [[[-0.3542, -0.1041, -0.0670,  ..., -0.3833,  0.2471,  0.0417],
          [ 0.1535, -0.1094,  0.0764,  ..., -0.3788,  0.3410,  0.2778],
          [ 0.0354, -0.0368,  0.0938,  ..., -0.3806, -0.0117,  0.1559],
          ...,
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000]]],


        [[[-0.3407, -0.0985, -0.0931,  ..., -0.2835,  0.1780,  0.5286],
          [-0.0179,  0.5550, -0.2875,  ...,  0.3830, -0.3479,  0.5591],
          [-0.0747,  0.0952,  0.1908,  ...,  0.0210, -0.2931,  0.6168],
          ...,
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000]]],


        [[[-0.3386, -0.0636, -0.0460,  ..., -0.2490, -0.2427,  0.1807],
          [ 0.0399,  0.4853,  0.0982,  ...,  0.3455, -0.0461, -0.0297],
          [ 0.1974,  0.0380,  0.3333,  ...,  0.1819, -0.0764, -0.0061],
          ...,
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000]]]])
dgl.batch(G2) Graph(num_nodes=17289, num_edges=130715,
      ndata_schemes={'feat': Scheme(shape=(1024,), dtype=torch.float32)}
      edata_schemes={})
pad_dmap(dmap2) tensor([[[[-3.4296e-01, -1.0437e-01, -8.6030e-02,  ..., -2.7150e-01,
            1.1905e-01,  4.2295e-01],
          [-1.1256e-01,  1.6886e-01,  2.7677e-01,  ...,  1.5115e-01,
           -1.3917e-01,  1.6033e-01],
          [ 1.5157e-01,  2.0652e-01,  2.2229e-01,  ..., -1.4840e-01,
            6.1150e-01, -7.4094e-02],
          ...,
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00]]],


        [[[-3.4905e-01, -1.0518e-01, -8.1914e-02,  ..., -2.9440e-01,
            4.5290e-01,  3.0468e-01],
          [-7.0793e-02,  2.2520e-01, -2.7818e-01,  ..., -1.2918e-01,
            1.0218e-01, -1.1715e-02],
          [ 1.7016e-01, -3.1002e-04,  7.5324e-02,  ...,  6.0687e-02,
            3.1016e-01, -1.4433e-02],
          ...,
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00]]],


        [[[-3.3856e-01, -9.2911e-02, -6.6687e-02,  ..., -3.4701e-01,
            1.5960e-01,  5.3272e-01],
          [ 3.5949e-02,  4.8382e-01,  8.6300e-02,  ..., -4.2484e-02,
            2.4221e-01,  3.5637e-01],
          [-1.2725e-02, -1.6474e-01,  7.7850e-02,  ..., -1.6610e-01,
            1.5854e-01,  8.6666e-03],
          ...,
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00]]],


        ...,


        [[[-3.3537e-01, -7.8216e-02, -5.7447e-02,  ..., -3.8560e-01,
            5.0229e-01,  3.5356e-01],
          [-2.1677e-01,  2.6332e-01, -1.5412e-01,  ..., -6.8163e-02,
            2.5022e-01, -4.3618e-02],
          [ 7.1193e-02, -2.3421e-03, -1.0856e-01,  ...,  1.5919e-01,
            3.0727e-02,  1.5879e-01],
          ...,
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00]]],


        [[[-3.4001e-01, -9.4568e-02, -7.8474e-02,  ..., -2.8688e-01,
            2.5699e-01,  9.3964e-02],
          [ 2.8536e-02,  4.6803e-01,  7.2026e-02,  ..., -8.3363e-02,
            8.8944e-02,  2.3592e-01],
          [ 7.1293e-02,  7.3217e-02, -7.7345e-02,  ..., -2.3135e-02,
            7.3054e-02, -1.1380e-01],
          ...,
          [ 5.8943e-01,  3.8829e-01, -4.3912e-01,  ..., -1.3340e-01,
           -2.1967e-01,  5.6242e-02],
          [-1.7995e-01,  1.6144e-02, -3.7933e-01,  ..., -3.4695e-01,
           -2.5938e-01,  1.4060e-01],
          [-9.6776e-02,  5.6427e-01,  5.7345e-01,  ..., -5.2647e-01,
           -2.0869e-03,  2.6649e-01]]],


        [[[-3.3989e-01, -8.7599e-02, -8.1388e-02,  ..., -5.5179e-01,
            2.3058e-01,  5.2333e-01],
          [ 2.4501e-02,  4.7858e-01,  7.8572e-02,  ...,  1.0152e-01,
            5.4223e-02,  6.6108e-01],
          [ 3.0809e-01, -1.2880e-01,  1.6423e-01,  ..., -1.8422e-01,
           -2.0530e-01,  3.6330e-01],
          ...,
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00]]]])
### these printouts relate to "TAGlayer.py" in the traceback
self.mx1 is MaxPool1d(kernel_size=3, stride=3, padding=0, dilation=1, ceil_mode=False)
self.mx2 is MaxPool1d(kernel_size=3, stride=3, padding=0, dilation=1, ceil_mode=False)
self.mx3 is MaxPool1d(kernel_size=130, stride=1, padding=0, dilation=1, ceil_mode=False)
con3 features is tensor([[[-3.5460e-02,  9.5047e-03,  7.2449e-03,  ..., -9.8082e-03,
          -9.8082e-03, -9.8082e-03],
         [ 2.0526e-02,  1.7570e-02, -1.4385e-02,  ..., -5.8594e-02,
          -5.8594e-02, -5.8594e-02],
         [-1.5109e-02, -5.1880e-02,  1.1706e-03,  ..., -4.1274e-02,
          -4.1274e-02, -4.1274e-02],
         ...,
         [-5.3063e-02,  1.2701e-02, -5.3389e-02,  ...,  7.1492e-03,
           7.1492e-03,  7.1492e-03],
         [-1.3950e-02,  1.1025e-02, -2.8088e-02,  ..., -3.9051e-02,
          -3.9051e-02, -3.9051e-02],
         [-1.0820e-02, -4.0106e-02, -7.3718e-02,  ...,  3.7265e-02,
           3.7265e-02,  3.7265e-02]],

        [[ 6.7878e-02,  1.8754e-02,  5.7398e-02,  ..., -9.8082e-03,
          -9.8082e-03, -9.8082e-03],
         [-9.6443e-03,  1.8430e-02, -2.4235e-02,  ..., -5.8594e-02,
          -5.8594e-02, -5.8594e-02],
         [-2.5470e-02, -1.8970e-02, -5.2288e-03,  ..., -4.1274e-02,
          -4.1274e-02, -4.1274e-02],
         ...,
         [ 6.6258e-03, -3.7796e-02, -9.3790e-03,  ...,  7.1492e-03,
           7.1492e-03,  7.1492e-03],
         [ 5.4426e-06, -3.3611e-02, -3.1695e-02,  ..., -3.9051e-02,
          -3.9051e-02, -3.9051e-02],
         [-1.0794e-02, -1.8953e-02, -2.7104e-02,  ...,  3.7265e-02,
           3.7265e-02,  3.7265e-02]],

        [[ 3.0020e-02,  9.7433e-02,  9.6451e-02,  ..., -9.8082e-03,
          -9.8082e-03, -9.8082e-03],
         [ 4.1893e-03, -2.2828e-02,  1.9784e-02,  ..., -5.8594e-02,
          -5.8594e-02, -5.8594e-02],
         [ 1.6866e-02, -2.4822e-02, -4.9824e-02,  ..., -4.1274e-02,
          -4.1274e-02, -4.1274e-02],
         ...,
         [ 5.7842e-02,  3.9461e-02, -2.4759e-02,  ...,  7.1492e-03,
           7.1492e-03,  7.1492e-03],
         [-1.4926e-02,  1.4677e-02, -2.3474e-02,  ..., -3.9051e-02,
          -3.9051e-02, -3.9051e-02],
         [-9.2741e-02, -5.1284e-02,  3.0013e-02,  ...,  3.7265e-02,
           3.7265e-02,  3.7265e-02]],

        ...,

        [[ 7.7606e-02,  5.8441e-02,  1.6512e-02,  ..., -9.8082e-03,
          -9.8082e-03, -9.8082e-03],
         [ 2.0511e-01,  9.9564e-02,  5.4930e-02,  ..., -5.8594e-02,
          -5.8594e-02, -5.8594e-02],
         [ 1.3711e-02,  1.1491e-01,  1.2591e-01,  ..., -4.1274e-02,
          -4.1274e-02, -4.1274e-02],
         ...,
         [-6.8299e-02,  2.5369e-02, -1.0667e-01,  ...,  7.1492e-03,
           7.1492e-03,  7.1492e-03],
         [-8.1573e-02, -3.8201e-02, -1.0485e-01,  ..., -3.9051e-02,
          -3.9051e-02, -3.9051e-02],
         [-1.2982e-01,  1.1035e-01, -8.8959e-02,  ...,  3.7265e-02,
           3.7265e-02,  3.7265e-02]],

        [[ 6.9649e-03,  1.4423e-02,  2.2068e-02,  ..., -9.8082e-03,
          -9.8082e-03, -9.8082e-03],
         [ 4.6818e-03, -7.4856e-02, -7.0889e-02,  ..., -5.8594e-02,
          -5.8594e-02, -5.8594e-02],
         [-4.8453e-02, -2.3658e-02, -4.8073e-02,  ..., -4.1274e-02,
          -4.1274e-02, -4.1274e-02],
         ...,
         [-1.3201e-02, -1.5449e-02,  1.4243e-03,  ...,  7.1492e-03,
           7.1492e-03,  7.1492e-03],
         [-3.6712e-03,  7.9446e-02,  6.8218e-02,  ..., -3.9051e-02,
          -3.9051e-02, -3.9051e-02],
         [-6.5673e-02,  2.9596e-02, -4.0985e-02,  ...,  3.7265e-02,
           3.7265e-02,  3.7265e-02]],

        [[ 3.8939e-02,  1.0975e-02,  7.3704e-02,  ..., -9.8082e-03,
          -9.8082e-03, -9.8082e-03],
         [ 1.4174e-02,  1.4082e-05, -3.6082e-02,  ..., -5.8594e-02,
          -5.8594e-02, -5.8594e-02],
         [ 2.5455e-02, -4.9701e-02,  3.5501e-02,  ..., -4.1274e-02,
          -4.1274e-02, -4.1274e-02],
         ...,
         [ 2.7597e-02,  8.1525e-03,  1.4479e-02,  ...,  7.1492e-03,
           7.1492e-03,  7.1492e-03],
         [ 1.3918e-02,  3.3297e-02,  8.9154e-02,  ..., -3.9051e-02,
          -3.9051e-02, -3.9051e-02],
         [ 1.8239e-02,  2.6213e-02,  5.6790e-02,  ...,  3.7265e-02,
           3.7265e-02,  3.7265e-02]]], grad_fn=<SqueezeBackward1>)
Traceback (most recent call last):
  File "my_main.py", line 25, in <module>
    main()
  File "my_main.py", line 22, in main
    train(trainArgs)
  File "/lustre/fs0/home/iwill/TAGPPI/TAGPPI-main/my_train_and_validation.py", line 60, in train
    y_pred = attention_model(dgl.batch(G1),pad_dmap(dmap1),dgl.batch(G2), pad_dmap(dmap2))
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/lustre/fs0/home/iwill/TAGPPI/TAGPPI-main/TAGlayer.py", line 84, in forward
    seq1 = self.textcnn(pad_dmap1)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/lustre/fs0/home/iwill/TAGPPI/TAGPPI-main/TAGlayer.py", line 34, in forward
    features = self.mx3(features)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/nn/modules/pooling.py", line 90, in forward
    self.return_indices)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/_jit_internal.py", line 422, in fn
    return if_false(*args, **kwargs)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/nn/functional.py", line 653, in _max_pool1d
    return torch.max_pool1d(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (128x1x108). Calculated output size: (128x1x-21). Output size is too small

Here are the contents of the most relevant TagPPI py script, TAGlayer.py:

import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import numpy as np
import dgl
from dgl.nn import GATConv
from dgl.nn.pytorch.glob import MaxPooling,AvgPooling

class ConvsLayer(torch.nn.Module):

    def __init__(self,emb_dim):
        super(ConvsLayer,self).__init__() 
        self.embedding_size = emb_dim
        self.conv1 = nn.Conv1d(in_channels=self.embedding_size,out_channels = 128, kernel_size = 3)
        self.mx1 = nn.MaxPool1d(3, stride=3)
        self.conv2 = nn.Conv1d(in_channels=128,out_channels = 128, kernel_size = 3)
        self.mx2 = nn.MaxPool1d(3, stride=3)
        self.conv3 = nn.Conv1d(in_channels=128,out_channels = 128, kernel_size = 3)
        self.mx3 = nn.MaxPool1d(108, stride=1) ##### I changed this from 130 to 108, 130 gives -21 error, 128 gives -19, 109 gives 0, 108 works
        
    
    def forward(self,x):
        x = x.squeeze(1)
        x = x.permute(0, 2, 1)
        print('self.mx1 is ' +str(self.mx1)) # I added this when troubleshooting
        print('self.mx2 is ' +str(self.mx2)) # I added this when troubleshooting
        print('self.mx3 is ' +str(self.mx3)) # I added this when troubleshooting
        features = self.conv1(x)       
        features = self.mx1(features)
        print('self.mx1(features) is ' +str(self.mx1(features))) # I added this when troubleshooting
        features = self.mx2(self.conv2(features))
        features = self.conv3(features)
        print('conv3 features is ' +str(self.conv3(features))) # I added this when troubleshooting
        print('pre-mx3 features var is ' +str(features)) # I added this when troubleshooting
        print('torch.max_pool1d is ' + str(torch.max_pool1d)) # I added this when troubleshooting
        features = self.mx3(features)
        print('post-mx3 features is ' +str(features)) # I added this when troubleshooting
        features = features.squeeze(2)
        return features


class GATPPI(torch.nn.Module):

    def __init__(self,args):
        super(GATPPI,self).__init__()
        torch.backends.cudnn.enabled = False
        self.batch_size = args['batch_size']
        self.type = args['task_type']
        self.embedding_size = args['emb_dim']
        self.drop = args['dropout']
        self.output_dim = args['output_dim']
        # gcn
        self.gcn1 = GATConv(self.embedding_size,self.embedding_size,3)
        self.gcn2 = GATConv(self.embedding_size*3,self.embedding_size*3,3)
        self.gcn3 = GATConv(self.embedding_size*9,self.embedding_size*9,1)
        self.relu = nn.ReLU()
        self.fc_g1 = torch.nn.Linear(self.embedding_size*9, self.output_dim)

        self.maxpooling = MaxPooling()
        self.avgpooling = AvgPooling()
        self.dropout = nn.Dropout(self.drop)

        #textcnn
        self.textcnn = ConvsLayer(self.embedding_size)
        self.textflatten = nn.Linear(128,self.output_dim)
        # combined layers
        self.w1 = nn.Parameter(torch.FloatTensor([0.5]), requires_grad=True)
        self.fc1 = nn.Linear(self.output_dim*2, 512)
        self.fc2 = nn.Linear(512,256)
        self.out = nn.Linear(256, 1)

    # input1 input2
    def forward(self,G1,pad_dmap1,G2,pad_dmap2):
        # protein1
        g1 = self.relu(self.gcn1(G1,G1.ndata['feat']))
        g1 = g1.reshape(-1,self.embedding_size*3)
        g1 = self.relu(self.gcn2(G1, g1))
        g1 = g1.reshape(-1,self.embedding_size*9)
        g1 = self.relu(self.gcn3(G1, g1))
        g1 = g1.reshape(-1,self.embedding_size*9)
        G1.ndata['feat']=g1
        g1_maxpooling = self.maxpooling(G1,G1.ndata['feat'])  
        # flatten
        g1 = self.relu(self.fc_g1(g1_maxpooling))

        seq1 = self.textcnn(pad_dmap1)
        seq1 = self.relu(self.textflatten(seq1))
        # combine g1 and pic1 
        w1 = F.sigmoid(self.w1)
        gc1 = torch.add((1-w1)*g1,w1*seq1) 

        #protein2
        g2 = F.relu(self.gcn1(G2,G2.ndata['feat']))
        g2 = g2.reshape(-1,self.embedding_size*3)
        #g2 = self.n1(g2)
        g2 = F.relu(self.gcn2(G2, g2))
        g2 = g2.reshape(-1,self.embedding_size*9)
        #g2 = self.n2(g2)
        g2 = F.relu(self.gcn3(G2, g2))
        g2 = g2.reshape(-1,self.embedding_size*9)
        #g2 = self.n3(g2)
        G2.ndata['feat']=g2
        g2_maxpooling = self.maxpooling(G2,G2.ndata['feat'])
        # flatten
        g2 = self.relu(self.fc_g1(g2_maxpooling))

        seq2 = self.textcnn(pad_dmap2)
        seq2 = self.relu(self.textflatten(seq2))
        # combine g1 and pic1 
        gc2 = torch.add((1-w1)*g2,w1*seq2)
        #gc2 = torch.add(g2,pic2)   

        # combine gc1 and gc2
        gc = torch.cat([gc1,gc2],dim=1) 
        # add some dense layers
        gc = self.fc1(gc)
        gc = self.relu(gc)
        gc = self.dropout(gc)
        gc = self.fc2(gc)
        gc = self.relu(gc)
        gc = self.dropout(gc)
        out = self.out(gc)
        output = F.sigmoid(out)
        return output

Checking the stack trace to confirm that self.mx3 causes the issue is a valid approach:

  File "/lustre/fs0/home/iwill/TAGPPI/TAGPPI-main/TAGlayer.py", line 34, in forward
    features = self.mx3(features)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/nn/modules/pooling.py", line 90, in forward
    self.return_indices)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/_jit_internal.py", line 422, in fn
    return if_false(*args, **kwargs)
  File "/home/iwill/my-envs/tagppi_5/lib/python3.6/site-packages/torch/nn/functional.py", line 653, in _max_pool1d
    return torch.max_pool1d(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (128x1x108). Calculated output size: (128x1x-21). Output size is too small

This also lets you check the shape of the input features tensor on the next rerun.
I'm unsure if you are looking for a more informative stack trace or another approach.
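
If it helps, printing just the tensor shapes before each pooling step (rather than the full tensors) keeps the debug output readable. A minimal sketch of the same forward method with shape prints:

def forward(self, x):
    x = x.squeeze(1)
    x = x.permute(0, 2, 1)
    features = self.conv1(x)
    print('post-conv1 shape:', features.shape)  # (batch, 128, L-2)
    features = self.mx1(features)
    print('post-mx1 shape:', features.shape)
    features = self.mx2(self.conv2(features))
    print('post-mx2 shape:', features.shape)
    features = self.conv3(features)
    print('pre-mx3 shape:', features.shape)     # last dim must be >= mx3 kernel_size
    features = self.mx3(features)
    features = features.squeeze(2)
    return features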

Hi,
I'm looking for another approach. I am not entirely sure how or where 108 got set in the first place, so adjusting things so that the 130 given in the TagPPI script works might be preferable. Or maybe this type of error is a hallmark of a more fundamental problem folks have encountered before, and my manually changing a value is an imperfect fix.
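
Edit: running the same length arithmetic backwards suggests where the mismatch might live. The published kernel of 130 is exactly what you would get from inputs padded to length 1200, while my 108 corresponds to inputs padded to length 1000, so the problem may be the pad length in my data pipeline rather than the model code (this is just arithmetic; I haven't confirmed what pad length the authors used):

import math

def pre_mx3_len(pad_len):
    l = pad_len - 2                    # conv1, kernel 3
    l = math.floor((l - 3) / 3) + 1    # mx1, kernel 3, stride 3
    l = l - 2                          # conv2
    l = math.floor((l - 3) / 3) + 1    # mx2
    return l - 2                       # conv3

print(pre_mx3_len(1000))  # 108 -> what my run produces
print(pre_mx3_len(1200))  # 130 -> what the published mx3 kernel expects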

In principle, TagPPI should work as-is, since I pulled these scripts from the GitHub repository associated with a publication. So my thinking is that I more likely made a mistake somewhere in setting things up, rather than their file having a strange error, like an incorrect value, that prevents the whole tool from working.

But I am running with what I've got, and will see if it can still replicate the published model accuracy closely enough for my needs. Seems odd, though.