ConvTranspose2d vs Conv2d

Are nn.Conv2d and nn.ConvTranspose2d the same operation if I want to apply a convolution layer with kernel_size=1 and stride=1?

No, not out of the box, because the weight layouts are transposed relative to each other: nn.Conv2d stores its weight as (out_channels, in_channels, kH, kW), while nn.ConvTranspose2d stores it as (in_channels, out_channels, kH, kW).
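A quick shape check (a minimal sketch, using the same channel sizes as in the example below) shows the swapped channel dimensions:

import torch.nn as nn

print(nn.Conv2d(3, 6, 1).weight.shape)           # > torch.Size([6, 3, 1, 1])
print(nn.ConvTranspose2d(3, 6, 1).weight.shape)  # > torch.Size([3, 6, 1, 1])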
If you permute the weight accordingly, the operations are equal for this setup:

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 6, 1)
conv_trans = nn.ConvTranspose2d(3, 6, 1)

# copy the conv weight into the transposed conv, swapping the two channel dimensions
with torch.no_grad():
    conv_trans.weight.copy_(conv.weight.permute(1, 0, 2, 3).contiguous())
    conv_trans.bias.copy_(conv.bias)
    
x = torch.randn(2, 3, 24, 24)
out1 = conv(x)
out2 = conv_trans(x)
print((out1 - out2).abs().max())
# > tensor(2.3842e-07, grad_fn=<MaxBackward1>)
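The same relationship can also be checked with the functional API (a minimal sketch, assuming the same kernel_size=1, stride=1, no-padding setup; the tensor names here are just for illustration):

import torch
import torch.nn.functional as F

w = torch.randn(6, 3, 1, 1)  # conv weight layout: (out_channels, in_channels, kH, kW)
b = torch.randn(6)
x = torch.randn(2, 3, 24, 24)

out_conv = F.conv2d(x, w, b)
# conv_transpose2d expects the weight as (in_channels, out_channels, kH, kW)
out_conv_trans = F.conv_transpose2d(x, w.permute(1, 0, 2, 3), b)
print(torch.allclose(out_conv, out_conv_trans, atol=1e-6))
# expected: True (the outputs match up to floating point precision)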