What is the difference between ConvTranspose2d and Upsample in PyTorch?

To implement a UNet in PyTorch based on the model in this paper, for the first upsampling layer some people used

```
# nn.Upsample accepts either size or scale_factor, not both
self.upSample1 = nn.Upsample(scale_factor=2, mode="bilinear")
self.up1 = nn.Sequential(
    ConvRelu2d(1024, 512, kernel_size=(3, 3), stride=1, padding=0),  # ConvRelu2d: custom Conv2d + ReLU block
    ConvRelu2d(512, 512, kernel_size=(3, 3), stride=1, padding=0)
)
```

while some people used

```
self.up = nn.ConvTranspose2d(in_size, out_size, 2, stride=2)  # learnable 2x upsampling
self.conv = nn.Conv2d(in_size, out_size, kernel_size)
self.conv2 = nn.Conv2d(out_size, out_size, kernel_size)
self.activation = F.relu  # F is torch.nn.functional
```

I am confused: do nn.Upsample and nn.ConvTranspose2d do the same thing?
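For context, here is a minimal comparison I tried (the tensor sizes are just for illustration). Both layers double the spatial dimensions, but nn.Upsample only interpolates existing values and has no parameters, while nn.ConvTranspose2d has learnable weights like an ordinary convolution:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1024, 32, 32)  # example feature map: (N, C, H, W)

# Non-learnable: bilinear interpolation of existing values
up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
print(up(x).shape)  # torch.Size([1, 1024, 64, 64])

# Learnable: transposed convolution with trainable weights
deconv = nn.ConvTranspose2d(1024, 512, kernel_size=2, stride=2)
print(deconv(x).shape)  # torch.Size([1, 512, 64, 64])

# Parameter counts show the difference
print(sum(p.numel() for p in up.parameters()))      # 0
print(sum(p.numel() for p in deconv.parameters()))  # 1024*512*2*2 + 512
```

So both produce a 64x64 output here, yet only the transposed convolution is trained by backprop.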