June 7, 2017, 12:17pm
What I want to do is:
RGB_images = netG(input)  # netG is a pretrained model, frozen during training; RGB_images is a batch of RGB images
YCbCr_images = f(RGB_images)  # YCbCr_images is the same batch converted to YCbCr
# do things with YCbCr_images
Is there any function f in PyTorch that can achieve this?
smth
June 21, 2017, 11:35pm
There isn’t a built-in way to do this.
However, you can simply write it as a function; autograd will differentiate through it:
from torch.autograd import Variable

def rgb_to_ycbcr(input):
    # input is a mini-batch N x 3 x H x W of RGB images, values in [0, 1]
    output = Variable(input.data.new(*input.size()))
    # ITU-R BT.601 formulas from https://en.wikipedia.org/wiki/YCbCr
    output[:, 0, :, :] = input[:, 0, :, :] * 65.481 + input[:, 1, :, :] * 128.553 + input[:, 2, :, :] * 24.966 + 16
    output[:, 1, :, :] = input[:, 0, :, :] * -37.797 + input[:, 1, :, :] * -74.203 + input[:, 2, :, :] * 112.0 + 128
    output[:, 2, :, :] = input[:, 0, :, :] * 112.0 + input[:, 1, :, :] * -93.786 + input[:, 2, :, :] * -18.214 + 128
    return output
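On PyTorch ≥ 0.4 the Variable wrapper is no longer needed, so the same studio-swing BT.601 conversion can be written on plain tensors. A minimal self-contained sketch (the name rgb_to_ycbcr_601 is made up for illustration), with a sanity check that a pure-white pixel maps to (235, 128, 128):

```python
import torch

def rgb_to_ycbcr_601(image: torch.Tensor) -> torch.Tensor:
    # Studio-swing ITU-R BT.601: input in [0, 1], Y in [16, 235],
    # Cb/Cr in [16, 240]. Plain tensors track gradients automatically.
    r = image[..., 0, :, :]
    g = image[..., 1, :, :]
    b = image[..., 2, :, :]
    y  =  65.481 * r + 128.553 * g +  24.966 * b +  16.0
    cb = -37.797 * r -  74.203 * g + 112.000 * b + 128.0
    cr = 112.000 * r -  93.786 * g -  18.214 * b + 128.0
    return torch.stack((y, cb, cr), -3)

# Sanity check: white (1, 1, 1) should give Y = 235, Cb = Cr = 128.
white = torch.ones(1, 3, 1, 1)
out = rgb_to_ycbcr_601(white)
```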
Here is a more recent, shape-checked version that assumes normalized input in [0, 1]:
import torch

def rgb_to_ycbcr(image: torch.Tensor) -> torch.Tensor:
    r"""Convert an RGB image to YCbCr.

    Args:
        image (torch.Tensor): RGB Image to be converted to YCbCr.

    Returns:
        torch.Tensor: YCbCr version of the image.
    """
    if not torch.is_tensor(image):
        raise TypeError("Input type is not a torch.Tensor. Got {}".format(
            type(image)))
    if len(image.shape) < 3 or image.shape[-3] != 3:
        raise ValueError("Input size must have a shape of (*, 3, H, W). Got {}"
                         .format(image.shape))
    r: torch.Tensor = image[..., 0, :, :]
    g: torch.Tensor = image[..., 1, :, :]
    b: torch.Tensor = image[..., 2, :, :]
    delta = .5
    y: torch.Tensor = .299 * r + .587 * g + .114 * b
    cb: torch.Tensor = (b - y) * .564 + delta
    cr: torch.Tensor = (r - y) * .713 + delta
    return torch.stack((y, cb, cr), -3)
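A quick usage sketch of the normalized conversion above (the function is re-stated inline so the snippet runs on its own): since this version centers chroma at 0.5, a mid-gray batch should map to Y = 0.5 and Cb = Cr = 0.5 everywhere.

```python
import torch

def rgb_to_ycbcr(image: torch.Tensor) -> torch.Tensor:
    # Same normalized conversion as above: input in [0, 1],
    # chroma channels centered at 0.5.
    r = image[..., 0, :, :]
    g = image[..., 1, :, :]
    b = image[..., 2, :, :]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564 + 0.5
    cr = (r - y) * 0.713 + 0.5
    return torch.stack((y, cb, cr), -3)

# A mid-gray batch: all channels equal, so (b - y) and (r - y) vanish
# and every output channel should be exactly 0.5.
batch = torch.full((2, 3, 4, 4), 0.5)
ycbcr = rgb_to_ycbcr(batch)
print(ycbcr.shape)  # torch.Size([2, 3, 4, 4])
```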