Hi,
I’m currently working on object detection using RGB and depth data. More specifically, I’d like to start by applying contrastive learning, i.e. training two networks to map from RGB to depth and vice versa.
Normalizing an RGB image is easy, but I’m not sure what the best way to normalize depth data is:
- Convert the depth map to grayscale and normalize?
- Convert the tensor values directly to meters?
- Normalize all values directly to the range [0, 1]?
Or is there another way?
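For reference, here is roughly what I mean by the last two options (a minimal sketch, assuming the depth map comes as uint16 millimeters from the sensor; the 10 m clip range is just a placeholder):

```python
import numpy as np
import torch

# Hypothetical example: depth map stored as uint16 millimeters
depth_mm = np.random.randint(0, 10000, size=(480, 640), dtype=np.uint16)

# Option: convert to metric values (meters)
depth_m = torch.from_numpy(depth_mm.astype(np.float32)) / 1000.0

# Option: min-max normalize each image to [0, 1]
d_min, d_max = depth_m.min(), depth_m.max()
depth_01 = (depth_m - d_min) / (d_max - d_min + 1e-8)

# Option: clip to a fixed range (e.g. 0-10 m) and scale,
# so the scale stays consistent across images
depth_clipped = depth_m.clamp(0.0, 10.0) / 10.0
```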
Do you know of any papers that have been published on this topic?
Thanks in advance