"real" datatype in cpp cuda sumantics

Hi,

When we pass a number to a torch function, e.g. addcmul, the underlying cpp calls use a datatype called real. Does the code internally transfer that real value to the GPU at runtime if the other tensors are on the GPU?

Regards
Nabarun

“real” is the type of your data. It could be float, double, int, etc. There is code in THC that acts on real. I’m not sure if that answers your question; please feel free to post a follow-up.
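To make that a bit more concrete: the generic TH/THC sources are compiled once per element type, with real bound to float, double, int, and so on (roughly, the libraries re-include their generic sources from headers like THGenerateAllTypes.h). The C sketch below only illustrates that pattern; the DEFINE_ADD_SCALAR macro and the THTensor_*_addScalar names are invented for illustration and are not the actual TH/THC code.

/* Minimal sketch of the "generic" pattern: the same function body is
   stamped out once per element type, with `real` bound to that type. */
#include <stdio.h>

#define DEFINE_ADD_SCALAR(real, Real)                                      \
    static void THTensor_##Real##_addScalar(real *data, long n, real v) {  \
        /* add a host-side scalar to every element of data */              \
        for (long i = 0; i < n; i++)                                       \
            data[i] += v;                                                   \
    }

DEFINE_ADD_SCALAR(float, Float)    /* instantiation where real == float  */
DEFINE_ADD_SCALAR(double, Double)  /* instantiation where real == double */

int main(void) {
    float xf[3] = {1.f, 2.f, 3.f};
    double xd[3] = {1.0, 2.0, 3.0};
    THTensor_Float_addScalar(xf, 3, 0.5f);   /* scalar arrives as float  */
    THTensor_Double_addScalar(xd, 3, 0.5);   /* scalar arrives as double */
    printf("%f %f %f\n", xf[0], xf[1], xf[2]);
    printf("%f %f %f\n", xd[0], xd[1], xd[2]);
    return 0;
}

In this sketch a host-side scalar simply arrives as a real argument of whatever type the function was instantiated with; it says nothing about what happens on the GPU side, which is what the follow-up below asks about.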

I will try to put my question in simpler terms:

Let’s say I want to multiply the loss by a weight before calling backward, and I modify that weight after every batch.

For example, in the pseudocode below, assume all tensors are on the GPU and w is just a plain Python number (not a torch tensor):

w = 5
for inp in batches:
    opt.zero_grad()
    loss = get_loss(inp)   # loss is a CUDA tensor
    loss = w * loss        # w is a plain Python number
    loss.backward()
    opt.step()
    w += 1

In this scenario, how is w handled on the GPU? I mean, does it involve a host-to-GPU transfer on every iteration, since the value of w keeps changing?