Adding and Concatenating layers

Is z = torch.add(x, y) and z = torch.cat((x, y)) in pytorch the same as
z = keras.layers.add([x, y]) and z = keras.layers.concatenate([x, y]) in keras?


torch.add(x, y) is equivalent to z = x + y. It works similarly to keras but only accepts 2 tensors. torch.cat((x, y), dim) (note that you need one extra pair of parentheses around the tensors, analogous to the list brackets in keras) will concatenate along the given dimension, same as keras.
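A minimal sketch of both operations (shapes chosen just for illustration):

```python
import torch

x = torch.ones(2, 3)
y = torch.ones(2, 3)

# Elementwise addition: torch.add and + give the same result.
z_add = torch.add(x, y)
assert torch.equal(z_add, x + y)

# Concatenation along dim=0 stacks rows: shape becomes (4, 3).
z_cat0 = torch.cat((x, y), dim=0)
assert z_cat0.shape == (4, 3)

# Along dim=1 it joins columns instead: shape becomes (2, 6).
z_cat1 = torch.cat((x, y), dim=1)
assert z_cat1.shape == (2, 6)
```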


You said that torch.add(x, y) can add only 2 tensors. Why can't we perform the operation on 3 or more tensors at once? Is it because the result may be affected, or is it just how the function is defined?

Actually, I do not understand why keras implements addition as a separate layer. In PyTorch, you can add any number of tensors simply by putting the + sign between them: w = x + y + z will do the same as w = torch.add(x, torch.add(y, z)).
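To see that chained + is just repeated binary addition, and that an n-ary sum like keras.layers.add can be mimicked (one possible way, not a built-in "add many" op) with torch.stack followed by sum:

```python
import torch

x = torch.ones(3)
y = torch.full((3,), 2.0)
z = torch.full((3,), 3.0)

# Chained + desugars to nested binary adds.
w_plus = x + y + z
w_nested = torch.add(x, torch.add(y, z))
assert torch.equal(w_plus, w_nested)

# An n-ary sum over a list of tensors, analogous to keras.layers.add([...]).
w_stacked = torch.stack([x, y, z]).sum(dim=0)
assert torch.equal(w_stacked, w_plus)
```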