For my loss function, I need to calculate the forward kinematics of a robot’s end effector (the tip or hand of a robot) given joint angles returned by my network.
Basically: the network returns a vector describing the joint angle of each motor of the robot arm. I need to take those joint angles and calculate where the tip (end effector) of the robot is. The pose (position + rotation) of the end effector is then compared to a target pose in order to calculate a loss term.
for _, target_poses in dataset:
    joint_solutions = model(target_poses)
    poses_from_generated_solutions = forward_kinematics(joint_solutions)  # non-pytorch solver
    loss = fit_loss(poses_from_generated_solutions, target_poses)
The process of calculating the pose of the robot's tip is called forward kinematics. I'm currently using a third-party calculator that doesn't use pytorch (Klampt: https://github.com/krishauser/Klampt).
The issue I'm facing is that pytorch can't calculate the gradient of the loss term w.r.t. the model parameters, because the computation graph is broken when the poses are calculated with the external forward kinematics calculator.
Is there any way to get around this problem while keeping the same setup (an external, non-pytorch forward kinematics solver)?
If using a non-pytorch forward kinematics solver isn’t possible, are there any pytorch forward kinematics solvers that I can use? (I’ve looked and haven’t found any myself)
Any other feedback here?
I'll write my own forward kinematics calculator with pytorch if all else fails.
Let me preface this by saying I’m practically a noob with regard to pytorch in general, and less than that for robotics so feel free to ignore my thoughts…
My first thought: why are ML techniques required to infer the position of any part of the robot? Isn’t this mathematically deterministic?
My second thought: granted my pytorch experience is limited, but I’m not sure why a custom function as a network layer wouldn’t work. With that said, I’d need to see more of your code to give you solid feedback.
EDIT: ahh, I think I get it… You’re using the net to give you the positions of your servos that will be required to meet each pose??
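If it helps, here's roughly what I meant by a custom function: wrap the external solver in a `torch.autograd.Function` and supply the backward pass yourself. This is just a sketch under assumptions, not your actual setup. `solver_fk` here is a made-up stand-in for the external (Klampt) call, stubbed with 2-link planar FK in NumPy so the example runs, and `backward` uses a finite-difference Jacobian (if the solver can give you an analytic Jacobian, use that instead):

```python
import numpy as np
import torch

# Stand-in for an external, non-pytorch solver (e.g. Klampt). Here: planar
# 2-link forward kinematics in NumPy, returning the tip's (x, y) position.
# Swap in your real solver call.
def solver_fk(angles):
    t1, t2 = angles
    return np.array([np.cos(t1) + np.cos(t1 + t2),
                     np.sin(t1) + np.sin(t1 + t2)])

class ExternalFK(torch.autograd.Function):
    """Wraps the non-differentiable solver; we supply the backward pass
    ourselves with a finite-difference Jacobian, so autograd can keep
    going past the external call."""

    @staticmethod
    def forward(ctx, joint_angles):
        ctx.save_for_backward(joint_angles)
        pose = solver_fk(joint_angles.detach().double().cpu().numpy())
        return torch.as_tensor(pose, dtype=joint_angles.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (joint_angles,) = ctx.saved_tensors
        angles = joint_angles.detach().double().cpu().numpy()
        eps = 1e-6
        base = solver_fk(angles)
        # Jacobian of the pose w.r.t. each joint angle, one column per joint.
        jac = np.zeros((base.size, angles.size))
        for j in range(angles.size):
            bumped = angles.copy()
            bumped[j] += eps
            jac[:, j] = (solver_fk(bumped) - base) / eps
        jac = torch.as_tensor(jac, dtype=grad_output.dtype)
        return grad_output @ jac  # chain rule: dL/dq = (dL/dpose) @ (dpose/dq)

angles = torch.tensor([0.3, 0.5], requires_grad=True)
pose = ExternalFK.apply(angles)
loss = pose.pow(2).sum()
loss.backward()  # gradient now reaches the joint angles despite the external call
```

Note that finite differencing costs one extra solver call per joint per backward pass, which may matter for speed, but it keeps your existing solver untouched.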
’ You’re using the net to give you the positions of your servos that will be required to meet each pose’ - Yes exactly.
The network returns a vector of joint angles, i.e. the 'positions of [my] servos'. What I need to do is convert (using pytorch) that joint angle vector into the position of the tip of the robot. That conversion is deterministic; it's mostly trig and linear algebra/transformations.
My current approach is failing because the forward kinematics calculator doesn't use pytorch, so the computation graph is broken and the gradient of the loss can't be backpropagated.