I think you tricked yourself with the indexing.
As y[:, 2] only depends on the last column of x, only the last column of the gradient is nonzero (do print(duzdxyz) to see this). You then take sec_der as the grad of an all-zero column of duzdxyz, which can only give zeros.
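Here is a toy sketch of what I mean (the setup and the names duzdxyz / sec_der are just my guess at your code, not a copy of it):

```python
import torch

# Toy setup guessed from the thread: y[:, 2] depends only on the last column of x.
x = torch.randn(4, 3, requires_grad=True)
y = torch.stack([2 * x[:, 0], 3 * x[:, 1], x[:, 2] ** 2], dim=1)

# First derivative of y[:, 2] w.r.t. x: only the last column is nonzero.
duzdxyz = torch.autograd.grad(y[:, 2].sum(), x, create_graph=True)[0]
print(duzdxyz)

# Differentiating an all-zero column again can only give zeros ...
print(torch.autograd.grad(duzdxyz[:, 0].sum(), x, retain_graph=True)[0])
# ... while the last column gives d^2/dz^2 of z^2, i.e. 2.
print(torch.autograd.grad(duzdxyz[:, 2].sum(), x)[0])
```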
Now you tricked yourself with too much linearity.
While ReLU is nonlinear globally (else we would famously not have the universal approximation property), it is linear in a neighborhood of almost every input (if you pardon the mathematical lingo), rendering your network linear in a neighborhood of your inputs.
Taking a more global view, the derivative is piecewise constant, and the second derivative is 0 wherever it is defined.
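You can see this by double-backprop through a small ReLU MLP (just an illustrative sketch, not your exact model):

```python
import torch
import torch.nn as nn

# A small ReLU MLP: its input gradient is piecewise constant,
# so the second derivative w.r.t. the input is zero (almost) everywhere.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(5, 3, requires_grad=True)
grad_x = torch.autograd.grad(net(x).sum(), x, create_graph=True)[0]
sec_der = torch.autograd.grad(grad_x.sum(), x)[0]
print(sec_der)  # all zeros: the ReLU net is locally linear around x
```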
Thanks for the explanation, @tom. It makes sense. In this case, what type of layers can I use to capture the second derivative of the data? Maybe I keep the linear layers but use other activations? Any recommendations?
At the risk of being a math nerd: Don’t say “the (second) derivative of the data”. It is the “derivative of the output with respect to the input” (well, actually you typically compute Hessian-vector products when using second derivatives with backprop, but hey).
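As an aside, a Hessian-vector product is just two grad calls; a minimal sketch on a made-up scalar function (not your model):

```python
import torch

# Hessian-vector product via double backprop: Hv = d/dx (grad_f(x) . v).
def f(x):
    return (x ** 3).sum()  # toy function with nonzero curvature

x = torch.randn(3, requires_grad=True)
v = torch.randn(3)

(grad_f,) = torch.autograd.grad(f(x), x, create_graph=True)
(hvp,) = torch.autograd.grad(grad_f @ v, x)
print(hvp)  # equals 6 * x * v for this f
```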
Many (most?) of the loss functions are sufficiently nonlinear, but you would, of course, also get nonzero second derivatives of the function specified by your neural network if you used an activation function that is curved (e.g. tanh where it is not saturated).
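For example, swapping the activation in the toy MLP above for tanh (again just a sketch, not your architecture) gives a nonzero second derivative:

```python
import torch
import torch.nn as nn

# Same toy architecture as above, but with a curved activation:
# the second derivative of the output w.r.t. the input is no longer zero.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))

x = torch.randn(5, 3, requires_grad=True)
grad_x = torch.autograd.grad(net(x).sum(), x, create_graph=True)[0]
sec_der = torch.autograd.grad(grad_x.sum(), x)[0]
print(sec_der)  # generally nonzero, unlike the ReLU version
```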