Freezing Part of the Computation Graph

Suppose you have a PyTorch model (it can be any model) whose input tensor X = [x_1, x_2, x_3, …, x_n] consists of n tensors {x_i}. To perform a certain kind of adversarial attack, I want to modify the input X under the constraint that only one of the {x_i} may change at a time.

So my question is the following: say I want to change x_1 and keep the others fixed. Computing model(X) then involves parts of the computation graph that depend only on x_2, …, x_n and not on x_1. Is it possible to somehow freeze those parts and create a new model, call it model_1(x) = model(x, x_2, …, x_n) (I replaced x_1 with x to indicate that x is now a free parameter while the other inputs stay fixed), such that model_1 computes the x-independent parts of the computation graph only once? That would make iterating and optimising over each x_i much quicker.
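To make the idea concrete, here is a minimal plain-Python sketch of the caching I have in mind (no autograd involved; `fixed_branch` and `combine` are hypothetical stand-ins for the parts of the real model that depend only on the fixed inputs and on everything together, respectively):

```python
# Counter to verify how often the expensive fixed branch actually runs.
calls = {"fixed": 0}

def fixed_branch(fixed_inputs):
    # Stand-in for the expensive part of the model that depends
    # only on x_2, ..., x_n (not on the free input x).
    calls["fixed"] += 1
    return sum(fixed_inputs)

def combine(free_input, cached):
    # Stand-in for the part of the model that mixes the free
    # input with the result of the fixed branch.
    return 2 * free_input + cached

def make_model_1(fixed_inputs):
    # Evaluate the x-independent subgraph once and close over the result.
    cached = fixed_branch(fixed_inputs)
    def model_1(x):
        return combine(x, cached)
    return model_1

model_1 = make_model_1([2, 3, 4])   # x_2, x_3, x_4 held fixed
for step in range(100):             # e.g. an attack loop over x_1 only
    _ = model_1(step)
```

In PyTorch terms, I imagine `fixed_branch` would be run once under `torch.no_grad()` (its inputs never need gradients), the result stored as a constant tensor, and only the `combine` part rebuilt per iteration with `requires_grad` set on x.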