Apply transform to neural network output

Hey, I am training a PINN (physics-informed neural network), which is basically a fully connected network. I want to apply hard boundary conditions by transforming the network output. I am currently doing it the way shown below, but I am not getting the expected results. What would be the best way to do it?

class HardPINN(Model):

    def initial_condition(self, x, y):
        # Gaussian pulse centered at (2, 2) with width 0.1
        return torch.exp(-0.5 * (((x - 2) / 0.1) ** 2 + ((y - 2) / 0.1) ** 2))

    def forward(self, x):
        # Split the input columns: time and the two spatial coordinates
        t_in = x[:, 0:1]
        x_in = x[:, 1:2]
        y_in = x[:, 2:3]

        # Pass the input through the input layer, the hidden layers,
        # and the output layer of the fully connected network
        x = self.input_layer(x)
        x = self.hidden_layers(x)
        x = self.output_layer(x)

        # Width of the initial pulse (example value, replace with the actual one)
        sigmaX = 0.1

        # Initial-condition ansatz: blend the known initial condition with
        # the network output using smooth time-dependent weights
        term1 = torch.sigmoid(5 * (2 - (t_in / sigmaX))) * self.initial_condition(x_in, y_in)
        term2 = torch.tanh(t_in / sigmaX) ** 2 * x
        u = term1 + term2

        return u

The basic idea is that the first term drops out of the output for t >> 2*sigmaX, leaving only the network contribution.
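The two time-dependent weights in the ansatz can be checked numerically on their own; a minimal sketch, assuming the example value sigmaX = 0.1 from the snippet above:

```python
import torch

sigmaX = 0.1  # same example value as in forward() above

# Sample times: t = 0, t = 2*sigmaX, and t >> 2*sigmaX
t = torch.tensor([0.0, 2 * sigmaX, 4.0])

w_ic = torch.sigmoid(5 * (2 - t / sigmaX))  # weight on the initial condition
w_net = torch.tanh(t / sigmaX) ** 2         # weight on the network output

print(w_ic)   # close to 1 at t = 0, 0.5 at t = 2*sigmaX, ~0 for large t
print(w_net)  # exactly 0 at t = 0, approaching 1 for large t
```

Note that at t = 0 the sigmoid weight is sigmoid(10), which is close to but not exactly 1.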

I am not getting the expected results. Can you tell me if I am doing something wrong in the output transform?

Why are you not using the network output directly? Can you explain your problem in more detail?

I am training a physics-informed neural network to solve the wave equation. For the problem to be well posed, the solution has to satisfy the initial condition at t = 0 exactly. With a soft constraint it never matches that condition exactly, so I want to enforce it as a hard constraint. The idea is to transform the network output with a function F(x, y, t) that satisfies the initial/boundary conditions by construction.
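For what it's worth, one common way to build such a transform is to multiply the network output by a factor that vanishes at t = 0, so the initial condition is satisfied exactly rather than approximately. A minimal sketch, assuming the Gaussian initial condition from my code above; hard_constrained_output and net_out are illustrative names, and the t**2 factor is one possible choice (it also kills the first time derivative at t = 0, which is relevant for the second-order wave equation):

```python
import torch

def initial_condition(x, y):
    # Gaussian pulse centered at (2, 2), as in the post above
    return torch.exp(-0.5 * (((x - 2) / 0.1) ** 2 + ((y - 2) / 0.1) ** 2))

def hard_constrained_output(net_out, t, x, y):
    # u = IC + t^2 * N enforces u(x, y, 0) = IC exactly and u_t(x, y, 0) = 0,
    # since t**2 and its time derivative both vanish at t = 0
    return initial_condition(x, y) + t ** 2 * net_out

# Quick check at t = 0: the transform returns the IC regardless of net_out
t = torch.zeros(4, 1)
x = torch.rand(4, 1)
y = torch.rand(4, 1)
net_out = torch.randn(4, 1)
u0 = hard_constrained_output(net_out, t, x, y)
```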