PINN for 2D Heat Conduction Always Converges to a Constant Solution

Hello everyone,

I’m a graduate student from Korea working on applying Physics-Informed Neural Networks (PINNs) to mechanical engineering problems. Currently, I’m focusing on a 2D heat conduction problem, but I’m running into a significant issue: my PINN model always converges to a constant temperature field instead of satisfying the boundary conditions or matching the reference data.

Any suggestions on how to keep the model from collapsing into a constant solution, or tips on diagnosing boundary-condition conflicts, would be greatly appreciated. If you need more details or logs, please let me know!

Thank you so much for any help or insight. I’ve been stuck on this for a while and would really value some fresh eyes on the approach.
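For context, the loop below uses a small helper called gradients. A minimal sketch consistent with how it is called (it has to return the full gradient and build the graph with create_graph=True so that second derivatives such as T_xx and T_zz can be taken) would be:

import torch

def gradients(outputs, inputs):
    # d(outputs)/d(inputs); create_graph=True keeps the graph so the
    # result can itself be differentiated again (needed for T_xx, T_zz).
    return torch.autograd.grad(
        outputs, inputs,
        grad_outputs=torch.ones_like(outputs),
        create_graph=True,
    )[0]

Here is the training loop: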

for epoch in range(epochs):
    
    # Re-leaf the coordinate tensors each epoch so autograd can take
    # derivatives with respect to the inputs (x and z).
    X_int_ = X_int.clone().detach().requires_grad_(True)
    X_left_ = X_left.clone().detach().requires_grad_(True)
    X_right_ = X_right.clone().detach().requires_grad_(True)
    X_top_ = X_top.clone().detach().requires_grad_(True)
    X_bottom_chip_ = X_bottom_chip.clone().detach().requires_grad_(True)
    X_bottom_nonchip_ = X_bottom_nonchip.clone().detach().requires_grad_(True)

    optimizer.zero_grad()

    # PDE (T_xx + T_zz = 0)
    T_int = model(X_int_)
    grad_T = gradients(T_int, X_int_)
    T_x = grad_T[:, 0:1]
    T_z = grad_T[:, 1:2]
    T_xx = gradients(T_x, X_int_)[:, 0:1]
    T_zz = gradients(T_z, X_int_)[:, 1:2]
    f_int = T_xx + T_zz
    loss_PDE = loss_fn(f_int, torch.zeros_like(f_int))

    # Left boundary (x = 0): thermal insulation (T_x = 0)
    T_left = model(X_left_)
    T_left_x = gradients(T_left, X_left_)[:, 0:1]
    loss_left = loss_fn(T_left_x, torch.zeros_like(T_left_x))

    # Right boundary (x = L): thermal insulation (T_x = 0)
    T_right = model(X_right_)
    T_right_x = gradients(T_right, X_right_)[:, 0:1]
    loss_right = loss_fn(T_right_x, torch.zeros_like(T_right_x))

    # Top boundary (z = h_plate): convection (Robin), k_al * T_z + htc * (T - T_inf) = 0
    T_top = model(X_top_)
    T_top_z = gradients(T_top, X_top_)[:, 1:2]
    res_top = k_al * T_top_z + htc * (T_top - T_inf)
    loss_top = loss_fn(res_top, torch.zeros_like(res_top))

    # Bottom boundary (z = 0)
    # (1) Heated area (Dirichlet: T = T_chip)
    T_bottom_chip = model(X_bottom_chip_)
    loss_bottom_chip = loss_fn(T_bottom_chip, torch.ones_like(T_bottom_chip) * T_chip)
    # (2) else (Neumann: T_z = 0)
    T_bottom_nonchip = model(X_bottom_nonchip_)
    T_bottom_nonchip_z = gradients(T_bottom_nonchip, X_bottom_nonchip_)[:, 1:2]
    loss_bottom_nonchip = loss_fn(T_bottom_nonchip_z, torch.zeros_like(T_bottom_nonchip_z))
    loss_bottom = loss_bottom_chip + loss_bottom_nonchip

    # Total loss; the BC terms are weighted heavily relative to the PDE residual
    loss_BC = loss_left + loss_right + loss_top + loss_bottom
    loss = loss_PDE + 1e5 * loss_BC
    loss.backward()
    optimizer.step()
    scheduler.step()

    # loss history
    loss_history.append(loss.item())
    pde_loss_history.append(loss_PDE.item())
    bc_loss_history.append(loss_BC.item())

    if epoch % 500 == 0:
        print(f"[Epoch {epoch}] Total: {loss.item():.3e}, PDE: {loss_PDE.item():.3e}")
        print(
            f"   BC Loss -> Left: {loss_left.item():.3e}, Right: {loss_right.item():.3e}, Top: {loss_top.item():.3e}, Bottom: {loss_bottom.item():.3e}")

Without getting into this too much, I think this is a fundamental issue with PINNs, as opposed to an issue with your implementation or PyTorch.

There are a number of papers about how and why PINNs often fail. Here are some examples: 1, 2, 3.

To summarize some of these: the optimizer gets stuck at a constant solution because of conflicting terms in the loss function. Many works promise to fix this, but they generally come at the cost of added complexity. I’ve had this issue with “vanilla PINNs” and have sometimes fixed it through lucky hyperparameter tuning (e.g. your 1e5 weight on the BC loss, or even changes to the architecture). One such fix is sketched below.
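For example, one common family of fixes rebalances the loss weights from gradient statistics instead of hand-tuning a constant like 1e5. A rough sketch of the idea, reusing the names from your loop (the max/mean rule, the moving-average factor alpha, and the initial lambda_bc are illustrative choices from the gradient-pathology literature, not something in your code):

# Rough sketch: rebalance the BC weight each epoch from gradient
# statistics instead of a fixed 1e5. model, loss_PDE, loss_BC come
# from the loop above; alpha and the max/mean rule are assumptions.
params = [p for p in model.parameters() if p.requires_grad]

g_pde = torch.autograd.grad(loss_PDE, params, retain_graph=True, allow_unused=True)
g_bc = torch.autograd.grad(loss_BC, params, retain_graph=True, allow_unused=True)
max_pde = torch.cat([g.reshape(-1) for g in g_pde if g is not None]).abs().max()
mean_bc = torch.cat([g.reshape(-1) for g in g_bc if g is not None]).abs().mean()

# Moving average keeps the weight from jumping around between epochs.
lambda_hat = (max_pde / (mean_bc + 1e-12)).detach()
lambda_bc = alpha * lambda_bc + (1 - alpha) * lambda_hat  # assume lambda_bc = 1.0, alpha = 0.9 before training

loss = loss_PDE + lambda_bc * loss_BC

Simpler things are also worth trying first, e.g. nondimensionalizing the temperature so that T_inf maps to 0 and T_chip to 1, which by itself often shrinks the conflict between the loss terms.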


Thank you very much for your helpful feedback! As you mentioned, adjusting the weight values did (luckily) resolve the issue in my case as well. I’ll carefully review the materials you shared to better understand the underlying principles. Thanks again for your help!