State-of-the-art methods to restrict regression predictions to a closed interval

I would like to predict a numerical value in the range 0 <= y <= 1. The values are approximately evenly distributed over that interval, and, more importantly, the boundary values 0 and 1 occur quite often: unlike the theoretical case of a continuous distribution, the values to predict come from a finite set, so the interval boundaries have a non-zero probability of occurring.

I will see values from that interval in the training set and I expect values from that interval (including the interval boundaries) to also occur in the output.

Is there any scientific insight into which approach would be best for this? The main recommendations I have seen so far are:

  • sigmoid function: restricts the output to 0…1, but warps the input so that ever larger inputs are needed to get close to 0 or 1, and those boundary values can never actually be reached
  • clamping: pretend nothing is special and clamp the output to [0, 1]
  • penalize out-of-interval values in the cost function, possibly combined with clamping
  • I believe there are more advanced techniques to train a NN with constraints, but I am not sure how exactly they work and which would apply here.
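To make the comparison concrete, here is a minimal NumPy sketch of the first three options. The function names (`sigmoid_output`, `clamped_output`, `penalized_mse`) and the penalty weight `lam` are my own illustrative choices, not standard API; `z` stands for the raw, unbounded network output and `y` for the target.

```python
import numpy as np

def sigmoid_output(z):
    """Squash the raw output into (0, 1); the boundaries are only
    approached asymptotically as z goes to +/- infinity."""
    return 1.0 / (1.0 + np.exp(-z))

def clamped_output(z):
    """Train as if the output were unbounded, then clip predictions
    into [0, 1]; the boundary values are attainable exactly."""
    return np.clip(z, 0.0, 1.0)

def penalized_mse(z, y, lam=1.0):
    """MSE on the raw output plus a penalty for leaving [0, 1];
    the penalty term is zero whenever z stays inside the interval."""
    overshoot = np.maximum(z - 1.0, 0.0) + np.maximum(-z, 0.0)
    return np.mean((z - y) ** 2) + lam * np.mean(overshoot ** 2)
```

Note the trade-off visible here: the sigmoid can never output exactly 0 or 1 (a problem given that those values occur often in the data), while clamping produces them exactly but makes the gradient vanish outside the interval unless it is paired with a penalty like the one above.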

I assume the basic approach should apply to any NN architecture where the target variable comes from a closed interval. Ultimately I want to apply this to a convolutional RNN for predicting time-series data, where the prediction is a sensor measurement from a fixed range.