Wasserstein loss layer/criterion

Hi @tom,

Wow, I really liked the post! Your knowledge is very deep, thank you very much for sharing it!

Perhaps it’s possible to compute gradients of gradients by calling .backward twice? I could definitely be wrong, but here are a couple of posts you might like:
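A minimal sketch of what I mean (a hypothetical toy function, not your actual loss): `torch.autograd.grad` with `create_graph=True` keeps the graph of the first derivative alive, so a second differentiation pass gives the gradient of a gradient.

```python
import torch

# Toy scalar function f(x) = x**3; we want d2f/dx2 via double backward.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

# First derivative: create_graph=True retains the graph so we can
# differentiate the result again.
(grad_x,) = torch.autograd.grad(y, x, create_graph=True)   # 3 * x**2 = 12
(grad2_x,) = torch.autograd.grad(grad_x, x)                # 6 * x = 12
```

The same `create_graph=True` trick is what makes gradient-penalty terms (a norm of a gradient inside the loss) differentiable end to end.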

The toy examples in the improved GAN paper are cute, and it would be fun to implement them :smile:

I guessed you had a physics background, and we both work in credit/actuarial science :smile: - seems like we are almost twins :smile:

I’m sure it would be possible to train an RNN or MLP to emulate many internal/proprietary credit/actuarial loss distribution models, which involve costly repeated stochastic sampling. A conditional Wasserstein GAN seems perfect for that !!! Once the analyst has built a slow, detailed stochastic model, emulating it by training a conditional GAN on its output, and then sampling from the trained conditional GAN, seems like a good strategy for your industry? They should give roughly the same numbers?
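To illustrate the sampling side of the idea (all names, sizes, and the tiny untrained network below are purely illustrative assumptions, not anything from your models): once a conditional generator is trained, drawing many emulated loss scenarios per portfolio condition is a single cheap forward pass.

```python
import torch
import torch.nn as nn

# Hypothetical conditional generator G(condition, noise) -> simulated loss.
# In practice G would be trained as a conditional (W)GAN on the slow
# stochastic model's output; here it is untrained and stands in as a stub.
cond_dim, noise_dim = 4, 8
G = nn.Sequential(nn.Linear(cond_dim + noise_dim, 32),
                  nn.ReLU(),
                  nn.Linear(32, 1))

def emulate_losses(conditions, n_samples):
    """Draw n_samples emulated loss values per row of `conditions`."""
    c = conditions.repeat_interleave(n_samples, dim=0)   # repeat each condition
    z = torch.randn(c.size(0), noise_dim)                # fresh noise per sample
    with torch.no_grad():                                # inference only
        out = G(torch.cat([c, z], dim=1))
    return out.view(conditions.size(0), n_samples)

# e.g. 1000 emulated draws for each of 3 portfolio conditions
samples = emulate_losses(torch.randn(3, cond_dim), n_samples=1000)
```

From `samples` one could then read off tail quantiles or other summary statistics that the slow stochastic model would otherwise have to produce by repeated simulation.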

I share your view that the accuracy is not so good right now, but I guess that will change in the future. For the time being they should be adequate for reporting purposes (nobody reads those reports anyway :wink:).

I wish you the best holidays too my friend,

best regards,

Ajay