I’m facing the following problem: I have a model from every cross-validation fold and I want to combine them all in a sensible way. I think this is something like an ensemble-model problem.
I’m familiar with some solutions, but I’d be happy with any answer, which is why this question is open-ended.
I think you are mistaking what K-fold cross-validation is for. Remember the difference between the process of training a model, which returns a (hopefully well-trained) model, and the process of evaluating a model, which returns an evaluation of the model (usually a single number).
You can get a brief introduction to K-fold by reading this post.
Basically, you use K-fold CV to evaluate the performance of a model. The final performance is the mean performance (± standard deviation) over all the folds.
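A minimal sketch of that evaluation step, assuming scikit-learn (the dataset and model here are just illustrative placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold CV: each fold yields one score on its held-out part.
scores = cross_val_score(model, X, y, cv=5)

# Report the mean ± standard deviation over the folds,
# not any single fold's model or score.
print(f"accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```

Note that `cross_val_score` trains K temporary models internally and discards them; its output is the evaluation, not a model.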
If you want to obtain the final trained model, retrain your model from scratch on the entire training set (without the K-split); the evaluation you have already done tells you its expected performance on a separate test set.
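Putting both steps together, a hedged sketch (again assuming scikit-learn, with placeholder data and model):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)

# Step 1: K-fold CV on the training set only, to estimate
# the performance you can expect from this model type.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)

# Step 2: the final model is retrained from scratch on the
# whole training set -- no fold models to combine.
model.fit(X_train, y_train)

# The separate test set checks that the CV estimate held up.
test_score = model.score(X_test, y_test)
```

The key design point is that the K fold models are throwaway artifacts of evaluation; only the single model refit on all the training data is kept.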
Maybe this link can also help clarify: