Hello.
Is it possible to use PyTorch to run simulations? I would like to use my GPU, if possible, to make them faster.
Here is an example of one of my simulations, which uses a Student's t distribution:
import numpy as np
import pandas as pd

starting_cash = 38927.72
sim = pd.DataFrame()
iterations = 50000
time_horizon = 504
annual_investment = 0

# portfolio weights taken from the max-Sharpe portfolio computed earlier
weights = np.asarray(sharpe_portfolio.iloc[:, 3:].values).ravel()

for x in range(iterations):
    # starting allocation of cash across the assets
    starting_allocation = starting_cash * weights
    # collect the total portfolio value at each time step
    stream = []
    for i in range(time_horizon):
        # random return drawn from a Student's t distribution, with the
        # degrees of freedom taken from the historical return sample
        return_change = np.random.standard_t(df=df_return.shape[0] - 1)
        end = np.round(starting_allocation * (1 + return_change) + annual_investment, 2)
        stream.append(end.sum())
        starting_allocation = end
    sim[x] = stream
How could I turn this into a PyTorch module that runs on the GPU?
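For reference, here is a rough sketch of what I was imagining, where all 50,000 paths are simulated in parallel as one tensor and only the 504 time steps are looped over in Python. I'm assuming torch.distributions.StudentT is the right replacement for np.random.standard_t, that sharpe_portfolio and df_return are defined as above, and I dropped the per-step rounding to two decimals:

import torch
import pandas as pd

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

starting_cash = 38927.72
iterations = 50000
time_horizon = 504
annual_investment = 0.0

# same inputs as the NumPy version (sharpe_portfolio and df_return are
# assumed to be defined as above)
weights = torch.tensor(
    sharpe_portfolio.iloc[:, 3:].values, dtype=torch.float32, device=device
).ravel()
dof = torch.tensor(float(df_return.shape[0] - 1), device=device)
t_dist = torch.distributions.StudentT(df=dof)

# one row per simulated path: shape (iterations, n_assets)
allocation = (starting_cash * weights).repeat(iterations, 1)
stream = torch.empty(time_horizon, iterations, device=device)

for i in range(time_horizon):
    # one t-distributed return per path per step, sampled directly on the
    # device, applied to every asset in that path (as in the NumPy version)
    return_change = t_dist.sample((iterations, 1))
    allocation = allocation * (1 + return_change) + annual_investment
    stream[i] = allocation.sum(dim=1)

# move the result back to the CPU; rows are time steps, columns are paths
sim = pd.DataFrame(stream.cpu().numpy())

Is this roughly the right approach, and would the sampling actually happen on the GPU this way?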