import torch
import pyautogui as mouse
import cv2
from ScreenRecorder import Record, IniRecord, Frame

def start(model):
    sc_ini = Frame()
    monitor = sc_ini.get()
    sc = IniRecord(monitor, 1.6)
    while True:
        frame = sc.getFrame()
        cv2.imshow('frame', frame)
        output_xy, output_click = model.forward(frame)
        # print(output_xy, output_click)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            print("break")
            break

model = torch.load_state_dict(torch.load('Model/model_save'))
But it says:

  File "D:/Nextcloud/Python/Gamebot/Bot.py", line 31, in <module>
    model = torch.load_state_dict(torch.load('Model/model_save'))
AttributeError: module 'torch' has no attribute 'load_state_dict'
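For reference, here is a minimal sketch of the usual loading pattern, assuming 'Model/model_save' holds a saved state_dict and using a placeholder TinyNet class: load_state_dict is a method of the nn.Module instance, not of the torch module, which is what the AttributeError is pointing at.

import torch
import torch.nn as nn

# Placeholder model class; the real model class would be used instead.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
state_dict = torch.load('Model/model_save')  # assumes a state_dict was saved at this path
model.load_state_dict(state_dict)            # called on the model instance, not on torch
model.eval()                                 # switch to inference mode before using it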
I don't know what kind of model and use case you are working on, but it might be worth starting a new thread if you encounter any errors to keep this topic clean.
Hi, many thanks for your reply. My model is:
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import TensorDataset, DataLoader
import torch.optim as optim
import torch.nn as nn
from torch.utils.data.dataset import random_split
from torch.nn import functional as F
import matplotlib.pyplot as plt
from torch.autograd import Variable

Path = root_dir1 + "/Fold_" + str(FoldNum) + "NumDarw=" + str(NumDraw) + "Iteration" + str(Iteration) + ".pth"
checkpoint = {"model_state_dict": model.state_dict(), "optimizer_state_dict": optimizer.state_dict()}
# print(checkpoint)
torch.save(checkpoint, Path)  # save a checkpoint every 10 iterations
You are currently initializing model and optimizer as empty Python lists, which causes this error.
Initialize both as you did in your training script, i.e.:
model = ConvNet(...)
optimizer = optim.SGD(model.parameters(), ...)
# Now load the state_dicts
model.load_state_dict(...)
optimizer.load_state_dict(...)
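Spelled out against the checkpoint dict saved above, a sketch could look like the following; SmallConvNet, the file name, and the hyperparameters are placeholders standing in for the real ConvNet and whatever arguments the training script used:

import torch
import torch.nn as nn
import torch.optim as optim

# Stand-in for the real ConvNet; it must be constructed with the same
# arguments that were used when the checkpoint was saved.
class SmallConvNet(nn.Module):
    def __init__(self, out_channels):
        super().__init__()
        self.layer1 = nn.Sequential(nn.Conv3d(1, out_channels, kernel_size=3), nn.ReLU())

    def forward(self, x):
        return self.layer1(x)

model = SmallConvNet(out_channels=32)                # same architecture arguments as at save time
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # same optimizer class as in the training script

checkpoint = torch.load('checkpoint.pth')            # placeholder path to the saved dict
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])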
I used this, but it gives me this error:

Error(s) in loading state_dict for ConvNet:
    size mismatch for layer1.0.weight: copying a param with shape torch.Size([32, 1, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([2, 1, 3, 3, 3]).
    size mismatch for layer1.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([2]).
model = ConvNet(2, 64, 3, 3, 300, 20)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
TargetWholev2 = []
for Iteration1 in range(9):
    Path1 = root_dir2 + "/Fold_" + str(FoldNum) + "NumDarw=" + str(NumDraw) + "Iteration" + str(Iteration1+1) + ".pth"
The error points to different shapes of your parameters, which means that you've initialized the model in a different way.
Could you check how you've initialized the ConvNet before saving the state_dict and use the same arguments?
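For illustration, assuming the first ConvNet argument sets the number of output channels of layer1 (which is what the shapes in the error message suggest), the mismatch can be reproduced with two bare Conv3d layers:

import torch.nn as nn

saved_layer = nn.Conv3d(1, 32, kernel_size=3)    # how the checkpointed model was built
current_layer = nn.Conv3d(1, 2, kernel_size=3)   # how ConvNet(2, 64, 3, 3, 300, 20) builds it now

print(saved_layer.weight.shape)    # torch.Size([32, 1, 3, 3, 3]) -> shape stored in the checkpoint
print(current_layer.weight.shape)  # torch.Size([2, 1, 3, 3, 3])  -> shape in the current model
# load_state_dict can only copy tensors with matching shapes, so the
# constructor arguments must be identical to the ones used before saving.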
Excuse me, the training time was 4 days for just one draw, even though the CNN was shallow. How can I speed up the GPU? Is the number of workers in the DataLoader important for speed?
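For context, these are the DataLoader settings the question is about; the dummy dataset, batch size, and worker count below are illustrative only:

import torch
from torch.utils.data import TensorDataset, DataLoader

# Dummy dataset standing in for the real training data.
dataset = TensorDataset(torch.randn(1000, 1, 16, 16, 16), torch.randint(0, 2, (1000,)))

loader = DataLoader(
    dataset,
    batch_size=64,    # larger batches keep the GPU busier, memory permitting
    shuffle=True,
    num_workers=4,    # worker processes that prepare the next batches on the CPU
    pin_memory=True,  # speeds up host-to-GPU transfers
)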