Yes, the derived class will override methods, in this case __init__ and forward.
Note that inside the __init__ we are calling the super().__init__() method to initialize the base class.
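As a rough sketch of this pattern (Base, Derived, and the fc layer are placeholder names, not the actual model):

import torch.nn as nn

class Base(nn.Module):
    def __init__(self):
        super(Base, self).__init__()
        self.fc = nn.Linear(10, 2)   # module created by the base class

    def forward(self, x):
        return self.fc(x)

class Derived(Base):
    def __init__(self):
        super(Derived, self).__init__()   # runs Base.__init__, so self.fc exists here too

    def forward(self, x):
        # overrides Base.forward, but still uses the inherited module
        return self.fc(x) * 2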
@ptrblck
So then, to override the base class, all I need to do is pass the base class as an argument when I define the class?
Here it is the (myVgg16) part in class myVgg16Headed(myVgg16):.
I don’t understand model_headed = myVgg16Headed(), because unlike the class definition, when I make a myVgg16Headed model object it doesn’t have an argument, just void.
And one more question: can I check the actual weight values in a saved weight file?
If so, how can I do it?
Yes, this is called inheritance and is described here.
What do you mean by “void”?
You can define the arguments in its __init__ method, as you would when deriving from nn.Module.
Just load the state_dict and print the values:
state_dict = torch.load(PATH)
print(state_dict)
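Since the loaded state_dict is an ordered dict mapping parameter names to tensors, you can also iterate over it (the key names depend on your model definition):

for name, param in state_dict.items():
    print(name, param.shape)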
Maybe “empty” is a better word than “void”.
I mean, when I define the myVgg16Headed class, it has an argument: class myVgg16Headed(myVgg16): <- like this, (myVgg16).
But when I make it an object, like model_headed = myVgg16Headed(), myVgg16Headed has no argument: () <- like this, “empty”.
This is the point I cannot understand.
The arguments you are passing into the initialization of your class are defined in the __init__ method, as seen here:
class myVgg16Headed(myVgg16):
    def __init__(self, num_classes=10):
You can create the model as model_headed = myVgg16Headed(num_classes=10).
If you don’t pass the argument, the default value will be used.
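For example, with the definition above:

model_a = myVgg16Headed()                # falls back to the default num_classes=10
model_b = myVgg16Headed(num_classes=20)  # passes an explicit value instead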
Umm, so then, what if I erase (myVgg16)?
class myVgg16Headed():
    def __init__(self, num_classes=10):
Like this. What happens? Can’t I override the myVgg16 class’s forward method anymore?
Sure you can create a new class, but you will lose all modules defined in the base class.
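A small plain-Python sketch of the difference (head here just stands in for the modules myVgg16 would define):

class Base:
    def __init__(self):
        self.head = 'some module'   # stands in for the modules defined in myVgg16

class Derived(Base):
    def __init__(self):
        super().__init__()          # Base.__init__ runs, so self.head exists

class Standalone():
    def __init__(self):
        pass                        # no base class: self.head is never created

print(Derived().head)               # 'some module'
print(Standalone().head)            # AttributeError: no attribute 'head'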
I got it ^^.
Thank you for all your kindness.
Sincerely.
@ptrblck
Hello^^.
Now I am trying to crop from 2x16x32 to 32x1x32.
So, how can I make the code below more efficient?
Thank you.
class MyDataset():
    def __init__(self, cropped_img_vectors, targets):
        self.data_0 = cropped_img_vectors[0]
        self.data_1 = cropped_img_vectors[1]
        self.data_2 = cropped_img_vectors[2]
        self.data_3 = cropped_img_vectors[3]
        self.data_4 = cropped_img_vectors[4]
        self.data_5 = cropped_img_vectors[5]
        self.data_6 = cropped_img_vectors[6]
        self.data_7 = cropped_img_vectors[7]
        self.data_8 = cropped_img_vectors[8]
        self.data_9 = cropped_img_vectors[9]
        self.data_10 = cropped_img_vectors[10]
        self.data_11 = cropped_img_vectors[11]
        self.data_12 = cropped_img_vectors[12]
        self.data_13 = cropped_img_vectors[13]
        self.data_14 = cropped_img_vectors[14]
        self.data_15 = cropped_img_vectors[15]
        self.data_16 = cropped_img_vectors[16]
        self.data_17 = cropped_img_vectors[17]
        self.data_18 = cropped_img_vectors[18]
        self.data_19 = cropped_img_vectors[19]
        self.data_20 = cropped_img_vectors[20]
        self.data_21 = cropped_img_vectors[21]
        self.data_22 = cropped_img_vectors[22]
        self.data_23 = cropped_img_vectors[23]
        self.data_24 = cropped_img_vectors[24]
        self.data_25 = cropped_img_vectors[25]
        self.data_26 = cropped_img_vectors[26]
        self.data_27 = cropped_img_vectors[27]
        self.data_28 = cropped_img_vectors[28]
        self.data_29 = cropped_img_vectors[29]
        self.data_30 = cropped_img_vectors[30]
        self.data_31 = cropped_img_vectors[31]
        self.targets = targets
How would you like to crop the first tensor so that you get the second one?
It looks more like a reshape/view operation than a crop.
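For example, if the two 16x32 halves were stacked into a single tensor, a view would produce the 32 single-pixel rows directly (a sketch, assuming the halves are stored top half first):

import torch

x = torch.randn(2, 16, 32)   # two stacked 16x32 halves
rows = x.view(32, 1, 32)     # 32 rows of height 1, upper half first
print(rows.shape)            # torch.Size([32, 1, 32])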
@ptrblck
I already finished the crop process.
From the half crop (2x(16x32)):
def crop_half(img):
    up = transforms.functional.crop(img, 0, 0, 16, 32)
    down = transforms.functional.crop(img, 16, 0, 16, 32)
    return up, down

#train
train_up = []
train_down = []
for i in range(train_dataset.data.shape[0]):
    img = ToPIL(train_dataset.data[i])
    up, down = crop_half(img)
    train_up.append(ToTensor(up))
    train_down.append(ToTensor(down))
train_up = torch.stack(train_up)
train_down = torch.stack(train_down)
To the 1-pixel crop (32x(1x32)):
def crop_1pixel(img):
    cropped_img_vectors = []
    for i in range(32):
        cropped_img_vectors.append(transforms.functional.crop(img, i, 0, 1, 32))
    return cropped_img_vectors

#train
for i in range(32):
    globals()["train_{}".format(i)] = []
for i in tqdm(range(train_dataset.data.shape[0])):
    img = ToPIL(train_dataset.data[i])
    cropped_img_vectors = crop_1pixel(img)
    for j in range(32):
        globals()["train_{}".format(j)].append(ToTensor(cropped_img_vectors[j]))
for i in tqdm(range(32)):
    globals()["train_{}".format(i)] = torch.stack(globals()["train_{}".format(i)])
train_cropped_1pixel_dataset = [globals()["train_{}".format(i)] for i in range(32)]
Like this.
But after this, when I try the code below,
class MyDataset():
    def __init__(self, cropped_1pixel_dataset, targets):
        for i in range(32):
            globals()["self.data_{}".format(i)] = cropped_1pixel_dataset[i]
        self.targets = targets

    def __getitem__(self, index):
        for i in range(32):
            globals()["data_{}".format(i)] = cropped_1pixel_dataset[i][index]
        y = self.targets[index]
        return [globals()["data_{}".format(i)] for i in range(32)], y

    def __len__(self):
        return len(self.data_0)

# train
train_dataset = MyDataset(train_cropped_1pixel_dataset, train_dataset.targets)
train_loader = torch.utils.data.DataLoader(dataset = train_dataset,
                                           batch_size = batch_size,
                                           shuffle = True)
it throws this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-960ee70394c1> in <module>
3 train_loader = torch.utils.data.DataLoader(dataset = train_dataset,
4 batch_size = batch_size,
----> 5 shuffle = True)
~/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py in __init__(self, dataset, batch_size, shuffle, sampler, batch_sampler, num_workers, collate_fn, pin_memory, drop_last, timeout, worker_init_fn)
800 if sampler is None:
801 if shuffle:
--> 802 sampler = RandomSampler(dataset)
803 else:
804 sampler = SequentialSampler(dataset)
~/.local/lib/python3.5/site-packages/torch/utils/data/sampler.py in __init__(self, data_source, replacement, num_samples)
58
59 if self.num_samples is None:
---> 60 self.num_samples = len(self.data_source)
61
62 if not isinstance(self.num_samples, int) or self.num_samples <= 0:
<ipython-input-10-293dc919d173> in __len__(self)
12
13 def __len__(self):
---> 14 return len(self.data_0)
AttributeError: 'MyDataset' object has no attribute 'data_0'
I think maybe it's because of the self in globals()["self.data_{}".format(i)].
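(That guess is right: globals()["self.data_0"] = ... creates a module-level global whose name merely contains a dot; it never sets an attribute on the instance. A tiny sketch:)

class A:
    def __init__(self):
        globals()["self.data_0"] = 1   # goes into the module namespace, not onto the object

a = A()
print(globals()["self.data_0"])        # 1
print(a.data_0)                        # AttributeError: 'A' object has no attribute 'data_0'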
So I just solved it in an inefficient way, like this:
class MyDataset():
    def __init__(self, cropped_img_vectors, targets):
        self.data_0 = cropped_img_vectors[0]
        self.data_1 = cropped_img_vectors[1]
        self.data_2 = cropped_img_vectors[2]
        self.data_3 = cropped_img_vectors[3]
        self.data_4 = cropped_img_vectors[4]
        self.data_5 = cropped_img_vectors[5]
        self.data_6 = cropped_img_vectors[6]
        self.data_7 = cropped_img_vectors[7]
        self.data_8 = cropped_img_vectors[8]
        self.data_9 = cropped_img_vectors[9]
        self.data_10 = cropped_img_vectors[10]
        self.data_11 = cropped_img_vectors[11]
        self.data_12 = cropped_img_vectors[12]
        self.data_13 = cropped_img_vectors[13]
        self.data_14 = cropped_img_vectors[14]
        self.data_15 = cropped_img_vectors[15]
        self.data_16 = cropped_img_vectors[16]
        self.data_17 = cropped_img_vectors[17]
        self.data_18 = cropped_img_vectors[18]
        self.data_19 = cropped_img_vectors[19]
        self.data_20 = cropped_img_vectors[20]
        self.data_21 = cropped_img_vectors[21]
        self.data_22 = cropped_img_vectors[22]
        self.data_23 = cropped_img_vectors[23]
        self.data_24 = cropped_img_vectors[24]
        self.data_25 = cropped_img_vectors[25]
        self.data_26 = cropped_img_vectors[26]
        self.data_27 = cropped_img_vectors[27]
        self.data_28 = cropped_img_vectors[28]
        self.data_29 = cropped_img_vectors[29]
        self.data_30 = cropped_img_vectors[30]
        self.data_31 = cropped_img_vectors[31]
        self.targets = targets
So I just want to know how to shorten this code using a for loop.
Thank you.
Try to use __setattr__ and __getattribute__:
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self):
        for i in range(32):
            self.__setattr__('data_{}'.format(i), torch.tensor(i))

    def __getitem__(self, index):
        return [self.__getattribute__('data_{}'.format(i)) for i in range(32)]

    def __len__(self):
        return len(self.data_0)

dataset = MyDataset()
I’ve removed some undefined code so that I could write this small example.
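As a side note, the built-in setattr and getattr functions do the same job as the dunder calls, e.g.:

class Holder:
    pass

h = Holder()
for i in range(3):
    setattr(h, 'data_{}'.format(i), i)   # equivalent to h.__setattr__(...)
print(getattr(h, 'data_1'))              # equivalent to h.__getattribute__(...); prints 1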
@ptrblck
It works. Thanks^^
But I think I found a mistake in the code.
class myVgg16Headed(myVgg16):
    def __init__(self, num_classes=10):
        super(myVgg16Headed, self).__init__(num_classes)

    def forward(self, x_up, x_down):
        x_up = self.head(x_up)
        x_down = self.head(x_down)
        x = torch.cat((x_up, x_down), dim=2)
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x
In the forward method, is it right to change dim=2 to dim=1?
Because I want to make two 3x16x32 tensors into one 3x32x32 with cat.
Inside the forward method your data will contain an additional batch dimension in dim0, so I think you should use dim=2 to concatenate your image parts.
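A quick shape check (the batch size of 8 is just for illustration):

import torch

x_up = torch.randn(8, 3, 16, 32)      # batch of upper halves
x_down = torch.randn(8, 3, 16, 32)    # batch of lower halves
x = torch.cat((x_up, x_down), dim=2)  # concatenates along the height dimension
print(x.shape)                        # torch.Size([8, 3, 32, 32])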
@ptrblck
Ah!!! got it. Thank you^^
@ptrblck
Hi, I have one question.
Let’s suppose we use one filter whose size is 1x3.
Situation A: the input size is (3x32x32) -> (CxWxH).
Situation B: the input size is (3x32x1)x32 -> like my cropped images, with a concat before the second convolution.
Should A and B perform the same?
Thank you.
I assume you are reusing the kernel in B for each of the 32 inputs?
The result should be different, since the kernel will operate differently on the inputs.
Compare the kernel size with the input size: the kernel will just use a single column (width) while it has three.
By performance do you mean the accuracy or the speed?