Passing out-of-range input to an Embedding crashes all subsequent CUDA calls

I am curious why passing an input that doesn't fit the embedding's size crashes the whole CUDA context, instead of raising a simple index/size error?

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
embedding = nn.Embedding(10, 3).cuda()
input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]).cuda()
embedding(input) # works fine
e1 = nn.Embedding(4, 3).cuda() # num_embeddings=4, so valid indices are 0..3
# the following line does not trigger any error for now
torch.Tensor(torch.randn(4, 3)).cuda()
# suppose I forgot my embedding size and called e1 with the input above,
# which contains the indices 4, 5 and 9 that are out of range for e1
e1(input) # it won't work, as expected, with the following error:

RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1535493744281/work/aten/src/THC/generic/THCTensorCopy.cpp:20
# but why does the same unrelated call that worked fine above
# now raise the same RuntimeError?
torch.Tensor(torch.randn(4, 3)).cuda()
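For anyone hitting this: CUDA kernels run asynchronously, and once a device-side assert fires, the CUDA context is left in an error state, so every later call on that context reports the same error 59 until the process restarts. A minimal sketch of how to get a readable error instead, by reproducing the lookup on the CPU (variable names here are my own, not from the question):

```python
import torch
import torch.nn as nn

torch.manual_seed(1)

# Same shapes as in the question, but on the CPU, where an out-of-range
# index raises an ordinary Python exception instead of poisoning a CUDA context.
e1 = nn.Embedding(4, 3)  # num_embeddings=4 -> valid indices are 0..3
inp = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])  # 4, 5 and 9 are out of range

# An explicit bounds check works on either device:
if int(inp.max()) >= e1.num_embeddings:
    print("out-of-range index:", int(inp.max()))

# On the CPU the bad lookup fails with a clear, catchable exception
# (IndexError on recent PyTorch releases, RuntimeError on older ones):
try:
    e1(inp)
except (IndexError, RuntimeError) as err:
    print("caught:", err)
```

Running the original GPU script with `CUDA_LAUNCH_BLOCKING=1` set in the environment makes kernel launches synchronous, so the assert is reported at the line that actually triggered it rather than at some later, unrelated call.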