Dice loss invalid pointer error

I am trying to implement Dice loss as a custom loss function, and I am getting an invalid pointer error when I run it. Below is my Dice loss implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

threshold = 0.5  # defined globally in my script; 0.5 used here as an example value

class My_loss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(My_loss, self).__init__()

    def forward(self, f, r):
        # binarize prediction and target
        f = (F.sigmoid(f) > threshold).float()
        r = (F.sigmoid(r) > threshold).float()
        intersection = torch.mul(f, r)
        # Dice score with +1 smoothing, turned into a loss
        score = 2 * (torch.sum(intersection) + 1) / (torch.sum(f) + torch.sum(r) + 1)
        score = 1 - torch.sum(score)
        return score

The error I am getting:

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1518238409320/work/torch/lib/THC/generic/THCStorage.cu line=58 error=2 : out of memory
*** Error in `python': free(): invalid pointer: 0x00007f6688c14ec0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f66910747e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f669107d37a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f669108153c]
/data/anaconda2/lib/python2.7/site-packages/torch/_thnn/_THCUNN.so(+0x80a6)[0x7f663a8a70a6]
/data/anaconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x79e)[0x7f6691dd6615]
/data/anaconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x7e9)[0x7f6691dd84e9]
/data/anaconda2/bin/../lib/libpython2.7.so.1.0(+0x6cfda)[0x7f6691d60fda]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f669101d830]
python(+0x87f)[0x55e23909d87f]
======= Memory map: ========
10000000-10001000 rw-s 00000000 00:06 646                        /dev/nvidia1
10001000-10002000 rw-s 00000000 00:06 646                        /dev/nvidia1
10002000-10003000 rw-s 00000000 00:06 646                        /dev/nvidia1

7f6645bbe000-7f6645dbd000 ---p 0001d000 fc:00 191169093          /data/anaconda2/lib/python2.7/site-packages/sip.so
7f6645dbd000-7f6645dbe000 r--p 0001c000 fc:00 191169093          /data/anaconda2/lib/python2.7/site-packages/sip.so
Aborted (core dumped)

It looks like an out of memory error. Could you lower your batch size and run the code again?

I am using batch_size = 1: one predicted segmentation map and the corresponding ground truth image.

Could you check your GPU memory with nvidia-smi?
Your code works fine:

output = torch.randn(1, 1, 24, 24)
target = torch.empty(1, 1, 24, 24).random_(2)
criterion = My_loss()
criterion(output, target)
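
If the GPU looks free there, you could also print the memory PyTorch itself has allocated right before the crash to see where it grows. A small sketch, assuming a CUDA build and a PyTorch version that provides these helpers:

import torch

# memory PyTorch has currently allocated / cached on the default GPU (in MB)
print('allocated: {:.1f} MB'.format(torch.cuda.memory_allocated() / 1024**2))
print('cached:    {:.1f} MB'.format(torch.cuda.memory_cached() / 1024**2))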

The GPU is free and the model runs fine with MSELoss(). Here is the function where I am calling the dice loss.

def train_g(optimizer, fake_data, real_data):
    N = 28 * 28 * 512
    optimizer.zero_grad()
    # discriminator prediction on the generated data
    predict = d(fake_data)
    # BCE loss against a target of ones plus the dice loss defined above
    error = loss(predict, one_target(N).type(dtype)) + dice_loss(fake_data.view(-1, 1), real_data.view(-1, 1))
    error.backward()
    optimizer.step()
    return error

Ok, thanks for the info.
What are you doing with error after returning it?
Are you storing it somewhere?
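
If you are appending the returned error tensor to a list (e.g. for plotting), the whole computation graph is kept alive with it, and GPU memory can fill up after a few iterations. A small sketch of the pattern I would use instead — the surrounding loop and names are just placeholders for your setup:

losses = []
for epoch in range(num_epochs):
    error = train_g(optimizer, fake_data, real_data)
    # store a plain Python number, not the tensor that holds the graph
    # (on PyTorch 0.3.x use error.data[0] instead of error.item())
    losses.append(error.item())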

Also, could you just set your loss to:

error = dice_loss(fake_data.view(-1,1), real_data.view(-1,1))
error.backward()

Do you get any error?

I am not storing the error; it is returned to train() and there I simply print it.
If I set error = dice_loss(...) alone, the model works fine. But I also need the other loss, which is a BCELoss.
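
For reference, the relevant part of train() is basically just this (simplified; fake_data and real_data come from the generator and the data loader each iteration):

for epoch in range(num_epochs):
    error = train_g(optimizer, fake_data, real_data)
    # only printed, never stored in a list
    print('epoch {}: generator loss {}'.format(epoch, error))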