Does anyone know how to write a vectorized version of ||x - w||^2 in PyTorch? I have a working version in NumPy, but there seem to be issues with summing over an axis in PyTorch, so I'm not sure how to translate my code:

```
# W: (d, K) centers as columns; x: (N, d) batch of row vectors
WW = np.sum(np.multiply(W, W), axis=0, dtype=None, keepdims=True)  # (1, K)
XX = np.sum(np.multiply(x, x), axis=1, dtype=None, keepdims=True)  # (N, 1)
# 2 x.W - (WW + XX) = -||x_i - w_k||^2 for every pair (i, k)
Delta_tilde = 2.0*np.dot(x, W) - (WW + XX)
```

e.g. (note that PyTorch's `sum` takes `dim` and `keepdim` rather than NumPy's `axis` and `keepdims`):

`WW = (W * W).sum(dim=0, keepdim=True)`
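For completeness, here is a sketch of the full translation, under the assumption that `x` is `(N, d)` with one input per row and `W` is `(d, K)` with one center per column (those shapes are inferred from the NumPy code, not stated in it), checked against an explicit loop:

```python
import torch

# Assumed shapes: x is (N, d), one input per row; W is (d, K), one center per column.
x = torch.randn(5, 3)
W = torch.randn(3, 4)

WW = (W * W).sum(dim=0, keepdim=True)      # (1, K)
XX = (x * x).sum(dim=1, keepdim=True)      # (N, 1)
Delta_tilde = 2.0 * x.mm(W) - (WW + XX)    # (N, K), equals -||x_i - w_k||^2

# Sanity check against an explicit double loop.
ref = torch.empty(5, 4)
for i in range(5):
    for k in range(4):
        ref[i, k] = -((x[i] - W[:, k]) ** 2).sum()
print(torch.allclose(Delta_tilde, ref, atol=1e-5))  # True
```

The identity used is ||x_i - w_k||^2 = ||x_i||^2 + ||w_k||^2 - 2 x_i·w_k, so `Delta_tilde` holds the negated squared distances for every input/center pair.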

SimonW
(Simon Wang)
January 14, 2018, 2:34am
#3
Assuming batched, `(x - w).pow(2).sum(1, keepdim=True)`

SimonW
(Simon Wang)
February 2, 2018, 4:35pm
#5
I was assuming that w and x are vectors in mini-batches with the first dimension (dim 0) being the batch dimension.

w holds the centers (of, potentially, an RBF), so the number of centers in W need not match the batch size. It's just like the number of filters in a fully connected network.
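To make that concrete, here is a sketch where the number of centers differs from the batch size (the names and sizes are made up for illustration). Broadcasting handles the pairwise distances, and recent PyTorch versions also provide `torch.cdist` for the same computation:

```python
import torch

# Hypothetical sizes: a batch of N=8 inputs of dimension d=3, and K=5 centers.
x = torch.randn(8, 3)        # one input per row
centers = torch.randn(5, 3)  # one center per row

# Broadcasting: (8, 1, 3) - (1, 5, 3) -> (8, 5, 3), then sum over the feature dim.
sq_dists = ((x.unsqueeze(1) - centers.unsqueeze(0)) ** 2).sum(dim=2)  # (8, 5)

# torch.cdist returns (non-squared) Euclidean distances; square to compare.
sq_dists2 = torch.cdist(x, centers) ** 2
print(torch.allclose(sq_dists, sq_dists2, atol=1e-4))  # True
```

Nothing here ties K to N; each row of the result holds one input's squared distances to all K centers.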

iarroyof
(Ignacio Arroyo)
February 11, 2018, 6:43am
#8

Hi @Brando_Miranda, did you solve this question? I have the same doubts… I'd like to see a more detailed example than those offered here.

I'm just using the original code I posted in my question, but translated to PyTorch.

iarroyof
(Ignacio Arroyo)
November 9, 2018, 11:55pm
#10
I’ve trained a model like this:

```
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable
from pdb import set_trace as st

def kernel_product(w, x, mode="gaussian", s=0.1):
    # w: (d, K) centers as columns; x: (N, d) batch of row vectors
    w_i = torch.t(w).unsqueeze(1)      # (K, 1, d)
    x_j = x.unsqueeze(0)               # (1, N, d)
    xmy = ((w_i - x_j) ** 2).sum(2)    # (K, N) pairwise squared distances
    # st()
    if mode == "gaussian":
        K = torch.exp(-(torch.t(xmy) ** 2) / (s ** 2))
    elif mode == "laplace":
        K = torch.exp(-torch.sqrt(torch.t(xmy) + (s ** 2)))
    elif mode == "energy":
        K = torch.pow(torch.t(xmy) + (s ** 2), -.25)
    return K

class MyReLU(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
```

This file has been truncated.