My understanding was that in PyTorch 1.0 you access the Python scalar of a one-element tensor by using `.item()`.

In the simple example below, something still seems to keep the Tensor type around when I try to use the values in a pandas Series (for easy graphing purposes). Any idea why I'm getting this error?

```
import torch
import pandas as pd

s_list = [s for s in range(-20, 21)]
p_list = [5, 10]
v = []
perms = [(s, p) for s in s_list for p in p_list]
for s, p in perms:
    _s = torch.Tensor([s])
    _p = torch.Tensor([p])
    x = torch.pow(10, (_s - _p) / 10)
    v.append(x.item())  # .item() should give a Python float
print(type(v))
print(type(v[0]))
arr = pd.Series(v)
```

Output:

```
<class 'list'>
<class 'float'>
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-a85449a85756> in <module>()
18 print(type(v))
19 print(type(v[0]))
---> 20 arr = pd.Series(v)
AttributeError: 'Tensor' object has no attribute 'Series'
```
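For what it's worth, the traceback complains that a `'Tensor' object has no attribute 'Series'`, which would happen if the name `pd` no longer refers to the pandas module at that point, e.g. if it was rebound to a tensor earlier in the notebook session. A minimal sketch of that failure mode (the rebinding here is hypothetical, just to reproduce the shape of the error):

```
import pandas as pd

v = [1.0, 2.0, 3.0]
arr = pd.Series(v)   # works while `pd` is still the pandas module
print(type(arr))     # <class 'pandas.core.series.Series'>

# If `pd` gets rebound to some other object elsewhere in the session
# (a tensor, an array, anything without a .Series attribute)...
pd = arr             # hypothetical rebinding, stands in for `pd = some_tensor`
try:
    pd.Series(v)
except AttributeError as e:
    print(e)         # "'<something>' object has no attribute 'Series'"
```

Restarting the kernel (or re-running `import pandas as pd`) clears this kind of shadowing.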

On a more in-depth note: I will precompute all of these values and store them in a dictionary. When training my net I will use them in my loss function, as unique constants tied to each training sample in a batch, multiplied against `functional.mse_loss(output, target, reduction='none')`. I want to make sure that there are no underlying issues with these constants being modified when I call `loss.backward()`. Any ideas on this?
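Concretely, something like the sketch below is what I have in mind (`weights`, `output`, and `target` are placeholder names, with `weights` standing in for the precomputed per-sample constants). My understanding is that since the constants are leaf tensors with `requires_grad=False` (the default), `backward()` neither computes gradients for them nor changes their values:

```
import torch
import torch.nn.functional as F

# Precomputed per-sample constants (plain values, no grad tracking)
weights = torch.tensor([0.5, 2.0, 1.0])

output = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
target = torch.tensor([1.5, 2.0, 2.0])

# Unreduced per-sample losses, one value per sample
per_sample = F.mse_loss(output, target, reduction='none')

# Weight each sample's loss by its constant, then reduce
loss = (weights * per_sample).mean()
loss.backward()

# The constants get no gradient and keep their original values
print(weights.grad)   # None
print(weights)        # tensor([0.5000, 2.0000, 1.0000])
print(output.grad)    # gradients flow to `output` as usual
```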