I want to use a 100K-dimensional tensor as input, but it is sparse, so a dense representation uses a lot of memory. Does PyTorch support sparse tensors?
What do you want to give it as input to?
If you elaborate on your use case, we can help better.
We have some sparse tensor support in torch.sparse.
The dataset is a 100K×100K adjacency matrix representing the relationships between network nodes. Each input is a mini-batch of rows of that matrix, e.g. [0 0 0 0 0 0 1 0 … 0 0 0 1 0 0 0 0]. I want to feed the batch into fully connected layers. If the tensor is dense, it uses too much memory.
I have read the docs for torch.sparse, but I'm confused about how to use a torch.sparse.FloatTensor as input. Can it be used just like a conventional tensor? And it doesn't support CUDA yet, does it?
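For what it's worth, here is a minimal sketch of one way to do this (the sizes and variable names are made up for illustration): build the mini-batch of rows as a sparse COO tensor with torch.sparse_coo_tensor, then multiply it against a dense weight matrix with torch.sparse.mm, which is effectively a bias-free first linear layer. Only the first matmul needs sparse support; its output is dense and flows into ordinary layers.

```python
import torch

# Illustrative sizes: 100K input features, a small hidden layer, batch of 2.
num_features = 100_000
hidden = 64
batch = 2

# A mini-batch of two sparse rows, given as (row, col) coordinates of nonzeros.
indices = torch.tensor([[0, 0, 1],         # row indices
                        [6, 42, 99_999]])  # column indices
values = torch.ones(3)
x = torch.sparse_coo_tensor(indices, values, (batch, num_features))

# torch.sparse.mm multiplies a sparse matrix by a dense matrix and
# returns a dense result, so this acts like a bias-free linear layer.
weight = torch.randn(num_features, hidden)
out = torch.sparse.mm(x, weight)
print(out.shape)  # torch.Size([2, 64])
```

The dense `out` can then be passed through the remaining fully connected layers as usual.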
@ynyxxy how did it turn out?
Nope, so I cut down the dataset.
Hello, I have a similar problem: I would like to reuse the code from an introductory tutorial (this one: https://pytorch.org/tutorials/beginner/nlp/deep_learning_tutorial.html) but with some real data.
My dataset is the IMDB movie reviews, which I transform into a bag-of-words representation with tf-idf entries by applying CountVectorizer and then TfidfTransformer, both from scikit-learn.
But I cannot directly use the rows of the sparse matrix I obtain because they don't have a "dim" attribute. More precisely, a row of the output of a TfidfTransformer is of type
<1x45467 sparse matrix of type '<class 'numpy.float64'>'
and you cannot pass it as input to torch.nn.functional.linear.
Any suggestions? Could I transform my input into something like a torch.sparse.FloatTensor?
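One approach that should work (sketched below with a toy 1x45467 CSR row standing in for the real TfidfTransformer output): convert the scipy row to COO format, build a torch sparse tensor from its coordinates, and, since efficiency isn't the goal here, simply densify each mini-batch with to_dense() right before the linear layer.

```python
import numpy as np
import torch
from scipy.sparse import csr_matrix

# Toy stand-in for one row of the TfidfTransformer output: 1 x 45467 CSR.
row = csr_matrix((np.array([0.5, 0.25]),       # tf-idf values
                  (np.array([0, 0]),           # row indices
                   np.array([3, 45_000]))),    # column indices
                 shape=(1, 45_467))

# Convert the scipy CSR row to a torch sparse COO tensor.
coo = row.tocoo()
indices = torch.tensor(np.vstack([coo.row, coo.col]), dtype=torch.long)
values = torch.tensor(coo.data, dtype=torch.float32)
x = torch.sparse_coo_tensor(indices, values, coo.shape)

# For a small model it is simplest to densify just before the layer,
# which gives an ordinary tensor with the "dim" attribute nn.Linear expects.
dense_x = x.to_dense()                 # shape (1, 45467)
linear = torch.nn.Linear(45_467, 10)   # 10 is an arbitrary output size
out = linear(dense_x)
print(out.shape)  # torch.Size([1, 10])
```

Densifying one mini-batch at a time keeps memory bounded by the batch size rather than the whole 25K x 45K matrix.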
By the way, I'm not trying to build the most efficient model; this is just to get comfortable with the PyTorch APIs.