# How to implement a low-dimensional embedding in PyTorch

Assume I have a high-dimensional vector `v` of shape `(2000, 1)`, representing an image feature vector, and I want to compute a low-dimensional embedding for `v` as follows:

`f = E * v`, where `E` is the embedding matrix of size `(30, 2000)`; the new vector `f` is `(30, 1)`, which is much smaller than `v`.

My question is how to implement the above idea in PyTorch, such that the embedding matrix `E` can be trained on data with some loss function.

```python
import torch
from torch import nn

v = torch.randn(2000, 1)
```

PS: the whole process is that the vector `v` goes through an embedding layer, which then outputs the low-dimensional vector `f`.
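For reference, here is a minimal sketch of one way this could be done, assuming the map `f = E * v` is just a learnable linear transform: `nn.Linear(2000, 30, bias=False)` stores its weight as a `(30, 2000)` matrix, which plays the role of `E`, and it is trained like any other parameter (the target and optimizer below are placeholders for illustration only).

```python
import torch
from torch import nn

# Learnable linear map E of shape (30, 2000), so f = E @ v.
# nn.Linear stores its weight as (out_features, in_features).
embed = nn.Linear(2000, 30, bias=False)

v = torch.randn(2000)   # feature vector (flattened from (2000, 1))
f = embed(v)            # low-dimensional embedding, shape (30,)

# E is trained like any other parameter: pick a loss and an optimizer.
target = torch.randn(30)  # dummy target, just for illustration
optimizer = torch.optim.SGD(embed.parameters(), lr=0.01)

loss = nn.functional.mse_loss(f, target)
loss.backward()
optimizer.step()
```

Note that `nn.Embedding` is a lookup table indexed by integer IDs, which is not what is wanted here; for projecting a dense feature vector down to a lower dimension, a bias-free `nn.Linear` is the usual choice.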