PyTorch model to access a tensor store on GPU

Hello team,

I have a use case like this: a small tensor store is created on the GPU. You can think of this store as a hashmap where the keys represent entities and the values are a set of feature tensors for each entity. During inference the model receives a request with some context containing request tensors. On every forward() call, the model needs to run operations combining the request tensors with each entity's feature tensors, which live on the GPU. The tensors in the store can be updated continuously by an external process, and new entities with their own sets of feature tensors can also be added externally.

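To make the setup concrete, here is a minimal sketch of how I picture the store, assuming a CUDA device is available (the entity IDs, feature size, and variable names are all made up):

```python
import torch

# Sketch only: the store is conceptually a dict from entity ID to a feature
# tensor that already lives on the GPU.
device = torch.device("cuda")
feature_store = {
    "entity_a": torch.randn(16, device=device),
    "entity_b": torch.randn(16, device=device),
}

# The external process keeps mutating it, e.g. refreshing an existing
# entity's features or adding a brand-new entity:
feature_store["entity_a"] = torch.randn(16, device=device)
feature_store["entity_c"] = torch.randn(16, device=device)
```
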
How can the forward pass of the model refer to these feature tensors as part of its operations, always see their most up-to-date values, and also pick up newly added entities? Conceptually, the interface looks like a pointer from model variables to a GPU data structure.

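Something along these lines is what I am after; the class and attribute names below are made up, and I am not sure whether simply holding a plain Python reference to the store inside the module is the right way to do it:

```python
import torch
import torch.nn as nn

class EntityScorer(nn.Module):
    """Made-up model that reads entity features from the external GPU store."""

    def __init__(self, store: dict):
        super().__init__()
        self.store = store              # shared reference, not a copy
        self.proj = nn.Linear(16, 16)   # feature size of 16 is arbitrary

    def forward(self, request: torch.Tensor) -> dict:
        # Iterate over whatever entities exist at call time, so externally
        # updated values and newly added entities should be picked up.
        return {
            entity_id: (self.proj(request) * feats).sum()
            for entity_id, feats in self.store.items()
        }
```

With a plain attribute like self.store, the feature tensors are not registered as parameters or buffers, which I assume is acceptable since they are owned by the external process rather than the model.
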
If the store were not managed externally, I would have defined the feature tensors as part of the model, maintained a dictionary from entity to feature tensors, and had forward() access that dictionary.

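For reference, this is roughly what that internally managed version would have looked like (again, all names are made up):

```python
import torch
import torch.nn as nn

class InternalStoreModel(nn.Module):
    """Sketch of the variant where the model itself owns the feature tensors."""

    def __init__(self, entity_features: dict):
        super().__init__()
        self.entity_ids = list(entity_features)
        for entity_id, feats in entity_features.items():
            # register_buffer keeps each tensor on the model's device and in
            # its state_dict; the "feat_" naming is just for illustration.
            self.register_buffer(f"feat_{entity_id}", feats)

    def forward(self, request: torch.Tensor) -> dict:
        return {
            entity_id: (request * getattr(self, f"feat_{entity_id}")).sum()
            for entity_id in self.entity_ids
        }
```
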
Thanks