Dear PyTorch Community,
I am writing to propose the addition of a new operator to PyTorch: `torch.range_map`. This operator would provide a convenient and efficient way to map the values of a tensor from an arbitrary range to a specified `[min_val, max_val]` range.
Proposed Operator: `torch.range_map`
- Functionality: This operator would take an input tensor `x` and map its values to a target range defined by `min_val` and `max_val`. The mapping can be performed using different methods, specified by the `mapping_type` parameter.
- Parameters:
  - `x` (Tensor): The input tensor to be range-mapped.
  - `min_val` (float): The minimum value of the target range.
  - `max_val` (float): The maximum value of the target range.
  - `mapping_type` (str, optional, default: `'linear'`): Specifies the type of mapping to be used. Possible values could include:
    - `'linear'`: Linear scaling to the target range.
    - `'sigmoid'`: Mapping using the sigmoid function, then scaled to the target range.
    - `'tanh'`: Mapping using the tanh function, then scaled to the target range.
    - (Potentially other mapping types as needed)
- Return Value: A tensor with the same shape as `x`, where the values have been mapped to the `[min_val, max_val]` range.
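For comparison, one possible semantics for the `'linear'` mapping (an assumption on my part, since the proposal does not fix the source range) is a min-max rescale from the input tensor's observed minimum and maximum:

```python
import torch

def manual_linear_range_map(x, min_val, max_val):
    # Hypothetical 'linear' semantics: rescale from the input's observed
    # min/max to [min_val, max_val]. A real operator would need to handle
    # the degenerate case x.max() == x.min().
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min) * (max_val - min_val) + min_val

x = torch.randn(100)
out = manual_linear_range_map(x, -5.0, 10.0)
# After rescaling, the extremes of out coincide with the target bounds.
```

An alternative design would let callers pass the source range explicitly (e.g. `src_min`, `src_max`), which avoids a data-dependent reduction; that is an open API question.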
Example Use Case and Code Comparison:
Currently, to perform range mapping, users often need to manually implement the logic using basic PyTorch operators. For example, to map values into a target range using `tanh` (which first squashes inputs into (-1, 1), after which they are scaled and shifted), one might write:
```python
import torch

def manual_tanh_range_map(x, min_val, max_val):
    return ((max_val - min_val) / 2) * torch.tanh(x) + ((max_val + min_val) / 2)

x = torch.randn(5)
min_value = -5.0
max_value = 10.0
manual_result = manual_tanh_range_map(x, min_value, max_value)
print("Manual result:", manual_result)
```
With the proposed `torch.range_map` operator, the code would become much cleaner and more readable:
```python
import torch

x = torch.randn(5)
min_value = -5.0
max_value = 10.0
# Assuming torch.range_map exists
proposed_result = torch.range_map(x, min_value, max_value, mapping_type='tanh')
print("Proposed result:", proposed_result)
```
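Whichever mapping is chosen, one property the operator should guarantee (and that its test suite could assert) is that every output lands inside `[min_val, max_val]`. A quick sanity check of the manual tanh version above:

```python
import torch

def manual_tanh_range_map(x, min_val, max_val):
    return ((max_val - min_val) / 2) * torch.tanh(x) + ((max_val + min_val) / 2)

x = torch.randn(1000)
out = manual_tanh_range_map(x, -5.0, 10.0)
# Every value lies inside the target range, since tanh maps into (-1, 1).
assert out.min().item() >= -5.0 and out.max().item() <= 10.0
```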
Reasons for Adding `torch.range_map`:
- Improved Code Readability and Conciseness: A dedicated `torch.range_map` operator would significantly improve the readability and conciseness of code that involves range mapping, making it easier to understand and maintain.
- Potential Performance Optimization: While manual implementations are already efficient, a dedicated operator could potentially be further optimized at the C++/CUDA level by the PyTorch team, leading to slight performance improvements.
- Lower Learning Curve for Beginners: For users new to PyTorch, a dedicated operator would make range mapping more discoverable and easier to use, reducing the learning curve.
- Value in Various Domains: Range mapping is a common operation in various domains, including:
- Reinforcement Learning: Normalizing action spaces or state features.
- Numerical Simulation: Scaling physical quantities to appropriate ranges.
- Data Preprocessing: Feature scaling and normalization.
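As a concrete illustration of the preprocessing case, the `'sigmoid'` variant could be sketched the same way as the tanh example (again, hypothetical semantics): squash each value into (0, 1) with `torch.sigmoid`, then rescale to the target range.

```python
import torch

def manual_sigmoid_range_map(x, min_val, max_val):
    # Squash to (0, 1) with sigmoid, then rescale to [min_val, max_val].
    return torch.sigmoid(x) * (max_val - min_val) + min_val

features = torch.randn(4, 3)  # e.g. a small batch of raw feature values
scaled = manual_sigmoid_range_map(features, 0.0, 1.0)
# All scaled values fall strictly inside (0, 1).
```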
Acknowledging Existing Alternatives:
I understand that range mapping can already be achieved using existing PyTorch operators. However, I believe that a dedicated `torch.range_map` operator would offer significant convenience and clarity, justifying its addition to the library.
Open for Discussion and Feedback:
I would love to hear the community’s thoughts on this proposal.
- Do you think a `torch.range_map` operator would be a useful addition to PyTorch?
- In what scenarios would you find this operator helpful?
- Do you have any suggestions for the API design, such as parameter names, default values, or supported `mapping_type` options?
- Are there alternative approaches or existing PyTorch functionalities that could achieve similar results with better efficiency or flexibility?
Thank you for your time and consideration. I look forward to your feedback and a fruitful discussion.
Sincerely,
Yucheng Song