

Currently, torch.nn.functional.interpolate has dedicated implementations for 1d, 2d and 3d data, as well as for nearest, bilinear and bicubic interpolation. The set of different kernels that are dispatched can be seen here. Those implementations are all very similar to one another, and there is a lot of code duplication in there. Compare for example nearest with bilinear.

I believe it is possible to refactor the underlying implementation so that we can have a single C++/CUDA codepath, with minimal code duplication. This could be achieved in two independent steps (that could be done at the same time):

- Factor out the different kernel computations, so that we have a generic filter that is used to compute the interpolation, plus the size of the filter as a struct. For an example, see how Pillow implements it, and then the interpolation coefficients can be generically computed as in here.
- Make the interpolate kernel separable (it is already separable in most cases). This means that we can have the user specify the dimensions they want to interpolate as a list, and in the C++/CUDA code we have a loop over those dimensions.

Something like F.interpolate(image, dim=) for spatial interpolation, or F.interpolate(volume, dim=) for volumetric data. We can have reasonable defaults if dim is not specified (depending on the shape of the input), to keep backwards compatibility.
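To make the proposed call shape a bit more concrete, here is a hedged sketch of what such calls could look like. The concrete dim values and the combination with the existing size/scale_factor arguments are assumptions for illustration only; the issue leaves them unspecified, and dim is not an argument F.interpolate accepts today, so the proposed calls are commented out.

```python
import torch
import torch.nn.functional as F

image = torch.randn(8, 3, 64, 64)        # N, C, H, W
volume = torch.randn(8, 1, 32, 64, 64)   # N, C, D, H, W

# Hypothetical spatial interpolation: only the last two dims are resized.
# out = F.interpolate(image, size=(128, 128), dim=[2, 3])

# Hypothetical volumetric interpolation: the last three dims are resized.
# out = F.interpolate(volume, size=(64, 128, 128), dim=[2, 3, 4])

# Omitting dim would default to "all dimensions after batch and channel",
# which matches what the current API does and keeps backwards compatibility:
out = F.interpolate(image, scale_factor=2)  # works today, resizes H and W
```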

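As a rough, self-contained illustration of the two steps above (not the actual ATen implementation; all names here are made up for the sketch), a single 1D filter function plus its support is enough to precompute Pillow-style coefficients, and separability reduces N-d interpolation to a loop over the requested dimensions:

```python
import torch

def triangle_filter(x):
    # Kernel for linear interpolation; swapping this function (and its
    # support) for a cubic kernel would give bicubic without touching
    # anything else.
    return max(0.0, 1.0 - abs(x))

TRIANGLE_SUPPORT = 1.0  # half-width of the filter

def compute_coeffs(in_size, out_size, filter_fn, support):
    # Pillow-style precomputation: for every output index, the range of
    # contributing input samples and their normalized weights.
    scale = in_size / out_size
    filterscale = max(scale, 1.0)
    coeffs = []
    for i in range(out_size):
        center = (i + 0.5) * scale
        lo = max(int(center - support * filterscale + 0.5), 0)
        hi = min(int(center + support * filterscale + 0.5), in_size)
        w = torch.tensor([filter_fn((j + 0.5 - center) / filterscale)
                          for j in range(lo, hi)])
        coeffs.append((lo, w / w.sum()))
    return coeffs

def interpolate_separable(x, out_sizes, dims,
                          filter_fn=triangle_filter, support=TRIANGLE_SUPPORT):
    # Resize one dimension at a time; separability is what allows a single
    # loop over an arbitrary list of dimensions.
    for d, out_size in zip(dims, out_sizes):
        coeffs = compute_coeffs(x.shape[d], out_size, filter_fn, support)
        slices = []
        for lo, w in coeffs:
            piece = x.narrow(d, lo, w.numel())
            shape = [1] * x.dim()
            shape[d] = w.numel()
            slices.append((piece * w.view(shape)).sum(dim=d, keepdim=True))
        x = torch.cat(slices, dim=d)
    return x

# e.g. resize only the two spatial dimensions of an NCHW batch:
# out = interpolate_separable(torch.randn(2, 3, 8, 8), (16, 16), dims=[2, 3])
```

The point of factoring the filter out is visible here: the same coefficient computation and the same per-dimension loop serve every interpolation mode, and only the small filter function (plus its support) changes.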