Using Autograd.grad() As A Parameter For A Loss Function (pytorch)
Solution 1:
You are getting that error because you are passing a slice of the tensor X (X[i]) to grad(). Slicing appears to return a new tensor, which is treated as a separate tensor outside of your main computational graph, so autograd cannot trace a path from the output back to it.
But you don't need a for loop to compute gradients:
Code:
import torch
import torch.nn as nn
torch.manual_seed(42)
# Create some data.
X = torch.rand(40, requires_grad=True)
Y = torch.rand(40, requires_grad=True)
# Define loss.
loss_fn = nn.MSELoss()
# Do some computations.
V = Y * X + 2
# Compute the norm.
V_norm = V.norm()
print(f'V norm: {V_norm}')
# Computing gradient to calculate the loss
grad_tensor = torch.autograd.grad(outputs=V_norm, inputs=X)[0]  # [0] - because grad returns a tuple, so we need to unpack it
print(f'grad_tensor:\n {grad_tensor}')
# Ground truth
gt = grad_tensor * 0 + 1
loss_g = loss_fn(grad_tensor, gt)
print(f'loss_g: {loss_g}')
Output:
V norm: 14.54827
grad_tensor:
 tensor([0.1116, 0.0584, 0.1109, 0.1892, 0.1252, 0.0420, 0.1194, 0.1000, 0.1404,
         0.0272, 0.0007, 0.0460, 0.0168, 0.1575, 0.1097, 0.1120, 0.1168, 0.0771,
         0.1371, 0.0208, 0.0783, 0.0226, 0.0987, 0.0512, 0.0929, 0.0573, 0.1464,
         0.0286, 0.0293, 0.0278, 0.1896, 0.0939, 0.1935, 0.0123, 0.0006, 0.0156,
         0.0236, 0.1272, 0.1109, 0.1456])
loss_g: 0.841885
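If you eventually want to backpropagate through loss_g (i.e. actually optimize a loss built from the gradient, as the question's title suggests), the gradient computation itself has to stay in the graph. A minimal sketch of that idea, assuming this goal and continuing with X, V_norm and loss_fn from the snippet above:

# create_graph=True keeps the gradient computation differentiable,
# so a loss built from grad_in_graph can itself be backpropagated.
grad_in_graph = torch.autograd.grad(outputs=V_norm, inputs=X, create_graph=True)[0]
loss_g2 = loss_fn(grad_in_graph, torch.ones_like(grad_in_graph))
loss_g2.backward()   # populates X.grad (and Y.grad) through the second-order graph
print(X.grad.shape)  # torch.Size([40])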
Loss between the grads and the norm
You also mentioned that you want to compute a loss between the gradients and the norm. That is possible, and there are two options:
If you want to include the loss calculation in your computational graph, use:
loss_norm_vs_grads = loss_fn(torch.ones_like(grad_tensor) * V_norm, grad_tensor)
If you just want to compute the loss and don't want to start the backward pass from it, don't forget to wrap the computation in torch.no_grad(); otherwise autograd will track these operations and add the loss computation to your computational graph:
with torch.no_grad():
loss_norm_vs_grads = loss_fn(torch.ones_like(grad_tensor) * V_norm, grad_tensor)
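To see the practical difference between the two options, you can check whether the resulting loss is still attached to the graph (continuing with grad_tensor, V_norm and loss_fn from above; the variable names here are just for illustration):

# Option 1: built from V_norm, which requires grad, so the loss stays in the graph.
loss_in_graph = loss_fn(torch.ones_like(grad_tensor) * V_norm, grad_tensor)
print(loss_in_graph.requires_grad)   # True

# Option 2: computed under no_grad, so the loss is detached from the graph.
with torch.no_grad():
    loss_detached = loss_fn(torch.ones_like(grad_tensor) * V_norm, grad_tensor)
print(loss_detached.requires_grad)   # False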