PyTorch gradient differs from manually calculated gradient

I'm trying to compute the gradient of 1/x without using PyTorch's autograd, using the formula grad(1/x, x) = -1/x**2. When I compare the result of this formula to the gradient given by PyTorch's autograd, they're different.



Here is my code:



    import numpy as np
    import torch

    a = torch.tensor(np.random.randn(), dtype=dtype, requires_grad=True)  # dtype defined elsewhere
    loss = 1/a
    loss.backward()
    print(a.grad - (-1/(a**2)))


The output is:



    tensor(5.9605e-08, grad_fn=<ThAddBackward>)


Can anyone explain to me what the problem is?










Tags: python, gradient, pytorch, derivative






asked Nov 13 '18 at 5:48 by HOANG GIANG, edited Nov 13 '18 at 16:34 by blue-phoenox

1 Answer

I guess you expected zero as the result. If you take a closer look, you will see that the result is quite close to zero. When dividing numbers on a binary system (i.e. a computer), you often get round-off errors.



Let's take a look at your example with an additional print statement added:



    import numpy as np
    import torch

    a = torch.tensor(np.random.randn(), requires_grad=True)
    loss = 1/a
    loss.backward()
    print(a.grad, (-1/(a**2)))
    print(a.grad - (-1/(a**2)))


Because of the random input, the output is of course random too (so you won't get these exact numbers, but just repeat the experiment and you will see similar cases). Sometimes you will also get zero as the result; here that is not the case:

    tensor(-0.9074) tensor(-0.9074, grad_fn=<MulBackward>)
    tensor(5.9605e-08, grad_fn=<ThSubBackward>)


You can see that even though both are displayed as the same number, they differ in one of the last decimal places. That is why you get this very small difference when subtracting them.
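
If you print more decimal places, the hidden difference becomes visible (a small sketch continuing the snippet above; the exact digits depend on the random value of a):

    # Continuing from the snippet above; exact digits vary with the random input.
    print(f"{a.grad.item():.10f}")
    print(f"{(-1/(a.detach()**2)).item():.10f}")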



This is a general problem of computers: some fractions simply have a large or infinite number of decimal (or, here, binary) places, but memory does not, so they are cut off at some point.
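
A minimal plain-Python illustration: 0.1 has no exact binary representation, so the nearest representable double is stored instead.

    print(f"{0.1:.20f}")     # 0.10000000000000000555...
    print(0.1 + 0.2 == 0.3)  # False: the stored approximations don't add up exactly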



So what you experience here is actually a lack of precision. And the precision depends on the numerical data type you are using (i.e. torch.float32 or torch.float64).
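
As a rough sketch of that dependence (results vary from run to run), repeating the comparison in float64 typically leaves a much smaller discrepancy than in float32:

    import numpy as np
    import torch

    for dt in (torch.float32, torch.float64):
        a = torch.tensor(np.random.randn(), dtype=dt, requires_grad=True)
        (1/a).backward()
        # absolute difference between autograd's gradient and the manual formula;
        # typically much smaller (often 0) in float64 than in float32
        print(dt, (a.grad - (-1/(a.detach()**2))).abs().item())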



You can also take a look here for more information:
https://en.wikipedia.org/wiki/Double-precision_floating-point_format





But this is not specific to PyTorch; here is a plain Python example:

    print(29/100*100)

Results in:

    28.999999999999996
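
This is also why floating-point results are usually compared with a tolerance rather than with ==, for example with math.isclose from the standard library:

    import math

    print(29/100*100 == 29)              # False, because of the round-off above
    print(math.isclose(29/100*100, 29))  # True, within the default tolerance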




Edit:

As @HOANG GIANG pointed out in the comments, changing the expression to -(1/a)*(1/a) works well and the result is zero. This is probably because the calculation done to compute the gradient is very similar (or identical) to -(1/a)*(1/a) in this case. It therefore shares the same round-off errors, so the difference is zero.
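
A quick sketch to check this observation (bit-for-bit equality is not guaranteed for every input or PyTorch version, but it matches what was reported in the comments):

    import numpy as np
    import torch

    a = torch.tensor(np.random.randn(), requires_grad=True)
    (1/a).backward()
    manual = -(1/a.detach())*(1/a.detach())
    # Often exactly equal, since the same operations (and hence the same
    # roundings) are performed as in the backward pass.
    print(torch.equal(a.grad, manual), (a.grad - manual).item())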



So here is another example that fits better than the one above: even though -(1/x)*(1/x) is mathematically equivalent to -1/x^2, it is not always the same when calculated on a computer, depending on the value of x:



    import numpy as np

    print('e1 == e2', 'x value', '\t'*2, 'round-off error', sep='\t')
    print('='*70)
    for i in range(10):
        x = np.random.randn()
        e1 = -(1/x)*(1/x)
        e2 = (-1/(x**2))
        print(e1 == e2, x, e1 - e2, sep='\t\t')

Output:

    e1 == e2    x value                 round-off error
    ======================================================================
    True 0.2934154339948173 0.0
    True -1.2881863891014191 0.0
    True 1.0463038021843876 0.0
    True -0.3388766143622498 0.0
    True -0.6915415747192347 0.0
    False 1.3299049850551317 1.1102230246251565e-16
    True -1.2392046539563553 0.0
    False -0.42534236747121645 8.881784197001252e-16
    True 1.407198823994324 0.0
    False -0.21798652132356966 3.552713678800501e-15





The round-off error in the following example seems to occur a bit less often (I tried different random values, and rarely did more than two out of ten show a round-off error), but there are already small differences when just calculating 1/x and inverting it again:



    import numpy as np

    print('x == reconstructed_x', 'x value', '\t'*2, 'round-off error', sep='\t')
    print('='*70)
    for i in range(10):
        x = np.random.randn()
        # calculate 1/x
        result = 1/x
        # apply the inverse function
        reconstructed_x = 1/result
        # mathematically this should be the same as x
        print(x == reconstructed_x, x, x - reconstructed_x, sep='\t\t')

Output:

    x == reconstructed_x    x value             round-off error
    ======================================================================
    False 0.9382823115235075 1.1102230246251565e-16
    True -0.5081217386356917 0.0
    True -0.04229436058156134 0.0
    True 1.1121100294357302 0.0
    False 0.4974618312372863 -5.551115123125783e-17
    True -0.20409933212316553 0.0
    True -0.6501652554924282 0.0
    True -3.048057937738731 0.0
    True 1.6236075700470816 0.0
    True 0.4936926651641918 0.0





answered Nov 13 '18 at 10:27 by blue-phoenox, edited Nov 14 '18 at 9:12

          • I've found that when I change the order of computation (i.e. my formula becomes -(1/a)*(1/a)), the difference becomes zero (i.e. == 0)

            – HOANG GIANG
            Nov 13 '18 at 13:16











          • @HOANGGIANG Yes, that’s a good point! Even though -(1/x)*(1/x) is mathematically equivalent to -1/x^2, it is not always the same when calculating it on the computer. I made an edit at the end of my answer.

            – blue-phoenox
            Nov 13 '18 at 15:52











          • @HOANGGIANG It would be great if you could give me some feedback, regarding the question "Can anyone explain to me what the problem is?". If you found the explanation useful, I'd be glad, if you accept the answer, to value the effort done, thx!

            – blue-phoenox
            Nov 14 '18 at 15:38






          • Sorry :) I just upvoted your answer and forgot to accept it as the right answer

            – HOANG GIANG
            Nov 14 '18 at 16:06











          • @HOANGGIANG Great, thanks! :)

            – blue-phoenox
            Nov 14 '18 at 16:27












