
When running test.py, RuntimeError: Backward is not reentrant #89

@qqpann

Description

Environment: Docker, image pytorch/pytorch:1.0.1-cuda10.0-cudnn7-devel

Full error:

```
root@ad6d5fac0fd6:/app/DCN# python test.py
torch.Size([2, 128, 128, 128])
torch.Size([2, 128, 128, 128])
torch.Size([20, 32, 7, 7])
torch.Size([20, 32, 7, 7])
torch.Size([20, 32, 7, 7])
checking
dconv im2col_step forward passed with 0.0
tensor(0., device='cuda:0', grad_fn=<MaxBackward1>)
dconv im2col_step backward passed with 7.450580596923828e-09 = 7.450580596923828e-09+0.0+0.0+0.0
mdconv im2col_step forward passed with 0.0
tensor(0., device='cuda:0', grad_fn=<MaxBackward1>)
mdconv im2col_step backward passed with 3.725290298461914e-09
0.971507, 1.943014
0.971507, 1.943014
tensor(0., device='cuda:0')
dconv zero offset passed with 1.4901161193847656e-07
dconv zero offset identify passed with 0.0
tensor(0., device='cuda:0')
mdconv zero offset passed with 2.384185791015625e-07
mdconv zero offset identify passed with 0.0
check_gradient_conv:  True
Traceback (most recent call last):
  File "test.py", line 624, in <module>
    check_gradient_dconv()
  File "test.py", line 400, in check_gradient_dconv
    eps=1e-3, atol=1e-3, rtol=1e-2, raise_exception=True))
  File "/opt/conda/lib/python3.6/site-packages/torch/autograd/gradcheck.py", line 208, in gradcheck
    return fail_test('Backward is not reentrant, i.e., running backward with same '
  File "/opt/conda/lib/python3.6/site-packages/torch/autograd/gradcheck.py", line 185, in fail_test
    raise RuntimeError(msg)
RuntimeError: Backward is not reentrant, i.e., running backward with same input and grad_output multiple times gives different values, although analytical gradient matches numerical gradient
```
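
For context, the "Backward is not reentrant" failure means gradcheck ran the backward pass more than once with identical inputs and grad_output and got different gradient values, i.e. the backward is non-deterministic, even though the analytical gradient matches the numerical one. Below is a minimal sketch of what that check amounts to; the helper name and structure are illustrative, not gradcheck's actual internals:

```python
import torch

def backward_is_reentrant(fn, inputs):
    """Illustrative check: does backward produce identical gradients
    when run twice with the same input and grad_output?"""
    output = fn(*inputs)
    grad_output = torch.randn_like(output)

    # retain_graph=True so the same graph can be backpropagated again
    grads_first = torch.autograd.grad(output, inputs, grad_output,
                                      retain_graph=True)
    grads_second = torch.autograd.grad(output, inputs, grad_output)

    # a reentrant backward must agree bit-exactly between the two runs
    return all(torch.equal(g1, g2)
               for g1, g2 in zip(grads_first, grads_second))
```

With CUDA backward kernels that accumulate gradients via atomicAdd, as deformable-convolution implementations commonly do, the floating-point summation order can differ between runs, which is a typical way this check fails.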
