Move autograd metadata from VariableImpl to TensorImpl #13827
Conversation
Force-pushed from 40e3575 to 18e0875
Force-pushed from eb509d9 to 8c2d05b
Force-pushed from 82d723d to 4e4a9f7
Force-pushed from ac524cd to 6d74b6a
Force-pushed from a911dfa to 666bd74
Force-pushed from 666bd74 to 918b052
test/test_sparse.py (outdated)
@@ -1710,6 +1710,16 @@ def test_is_nonzero(self):
         with self.assertRaisesRegex(RuntimeError, "bool value of Tensor with no values is ambiguous"):
             torch.sparse_coo_tensor(([0, 1],), self.ValueTensor(2, 0), (4, 0)).is_nonzero()

+    def test_allow_size_or_storage_change(self):
+        def do_test(t):
+            a = self.SparseTensor(3, 3)
test/test_torch.py (outdated)
@@ -9321,6 +9321,26 @@ def test_reverse_binary_ops_multiple_device(self):
         with self.assertRaisesRegex(RuntimeError, "expected both inputs to be on same device"):
             torch.tensor(2).to("cuda:1") // torch.tensor(3).to("cuda:0")

+    def test_allow_size_or_storage_change(self):
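The body of the new test is truncated in the diff above, so the following is only a hedged sketch (not the actual test) of the behavior the PR summary says this guards: `tensor.data` shares storage with `tensor`, and the new flag lets PyTorch reject size/stride/storage-pointer changes made through it. The names `x`, `y`, and `allowed` are illustrative.

```python
import torch

# Illustrative sketch, not the real test body: `.data` returns an alias
# that shares storage with the source tensor.
x = torch.zeros(2, 3)
y = x.data
assert y.data_ptr() == x.data_ptr()    # same storage

# A metadata-changing in-place op is what the new flag targets; whether it
# raises depends on the PyTorch version, so both outcomes are handled here.
try:
    y.resize_(6)
    allowed = True
except RuntimeError:
    allowed = False                    # expected once the flag is enforced
```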
@@ -105,7 +105,7 @@ static int THPVariable_traverse(THPVariable *self, visitproc visit, void *arg)
 static int THPVariable_clear(THPVariable *self)
 {
   Py_CLEAR(self->backward_hooks);
-  if (self->cdata.defined()) {
+  if (self->cdata.defined() && self->cdata.get_autograd_meta()) {
Force-pushed from aa46050 to 46e7780
Force-pushed from 52b58f7 to ba1abc4
Force-pushed from 8159235 to 8a43cf6
@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Hi Will, do you want a rereview? If so, what's new?
@@ -4859,7 +4859,7 @@ class DerivedStateModule(torch.jit.ScriptModule):
     def __init__(self):
         super(TestScript.DerivedStateModule, self).__init__()
         self.param = torch.nn.Parameter(torch.ones(3, 4, dtype=torch.float))
-        self.register_buffer('derived', torch.neg(self.param).detach())
+        self.register_buffer('derived', torch.neg(self.param).detach().clone())
what's going on here? Is the detached thing being inplace modified later?
@gchanan yes, `self.derived` is being modified in-place in https://github.com/pytorch/pytorch/pull/13827/files#diff-78aab5b4217aeaed920efab0c5f5f5c8R4904 and https://github.com/pytorch/pytorch/pull/13827/files#diff-78aab5b4217aeaed920efab0c5f5f5c8R4909
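The exchange above can be illustrated with a small sketch (variable names are mine, not from the test): `.detach()` returns a tensor that shares storage with its source, so in-place writes through the detached tensor are visible in the original, while `.detach().clone()` owns its storage and can safely be mutated later.

```python
import torch

# `.detach()` alone: the detached tensor shares storage with its source,
# so an in-place update through it mutates the source as well.
src = torch.ones(3)
shared = src.detach()
shared.add_(1)                                    # mutates the shared storage
assert torch.equal(src, torch.full((3,), 2.0))

# `.detach().clone()`: the clone copies the storage, so later in-place
# updates to the buffer (as DerivedStateModule does to `self.derived`)
# leave the source untouched.
owned = src.detach().clone()
owned.zero_()
assert torch.equal(src, torch.full((3,), 2.0))    # src unchanged
```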
it would be nice if you could track down more places where we could set allow_tensor_metadata_change (is that even possible?). But that could be a follow-up (again, assuming it's possible).
Ran into a weird use-after-free error in internal tests. Currently investigating.
@yf225 is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:

Changes originally in this PR:
1. Move Variable::Impl data members into TensorImpl as `AutogradMeta` struct
2. Change Variable::Impl functions to use data members in `AutogradMeta` struct
3. Add `shallow_copy_and_detach()` function to each subclass of TensorImpl
4. Do shallow copy when the user calls `make_variable(tensor)` / `make_variable_view(tensor)` / `variable.set_data(tensor)` / `variable.detach()`

Changes moved from pytorch/pytorch#13645:
1. Add a flag to Variable to disallow size/stride/storage_ptr changes from in-place operations such as `resize_` / `resize_as_` / `set_` / `transpose_`, and set this flag to true when people call `tensor.data` in Python.
2. Write text in the docs to actively discourage changing the shape or storage of `tensor_detached` and expecting `tensor` to also be updated.

This is the 1st+2nd PR mentioned in pytorch/pytorch#13638.

Pull Request resolved: pytorch/pytorch#13827
Differential Revision: D13507173
Pulled By: yf225
fbshipit-source-id: b177b08438d534a8197e34e1ad4a837e2db0ed6a
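The user-visible contract the summary describes can be sketched as follows (a hedged illustration assuming a post-PR PyTorch; the variable names are mine): `.detach()` is a shallow copy that shares storage and drops autograd history, so in-place *data* changes still propagate, while *metadata* changes (shape/storage) on the detached tensor are what the new flag guards against.

```python
import torch

# Shallow-copy semantics of detach() after this PR (illustrative sketch).
t = torch.ones(2, 2, requires_grad=True)
d = t.detach()                      # shares storage, drops autograd history
assert d.requires_grad is False
assert d.data_ptr() == t.data_ptr()

d[0, 0] = 5.0                       # in-place *data* change is visible through t
assert t[0, 0].item() == 5.0

# What the docs now discourage: changing d's shape or storage (resize_,
# set_, ...) and expecting t to follow — the flag on the detached
# TensorImpl exists so such metadata changes can be rejected.
```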