Activity · wpc/torchrec
Fork of pytorch/torchrec, owned by wpc (Pengchao Wang); created 2023-09-19; default branch: main.

2023-09-26 17:34 UTC - force-push to refs/heads/export-D49600210
  remove duplicate id_list_features.to_dict calls (pytorch#1406)
  Summary: KJT to_dict calls are quite expensive when we have a large number of keys (mostly from the heavy memory operations in the torch.unbind call). This diff makes the VDD v4 model's EBC call and sequence-arch calls share the same id_list_features (or padded) to_dict result.
  Privacy Context Container: 314155190942957
  Reviewed By: 842974287 · Differential Revision: D49600210 · Pulled By: wpc

2023-09-26 05:14 UTC - force-push to refs/heads/export-D49600210
  remove duplicate id_list_features.to_dict calls (pytorch#1406)
  Same summary as above. Differential Revision: D49600210 · Privacy Context Container: 314155190942957

2023-09-25 20:07 UTC - force-push to refs/heads/export-D49600210
  remove duplicate id_list_features.to_dict calls (pytorch#1406)
  Pull Request resolved: https://github.com/pytorch/torchrec/pull/1406
  Same summary as above. Differential Revision: D49600210 · Privacy Context Container: 314155190942957
  fbshipit-source-id: a68c00c02268751370dd1fd1c3e983719af36a1f

2023-09-25 18:49 UTC - branch created: refs/heads/export-D49600210
  remove duplicate id_list_features.to_dict calls (initial commit, same summary as above)
  Differential Revision: D49600210 · Privacy Context Container: 314155190942957

2023-09-20 06:44 UTC - force-push to refs/heads/export-D49423522
  Add reverse module for ComputeKJTToJTDict to combine jt_dict to kjt (pytorch#1399)
  Summary: so that we don't need to fx-wrap KeyedJaggedTensor.from_jt_dict(jt_dict) manually everywhere. Also, based on this, we can do graph pattern matching to cancel (ComputeKJTToJTDict, ComputeKJTToJTDict) pairs during publish to save compute cycles (see the next diff in the stack).
  Reviewed By: houseroad, YazhiGao · Differential Revision: D49423522 · Privacy Context Container: 314155190942957

2023-09-19 19:02 UTC - force-push to refs/heads/export-D49423522
  Add reverse module for ComputeKJTToJTDict to combine jt_dict to kjt (pytorch#1399)
  Same summary as above. Reviewed By: YazhiGao · Differential Revision: D49423522 · Privacy Context Container: 314155190942957

2023-09-19 18:14 UTC - branch created: refs/heads/export-D49423522
  Add reverse module for ComputeKJTToJTDict to combine jt_dict to kjt (initial commit, same summary as above)
  Differential Revision: D49423522 · Privacy Context Container: 314155190942957
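The idea behind D49600210 can be sketched without torchrec: convert the keyed batch to a per-key dict once, and let both consumers read the shared result, instead of each consumer paying for the conversion. The sketch below is an illustrative pure-Python stand-in, not torchrec code: `expensive_to_dict`, `ebc_arch`, `sequence_arch`, and the `KJTLike` layout are all hypothetical names standing in for `KeyedJaggedTensor.to_dict()` and the two model call sites mentioned in the commit.

```python
# Illustrative stand-in (not torchrec code) for the D49600210 optimization:
# run the expensive KJT -> dict-of-jagged conversion once and share the result.
from typing import Dict, List, Tuple

# A "KJT-like" batch: key -> (flat values, per-sample lengths).
KJTLike = Dict[str, Tuple[List[int], List[int]]]

conversion_calls = 0  # counts how many times the expensive conversion runs

def expensive_to_dict(kjt: KJTLike) -> Dict[str, List[List[int]]]:
    """Stand-in for KeyedJaggedTensor.to_dict(): splits flat values into
    per-sample rows (the real version pays for a torch.unbind per key)."""
    global conversion_calls
    conversion_calls += 1
    out = {}
    for key, (values, lengths) in kjt.items():
        rows, i = [], 0
        for n in lengths:
            rows.append(values[i:i + n])
            i += n
        out[key] = rows
    return out

def ebc_arch(jt_dict):       # consumer 1 (hypothetical stand-in for the EBC call)
    return {k: sum(len(r) for r in rows) for k, rows in jt_dict.items()}

def sequence_arch(jt_dict):  # consumer 2 (hypothetical stand-in for the sequence arch)
    return {k: len(rows) for k, rows in jt_dict.items()}

kjt = {"clicks": ([1, 2, 3, 4], [1, 3]), "views": ([5, 6], [2, 0])}

# Before the diff, each arch would call expensive_to_dict itself (2 conversions);
# after it, the conversion runs once and the dict is shared (1 conversion).
shared = expensive_to_dict(kjt)
ebc_out = ebc_arch(shared)
seq_out = sequence_arch(shared)
```

The caching itself is trivial; the point of the diff is plumbing one `to_dict` result to both call sites so the per-key unbind cost is paid once per batch.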
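The publish-time optimization hinted at in D49423522 can also be sketched: once the graph contains an explicit reverse module for ComputeKJTToJTDict, a pass can pattern-match adjacent forward/reverse pairs and delete both, since they compose to the identity. The sketch below is a hypothetical stand-in, not torchrec or torch.fx code: the op names and `cancel_inverse_pairs` are invented for illustration, and a linear op list stands in for an fx graph.

```python
# Illustrative stand-in (not torchrec/fx code): cancel adjacent inverse-module
# pairs in a linear op sequence, the way a publish pass could cancel
# (KJT -> jt_dict, jt_dict -> KJT) conversions that compose to the identity.
def cancel_inverse_pairs(ops, forward="kjt_to_jt_dict", reverse="jt_dict_to_kjt"):
    """Stack-based removal, so pairs that become adjacent after a
    removal are cancelled too."""
    out = []
    for op in ops:
        if out and ((out[-1], op) == (forward, reverse) or
                    (out[-1], op) == (reverse, forward)):
            out.pop()          # the pair composes to identity: drop both
        else:
            out.append(op)
    return out

pipeline = ["embed", "kjt_to_jt_dict", "jt_dict_to_kjt", "pool"]
optimized = cancel_inverse_pairs(pipeline)
```

A real implementation would match subgraphs on module type rather than strings, but the cancellation logic is the same shape.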