add pinned memory support for int8tensor #3489
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3489

❌ 1 new failure as of commit 5bb5818 with merge base ff6d9e2. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
Looks great, thanks!

cc @sayakpaul, please take a look as well and let us know whether the current test is enough to cover the pin_memory functionality.
sayakpaul
left a comment
This is perfect! Thanks!
I guess float already supports this?
Float8Tensor doesn't support this yet; I think we should follow up on that as well. @liangel-02
as title

Test

in torchao:

```
python test/quantization/quantize_/workflows/int8/test_int8_tensor.py -k test_pin_memory
```

in diffusers:

```
python -m pytest tests/quantization/torchao/test_torchao.py -k test_torch_compile_with_group_offload_leaf -s
```

no longer seeing the earlier failure; however, still seeing the known Dynamo error
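For context, here is a minimal sketch of the behavior this PR adds for torchao's Int8Tensor, illustrated with a plain `torch.int8` tensor (an assumption for brevity; the PR's tests exercise the actual tensor subclass). Pinning places the tensor in page-locked host memory so host-to-device copies can run asynchronously:

```python
import torch

# Plain int8 tensor standing in for torchao's Int8Tensor subclass.
t = torch.randint(-128, 127, (4, 4), dtype=torch.int8)

# pin_memory() requires an accelerator backend, so guard on CUDA here.
if torch.cuda.is_available():
    pinned = t.pin_memory()
    assert pinned.is_pinned()
    # Pinned memory allows a truly asynchronous host-to-device copy.
    on_gpu = pinned.to("cuda", non_blocking=True)
```

This mirrors the pattern group offloading in diffusers relies on, which is why the diffusers test above is a useful end-to-end check.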