Feature/group offload pinning #12747
base: main
Conversation
Force-pushed from 7e50d90 to 3b3813d.
Thanks for your PR. However, it's being worked on in #12721.

Could we resolve conflicts so that it's a bit easier to review? Seems like there's some overlap from #12692.
Force-pushed from 6d96002 to 33d8b52.
Done! Rebased on latest main and resolved conflicts with #12692. Should be much cleaner to review now.
sayakpaul left a comment:
Some initial comments.
should_synchronize = (
    not self.group.onload_self and self.group.stream is not None and not should_onload_next_group
)
What if non_blocking=True?
Even with non_blocking=True, if a previous group onloaded this one on a side stream, we need a sync before the default stream uses the weights, or we risk reading half-copied tensors. I’ve limited the sync to the record_stream=False case; when record_stream=True, the tensors are tied to the consumer stream, so we can safely skip the sync.
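That rule can be sketched as a tiny decision helper (hypothetical name; the actual hook inlines this logic): an explicit stream sync is required exactly when the copy happened on a side stream and record_stream was not used, since non_blocking=True alone does not order the consumer stream after the copy.

```python
def needs_explicit_sync(copied_on_side_stream: bool, record_stream: bool) -> bool:
    # non_blocking=True only makes the H2D copy asynchronous; it does not
    # make the consumer stream wait for it. record_stream=True ties the
    # tensors to the consumer stream, and only then is the sync skippable.
    return copied_on_side_stream and not record_stream
```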
Thank you for the initial comments! We are working on the fixes right now.
Force-pushed from 6f5887e to 1194a83.
@bot /style
Style bot fixed some files and pushed the changes.

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
…onstantine/diffusers into feature/group-offload-pinning
…onstantine/diffusers into feature/group-offload-pinning
Force-pushed from 9888699 to b950c74.
@sayakpaul We need help with approval.
@sayakpaul On my local device there are still a few failing tests; we have fixed the previous error.
Thanks! Could you list the failures you are seeing?
These were the error logs: two of them are I/O serialization errors, one is a memory check error (I read in the comments that this error usually passes on Ampere and Ada environments, neither of which is my current environment), and one is a slight output difference in test_output_pretrained.
@sayakpaul Also, with the current checks it looks like there is a coding style error. Can you help us run the automatic style correction?
@bot /style
Style fix is beginning... View the workflow run here.
The style bot cannot automatically do it. See: I would recommend the following:
Thanks for the pointer, @sayakpaul.
@Aki-07 @bconstantine I ran those failing tests on my end with this branch and also on

@sayakpaul Thank you for testing! Glad to hear there are no failures on your end.

Hey @sayakpaul, the WanVACE LoRA failures came from the hook offloading immediately when it was attached. It saved the weights before LoRA was added, then put them back later, so the adapters never took effect. I removed that eager offload so the first offload now happens after the adapters are loaded. We would need your help to re-run the pipelines.
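A toy reproduction of that ordering bug (plain-Python stand-ins, not the actual diffusers hook): if the hook snapshots the weights eagerly at attach time, a LoRA delta applied afterwards is wiped when the snapshot is restored; taking the snapshot lazily on the first real offload preserves it.

```python
class ToyOffloadHook:
    """Simulates CPU offload of a module's weights, using a plain dict."""

    def __init__(self, weights: dict, eager: bool):
        self.weights = weights
        # Eager: snapshot immediately on attach (the buggy behavior).
        self.cpu_copy = dict(weights) if eager else None

    def offload_then_onload(self):
        # Lazy: take the snapshot on the first real offload instead.
        if self.cpu_copy is None:
            self.cpu_copy = dict(self.weights)
        self.weights.clear()
        self.weights.update(self.cpu_copy)  # restore from the snapshot

def run(eager: bool) -> float:
    weights = {"w": 1.0}
    hook = ToyOffloadHook(weights, eager=eager)
    weights["w"] += 0.25           # LoRA delta applied after attach
    hook.offload_then_onload()
    return weights["w"]
```

With eager=True the restore brings back the pre-LoRA snapshot (1.0); with eager=False the delta survives (1.25).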
What does this PR do?
Fixes #11966
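For context on the "pinning" in the title: group offloading keeps offloaded weights in page-locked (pinned) CPU memory, which is what allows the later copy back to the GPU to be issued with non_blocking=True and overlap with compute. A minimal sketch (hypothetical helper, not this PR's code; falls back to pageable memory when CUDA is unavailable):

```python
import torch

def offload_to_cpu(t: torch.Tensor, pin: bool = True) -> torch.Tensor:
    """Copy a tensor to CPU, into pinned memory when possible.

    Pinned memory is what lets a later .to('cuda', non_blocking=True)
    run asynchronously with respect to the host.
    """
    cpu = t.detach().to("cpu")
    if pin and torch.cuda.is_available():
        cpu = cpu.pin_memory()  # page-locked allocation
    return cpu

x = torch.randn(4, 4)
cpu_x = offload_to_cpu(x)
```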
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sayakpaul