
Modular backend - LoRA/LyCORIS #6667

Merged
merged 14 commits into invoke-ai:main from stalker7779/modular_lora on Jul 31, 2024

Conversation

StAlKeR7779
Contributor

@StAlKeR7779 StAlKeR7779 commented Jul 24, 2024

Summary

Code for LoRA patching from #6577.
Additionally, patching was extended so that a LoRA can patch not only the weight but also the bias, because some LoRAs in the wild do this.
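The weight-plus-bias patching described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the actual InvokeAI API: the names `LayerWeights` and `apply_lora_patch` are hypothetical, and the scaling convention (`alpha / rank`) is the standard LoRA formulation rather than a quote of this PR's code.

```python
import numpy as np

# Hypothetical container standing in for a model layer's parameters.
class LayerWeights:
    def __init__(self, weight, bias=None):
        self.weight = weight
        self.bias = bias

def apply_lora_patch(layer, down, up, alpha, bias_delta=None):
    """Add the low-rank delta (up @ down, scaled by alpha / rank) to the
    layer weight; optionally add a scaled delta to the bias as well."""
    rank = down.shape[0]
    scale = alpha / rank
    layer.weight = layer.weight + (up @ down) * scale
    if bias_delta is not None and layer.bias is not None:
        layer.bias = layer.bias + bias_delta * scale
    return layer
```

The key point from the PR is the optional `bias_delta` path: most LoRAs only carry weight deltas, so the bias patch must be conditional.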

Related Issues / Discussions

#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

QA Instructions

Run with and without the USE_MODULAR_DENOISE environment variable set.
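For example (the flag name comes from the PR description; the launch step is a placeholder for however you normally start InvokeAI):

```shell
# Exercise the new modular denoise path:
export USE_MODULAR_DENOISE=1
# ... start InvokeAI and generate as usual ...

# Then unset it to exercise the legacy path for comparison:
unset USE_MODULAR_DENOISE
```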

Merge Plan

Replace the old LoRA patcher with the new one once the review is done.
If you think some kind of tests should be added, feel free to add them.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)

@github-actions github-actions bot added python PRs that change python files invocations PRs that change invocations backend PRs that change backend files labels Jul 24, 2024
@StAlKeR7779 StAlKeR7779 changed the title Modular backend - lora Modular backend - LoRA/LyCORIS Jul 24, 2024
Review comments were left on:
  • invokeai/backend/lora.py
  • invokeai/backend/stable_diffusion/extensions_manager.py
@StAlKeR7779 StAlKeR7779 requested a review from RyanJDick July 30, 2024 00:47
Collaborator

@RyanJDick RyanJDick left a comment


Looks good!

I tested the following test cases on both _old_invoke and _new_invoke against main:

  • LoRA patching and unpatching speeds are the same
  • Images generated with LoRA are unchanged
  • Images generated after removing a LoRA are unchanged (i.e. the LoRA is being unpatched properly).
  • Can stack multiple LoRAs
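The "unpatched properly" and "stack multiple LoRAs" cases above both hinge on the save/restore pattern behind original_weights_storage.py. A rough sketch of that pattern follows; the class and method names here are illustrative, not the module's real API:

```python
import numpy as np

class OriginalWeightsStorage:
    """Keep the first-seen copy of each tensor so that stacked patches can
    be fully reverted in one pass."""
    def __init__(self):
        self._originals = {}

    def save(self, key, tensor):
        # Only the first save matters: when several LoRAs are stacked,
        # unpatching must restore the pre-patch weights, not an
        # intermediate state left by an earlier LoRA.
        if key not in self._originals:
            self._originals[key] = tensor.copy()

    def restore_into(self, weights):
        for key, original in self._originals.items():
            weights[key] = original
```

Making `save` a no-op after the first call is what lets any number of stacked patches unwind correctly with a single restore.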

I left one minor comment. I'll go ahead and fix that and fix the conflicts with main prior to merging.

A review comment was left on:
  • invokeai/backend/util/original_weights_storage.py
@RyanJDick RyanJDick enabled auto-merge July 31, 2024 19:22
@RyanJDick RyanJDick merged commit 4ce64b6 into invoke-ai:main Jul 31, 2024
14 checks passed
@StAlKeR7779 StAlKeR7779 deleted the stalker7779/modular_lora branch July 31, 2024 20:11