How do we train a LoRA for Flux-Fill-Dev? #1180

Closed · Answered by matabear-wyx
matabear-wyx asked this question in Q&A

What I did is really simple and raw, as I discussed:
```python
import torch

def pack_fill_latents(latents, batch_size, num_channels_latents, height, width):
    # Step 1: Pack the VAE latents 2x2 (same layout as FluxPipeline._pack_latents)
    latents = latents.view(
        batch_size, num_channels_latents, height // 2, 2, width // 2, 2
    )
    latents = latents.permute(0, 2, 4, 1, 3, 5)
    latents = latents.reshape(
        batch_size, (height // 2) * (width // 2), num_channels_latents * 4
    )

    # Step 2: Repeat the packed latents along the channel dimension
    repeated_latents = latents.repeat(1, 1, 2)

    # Step 3: Create a black (all-zero) mask of the required shape;
    # match dtype so the concatenation below doesn't fail in mixed precision
    mask = torch.zeros(
        (batch_size, (height // 2) * (width // 2), 8 * 8 * 4),
        device=latents.device,
        dtype=latents.dtype,
    )

    # Step 4: Concatenate repeated latents with the black mask
    packed_latents = torch.cat([repeated_latents, mask], dim=-1)
    return packed_latents
```
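To make the resulting shapes concrete, here is a minimal NumPy sketch of the same four steps, using hypothetical sizes (16 latent channels, a 64×64 latent grid, batch of 2 — these numbers are illustrative assumptions, not taken from the discussion). The packed image latents carry 16 × 4 = 64 channels, the repeat doubles that to 128, and the 8 × 8 × 4 = 256 mask channels bring the concatenated input to 384:

```python
import numpy as np

# Hypothetical example sizes (not from the original post)
batch_size, num_channels_latents, height, width = 2, 16, 64, 64
latents = np.zeros((batch_size, num_channels_latents, height, width), dtype=np.float32)

# Step 1: 2x2 pack -> (B, (H/2)*(W/2), C*4)
packed = latents.reshape(batch_size, num_channels_latents, height // 2, 2, width // 2, 2)
packed = packed.transpose(0, 2, 4, 1, 3, 5)
packed = packed.reshape(batch_size, (height // 2) * (width // 2), num_channels_latents * 4)

# Step 2: repeat along the channel dimension (64 -> 128)
repeated = np.tile(packed, (1, 1, 2))

# Step 3: all-zero ("black") mask channels (8*8*4 = 256)
mask = np.zeros((batch_size, (height // 2) * (width // 2), 8 * 8 * 4), dtype=np.float32)

# Step 4: concatenate -> 128 + 256 = 384 channels per token
packed_input = np.concatenate([repeated, mask], axis=-1)
print(packed_input.shape)  # (2, 1024, 384)
```

With a 64×64 latent grid, the token dimension is (64/2) × (64/2) = 1024, so the output is (2, 1024, 384).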

Answer selected by bghira