question regarding the forward-backward training loop #50

sorobedio opened this issue Aug 30, 2024 · 0 comments

Hello,

I have a question regarding the forward and backward loop. I'm trying to update it to the Lightning style, but I'm having trouble understanding a particular part. Could you please help clarify it for me?

```python
micro = batch[i : i + self.microbatch].to(dist_util.dev())
micro = self.get_first_stage_encoding(micro).detach()
micro_cond = {
    k: v[i : i + self.microbatch].to(dist_util.dev())
    for k, v in cond.items()
}
```
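
For context, this is how I read the surrounding loop. It is a paraphrase of a guided-diffusion-style `forward_backward`, so the structure here is my assumption rather than this repo's exact code: `i` steps through the full batch in chunks of `self.microbatch`.

```python
# My paraphrase of the surrounding loop (not necessarily the repo's exact code):
# `i` walks over the full mini-batch in chunks of `self.microbatch`, so each
# iteration handles one micro-batch and its matching conditioning slice.
for i in range(0, batch.shape[0], self.microbatch):
    micro = batch[i : i + self.microbatch].to(dist_util.dev())
    micro = self.get_first_stage_encoding(micro).detach()
    micro_cond = {
        k: v[i : i + self.microbatch].to(dist_util.dev())
        for k, v in cond.items()
    }
    # ...compute the diffusion loss on (micro, micro_cond) and run backward...
```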

I assumed 'microbatch' refers to a subset of the input data, essentially a smaller mini-batch, so when no micro-batching is requested we can define microbatch = batch, i.e. the whole batch is processed as a single micro-batch.
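
If that reading is right, the usual guided-diffusion-style default would look like the line below (my assumption; I have not verified it against this repo):

```python
# Assumed convention: if micro-batching is disabled (microbatch <= 0),
# the "micro-batch" is simply the whole mini-batch.
self.microbatch = microbatch if microbatch > 0 else batch_size
```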

The conditioning data cond is a dictionary whose values each have the batch size as their leading dimension. The size of each value v should follow from the user's conditioning data rather than being hard-coded to batch_size. We should enforce that every sliced value in micro_cond has a leading dimension equal to self.microbatch when self.microbatch > 0; otherwise it should match the batch size, i.e. len(micro) or micro.shape[0].
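
To make the question concrete, here is roughly how I imagine the slicing would translate into a Lightning training_step. Everything other than the slicing logic is my own sketch, not code from this repo: the class name, the placeholder get_first_stage_encoding and diffusion_loss bodies, the assumed (x, cond) batch layout, and the optimizer choice are all assumptions.

```python
import torch
import pytorch_lightning as pl


class LatentDiffusionLit(pl.LightningModule):
    """Hypothetical port; only the slicing mirrors the quoted trainer code."""

    def __init__(self, model, microbatch: int = -1, lr: float = 1e-4):
        super().__init__()
        self.model = model          # the denoising network from the repo
        self.microbatch = microbatch
        self.lr = lr

    def training_step(self, batch, batch_idx):
        x, cond = batch             # assumes the dataloader yields (images, cond_dict)
        micro_bs = self.microbatch if self.microbatch > 0 else x.shape[0]
        losses = []
        for i in range(0, x.shape[0], micro_bs):
            # Lightning has already moved `batch` to the right device, so the
            # explicit `.to(dist_util.dev())` calls are not needed here.
            micro = self.get_first_stage_encoding(x[i : i + micro_bs]).detach()
            micro_cond = {k: v[i : i + micro_bs] for k, v in cond.items()}
            for v in micro_cond.values():
                # The consistency I was asking about: every conditioning tensor
                # is sliced to the same leading dimension as `micro`.
                assert v.shape[0] == micro.shape[0]
            losses.append(self.diffusion_loss(micro, micro_cond))
        return torch.stack(losses).mean()

    def get_first_stage_encoding(self, x):
        # Placeholder for the repo's first-stage (latent) encoding.
        return x

    def diffusion_loss(self, micro, micro_cond):
        # Placeholder for the repo's actual diffusion training loss.
        return self.model(micro, **micro_cond).mean()

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)
```

If this matches the intent of the original forward-backward loop, I mainly want to confirm that the conditioning dictionary is expected to be sliced with the same indices as the images.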
