Thank you for your amazing work. I am a little confused about the layer-swap part of your implementation. It seems that you first pass the latent code through the base model, save its intermediate activations, and then inject them into the target model, as follows:
```python
# Base model: save the intermediate feature map at swap_layer.
img1, swap_res = g_ema1([input_latent], input_is_latent=True, save_for_swap=True, swap_layer=args.swap_layer)

for i in range(args.stylenum):
    sample_z_style = torch.randn(1, 512, device=args.device)
    # Target model: inject the saved feature map and continue synthesis from it.
    img_style, _ = g_ema2([input_latent], truncation=0.5, truncation_latent=mean_latent, swap=True, swap_layer=args.swap_layer, swap_tensor=swap_res, multi_style=True, multi_style_latent=[sample_z_style])
    print(i)
    img_style_name = args.output + "_style_" + str(i) + ".png"
    img_style = make_image(img_style)
    out_style = Image.fromarray(img_style[0])
    out_style.save(img_style_name)
```
Is it correct that you are trying to keep the low-level information, such as shape and pose, from the original (base) model, while taking the lighting and texture from the target model?
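If my reading is right, the mechanism would look roughly like the minimal sketch below. This is only a toy reconstruction to confirm my understanding, not your actual model code: the `ToyGenerator` class, its plain convolutional blocks, and the feature-map input are simplifications I made up for illustration, and only the `save_for_swap` / `swap_layer` / `swap_tensor` keyword names are borrowed from the snippet above.

```python
# Toy sketch of the layer-swap idea (NOT the repo's actual generator):
# the base model saves its activation at swap_layer, and the target model
# replaces its own activation at that layer with the saved one.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a StyleGAN2 generator, reduced to a stack of conv blocks."""
    def __init__(self, channels=64, num_blocks=6):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_blocks)]
        )

    def forward(self, x, save_for_swap=False, swap_layer=None, swap_tensor=None):
        saved = None
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i == swap_layer:
                if save_for_swap:
                    # Base model: capture the intermediate feature map here.
                    saved = x
                if swap_tensor is not None:
                    # Target model: discard its own features up to this layer
                    # and continue synthesis from the base model's features.
                    x = swap_tensor
        return x, saved

# Hypothetical usage mirroring the script above:
g_base, g_target = ToyGenerator(), ToyGenerator()
x = torch.randn(1, 64, 32, 32)  # stand-in for the latent input
_, swap_res = g_base(x, save_for_swap=True, swap_layer=3)
img_style, _ = g_target(x, swap_layer=3, swap_tensor=swap_res)
```

If that is right, everything up to and including `swap_layer` comes from the base model (coarse structure such as shape and pose), and everything after it is re-synthesized by the fine-tuned target model (texture, lighting, and color).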