Hi, thanks for your great work.
I'm trying to apply StarGAN to my own custom anime dataset to learn facial expression transfer.
My dataset includes 2,000 images per emotion (2k happy, 2k sad, etc.).
I ran the training process for about 100k iterations.
The problem is that I only see small changes, mostly in the mouth area. Do you have any suggestions? How can I make the network more flexible about changing different parts of the face?
Some sample results: