
What should I adjust to achieve the effect in the example? #9

Open
Chrisliu89 opened this issue Nov 1, 2024 · 10 comments

Comments

@Chrisliu89

[Screenshot 2024-11-01 170643]
I don't know what to adjust to replace the character in the video. I would appreciate it if you could teach me.

@gt732

gt732 commented Nov 1, 2024

@Chrisliu89 You can see my PR here (#6) for how to use your own videos.

@Chrisliu89
Author

I am using an mp4 file, is this possible? Where should I put the video? My setup is already finished and usable, but as a computer newbie, I tried changing the pkl in the TalkSHOW dataset and training again with new words; only the face changed, the gestures stayed the same.

@Chrisliu89
Author

Thank you gt732 for your reply. As a computer science freshman, I looked at #9 and saw that it needs npy files, but I don't know how to get them, and I also don't know where to get the inference_tram.sh script.

└── tram
    ├── animation
    │   └── hps_track_0.npy
    └── camera
        └── camera.npy

@gt732

gt732 commented Nov 1, 2024

@Chrisliu89 The way you get those files is by running TRAM on your mp4 video file:

https://github.com/yufu-wang/tram

The fastest way to get it up and running is with this Docker container; you still need to clone TRAM into the container:

https://hub.docker.com/r/aza1200/tram/tags
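
Once the TRAM scripts finish, it helps to sanity-check the two output files with numpy before wiring them in. A minimal sketch, assuming the files already sit under custom_motion/tram as in the tree you posted; the exact keys and shapes inside hps_track_0.npy depend on the TRAM version:

import numpy as np

# camera.npy: TRAM's per-frame camera estimates
camera = np.load('custom_motion/tram/camera/camera.npy', allow_pickle=True)
print('camera:', type(camera), getattr(camera, 'shape', None))

# hps_track_0.npy: SMPL parameters for the tracked person; TRAM may save it
# as a pickled object, hence allow_pickle=True
hps = np.load('custom_motion/tram/animation/hps_track_0.npy', allow_pickle=True)
if getattr(hps, 'dtype', None) == object and hps.shape == ():
    hps = hps.item()  # unwrap a 0-d object array into the underlying dict
if isinstance(hps, dict):
    for key, value in hps.items():
        print(key, getattr(value, 'shape', type(value)))
else:
    print('hps:', type(hps), getattr(hps, 'shape', None))

If the frame counts printed here do not cover the whole video, the track was lost partway through, which is worth knowing before running inference.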

If you get it working share your results here!

@Chrisliu89
Author

I'm trying to run it in Docker. Could you share the inference_tram.sh script if possible? And where in the script should I set the path to the custom_motion folder?

@gt732

gt732 commented Nov 3, 2024

@Chrisliu89 You can find it under my pull request; clone my fork. Also, you can create the custom_motion folder in the root dir.

@Chrisliu89
Author

Sorry for my repeated interruptions, and thank you for your continued replies. I tried the demo video in TRAM and got results, but I still didn't achieve the results shown in your video. I think the hps_track_0 file covers only part of the video rather than the whole action track, so I tried another tracks file outside of the hps folder (the folder where track_0 is located) and updated the path as in the code below, but that produced a new problem. I wanted to ask how your demo video with the full action was produced.
[Screenshot 2024-11-09 144316]

smpl_poses = f'{TRAM_ROOT}/animation/tracks.npy'

[Screenshot 2024-11-09 143821]
[Screenshot 2024-11-09 143750]
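
For reference, a quick numpy check makes the difference between the two files visible. A minimal sketch, assuming numpy and the same TRAM_ROOT as in the line above; depending on the TRAM version, tracks.npy may hold per-frame detection tracks rather than SMPL parameters, which would explain the loader error:

import numpy as np

TRAM_ROOT = 'custom_motion/tram'  # assumed; matches TRAM_ROOT in the snippet above

for name in ('hps_track_0', 'tracks'):
    data = np.load(f'{TRAM_ROOT}/animation/{name}.npy', allow_pickle=True)
    if getattr(data, 'dtype', None) == object and data.shape == ():
        data = data.item()  # unwrap a pickled object (often a dict)
    print(name, type(data), getattr(data, 'shape', None))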

@gt732

gt732 commented Nov 10, 2024

@Chrisliu89 NP, here is, step by step, what I did to achieve the results below.

1108.mp4
  • Take note of the dimensions of your in-the-wild video (see the Python sketch below for reading them programmatically); mine were:

# Video-in-the-wild dimensions
image_width = 480
image_height = 854
  • Run TRAM on the video to get camera.npy and hps_track_0.npy:

python scripts/estimate_camera.py --video "./example_video.mov" --static_camera # it's crucial to add this flag or it will not work correctly

python scripts/estimate_humans.py --video "./example_video.mov" --max_humans 1 # I just track one human; you could track more if you want

python scripts/visualize_tram.py --video "./example_video.mov"
  • Move the camera.npy and hps_track_0.npy files from TRAM into the custom_motion folder in the DreamWaltz-G root dir (the copy step is also covered in the Python sketch below), like so:
custom_motion/
└── tram
    ├── animation
    │   └── hps_track_0.npy
    └── camera
        └── camera.npy

https://github.com/gt732/DreamWaltz-G/blob/d583b3cfd63a8f351b818c7e4a39bc00ac7de825/scripts/inference_tram.sh#L7
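
To make the dimension check and the copy step above reproducible, here is a minimal Python sketch. It assumes OpenCV (cv2) is installed and that TRAM wrote its results to results/example_video with an hps subfolder; both paths are assumptions, so adjust them to your actual TRAM output:

import shutil
from pathlib import Path

import cv2

# Read the wild video's dimensions instead of noting them down by hand
cap = cv2.VideoCapture('./example_video.mov')
image_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
image_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()
print(f'width={image_width}, height={image_height}')

# Copy TRAM's outputs into the layout DreamWaltz-G expects
tram_results = Path('results/example_video')  # assumed TRAM output folder
dest = Path('custom_motion/tram')
(dest / 'animation').mkdir(parents=True, exist_ok=True)
(dest / 'camera').mkdir(parents=True, exist_ok=True)
shutil.copy(tram_results / 'hps' / 'hps_track_0.npy', dest / 'animation')
shutil.copy(tram_results / 'camera.npy', dest / 'camera')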

Run inference with the TRAM loader:

bash scripts/inference_tram.sh

If you want, share the video you are testing and I can test it on my end as well.

let me know how it goes!

@Nomi-Kud

Nomi-Kud commented Nov 13, 2024

@gt732 Hello, I am currently researching character replacement in videos, and I am very interested in your work and have tried it. I have been able to generate the result with the original video correctly, but I have not succeeded in using ProPainter from ComfyUI to get a video with the original person removed from the background. I wanted to know how you achieved this. Thank you!
Here are my ProPainter result and the final one:

AnimateDiff_00003.mp4
000000_image.mp4

@gt732

gt732 commented Nov 13, 2024

@Nomi-Kud Hey, happy to hear the animation is working using TRAM. Here is the workflow I used to generate the final video. It is a bit messy, but you can follow what I did.
dreamwaltz.json
