Hierarchical Motion Understanding via Motion Programs (CVPR 2021)

This repository (motion2prog_release) contains the official implementation of Hierarchical Motion Understanding via Motion Programs.

Running motion2prog

0. We start with a video file and first prepare the input data:

$ ffmpeg -i ${video_dir}/video.mp4 ${video_dir}/frames/%05d.jpg
$ python AlphaPose/scripts/demo_inference.py \
    --cfg AlphaPose/pretrained_models/256x192_res50_lr1e-3_1x.yaml \
    --checkpoint AlphaPose/pretrained_models/halpe26_fast_res50_256x192.pth \
    --indir ${video_dir}/frames --outdir ${video_dir}/pose_mpii_track \
    --pose_track --showbox --flip --qsize 256
$ mv ${video_dir}/pose_mpii_track/alphapose-results.json \
    ${video_dir}/alphapose-results-halpe26-posetrack.json
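
Before moving on, it can help to sanity-check the prepared inputs. The following is a minimal sketch (not part of the release): it assumes the directory layout produced by the commands above (a frames/ folder plus the renamed results JSON) and that the AlphaPose output is a flat JSON list of per-person detections.

# check_inputs.py -- quick sanity check of the prepared inputs (illustrative only)
import json
import os
import sys

video_dir = sys.argv[1]  # e.g. the ${video_dir} used in the commands above

# Extracted frames (used later for visualization).
frames = sorted(os.listdir(os.path.join(video_dir, "frames")))
print(f"extracted frames: {len(frames)}")

# Renamed AlphaPose output: assumed to be a flat JSON list of per-person detections.
with open(os.path.join(video_dir, "alphapose-results-halpe26-posetrack.json")) as f:
    detections = json.load(f)
print(f"pose detections: {len(detections)}")
print(f"frames with at least one detection: {len({d['image_id'] for d in detections})}")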

We packaged a demo video with the necessary inputs for quickly testing our code:

$ wget https://sumith1896.github.io/motion2prog/static/demo.zip
$ mv demo.zip data/  && cd data/ && unzip demo.zip && cd ..
  • We need 2D pose detection results & extracted frames of the video (for visualization).
  • We support loading from different pose detector formats in the load function in lkeypoints.py (see the sketch after this list).
  • We used AlphaPose with the commands above.
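
For reference, here is a hedged sketch of what a loader for the AlphaPose Halpe-26 pose-track output can look like. The field names ("image_id", "keypoints" as flat x,y,score triplets, "idx" as the track id from --pose_track) follow the common AlphaPose results layout, and the data/demo path is hypothetical; this is not the actual code in lkeypoints.py.

# load_alphapose.py -- illustrative loader sketch, not the release implementation
import json
from collections import defaultdict

import numpy as np

def load_halpe26_posetrack(path):
    """Group Halpe-26 pose-track detections into {track_id: {frame: (26, 3) array}}."""
    with open(path) as f:
        detections = json.load(f)

    tracks = defaultdict(dict)
    for det in detections:
        # Each detection stores keypoints as a flat [x1, y1, s1, x2, y2, s2, ...] list.
        kps = np.asarray(det["keypoints"], dtype=np.float32).reshape(-1, 3)
        tracks[det["idx"]][det["image_id"]] = kps  # "idx" assumed to be the person track id
    return dict(tracks)

if __name__ == "__main__":
    # Hypothetical demo path; adjust to wherever demo.zip was unpacked.
    tracks = load_halpe26_posetrack("data/demo/alphapose-results-halpe26-posetrack.json")
    for tid, per_frame in sorted(tracks.items()):
        print(f"track {tid}: {len(per_frame)} frames")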
