Commit

update webcam&readme
fredfang committed Aug 30, 2018
1 parent 55ca97c commit 4368a58
Showing 6 changed files with 24 additions and 11 deletions.
12 changes: 10 additions & 2 deletions README.md
@@ -21,13 +21,13 @@ To match poses that correspond to the same person across frames, we also provide
git clone -b pytorch https://github.com/MVIG-SJTU/AlphaPose.git
```

2. Install [pytorch](https://github.com/pytorch/pytorch)
2. Install [pytorch 0.4.0](https://github.com/pytorch/pytorch)
```Shell
chmod +x install.sh
./install.sh
```

3. Download the models manually: **duc_se.pth** ([Google Drive]( https://drive.google.com/open?id=1OPORTWB2cwd5YTVBX-NE8fsauZJWsrtW) | [Baidu pan]())(2018/08/30), **yolov3.weights**([Google Drive](https://drive.google.com/open?id=1yjrziA2RzFqWAQG4Qq7XN0vumsMxwSjS) | [Baidu pan](https://pan.baidu.com/s/108SjV-uIJpxnqDMT19v-Aw)). Place them into `./models/sppe` and `./models/yolo` respectively.
3. Download the models manually: **duc_se.pth** (2018/08/30) ([Google Drive](https://drive.google.com/open?id=1OPORTWB2cwd5YTVBX-NE8fsauZJWsrtW) | [Baidu pan]()), **yolov3.weights** ([Google Drive](https://drive.google.com/open?id=1yjrziA2RzFqWAQG4Qq7XN0vumsMxwSjS) | [Baidu pan](https://pan.baidu.com/s/108SjV-uIJpxnqDMT19v-Aw)). Place them into `./models/sppe` and `./models/yolo` respectively.


## Quick Start
@@ -51,6 +51,14 @@ python3 demo.py --list examples/list-coco-demo.txt --indir ${img_directory} --ou
```
python3 video_demo.py --video ${path to video} --outdir examples/results/ --conf 0.5 --nms 0.45
```
If your GPU has more than 8 GB of memory, consider increasing the detection batch size:
```
python3 demo.py --indir ${img_directory} --outdir examples/res --detbatch 8
```
- **Note**: If you run into an OOM (out of memory) error, decrease the pose estimation batch size until the program runs on your machine:
```
python3 demo.py --indir ${img_directory} --outdir examples/res --posebatch 30
```
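The two flags trade GPU memory for throughput: `--detbatch` sets how many frames the detector processes at once, while `--posebatch` caps how many person crops go through the pose network per forward pass. A minimal sketch of that capping logic (a simplification for illustration, not AlphaPose's actual code):

```python
def split_pose_batches(num_persons, posebatch=80):
    """Split detected person crops into chunks of at most `posebatch`,
    so each pose-network forward pass stays within GPU memory."""
    batches = []
    start = 0
    while start < num_persons:
        end = min(start + posebatch, num_persons)
        batches.append((start, end))
        start = end
    return batches

# 70 detections with --posebatch 30 -> forward passes of 30, 30 and 10 crops
print(split_pose_batches(70, 30))
```

Lowering `posebatch` only adds more (smaller) forward passes; it does not change the results, which is why it is the safe knob to turn when memory runs out.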
- **For more**: Check out [run.md](doc/run.md) for more options.

## FAQ
2 changes: 2 additions & 0 deletions doc/run.md
@@ -15,6 +15,8 @@ Here, we first list the flags of this script and then give some examples.
- `--format`: The format of the saved results. By default, it will save the output in COCO-like format. An alternative option is 'cmu', which saves the results in the format of CMU-Pose. For more details, see [output.md](output.md)
- `--conf`: Confidence threshold for human detection. Lowering the value can improve final accuracy but decreases speed. Default is 0.2.
- `--nms`: NMS threshold for human detection. Increasing the value can improve final accuracy but decreases speed. Default is 0.6.
- `--detbatch`: Batch size for the detection network.
- `--posebatch`: Maximum batch size for the pose estimation network. If you run into an OOM error, decrease this value until it fits in memory.
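To make the `--conf` and `--nms` flags concrete, here is a generic sketch of confidence filtering followed by greedy non-maximum suppression. It is an illustration of the thresholds' roles, not the YOLO detector code these flags actually configure:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def filter_and_nms(dets, conf_thres=0.2, nms_thres=0.6):
    """dets: list of (box, score). Drop boxes below conf_thres, then
    greedily keep the highest-scoring boxes, suppressing any box whose
    IoU with an already-kept box exceeds nms_thres."""
    dets = [d for d in dets if d[1] >= conf_thres]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) <= nms_thres for k in kept):
            kept.append((box, score))
    return kept
```

Lowering `conf_thres` admits more candidate people (better recall, more work for the pose network); raising `nms_thres` keeps more overlapping boxes, which helps in crowded scenes at the cost of speed.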

## Examples
- **Run AlphaPose for all images in a folder, save the results in CMU-Pose format, and save the rendered images**:
2 changes: 1 addition & 1 deletion install.sh
@@ -1,4 +1,4 @@
pip3 install --user torch
pip3 install --user torch==0.4.0
pip3 install --user torchvision
pip3 install --user -e git+https://github.com/ncullen93/torchsample.git#egg=torchsample
pip3 install --user visdom
8 changes: 4 additions & 4 deletions opt.py
@@ -110,7 +110,7 @@
parser.add_argument('--mode', dest='mode',
help='detection mode, fast/normal/accurate', default="normal")
parser.add_argument('--outdir', dest='outputpath',
help='output-directory', default="")
help='output-directory', default="examples/res/")
parser.add_argument('--inp_dim', dest='inp_dim', type=str, default='608',
help='inpdim')
parser.add_argument('--conf', dest='confidence', type=float, default=0.2,
@@ -123,16 +123,16 @@
help='visualize image')
parser.add_argument('--format', type=str,
help='save in the format of cmu or coco or openpose, option: coco/cmu/open')
parser.add_argument('--detbatch', type=int, default=6,
parser.add_argument('--detbatch', type=int, default=1,
help='detection batch size')
parser.add_argument('--posebatch', type=int, default=80,
help='pose estimation maximum batch size')

"----------------------------- Video options -----------------------------"
parser.add_argument('--video', dest='video',
help='video-name', default="")
parser.add_argument('--webcam', dest='webcam',
help='webcam number', default=0)
parser.add_argument('--webcam', dest='webcam', type=str,
help='webcam number', default='0')
parser.add_argument('--save_video', dest='save_video',
help='whether to save rendered video', default=False, action='store_true')
parser.add_argument('--vis_fast', dest='vis_fast',
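The `--webcam` change above (int `0` to string `'0'`) pairs with the `save_path` change in `webcam_demo.py`: a string id can be concatenated straight into the output filename. A minimal sketch of that construction (the helper name is hypothetical; only the path expression mirrors the diff):

```python
import os

def make_save_path(outputpath, webcam):
    """Build the rendered-video path the way webcam_demo.py now does:
    the string webcam id slots directly into the filename, which would
    raise a TypeError if webcam were still parsed as an int."""
    return os.path.join(outputpath, 'AlphaPose_webcam' + webcam + '.avi')

print(make_save_path('examples/res', '0'))
```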
2 changes: 1 addition & 1 deletion pPose_nms.py
@@ -13,7 +13,7 @@
mu = 1.7
delta2 = 2.65
gamma = 22.48
scoreThreds = 0.1
scoreThreds = 0.3
matchThreds = 5
areaThres = 0#40 * 40.5
alpha = 0.1
9 changes: 6 additions & 3 deletions webcam_demo.py
@@ -42,7 +42,7 @@ def loop():
fvs = WebcamLoader(webcam).start()
(fourcc,fps,frameSize) = fvs.videoinfo()
# Data writer
save_path = os.path.join(args.outputpath, 'AlphaPose_'+webcam.split('/')[-1].split('.')[0]+'.avi')
save_path = os.path.join(args.outputpath, 'AlphaPose_webcam'+webcam+'.avi')
writer = DataWriter(args.save_video, save_path, cv2.VideoWriter_fourcc(*'XVID'), fps, frameSize).start()

# Load YOLO model
@@ -112,14 +112,17 @@ def loop():
ckpt_time, detNMS_time = getTime(ckpt_time)
runtime_profile['dn'].append(detNMS_time)
# Pose Estimation
inps, pt1, pt2 = crop_from_dets(inp, boxes)
inps = torch.zeros(boxes.size(0), 3, opt.inputResH, opt.inputResW)
pt1 = torch.zeros(boxes.size(0), 2)
pt2 = torch.zeros(boxes.size(0), 2)
inps, pt1, pt2 = crop_from_dets(inp, boxes, inps, pt1, pt2)
inps = Variable(inps.cuda())

hm = pose_model(inps)
ckpt_time, pose_time = getTime(ckpt_time)
runtime_profile['pt'].append(pose_time)

writer.save(boxes, scores, hm, pt1, pt2, orig_img, im_name=str(i)+'.jpg')
writer.save(boxes, scores, hm.cpu(), pt1, pt2, orig_img, im_name=str(i)+'.jpg')

ckpt_time, post_time = getTime(ckpt_time)
runtime_profile['pn'].append(post_time)
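The webcam-loop change above reflects a new `crop_from_dets` calling convention: the caller pre-allocates zeroed buffers sized by the number of detections, and the function fills them in place. A minimal sketch of those buffer shapes in plain Python (lists standing in for `torch.zeros`; the 320x256 defaults are an assumption about `opt.inputResH`/`opt.inputResW`, not taken from this diff):

```python
def preallocate_pose_buffers(num_boxes, input_res_h=320, input_res_w=256):
    """Return zeroed buffers matching the webcam loop's pre-allocation:
    one (3, H, W) crop slot plus two (2,) corner points per detection."""
    inps = [[[[0.0] * input_res_w for _ in range(input_res_h)]
             for _ in range(3)] for _ in range(num_boxes)]   # (N, 3, H, W)
    pt1 = [[0.0, 0.0] for _ in range(num_boxes)]             # (N, 2) top-left
    pt2 = [[0.0, 0.0] for _ in range(num_boxes)]             # (N, 2) bottom-right
    return inps, pt1, pt2

# Tiny resolution just to show the shapes
inps, pt1, pt2 = preallocate_pose_buffers(2, input_res_h=4, input_res_w=3)
print(len(inps), len(inps[0]), len(inps[0][0]), len(pt1), len(pt2))
```

Allocating once per frame and filling in place avoids repeated tensor construction inside the crop routine; the companion `hm.cpu()` change then hands the writer thread a CPU copy of the heatmaps so the GPU tensor is not shared across threads.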
