This is an action detector for the Smart Classroom scenario. It is based on the RMNet backbone, which uses depth-wise convolutions to reduce the amount of computation in its 3x3 convolution blocks. The first SSD head, attached to the 1/16-scale feature map, has four clustered prior boxes and outputs detected persons (a two-class detector). The second SSD-based head predicts the action of each detected person. Possible actions: raising hand and other.
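For intuition, here is a back-of-the-envelope comparison in Python of a dense 3x3 convolution versus the depth-wise 3x3 plus point-wise 1x1 pair mentioned above. The channel counts and feature-map size are made up for illustration and are not taken from the actual RMNet configuration.

```python
# Illustrative multiply-add counts only; the layer sizes below are assumptions,
# not the real RMNet layer configuration.
in_ch, out_ch = 128, 128   # assumed channel counts
h, w = 25, 43              # assumed feature-map size

dense_3x3 = h * w * in_ch * out_ch * 3 * 3
depthwise_separable = h * w * (in_ch * 3 * 3 + in_ch * out_ch)

print(f"dense 3x3:            {dense_3x3:,} multiply-adds")
print(f"depth-wise separable: {depthwise_separable:,} multiply-adds")
print(f"reduction:            {dense_3x3 / depthwise_separable:.1f}x")
```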
| Metric                            | Value                           |
|-----------------------------------|---------------------------------|
| Detector AP (internal test set 2) | 81.50%                          |
| Accuracy (internal test set 2)    | 94.93%                          |
| Pose coverage                     | Sitting, standing, raising hand |
| Support of occluded pedestrians   | YES                             |
| Occlusion coverage                | <50%                            |
| Min pedestrian height             | 80 pixels (on 1080p)            |
| GFlops                            | 7.138                           |
| MParams                           | 1.951                           |
| Source framework                  | Caffe*                          |
Average Precision (AP) is defined as the area under the precision/recall curve.
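As a rough illustration only (the exact matching and interpolation rules used for the internal test set are not specified here), AP can be computed from confidence-sorted, ground-truth-matched detections as follows:

```python
# A minimal sketch of AP as the area under the precision/recall curve.
# Assumes detections are sorted by descending confidence and already matched
# to ground truth (True = true positive); not the exact evaluation protocol.
import numpy as np

def average_precision(tp_flags, num_gt):
    tp_flags = np.asarray(tp_flags, dtype=bool)
    tp = np.cumsum(tp_flags)         # cumulative true positives
    fp = np.cumsum(~tp_flags)        # cumulative false positives
    recall = tp / num_gt
    precision = tp / (tp + fp)
    return np.trapz(precision, recall)  # area under the precision/recall curve

# Example: 5 detections, 3 of them correct, 4 ground-truth persons
print(average_precision([True, True, False, True, False], num_gt=4))
```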
- name: "input", shape: [1x3x400x680] - An input image in the format [BxCxHxW], where:
  - B - batch size
  - C - number of channels
  - H - image height
  - W - image width

Expected color order is BGR.
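A minimal preprocessing sketch for this input, assuming OpenCV is used for image loading (OpenCV reads images in BGR, which matches the expected color order). The file name is a placeholder, and any normalization beyond resizing is outside the scope of this specification:

```python
# Prepare a [1x3x400x680] BGR blob for the "input" layer.
# "frame.jpg" is a placeholder path; no mean subtraction or scaling is applied
# here because the specification does not state any.
import cv2
import numpy as np

image = cv2.imread("frame.jpg")                      # HxWx3, BGR
resized = cv2.resize(image, (680, 400))              # dsize is (W, H) -> 400x680x3
blob = resized.transpose(2, 0, 1)[np.newaxis, ...]   # [1, 3, 400, 680] = [BxCxHxW]
blob = blob.astype(np.float32)
```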
The net outputs the following branches:
- name: "mbox_loc1/out/conv/flat", shape: [b, num_priors*4] - Box coordinates in SSD format
- name: "mbox_main_conf/out/conv/flat/softmax/flat", shape: [b, num_priors*2] - Detection confidences
- name: "mbox/priorbox", shape: [1, 2, num_priors*4] - Prior boxes in SSD format
- name: "out/anchor1", shape: [b, 2, h, w] - Action confidences
- name: "out/anchor2", shape: [b, 2, h, w] - Action confidences
- name: "out/anchor3", shape: [b, 2, h, w] - Action confidences
- name: "out/anchor4", shape: [b, 2, h, w] - Action confidences
Where:

- b - batch size
- num_priors - number of priors in SSD format (equal to 25x43x4=4300)
- h, w - height and width of the output feature map (h=25, w=43)
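The NumPy sketch below only illustrates the shape bookkeeping implied by this specification; full SSD box decoding (combining mbox_loc1 with the priors and their variances) and the exact prior-to-cell mapping are omitted, since they are not described here:

```python
# Shape bookkeeping for the outputs listed above, using zero-filled
# placeholders instead of real network results.
import numpy as np

b, h, w = 1, 25, 43
num_priors = h * w * 4                                 # 25x43x4 = 4300

loc     = np.zeros((b, num_priors * 4))                # mbox_loc1/out/conv/flat
conf    = np.zeros((b, num_priors * 2))                # mbox_main_conf/out/conv/flat/softmax/flat
priors  = np.zeros((1, 2, num_priors * 4))             # mbox/priorbox
anchors = [np.zeros((b, 2, h, w)) for _ in range(4)]   # out/anchor1 .. out/anchor4

boxes  = loc.reshape(b, num_priors, 4)                 # one box 4-vector per prior
scores = conf.reshape(b, num_priors, 2)                # two detection confidences per prior

# Per-cell action prediction for each anchor branch: argmax over the two
# action-confidence channels at every (y, x) position.
action_maps = [np.argmax(a, axis=1) for a in anchors]  # each has shape [b, h, w]
```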