To evaluate detection performance, we plot the miss-rate
$mr(c) = \frac{fn(c)}{tp(c) + fn(c)}$
against the number of false positives per image
$fppi(c)=\frac{fp(c)}{\text{#img}}$
in log-log plots.
Here, $tp(c)$, $fp(c)$, and $fn(c)$ denote the numbers of true positives, false positives, and false negatives, respectively, for a given confidence value $c$, i.e., only detections with a confidence greater than or equal to $c$ are taken into account.
As is common in object detection evaluation, the confidence threshold $c$ serves as the control variable. Decreasing $c$ admits more detections into the evaluation, resulting in more possible true or false positives and possibly fewer false negatives.
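As an illustration of this sweep, the following sketch computes the $mr$/$fppi$ curve from detections that have already been matched against the ground truth. The function name, signature, and data layout are ours for illustration and are not part of the official evaluation code:

```python
import numpy as np

def mr_fppi_curve(scores, is_tp, num_gt, num_images):
    """Miss-rate and false positives per image as the confidence
    threshold c is swept from high to low.

    scores     : confidence of every detection (after matching)
    is_tp      : per-detection flag, True if it matched a ground-truth box
    num_gt     : total number of evaluated ground-truth annotations
    num_images : number of evaluated images (#img)
    """
    order = np.argsort(-np.asarray(scores, dtype=float))  # decreasing confidence
    is_tp = np.asarray(is_tp, dtype=bool)[order]

    tp = np.cumsum(is_tp)        # tp(c): grows as c decreases
    fp = np.cumsum(~is_tp)       # fp(c): grows as c decreases
    fn = num_gt - tp             # fn(c): tp(c) + fn(c) = num_gt

    mr = fn / max(num_gt, 1)             # mr(c) = fn(c) / (tp(c) + fn(c))
    fppi = fp / float(num_images)        # fppi(c) = fp(c) / #img
    return mr, fppi
```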
We define the log-average miss-rate (LAMR) as follows, where the nine fppi reference points $f$ are equally spaced in log space:
$\DeclareMathOperator*{\argmax}{argmax}LAMR = \exp\left(\frac{1}{9}\sum\limits_{f} \log\left(mr\left(\argmax\limits_{c:\, fppi\left(c\right)\leq f} fppi\left(c\right)\right)\right)\right)$
For each fppi reference point $f$, the corresponding $mr$ value is used. If no miss-rate value exists for a given $f$, the highest existing fppi value is used as the reference point instead. This definition allows LAMR to be applied as a single detection performance indicator at image level.

For each image, the set of all detections is compared to the ground-truth annotations using a greedy matching algorithm. An object counts as detected (true positive) if the Intersection over Union (IoU) of the detection and the ground-truth bounding box exceeds a pre-defined threshold. Due to the highly non-rigid nature of pedestrians, we follow the common choice of an IoU threshold of 0.5. Since multiple matches to a single ground-truth annotation are not allowed, in the case of multiple matching detections the one with the highest score is selected and all others are considered false positives. After matching, all unmatched ground-truth annotations count as false negatives and all unmatched detections count as false positives.
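A minimal sketch of the log average itself, given the $(mr, fppi)$ samples from the previous snippet. The reference-point range of $10^{-2}$ to $10^{0}$ fppi and the fallback miss-rate of 1.0 for reference points that no operating point reaches are assumptions made for this illustration, not restated from the definition above:

```python
import numpy as np

def log_average_miss_rate(mr, fppi, ref_points=None):
    """LAMR over nine fppi reference points equally spaced in log space."""
    mr = np.asarray(mr, dtype=float)
    fppi = np.asarray(fppi, dtype=float)
    if ref_points is None:
        ref_points = np.logspace(-2.0, 0.0, num=9)   # assumed range 1e-2 ... 1e0

    selected = []
    for f in ref_points:
        below = fppi <= f
        if np.any(below):
            # mr at the largest fppi that does not exceed the reference point f
            selected.append(mr[below][np.argmax(fppi[below])])
        else:
            # Corner case not specified above; conservatively count a full miss.
            selected.append(1.0)

    # Log average = geometric mean of the selected miss-rates
    return float(np.exp(np.mean(np.log(np.maximum(selected, 1e-12)))))
```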
Neighboring classes and ignore regions are used during evaluation. Neighboring classes comprise entities that are semantically similar, for example bicycle and moped riders. Some applications might require their precise distinction (enforce), whereas others might not (ignore). In the latter case, correct/false detections of these classes are neither credited nor penalized during matching. Unless stated otherwise, neighboring classes are ignored in the evaluation. In addition to ignored neighboring classes, all person annotations with the tags behind glass or sitting-lying are treated as ignore regions. Further, as mentioned in Section 3.2 of the EuroCity Persons dataset publication, ignore regions are used where no precise bounding box annotation is possible (either because the objects are too small or because too many objects are in close proximity, which renders instance-based labeling infeasible). Since there is no precise information about the number or location of objects within an ignore region, unmatched detections that share an intersection of more than $0.5$ with such a region are not counted as false positives.
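For completeness, a simplified per-image matching sketch that follows the rules above: greedy matching in descending score order, an IoU threshold of 0.5, and unmatched detections discarded when they overlap an ignore region. Interpreting "an intersection of more than 0.5" as intersection over the detection's own area is our assumption, as is the (x1, y1, x2, y2) box format; neighboring-class handling is omitted and all names are illustrative:

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def match_image(det_boxes, det_scores, gt_boxes, ignore_regions, iou_thr=0.5):
    """Greedy matching for one image; returns (num_tp, num_fp, num_fn)."""
    order = np.argsort(-np.asarray(det_scores))   # highest-scoring detections first
    gt_used = [False] * len(gt_boxes)
    num_tp, num_fp = 0, 0

    for d in order:
        box = det_boxes[d]
        # Best still-unmatched ground truth whose IoU exceeds the threshold.
        best_iou, best_gt = iou_thr, -1
        for g, gt in enumerate(gt_boxes):
            if gt_used[g]:
                continue
            o = iou(box, gt)
            if o > best_iou:
                best_iou, best_gt = o, g
        if best_gt >= 0:
            gt_used[best_gt] = True      # each ground truth is matched at most once
            num_tp += 1
            continue

        # Unmatched detection: ignore it if most of its area lies in an ignore region
        # (assumed criterion: intersection / detection area > 0.5).
        x1, y1, x2, y2 = box
        det_area = max(1e-12, (x2 - x1) * (y2 - y1))
        in_ignore = any(
            max(0.0, min(x2, r[2]) - max(x1, r[0]))
            * max(0.0, min(y2, r[3]) - max(y1, r[1])) / det_area > 0.5
            for r in ignore_regions
        )
        if not in_ignore:
            num_fp += 1                  # counts as a false positive

    num_fn = len(gt_boxes) - sum(gt_used)   # unmatched ground truths
    return num_tp, num_fp, num_fn
```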
Note that submissions with a provided publication link and/or code will be prioritized in the list below (COMING SOON).
Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on
---|---|---|---|---|---|---|---|---|---
SPNet w cascade | Huawei Noah AI Th... | 0.042 | 0.095 | 0.216 | 0.139 | ImageNet | yes | yes | 2020-03-18 23:33:33
LSFM | Abdul Hannan Khan | 0.044 | 0.099 | 0.230 | 0.150 | ImageNet, TJU-DHD... | yes | yes | 2022-10-16 16:15:54
Pedestron | IIAI, UAE | 0.051 | 0.112 | 0.254 | 0.162 | ImageNet | yes | yes | 2020-03-09 11:56:49
APD | Anonymous | 0.053 | 0.124 | 0.268 | 0.173 | ImageNet | yes | no | 2020-05-08 05:49:20
SPNet w FPN | Huawei Noah AI Th... | 0.055 | 0.121 | 0.246 | 0.165 | ImageNet | yes | yes | 2019-10-15 09:44:33
Pedestrian2 | Hongsong Wang | 0.056 | 0.126 | 0.266 | 0.171 | ImageNet | no | yes | 2019-11-06 07:07:40
DAGN | DSLab | 0.059 | 0.142 | 0.263 | 0.175 | ImageNet | yes | no | 2021-07-01 06:28:00
Real-time Pedestr... | Irtiza and LiJinp... | 0.066 | 0.136 | 0.313 | 0.193 | ImageNet | yes | yes | 2020-01-13 10:18:58
Irtiza and LiJinp... | Irtiza Hasan | 0.086 | 0.168 | 0.379 | 0.230 | ImageNet | yes | yes | 2019-12-04 12:29:36
YOLOv3 | ECP Team | 0.097 | 0.186 | 0.401 | 0.242 | ImageNet | yes | no | 2019-04-01 17:08:05
Faster R-CNN | ECP Team | 0.101 | 0.196 | 0.381 | 0.251 | ImageNet | yes | no | 2019-04-01 17:06:33
F2DNet | Abdul Hannan Khan | 0.107 | 0.175 | 0.387 | 0.261 | ImageNet | yes | yes | 2021-12-29 18:23:11
Pedestron | Jannes Scholz | 0.121 | 0.215 | 0.524 | 0.285 | | no | no | 2024-01-05 10:45:22
SSD | ECP Team | 0.131 | 0.235 | 0.460 | 0.296 | ImageNet | yes | no | 2019-04-02 13:56:14
Torchvision Faste... | Attila Lengyel | 0.141 | 0.296 | 0.439 | 0.309 | ImageNet | yes | yes | 2020-04-21 15:31:31
R-FCN (with OHEM) | ECP Team | 0.163 | 0.245 | 0.507 | 0.330 | ImageNet | yes | no | 2019-04-01 17:10:03
YOLOv3_640 | HUI_Tsinghua-Daim... | 0.273 | 0.564 | 0.623 | 0.456 | | no | no | 2019-05-17 04:56:27
YOLOv3-spp | Surromind . | 0.425 | 0.679 | 0.755 | 0.586 | ImageNet | no | no | 2019-11-13 10:39:07
YOLOv3 | Surromind . | 0.699 | 0.916 | 0.877 | 0.789 | ImageNet | no | no | 2019-11-05 05:21:32

Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on
---|---|---|---|---|---|---|---|---|---
SPNet w cascade | Huawei Noah AI Th... | 0.066 | 0.119 | 0.231 | 0.131 | ImageNet | yes | yes | 2020-03-18 23:33:33
Pedestrian2 | Hongsong Wang | 0.071 | 0.127 | 0.244 | 0.140 | ImageNet | no | yes | 2019-11-06 07:07:40
SPNet w FPN | Huawei Noah AI Th... | 0.090 | 0.172 | 0.292 | 0.170 | ImageNet | yes | yes | 2019-10-15 09:44:33
Pedestron (retrai... | Anonymous | 0.096 | 0.158 | 0.275 | 0.162 | ImageNet, Wider P... | no | no | 2021-09-06 08:51:13
FasterRCNN with M... | Anonymous | 0.150 | 0.253 | 0.653 | 0.295 | ImageNet | no | no | 2019-07-08 08:48:13
Faster R-CNN | ECP Team | 0.201 | 0.359 | 0.701 | 0.358 | ImageNet | yes | no | 2019-05-02 10:10:01