Evaluation Metric

To evaluate detection performance, we plot the miss-rate $mr(c) = \frac{fn(c)}{tp(c) + fn(c)}$ against the number of false positives per image $fppi(c)=\frac{fp(c)}{\text{#img}}$ in log-log plots. Here $tp(c)$, $fp(c)$, and $fn(c)$ denote the numbers of true positives, false positives, and false negatives, respectively, for a given confidence value $c$, where only detections with a confidence value greater than or equal to $c$ are taken into account. As is common in object detection evaluation, the confidence threshold $c$ serves as a control variable: decreasing $c$ admits more detections into the evaluation, which can yield more true or false positives and possibly fewer false negatives.
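As an illustration, the whole mr/fppi curve can be obtained in one pass by sweeping the threshold over the detection scores in descending order. This is a minimal NumPy sketch; the function and argument names (`mr_fppi`, `is_tp`, `num_gt`, `num_images`) are illustrative, not part of the benchmark's tooling:

```python
import numpy as np

def mr_fppi(scores, is_tp, num_gt, num_images):
    """Miss-rate and fppi for every confidence threshold.

    scores     : detection confidences (one per detection)
    is_tp      : bool per detection, True if it matched a ground truth
    num_gt     : total number of ground-truth objects
    num_images : number of evaluated images (#img)
    """
    flags = np.asarray(is_tp, dtype=float)
    order = np.argsort(-np.asarray(scores))   # sweep threshold downwards
    tp = np.cumsum(flags[order])              # true positives at each threshold
    fp = np.cumsum(1.0 - flags[order])        # false positives at each threshold
    fn = num_gt - tp                          # unmatched ground truths
    mr = fn / num_gt                          # fn / (tp + fn), since tp + fn = num_gt
    fppi = fp / num_images                    # false positives per image
    return mr, fppi
```

Each position in the returned arrays corresponds to lowering the threshold $c$ past one more detection, so the curve is monotone in fppi.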
We define the log-average miss-rate (LAMR) as follows, where the 9 fppi reference points $f$ are equally spaced in log space:
$\DeclareMathOperator*{\argmax}{argmax}LAMR = \exp\left(\frac{1}{9}\sum\limits_f \log\left(mr(\argmax\limits_{fppi\left(c\right)\leq f} fppi\left(c\right))\right)\right)$
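Under the common assumption that the reference points span $[10^{-2}, 10^{0}]$ (the text only states that they are equally spaced in log space), the formula can be sketched as follows; the handling of reference points with no curve point at or below them is an assumption of this sketch:

```python
import numpy as np

def lamr(mr, fppi, ref_points=None):
    """Log-average miss-rate over 9 fppi reference points.

    mr, fppi : curve values ordered by decreasing confidence threshold
               (fppi ascending). The [1e-2, 1e0] range of the reference
               points is a common convention, assumed here.
    """
    mr, fppi = np.asarray(mr), np.asarray(fppi)
    if ref_points is None:
        ref_points = np.logspace(-2, 0, 9)    # 9 points, log-spaced
    logs = []
    for f in ref_points:
        below = np.where(fppi <= f)[0]
        if below.size:
            # mr at the largest fppi not exceeding f (the argmax in the formula)
            logs.append(np.log(mr[below[np.argmax(fppi[below])]]))
        else:
            # no curve point at or below f: fall back to the lowest
            # available fppi point (assumption made by this sketch)
            logs.append(np.log(mr[np.argmin(fppi)]))
    return np.exp(np.mean(logs))
```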

For each fppi reference point, the corresponding mr value is used. If no miss-rate value exists for a given $f$, the highest available fppi value is used as the reference point instead. This definition allows LAMR to serve as a single detection performance indicator at image level. For each image, the set of all detections is compared to the ground-truth annotations using a greedy matching algorithm. An object is considered detected (true positive) if the Intersection over Union (IoU) of the detection and ground-truth bounding boxes exceeds a pre-defined threshold. Because pedestrians are highly non-rigid, we follow the common choice of an IoU threshold of 0.5. Multiple matches to one ground-truth annotation are not allowed: in the case of multiple matches, the detection with the highest score is selected, and all other matching detections are counted as false positives. After matching, all unmatched ground-truth annotations count as false negatives, and all unmatched detections count as false positives.
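The greedy matching step can be sketched as below. This is an illustrative implementation, not the benchmark's reference code; boxes are assumed to be `(x1, y1, x2, y2)` tuples:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def greedy_match(dets, gts, iou_thr=0.5):
    """Greedy matching: detections, in descending score order, each claim
    the best still-unmatched ground truth with IoU >= iou_thr.

    dets : list of (score, box); gts : list of boxes.
    Returns (tp, fp, fn) for one image.
    """
    matched = [False] * len(gts)
    tp = fp = 0
    for score, box in sorted(dets, key=lambda d: -d[0]):
        best, best_iou = -1, iou_thr
        for j, g in enumerate(gts):
            if not matched[j]:
                o = iou(box, g)
                if o >= best_iou:
                    best, best_iou = j, o
        if best >= 0:
            matched[best] = True   # ground truth may be claimed only once
            tp += 1
        else:
            fp += 1                # duplicate or unmatched detection
    fn = matched.count(False)      # ground truths nobody claimed
    return tp, fp, fn
```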

Neighboring classes and ignore regions are used during evaluation. Neighboring classes are entities that are semantically similar, for example bicycle and moped riders. Some applications may require their precise distinction (enforce), whereas others may not (ignore). In the latter case, correct/false detections of the neighboring class are neither credited nor penalized during matching. Unless stated otherwise, neighboring classes are ignored in the evaluation. In addition to ignored neighboring classes, all person annotations with the tags behind glass or sitting-lying are treated as ignore regions. Further, as mentioned in Section 3.2 of the EuroCity Persons dataset publication, ignore regions are used where no precise bounding box annotation is possible (either because the objects are too small or because too many objects in close proximity render instance-based labeling infeasible). Since there is no precise information about the number or location of objects in an ignore region, all unmatched detections which share an intersection of more than $0.5$ with these regions are not counted as false positives.
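The ignore-region check can be sketched as follows. Interpreting "an intersection of more than 0.5" as intersection over the detection's own area is an assumption of this sketch (the text does not spell out the normalization):

```python
def in_ignore_region(det_box, ignore_boxes, thr=0.5):
    """True if more than `thr` of the detection's area lies inside any
    ignore region, in which case the detection is not counted as a
    false positive. Boxes are (x1, y1, x2, y2).

    Note: the intersection-over-detection-area criterion is an
    assumption; the source only states 'an intersection of more than 0.5'.
    """
    dx1, dy1, dx2, dy2 = det_box
    det_area = (dx2 - dx1) * (dy2 - dy1)
    for ix1, iy1, ix2, iy2 in ignore_boxes:
        w = max(0.0, min(dx2, ix2) - max(dx1, ix1))
        h = max(0.0, min(dy2, iy2) - max(dy1, iy1))
        if det_area > 0 and (w * h) / det_area > thr:
            return True
    return False
```

Unmatched detections for which this check returns `True` would simply be dropped before counting false positives.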

Note that submissions with a provided publication link and/or code will be prioritized in the list below (COMING SOON).



19 records found
| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
|---|---|---|---|---|---|---|---|---|---|
| YOLOv3_640 | HUI_Tsinghua-Daim... | 0.273 | 0.564 | 0.623 | 0.456 | | no | no | 2019-05-17 04:56:27 |
| Pedestron | Jannes Scholz | 0.121 | 0.215 | 0.524 | 0.285 | | no | no | 2024-01-05 10:45:22 |
| Faster R-CNN | ECP Team | 0.101 | 0.196 | 0.381 | 0.251 | ImageNet | yes | no | 2019-04-01 17:06:33 |
| YOLOv3 | ECP Team | 0.097 | 0.186 | 0.401 | 0.242 | ImageNet | yes | no | 2019-04-01 17:08:05 |
| R-FCN (with OHEM) | ECP Team | 0.163 | 0.245 | 0.507 | 0.330 | ImageNet | yes | no | 2019-04-01 17:10:03 |
| SSD | ECP Team | 0.131 | 0.235 | 0.460 | 0.296 | ImageNet | yes | no | 2019-04-02 13:56:14 |
| SPNet w FPN | Huawei Noah AI Th... | 0.055 | 0.121 | 0.246 | 0.165 | ImageNet | yes | yes | 2019-10-15 09:44:33 |
| YOLOv3 | Surromind . | 0.699 | 0.916 | 0.877 | 0.789 | ImageNet | no | no | 2019-11-05 05:21:32 |
| Pedestrian2 | Hongsong Wang | 0.056 | 0.126 | 0.266 | 0.171 | ImageNet | no | yes | 2019-11-06 07:07:40 |
| YOLOv3-spp | Surromind . | 0.425 | 0.679 | 0.755 | 0.586 | ImageNet | no | no | 2019-11-13 10:39:07 |
| Irtiza and LiJinp... | Irtiza Hasan | 0.086 | 0.168 | 0.379 | 0.230 | ImageNet | yes | yes | 2019-12-04 12:29:36 |
| Real-time Pedestr... | Irtiza and LiJinp... | 0.066 | 0.136 | 0.313 | 0.193 | ImageNet | yes | yes | 2020-01-13 10:18:58 |
| Pedestron | IIAI, UAE | 0.051 | 0.112 | 0.254 | 0.162 | ImageNet | yes | yes | 2020-03-09 11:56:49 |
| SPNet w cascade | Huawei Noah AI Th... | 0.042 | 0.095 | 0.216 | 0.139 | ImageNet | yes | yes | 2020-03-18 23:33:33 |
| Torchvision Faste... | Attila Lengyel | 0.141 | 0.296 | 0.439 | 0.309 | ImageNet | yes | yes | 2020-04-21 15:31:31 |
| APD | Anonymous | 0.053 | 0.124 | 0.268 | 0.173 | ImageNet | yes | no | 2020-05-08 05:49:20 |
| DAGN | DSLab | 0.059 | 0.142 | 0.263 | 0.175 | ImageNet | yes | no | 2021-07-01 06:28:00 |
| F2DNet | Abdul Hannan Khan | 0.107 | 0.175 | 0.387 | 0.261 | ImageNet | yes | yes | 2021-12-29 18:23:11 |
| LSFM | Abdul Hannan Khan | 0.044 | 0.099 | 0.230 | 0.150 | ImageNet, TJU-DHD... | yes | yes | 2022-10-16 16:15:54 |


6 records found
| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
|---|---|---|---|---|---|---|---|---|---|
| Faster R-CNN | ECP Team | 0.201 | 0.359 | 0.701 | 0.358 | ImageNet | yes | no | 2019-05-02 10:10:01 |
| FasterRCNN with M... | Anonymous | 0.150 | 0.253 | 0.653 | 0.295 | ImageNet | no | no | 2019-07-08 08:48:13 |
| SPNet w FPN | Huawei Noah AI Th... | 0.090 | 0.172 | 0.292 | 0.170 | ImageNet | yes | yes | 2019-10-15 09:44:33 |
| Pedestrian2 | Hongsong Wang | 0.071 | 0.127 | 0.244 | 0.140 | ImageNet | no | yes | 2019-11-06 07:07:40 |
| SPNet w cascade | Huawei Noah AI Th... | 0.066 | 0.119 | 0.231 | 0.131 | ImageNet | yes | yes | 2020-03-18 23:33:33 |
| Pedestron (retrai... | Anonymous | 0.096 | 0.158 | 0.275 | 0.162 | ImageNet, Wider P... | no | no | 2021-09-06 08:51:13 |