EuroCity Persons Dataset

Evaluation Metric

To evaluate detection performance, we plot the miss-rate $mr(c) = \frac{fn(c)}{tp(c) + fn(c)}$ against the number of false positives per image $fppi(c)=\frac{fp(c)}{\text{#img}}$ in log-log plots. Here $tp(c)$, $fp(c)$, and $fn(c)$ denote the numbers of true positives, false positives, and false negatives, respectively, for a given confidence value $c$, such that only detections with a confidence value greater than or equal to $c$ are taken into account. As commonly done in object detection evaluation, the confidence threshold $c$ serves as a control variable: decreasing $c$ admits more detections into the evaluation, resulting in more possible true or false positives and possibly fewer false negatives.
We define the log-average miss-rate (LAMR) as follows, where the 9 $fppi$ reference points are equally spaced in log space:
$\DeclareMathOperator*{\argmax}{argmax}LAMR = \exp\left(\frac{1}{9}\sum\limits_f \log\, mr\left(\argmax\limits_{c:\,fppi(c)\leq f} fppi\left(c\right)\right)\right)$
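The averaging above can be sketched in code. The following is a minimal illustration, not the official evaluation script: it assumes the $(fppi, mr)$ curve points are given sorted by increasing fppi, and it uses the reference range $10^{-2}$ to $10^{0}$, which is the common convention for this kind of log-spaced averaging (an assumption on our part).

```python
import math

def lamr(fppi, mr, refs=None):
    """Log-average miss-rate over 9 fppi reference points.

    fppi, mr: parallel lists tracing the detection curve, assumed
    sorted by increasing fppi (i.e. decreasing confidence threshold c).
    refs: reference points; defaults to 9 points equally spaced in log
    space over 1e-2 .. 1e0 (an assumed, but common, range).
    """
    if refs is None:
        refs = [10 ** (-2 + 0.25 * i) for i in range(9)]
    logs = []
    for f in refs:
        # argmax over c of fppi(c) subject to fppi(c) <= f:
        # the curve point with the largest fppi not exceeding f.
        candidates = [m for fp_, m in zip(fppi, mr) if fp_ <= f]
        # If no curve point lies at or below f, fall back to the point
        # with the smallest fppi (an assumption for this sketch).
        m = candidates[-1] if candidates else mr[0]
        logs.append(math.log(m))
    return math.exp(sum(logs) / len(logs))
```

Note that for reference points beyond the largest fppi reached by the curve, the point with the highest existing fppi is used, matching the fallback rule described below.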

For each fppi reference point $f$, the corresponding mr value is used. If no miss-rate value exists for a given $f$, the highest existing fppi value is used as the new reference point. This definition enables LAMR to be applied as a single detection performance indicator. Evaluation is performed at image level: at each image, the set of all detections is compared to the ground-truth annotations using a greedy matching algorithm. An object is considered detected (a true positive) if the Intersection over Union (IoU) of the detection and ground-truth bounding boxes exceeds a pre-defined threshold. Due to the high non-rigidity of pedestrians, we follow the common choice of an IoU threshold of 0.5. Since multiple matches to one ground-truth annotation are not allowed, in the case of multiple matching detections the one with the largest score is selected, while all other matching detections are counted as false positives. After matching, all unmatched ground-truth annotations and detections count as false negatives and false positives, respectively.
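The greedy matching scheme for a single image can be sketched as follows. This is an illustrative re-implementation, not the official ECP evaluation code; ignore-region handling (described below) is omitted here.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def greedy_match(detections, gt_boxes, iou_thr=0.5):
    """Greedy matching for one image.

    detections: list of (score, box). Each ground-truth box is matched
    at most once; among competing detections the one with the highest
    score wins, which the greedy score-ordered loop enforces.
    Returns (num_tp, num_fp, num_fn).
    """
    matched = [False] * len(gt_boxes)
    tp = fp = 0
    for score, box in sorted(detections, key=lambda d: -d[0]):
        best, best_iou = None, iou_thr  # match requires IoU >= threshold
        for j, g in enumerate(gt_boxes):
            if matched[j]:
                continue
            o = iou(box, g)
            if o >= best_iou:
                best, best_iou = j, o
        if best is not None:
            matched[best] = True
            tp += 1
        else:
            fp += 1
    fn = matched.count(False)  # unmatched ground truth -> false negatives
    return tp, fp, fn
```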

Neighboring classes and ignore regions are used during evaluation. Neighboring classes are entities that are semantically similar, for example bicycle and moped riders. Some applications might require their precise distinction (enforce), whereas others might not (ignore). In the latter case, correct/false detections on them are not credited/penalized during matching. Unless stated otherwise, neighboring classes are ignored in the evaluation. In addition to ignored neighboring classes, all person annotations with the tags behind glass or sitting-lying are treated as ignore regions. Further, as mentioned in Section 3.2 of the EuroCity Persons dataset publication, ignore regions are used for cases where no precise bounding box annotation is possible (either because the objects are too small or because there are too many objects in close proximity, which renders instance-based labeling infeasible). Since there is no precise information about the number or location of objects in an ignore region, all unmatched detections which share an intersection of more than $0.5$ with these regions are not counted as false positives.
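The ignore-region rule can be sketched like this. Note one assumption: the text does not spell out how the $0.5$ intersection is normalized, so this sketch uses intersection over detection area, a common convention for ignore regions.

```python
def ignore_fraction(det, region):
    """Fraction of the detection box covered by an ignore region.

    Boxes are (x1, y1, x2, y2). The 0.5 criterion is applied here to
    intersection / detection area -- a common convention, but an
    assumption, since the normalization is not stated explicitly.
    """
    ix1, iy1 = max(det[0], region[0]), max(det[1], region[1])
    ix2, iy2 = min(det[2], region[2]), min(det[3], region[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    det_area = (det[2] - det[0]) * (det[3] - det[1])
    return inter / det_area if det_area else 0.0

def filter_false_positives(unmatched_dets, ignore_regions, thr=0.5):
    """Keep only unmatched detections that do NOT fall into an ignore
    region; the dropped ones are simply not counted as false positives."""
    return [d for d in unmatched_dets
            if not any(ignore_fraction(d, r) > thr for r in ignore_regions)]
```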

Leaderboard

Day

| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Submitted on |
|---|---|---|---|---|---|---|---|
| HRNet | Hongsong Wang | 0.061 | 0.138 | 0.287 | 0.183 | ImageNet | 2019-08-05 17:11:04 |
| Faster R-CNN | ECP Team | 0.101 | 0.196 | 0.381 | 0.251 | ImageNet | 2019-04-01 17:06:33 |
| YOLOv3 | ECP Team | 0.097 | 0.186 | 0.401 | 0.242 | ImageNet | 2019-04-01 17:08:05 |
| SSD | ECP Team | 0.131 | 0.235 | 0.460 | 0.296 | ImageNet | 2019-04-02 13:56:14 |
| R-FCN (with OHEM) | ECP Team | 0.163 | 0.245 | 0.507 | 0.330 | ImageNet | 2019-04-01 17:10:03 |
| YOLOv3_640 | HUI_Tsinghua-Daim... | 0.273 | 0.564 | 0.623 | 0.456 | | 2019-05-17 04:56:27 |

Night

| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Submitted on |
|---|---|---|---|---|---|---|---|
| HRNet | Hongsong Wang | 0.079 | 0.156 | 0.265 | 0.153 | ImageNet | 2019-08-05 17:11:04 |
| FasterRCNN with M... | Qihua Cheng | 0.150 | 0.253 | 0.653 | 0.295 | ImageNet | 2019-07-08 08:48:13 |
| Faster R-CNN | ECP Team | 0.201 | 0.359 | 0.701 | 0.358 | ImageNet | 2019-05-02 10:10:01 |