Description
Since we are re-implementing papers using clean-room practices, we need a reliable way to evaluate our trackers and confirm that they perform as expected. An evaluation framework that runs our trackers against common benchmarks would be useful here. It could also be designed so that other trackers and detectors can plug in, making it easy to compare different trackers and detectors side by side. Finally, building this would give us a good sense of how to structure dataloaders for when we implement a training framework for ReID models, or even end-to-end deep learning trackers. A rough interface sketch is below.
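
As a starting point for discussion, here is a minimal sketch of what the plug-in surface could look like. The names (`Detector`, `Tracker`, `Frame`, `evaluate`) and the box formats are hypothetical, not a committed design; real benchmark metrics (e.g. MOTA/IDF1 on MOTChallenge-style data) would replace the toy statistics in `evaluate`.

```python
# Hypothetical sketch of a pluggable evaluation interface. Class and method
# names here are placeholders to illustrate how trackers, detectors, and
# dataloaders could fit together, not a final API.
from dataclasses import dataclass
from typing import Iterable, Protocol

import numpy as np


@dataclass
class Frame:
    image: np.ndarray     # HxWx3 image
    gt_boxes: np.ndarray  # (N, 4) ground-truth boxes, xyxy
    gt_ids: np.ndarray    # (N,) ground-truth track ids


class Detector(Protocol):
    def detect(self, image: np.ndarray) -> np.ndarray:
        """Return (M, 5) detections as [x1, y1, x2, y2, score]."""
        ...


class Tracker(Protocol):
    def update(self, detections: np.ndarray) -> np.ndarray:
        """Return (K, 5) tracks as [x1, y1, x2, y2, track_id]."""
        ...


def evaluate(tracker: Tracker, detector: Detector,
             sequence: Iterable[Frame]) -> dict:
    """Run one tracker over one sequence and accumulate simple stats.

    A real implementation would accumulate benchmark metrics
    (MOTA, IDF1, HOTA, ...) here; this only shows the plug-in shape.
    """
    num_frames = 0
    num_tracks = 0
    for frame in sequence:
        detections = detector.detect(frame.image)
        tracks = tracker.update(detections)
        num_frames += 1
        num_tracks += len(tracks)
    return {
        "frames": num_frames,
        "avg_tracks_per_frame": num_tracks / max(num_frames, 1),
    }
```

Keeping the tracker behind an `update(detections)`-style interface (and the detector behind `detect(image)`) would let us swap in cached detections, other people's trackers, or our own re-implementations without touching the evaluation loop, and the `Frame`/sequence iterable is roughly the shape a future training dataloader would need anyway.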