Description
I was reading through the paper again and noticed an interesting passage in section 2.3, where they describe the matching cascade alongside pseudocode for the algorithm:
"
Therefore, we introduce a matching cascade that gives priority to more frequently seen objects to encode our notion of probability spread in the association likelihood. Listing 1 outlines our matching algorithm. As input we provide the set of track T and detection D indices as well as the maximum age
"
This makes it sound like, instead of strictly sorting by lowest cost, the paper first sorts tracks by time since last match (which they call track age) and then runs matching once per age group. First it computes the cost matrix for the tracks that were matched in the previous frame and matches them against the detections; then it moves on to the tracks that have been lost for one frame, computes a new cost matrix against the leftover detections, matches again, and so on. I am not sure we want to implement this, but I think that is what the paper says.
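In rough Python, the cascade from Listing 1 could be sketched like this. This is only a hedged illustration: `greedy_match` is a simple stand-in for the paper's min-cost matching step (the paper uses the Hungarian algorithm), and the cost dictionary, gate threshold, and ID conventions are all assumptions, not the paper's actual interfaces.

```python
def greedy_match(cost, track_ids, det_ids, gate=0.7):
    """Stand-in for min-cost matching: repeatedly take the cheapest
    (track, detection) pair whose cost is under the gating threshold."""
    matches = []
    used_t, used_d = set(), set()
    pairs = sorted(
        ((cost[(t, d)], t, d) for t in track_ids for d in det_ids),
        key=lambda x: x[0],
    )
    for c, t, d in pairs:
        if c <= gate and t not in used_t and d not in used_d:
            matches.append((t, d))
            used_t.add(t)
            used_d.add(d)
    return matches

def matching_cascade(cost, tracks_by_age, det_ids, max_age):
    """Match the most recently updated track groups first; detections
    left unmatched by a group cascade down to the next older group."""
    matches = []
    unmatched_dets = list(det_ids)
    for age in range(1, max_age + 1):  # age 1 = matched in the last frame
        track_ids = tracks_by_age.get(age, [])
        if not track_ids or not unmatched_dets:
            continue
        group_matches = greedy_match(cost, track_ids, unmatched_dets)
        matches += group_matches
        matched_d = {d for _, d in group_matches}
        unmatched_dets = [d for d in unmatched_dets if d not in matched_d]
    return matches, unmatched_dets
```

The key effect is that a fresher track can claim a detection even when a staler track would have a lower cost, which is exactly the priority the quoted passage describes.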
If we did implement it, I think the best approach would be to change `_get_associated_inds` to loop over the groups of tracks that share the same `time_since_update`, doing the distance calculation and matching separately for each group. Within each group, tracks that don't get matched are marked lost, and detections that don't get matched are passed on to the next older group of tracks.
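To make the proposed change concrete, here is a rough sketch of the grouping-by-`time_since_update` idea. Everything here is hypothetical: the `Track` class, the `distance` callable, and the greedy per-group matcher are stand-ins, not the project's actual `_get_associated_inds` API.

```python
from collections import defaultdict

class Track:
    """Minimal hypothetical track: an ID and frames since last match."""
    def __init__(self, tid, time_since_update):
        self.tid = tid
        self.time_since_update = time_since_update
        self.lost = False

def match_group(tracks, detections, distance, gate=0.7):
    """Greedy nearest-cost matching within one age group."""
    matches, free_dets = [], list(detections)
    for trk in tracks:
        if not free_dets:
            break
        best = min(free_dets, key=lambda d: distance(trk, d))
        if distance(trk, best) <= gate:
            matches.append((trk.tid, best))
            free_dets.remove(best)
    return matches, free_dets

def cascade_associate(tracks, detections, distance, max_age=30):
    """Group tracks by time_since_update, match freshest groups first,
    and hand leftover detections down to the next older group."""
    groups = defaultdict(list)
    for trk in tracks:
        groups[trk.time_since_update].append(trk)
    all_matches, remaining = [], list(detections)
    for age in sorted(groups):  # smallest time_since_update first
        if age > max_age:
            break
        matches, remaining = match_group(groups[age], remaining, distance)
        matched_ids = {tid for tid, _ in matches}
        for trk in groups[age]:
            # per the proposal: unmatched tracks in a group are marked lost
            trk.lost = trk.tid not in matched_ids
        all_matches += matches
    return all_matches, remaining
```

One design note: only the detections cascade between groups; tracks stay in their own age group, so a track that loses out within its group does not compete again against older tracks.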
Originally posted by @rolson24 in #17 (review)