fix!: Fix stepwise performance logging#640
vkakerbeck wants to merge 8 commits into thousandbrainsproject:main
Conversation
note: It makes me slightly nervous to modify semantic_3d to do this, but I couldn't find anywhere in tbp.monty or my experiment code that distinguishes between nonzero values when use_semantic_sensor is False. And other solutions seem pretty complicated, so I get it.
suggestion: Drop a line in the DepthTo3DTransforms indicating this behavior. Maybe line 387.
thought: I bet habitat returns uint8 for semantic. It's a long shot, but 10_000 > 2**8, so I did wonder whether there's anywhere it could cause overflow. I don't think so though.
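The wraparound the comment above is worried about is easy to demonstrate in NumPy (a standalone illustration, not code from tbp.monty):

```python
import numpy as np

SENTINEL = 10_000  # the large "not a real object" value this PR introduces

# If the semantic map ever came back as uint8 (max value 255), casting the
# sentinel into it would silently wrap modulo 256 rather than raise an error:
wrapped = np.array([SENTINEL], dtype=np.int64).astype(np.uint8)
print(int(wrapped[0]))  # 10_000 % 256 == 16

# A wider integer dtype holds the sentinel without loss:
safe = np.array([SENTINEL], dtype=np.int64)
print(int(safe[0]))  # 10_000
```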
Yes, I know. I looked into a lot of other options but the
issue: This will become a problem in the future, as it assumes that 10 000 is large enough. When it isn't, we are back to the same problem this pull request is intended to fix. Can we find a permanent solution?
note: I'm still having trouble finding where the logging is going wrong. If the desired state is not to log "confused" when the
Currently, we basically set all on-object pixels to semantic ID 1 (the ID of the first object in the list) when we don't use the semantic sensor. Setting it to a value other than 1 (and one that is not defined for one of the other objects) is already an improvement. I agree that it would be the cleanest to have a check like
For a while now, we've been logging "confused" in the stepwise performance column whenever no semantic sensor is used. This happened because we would set the on-object map (semantic) to 0's and 1's when we have no semantic sensor data (which became the default last year when @scottcanoe cleaned up that code for the DMC paper). However, when the stepwise performance logging code looked at those values, it would interpret the 1's as object ID 1 (i.e., the first object in the dataset). So unless the LM actually recognized object 1, it would log "confused" (and it incorrectly logged "correct" when the LM's hypothesis was object 1 but the target wasn't actually the first object).
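A minimal sketch of the failure mode described above (all names here are illustrative, not the actual tbp.monty implementation):

```python
# Hypothetical mapping from semantic IDs to object labels; in the real
# setup this would come from the dataset configuration.
semantic_id_to_label = {1: "mug", 2: "bowl", 3: "spoon"}

def stepwise_performance(mlh_object: str, semantic_id: int) -> str:
    """Compare the LM's most-likely hypothesis against the semantic label."""
    target = semantic_id_to_label.get(semantic_id, "no_label")
    if target == "no_label":
        return "no_label"
    return "correct" if mlh_object == target else "confused"

# Without a semantic sensor, every on-object pixel used to carry ID 1,
# so the logger always compared against object 1 ("mug" here):
print(stepwise_performance("bowl", 1))  # "confused", even if "bowl" is the true target
print(stepwise_performance("mug", 1))   # "correct", even if the true target is "bowl"
```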
This PR fixes this issue (although maybe not in the most beautiful way) by setting the semantic values to a large number that wouldn't be in the semantic_id_to_label dict (setting it to np.inf doesn't work for various reasons, and a negative value seemed like it would introduce more confusion). I also added a line that correctly logs the overall no_label performance (which previously didn't happen). @hlee9212 it would be nice to use this updated version in your demo so people aren't confused about why the .csv files show "confused" in the second column when Monty correctly recognized the object. I'll try to merge the PR before then.
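Under the same hypothetical names, the fix amounts to choosing a sentinel ID that can never collide with a real object, so the lookup falls through to no_label (a sketch of the idea, not the actual diff):

```python
# Sentinel chosen to be far outside any realistic object-ID range (the PR
# notes np.inf doesn't work and negative values would add confusion).
NO_SEMANTIC_SENSOR_ID = 10_000

semantic_id_to_label = {1: "mug", 2: "bowl", 3: "spoon"}  # hypothetical mapping

# The lookup now falls through to the default instead of matching object 1:
label = semantic_id_to_label.get(NO_SEMANTIC_SENSOR_ID, "no_label")
print(label)  # "no_label"
```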
I spot-checked benchmarks, but since this only affects the stepwise performance logging, which we don't report in the benchmarks, it's not expected to have any effect.