Hi. I'm attempting to integrate a face detection and recognition engine into an Nx Witness plugin. Both are remote services: detection occurs first, and recognition follows later. The Nx Witness client/server version is 18.104.22.168840.
I'd appreciate some best-practice guidelines, or pointers to what I'm doing wrong, as the current results are not great.
What I'd like to do:
- Live display of a tracking box for each detected face
- Live display of the person's identity, if known
- Ability to filter objects down to just the recognised faces
- A filterable object for each recognised face (see the manifest sketch after this list)
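For context, here's roughly how I declare the two object types. This is a minimal sketch assuming the JSON manifest format of the current open-source Metadata SDK; the "myCompany.faces.*" ids and the titles are placeholders, not the real identifiers from my plugin:

```cpp
// Engine manifest sketch. Declaring the two object types separately is what
// makes recognised faces filterable on their own, and the "name" field is the
// title the client shows for tracks of that type.
std::string Engine::manifestString() const
{
    return /*suppress newline*/ 1 + (const char*) R"json(
{
    "objectTypes": [
        {
            "id": "myCompany.faces.faceDetectedObject",
            "name": "Face Detected"
        },
        {
            "id": "myCompany.faces.faceRecognisedObject",
            "name": "Face Recognised"
        }
    ]
}
)json";
}
```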
This is what I'm currently doing:
- Face-tracking metadata packets are provided through pullMetadataPackets(). The timestamps come from the video stream and are converted from milliseconds to microseconds. If the track has been recognised, a "FaceRecognisedObject" type is used; otherwise a "FaceDetectedObject" type is used (a packet-building sketch follows this list).
- When a face recognition occurs, I match all of the track's previous tracking information and resubmit it with the additional attributes (e.g. the person's name), and I add a single best-shot image.
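Roughly, each packet is built like the sketch below. This assumes the helper classes (ObjectMetadataPacket, ObjectMetadata, Attribute) from the current open-source Metadata SDK, so names may differ on the server version above; the type ids are the placeholders from the manifest sketch, and makeFacePacket is just an illustrative helper:

```cpp
#include <string>

#include <nx/sdk/ptr.h>
#include <nx/sdk/uuid.h>
#include <nx/sdk/helpers/attribute.h>
#include <nx/sdk/analytics/helpers/object_metadata.h>
#include <nx/sdk/analytics/helpers/object_metadata_packet.h>

using namespace nx::sdk;
using namespace nx::sdk::analytics;

// Illustrative helper: builds one metadata packet for one tracked face.
Ptr<ObjectMetadataPacket> makeFacePacket(
    const Uuid& trackId,
    int64_t frameTimestampMs, // taken from the video stream, in milliseconds
    const Rect& box, // bounding box, normalized to [0..1]
    const std::string* personName) // non-null once the face has been recognised
{
    auto packet = makePtr<ObjectMetadataPacket>();
    packet->setTimestampUs(frameTimestampMs * 1000); // SDK expects microseconds
    packet->setDurationUs(0);

    auto face = makePtr<ObjectMetadata>();
    face->setTrackId(trackId);
    face->setBoundingBox(box);
    if (personName)
    {
        // Recognised: switch the type id and attach the identity attribute.
        face->setTypeId("myCompany.faces.faceRecognisedObject");
        face->addAttribute(makePtr<Attribute>(
            IAttribute::Type::string, "Person Name", *personName));
    }
    else
    {
        face->setTypeId("myCompany.faces.faceDetectedObject");
    }

    packet->addItem(face.get());
    return packet;
}
```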
There are at least a few issues:
- The bounding boxes flicker. Is it OK to extend the duration of the object metadata to cover the frames where tracking data is not available, or should new dummy metadata packets be created for those frames? (A sketch of the dummy-packet option follows this list.)
- The objects in the list all show the title of the "FaceDetectedObject" type, even though the "FaceRecognisedObject" metadata has been added; is this expected behaviour?
- Best-shot images don't always show for recognised faces (a sketch of how I submit them also follows this list).
- The bounding boxes don't line up with the video in time.
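To make the dummy-packet option concrete, I mean something like this sketch, again assuming the current SDK helpers; TrackedFace, activeTracks, and fillerPacketForFrame are illustrative names, not SDK API:

```cpp
#include <string>
#include <vector>

#include <nx/sdk/ptr.h>
#include <nx/sdk/uuid.h>
#include <nx/sdk/analytics/helpers/object_metadata.h>
#include <nx/sdk/analytics/helpers/object_metadata_packet.h>

using namespace nx::sdk;
using namespace nx::sdk::analytics;

// Illustrative per-track cache, updated whenever a real detection arrives.
struct TrackedFace
{
    Uuid trackId;
    Rect lastBox; // last bounding box reported by the detector
    std::string typeId; // detected vs. recognised type id
};
std::vector<TrackedFace> activeTracks;

// Called once per video frame. For frames where the detector produced no
// fresh result, this repeats each live track's last box under the frame's
// own timestamp, so the client never sees a gap in the track.
Ptr<ObjectMetadataPacket> fillerPacketForFrame(int64_t frameTimestampUs)
{
    auto packet = makePtr<ObjectMetadataPacket>();
    packet->setTimestampUs(frameTimestampUs);
    packet->setDurationUs(0);

    for (const auto& track: activeTracks)
    {
        auto face = makePtr<ObjectMetadata>();
        face->setTrackId(track.trackId);
        face->setTypeId(track.typeId);
        face->setBoundingBox(track.lastBox);
        packet->addItem(face.get());
    }
    return packet;
}
```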
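And for reference, here's roughly how the best-shot image is submitted. This assumes the ObjectTrackBestShotPacket helper from the current open-source SDK, which may not match the SDK version I'm building against; makeBestShotPacket is an illustrative wrapper:

```cpp
#include <utility>
#include <vector>

#include <nx/sdk/ptr.h>
#include <nx/sdk/uuid.h>
#include <nx/sdk/analytics/helpers/object_track_best_shot_packet.h>

using namespace nx::sdk;
using namespace nx::sdk::analytics;

// Illustrative wrapper. The timestamp is taken from the frame the image was
// cropped from, converted to microseconds, and the bounding box is the
// face's box in that frame.
Ptr<ObjectTrackBestShotPacket> makeBestShotPacket(
    const Uuid& trackId,
    int64_t frameTimestampUs,
    const Rect& box,
    std::vector<char> jpegBytes)
{
    auto packet =
        makePtr<ObjectTrackBestShotPacket>(trackId, frameTimestampUs, box);
    packet->setImageDataFormat("image/jpeg");
    packet->setImageData(std::move(jpegBytes));
    return packet;
}
```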
Any tips would be greatly appreciated.