
Updating metadata, events missing, object type display

Answered

3 comments

  • Andrey Terentyev
    • Network Optix team

    Hello David,

    • The bounding boxes flicker; is it OK to extend the duration of the object to cover the frames where tracking data is not available, or should new dummy metadata packets be created for these frames?

    Yes, that's OK. The duration was introduced exactly for smoothing bounding box display.
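
    For illustration, a minimal sketch of that approach (the frame interval and gap constants are hypothetical tuning values, not SDK-provided):

    // Extend the packet's duration so the last known bounding box persists
    // while no fresh tracking data is available, instead of emitting dummy packets.
    constexpr int64_t kFrameIntervalUs = 40'000; // ~25 fps; measure for your stream
    constexpr int64_t kMaxGapFrames = 5; // how many missing-data frames to bridge

    objectMetadataPacket->setTimestampUs(detection.timeUs);
    objectMetadataPacket->setDurationUs(kFrameIntervalUs * kMaxGapFrames);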

    • The objects in the list all have the title of the "FaceDetectedObject" type, even though the "FaceRecognisedObject" metadata has been added; is this expected behaviour?

    Most probably it's a bug in the plugin code. The Server passes to the GUI exactly what it gets from the plugin. Check the parameter passed to the setTypeId() method. I could give a more detailed answer if I had the source code.
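
    For reference, the string passed to setTypeId() must match an object type id declared in the DeviceAgent manifest. A hedged sketch with hypothetical ids, in the raw-string style the SDK samples use:

    // Hypothetical manifest fragment: each "id" must exactly match a string
    // later passed to objectMetadata->setTypeId().
    std::string DeviceAgent::manifestString() const
    {
        return /*suppress newline*/ 1 + (const char*) R"json(
    {
        "objectTypes": [
            { "id": "vendor.plugin.faceDetectedObject", "name": "Face Detected" },
            { "id": "vendor.plugin.faceRecognisedObject", "name": "Face Recognised" }
        ]
    }
    )json";
    }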

    • Best shot images don't always show for recognised faces.
    • The bounding boxes don't line up (timewise) with the video.

    These issues are known. The most probable reason for the first is a duplicated trackId for different objects, or a wrong timestamp provided with the best shot. Here is a guide on how to debug such issues:
    https://support.networkoptix.com/hc/en-us/articles/1500006332441-Troubleshooting-Analytics-Issues-for-Cameras-and-Plugins

    I need more details and a particular example with screenshots in order to give a more specific answer.

    What is going on step by step? What is the expected behavior? What is the actual behavior?

  • David Brown

    Hi Andrey, thanks very much for the reply.

    I have included code snippets related to the FaceDetectedObject and FaceRecognisedObject, below. Both objects are added and use the same trackId (which I'm guessing is why everything is merged into a single visible object). 

    When a detection/track comes in, object metadata is generated, and the detection data is cached.

     
    generateObjectMetadataPacket(
        metadataPackets,
        detection,
        attributesCache.contains(detection.trackId) ? &attributesCache[detection.trackId] : nullptr);
    detectionCache.push_back(detection);

    When a recognition alert comes in, the cache is searched for timestamp matches, and new metadata is generated with the new attributes (e.g. the person's name). The attributes are also added to the attributes cache, indexed by trackId.

    // We have recogTrackId and recogTime to match against entries in detectionCache.
    bool matched = false;
    for (auto detectionIt = detectionCache.begin(); detectionIt != detectionCache.end();)
    {
        if (std::abs(detectionIt->time - recogTime) < matchableTimeDifference
            && detectionIt->trackUuid == recogTrackId)
        {
            logStream << TIMESTAMP << " Matching alert at time " << recogTime
                << " to detection time " << detectionIt->time << std::endl;
            matched = true;
            // Update the attributes.
            generateObjectMetadataPacket(metadataPackets, *detectionIt, &attributes);
            // Remove the result from detectionCache; it shouldn't need to be updated again.
            detectionIt = detectionCache.erase(detectionIt);
        }
        else
        {
            ++detectionIt;
        }
    }

    The generateObjectMetadataPacket function is similar to the sample, except that it adds metadata packets to the vector rather than returning one. Also, the typeId will be either kFaceRecogniseObjectType or kFaceDetectObjectType, depending on the presence of the recognition attributes.

    void DeviceAgent::generateObjectMetadataPacket(
        std::vector<IMetadataPacket*>* metadataPackets,
        int64_t timeMicros,
        float x, float y, float width, float height,
        const std::string& trackId,
        const DetectionResult& detection,
        AttributesMap* attributes)
    {
        const auto objectMetadataPacket = new ObjectMetadataPacket();

        // ... generate metadata as per the sample code

        const auto objectMetadata = makePtr<ObjectMetadata>();
        objectMetadata->setTypeId(attributes != nullptr ? kFaceRecogniseObjectType : kFaceDetectObjectType);
        auto myTrackId = nx::sdk::UuidHelper::fromStdString(trackId);
        objectMetadata->setTrackId(myTrackId);
        objectMetadata->setBoundingBox(Rect(x, y, width, height));
        objectMetadata->addAttribute(nx::sdk::makePtr<Attribute>(
            IAttribute::Type::string, "Blurriness", std::to_string(detection.blurriness)));
        objectMetadata->addAttribute(nx::sdk::makePtr<Attribute>(
            IAttribute::Type::string, "Confidence", std::to_string(detection.confidence)));

        if (attributes != nullptr)
        {
            for (const auto& kv: *attributes)
            {
                objectMetadata->addAttribute(nx::sdk::makePtr<Attribute>(
                    IAttribute::Type::string, kv.first, kv.second));
            }
        }

        //...
        objectMetadataPacket->addItem(objectMetadata.get());
        metadataPackets->push_back(objectMetadataPacket);

        if (attributes != nullptr && !attributes->faceData.empty() && !attributes->faceDataUsed)
        {
            auto bestShotPacket = new ObjectTrackBestShotPacket(myTrackId, timeMicros, Rect(x, y, width, height));
            bestShotPacket->setImage("image/jpg", attributes->faceData); // PNG format doesn't seem to work
            // Attributes should arrive after the tracking info, so faceData should only be used once here.
            attributes->faceDataUsed = true;
            metadataPackets->push_back(bestShotPacket);
        }
    }

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    • The most probable reason for the first is a duplicated trackId for different objects, or a wrong timestamp provided with the best shot.

    I use the same trackId for both object types, as mentioned above. If I understand you correctly, I need to use a new trackId for the FaceRecognisedObjects.

     

    Regarding the timestamps and bounding boxes, they can be made to line up perfectly by offsetting the timestamp. We didn't see a timestamp coming from the Nx stream though, as detailed here: https://support.networkoptix.com/hc/en-us/community/posts/4412209074583-Metadata-timestamps
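
    For illustration, the workaround looks roughly like this (kDetectionLatencyUs is an empirically measured value of ours, not something provided by the SDK):

    // Hypothetical, measured latency between the video frame and the analytics
    // result; applying it as an offset makes the boxes line up with the video.
    constexpr int64_t kDetectionLatencyUs = 200'000;
    objectMetadataPacket->setTimestampUs(detection.timeUs - kDetectionLatencyUs);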

     

    The step-by-step flow is hopefully shown in the code above. The expected outcome was that each object type would show up in the Nx client objects list, and that filtering by a given object type would show those objects along with the registered best shot image. The actual outcome is that all objects carry the title of the initial object type (I think it changed when filtered), best shot images can disappear when filtering by FaceRecognisedObject, and sometimes the objects show no attributes at all (even though they are set every time object metadata is generated).

  • Andrey Terentyev
    • Network Optix team

    Hello,

    Sorry for such a late response.

    I use the same trackId for both object types, as mentioned above. If I understand you correctly, I need to use a new trackId for the FaceRecognisedObjects.

    That's correct.
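
    One way to implement this, sketched with hypothetical names: generate a second, stable UUID per detection track and reuse it for every FaceRecognisedObject packet, so the two object types form separate tracks:

    // Hypothetical cache mapping a detection track to a distinct, stable UUID
    // for the corresponding recognised-face track. Generate the UUID once and
    // reuse it, since a track's id must stay constant across all its packets.
    std::map<std::string, nx::sdk::Uuid> m_recognisedTrackIds;

    nx::sdk::Uuid DeviceAgent::recognisedTrackIdFor(const std::string& detectionTrackId)
    {
        auto it = m_recognisedTrackIds.find(detectionTrackId);
        if (it == m_recognisedTrackIds.end())
        {
            it = m_recognisedTrackIds.emplace(
                detectionTrackId, nx::sdk::UuidHelper::randomUuid()).first;
        }
        return it->second;
    }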

    If you have any new info regarding the issue (if it still persists), let's refresh the status.

