Post Alarm Image

Answered

Comments (17)

  • Andrey Terentyev
    • Network Optix team

    Hello balaganesh,

    Could you please elaborate on your objective? What is the final result you expect?

  • Evgeny Balashov

    Hello Balaganesh,

    As Andrey mentioned, the specific solution heavily depends on the user stories you need to implement.

    At a top level, the recommended way to integrate video analytics or AI processing is to implement a Video Analytics Plugin using our C++ SDK. Here is a general explanation of how it works: https://support.networkoptix.com/hc/en-us/sections/360004090353-Integrating-Video-Analytics. This path currently has some support for sending custom images into the software.

    There is also another straightforward path: you can request and analyze data from the Nx Server using the API, and then send TEXT data back in the form of Bookmarks or Generic Events. This way the system can react to events and present the information to the user in a readable and searchable way.
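
    For illustration, raising a Generic Event is a single HTTP call to the Server. A minimal libcurl sketch (the /api/createEvent endpoint is mentioned later in this thread; treat the parameter names as assumptions and verify them against your Server's API documentation):

    #include <curl/curl.h>
    #include <string>

    bool sendGenericEvent(const std::string& serverUrl, // e.g. "https://127.0.0.1:7001"
        const std::string& user, const std::string& password,
        const std::string& caption, const std::string& description)
    {
        CURL* curl = curl_easy_init();
        if (!curl)
            return false;

        // URL-encode the free-form text fields.
        char* captionEsc = curl_easy_escape(curl, caption.c_str(), 0);
        char* descriptionEsc = curl_easy_escape(curl, description.c_str(), 0);
        const std::string url = serverUrl + "/api/createEvent?source=myService&caption="
            + captionEsc + "&description=" + descriptionEsc;
        curl_free(captionEsc);
        curl_free(descriptionEsc);

        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_DIGEST);
        curl_easy_setopt(curl, CURLOPT_USERPWD, (user + ":" + password).c_str());

        const CURLcode result = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return result == CURLE_OK;
    }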

    Finally, if neither option satisfies the requirements, you can also collect data inside your service and present it to the user via an embedded web interface or a 'fake' camera. The web page can provide tools for the user to view events, perform searches, and view the additional images you want to show; there is also a way to summon the camera's archive from a specific timestamp to the layout. A good example of this approach is the C2P integration (a long overview can be found here: https://www.youtube.com/watch?v=PE-Z4WiNn7c).

    Again, to be able to help you, we need a bit more detail about the problem you are trying to solve. Could you also share some examples of the images you need to show?

  • balaganesh k

    Thanks Evgeny and Andrey. 

    This is one of the samples.

  • Evgeny Balashov

    In this case, you need to implement a Video Analytics Plugin using the SDK.

  • Usama Ashraf

    Hey Andrey Terentyev, Hey @...

    I'm working on this integration with balaganesh k. Thank you for your replies.
    I would like to get your thoughts on two different architectural propositions that we're considering. Since we're keen on not wasting time, balaganesh k and I wanted to have these ideas validated or invalidated before we started any development.

    Please suggest which design you prefer, the pitfalls/benefits of each, and your general opinion.

    Architecture A:

    1. VMS sends alarm to external service via email

    2. The external service gets the video/images from the email attachments, or from the HTTP cameraThumbnail or one of the streaming APIs.

    3. The external service processes the images and draws bounding boxes/detections on the images as overlay.

    4. The external service triggers an HTTP endpoint on the VMS (createEvent, maybe?) to create a custom event, and it passes metadata along with this event, which can be some HTTP link(s) to download the analyzed images with the bounding boxes/detections, and some camera info.

    5. A C++ plugin that we develop, installed on the VMS, listens for this custom event, calls the link(s) in the metadata, and downloads the analyzed images.

    6. The plugin generates a new event/alarm associated with the analyzed images and camera info from the metadata.

     
    Architecture B:

    1. A C++ plugin that we develop, installed on the VMS, listens for motion events or other kinds of events.

    2. It gets the camera info and the images from the event and sends them to an external service via HTTP to be processed.

    3. The plugin receives the HTTP response from the external service, which is multipart/form-data and contains the analyzed images.

    4. The plugin generates a new event/alarm associated with the analyzed images.



    Thank you.

  • Evgeny Balashov

    Usama Ashraf, I believe Architecture A is not going to work reliably: too many moving parts and too much custom configuration.

    Architecture B looks like a feasible starting approach.

  • Andrey Terentyev
    • Network Optix team

    Hi gentlemen,

    First of all, I'll try to answer your previous questions.

    > how to upload the image into VMS ?
    It's not possible to upload or otherwise pass an image from a plugin to the Server. Only metadata can be sent.

    > which API i have to use.
    > If there is no API what are the other solution ?
    There is no special API for sending either images or metadata to the Server.
    As Evgeny mentioned, you need an analytics plugin developed with the Metadata SDK.

    The general pipeline is simple; a minimal code sketch follows the list.
    1. The Server feeds video frames to a plugin.
    2. The plugin delivers a frame (or all frames) for processing, either locally or remotely (externally).
    3. The plugin gets object metadata back and assigns the timestamp of the frame the metadata belongs to.
    4. The plugin passes the object metadata along with the timestamp to the Server.
    5. The Server saves the metadata next to the video frame in the archive and sends them to the Desktop client.
    6. The Desktop client displays a bounding box and other attributes over the frame which the timestamp points to.
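
    In plugin code, the pipeline boils down to something like the sketch below (assuming the Metadata SDK's ConsumingDeviceAgent base class; DetectionResult, analyzeRemotely() and makeObjectMetadataPacket() are hypothetical stand-ins for your own code):

    bool DeviceAgent::pushUncompressedVideoFrame(const IUncompressedVideoFrame* videoFrame)
    {
        // Step 1: the Server has fed us a frame; remember its timestamp.
        m_lastVideoFrameTimestampUs = videoFrame->timestampUs();

        // Step 2: deliver the frame for local or remote processing.
        const DetectionResult result = analyzeRemotely(videoFrame);

        // Steps 3-4: wrap the returned attributes into object metadata stamped
        // with the frame's timestamp, and push it to the Server.
        if (result.objectDetected)
        {
            pushMetadataPacket(
                makeObjectMetadataPacket(result, m_lastVideoFrameTimestampUs).releasePtr());
        }

        return true; // the frame was accepted
    }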

    Architecture A:

    1. VMS sends alarm to external service via email
      What triggers the alarm?
      Email is an asynchronous tool: the external service will poll the email server for new messages, which introduces a delay equal to the polling interval. I don't think introducing delays is a good idea in alarm processing.
      You should consider another, preferably synchronous, notification method: a camera rule with the "Do HTTP request" action, for example.

    2. The external service gets the video/images from the email attachments, or from the HTTP cameraThumbnail or one of the streaming APIs.
      How will the external service know the moment in time for which it should extract the thumbnail?
      Using email will introduce a delay equal to the email checking interval (see above).
      (These two steps correspond to steps 1 and 2 of the general pipeline.)

    3. The external service processes the images and draws bounding boxes/detections on the images as overlay.

      The external service should not draw any boxes on the images. It should just detect objects and send the attributes (bounding box, color, type, alarm flag, etc.) to your plugin running on the Server.

    4. The external service triggers an HTTP endpoint on the VMS (createEvent, maybe?) to create a custom event, and it passes metadata along with this event, which can be some HTTP link(s) to download the analyzed images with the bounding boxes/detections, and some camera info.

      This step is unnecessary. Your external service can send an HTTP request directly to your plugin. See the comments for step 5.

    5. A C++ plugin that we develop, installed on the VMS, listens for this custom event, calls the link(s) in the metadata, and downloads the analyzed images.

      Your plugin should listen for a message (over HTTP or another transport) from the external service containing the object attributes: bounding box, color, type, etc. Then it passes this metadata on to the Server.

    6. The plugin generates a new event/alarm associated with the analyzed images and camera info from the metadata.
      Metadata is not supposed to contain camera info, unless you explicitly insert it as one of the object attributes.

    I do not suggest using such an architecture.

  • Andrey Terentyev
    • Network Optix team

    Architecture B:

    1. A C++ plugin that we develop, installed on the VMS, listens for motion events or other kinds of events.

      The event designates an activity on a camera and can be configured in "Camera Rules...", i.e. in the "Event rules" dialog. Strictly speaking, a plugin does not listen for such events and knows nothing about them.
      But it is a good idea to let a plugin know about some of them.
      A plugin is provided with video frames by the Server and can do whatever it needs: detect objects, detect motion, etc.

    2. It gets the camera info and the images from the event and sends them to an external service via HTTP to be processed.

      The plugin gets the camera info when it is enabled on a camera, i.e. when the DeviceAgent class instance is created. See device_agent.cpp of stub_analytics_plugin in the SDK samples for details. Here is the declaration of the constructor:
      DeviceAgent(Engine* engine, const nx::sdk::IDeviceInfo* deviceInfo);
      As explained above, a plugin does not get images from the event. It gets video frames from the Server.
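
      For instance, the camera info can be captured in the constructor, roughly like this (a sketch; the exact IDeviceInfo getters and base-class arguments should be checked against your SDK version):

      DeviceAgent::DeviceAgent(Engine* engine, const nx::sdk::IDeviceInfo* deviceInfo):
          ConsumingDeviceAgent(deviceInfo, /*enableOutput*/ true),
          m_engine(engine)
      {
          m_deviceId = deviceInfo->id(); // unique camera id, handy for requests to external services
          m_deviceName = deviceInfo->name(); // user-visible camera name
      }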

    3. The plugin receives the HTTP response from the external service, which is multipart/form-data and contains the analyzed images.

      Why would your plugin need the analyzed images (video frames) back from the external service? It already has them and could save them just before sending them out.
      Let me remind you: it's not possible to pass images from a plugin to the Server.

    4. The plugin generates a new event/alarm associated with the analyzed images.
      A plugin can produce several types of events:
    • An analytics event, available in the "Event rules" dialog. See the screenshot.
    • A diagnostic event, available in the "Event rules" dialog and shown in the right-hand panel. See the screenshots.

  • Usama Ashraf

    Andrey Terentyev, @... thanks very much for your responses.

    Just for some more context: we're trying to build an analytics integration. Our external, cloud-based platform requires a set of images to analyze and creates detections on top of them. For now, these bounding boxes cannot just be produced anywhere else, because drawing them over images is not simple. Only our service can do that.

    So essentially we need a way to get images...and then send the modified images to the VMS which contain the detections (encircling on top of the images, object detections, etc.).

  • Evgeny Balashov

    Can you share more examples of overlays that our VMS currently doesn't support?

    I guess the only way to bring images into the software would then be using a web page (check my first comment in the thread).

  • Andrey Terentyev
    • Network Optix team

    Hello Usama,

    > Our external, cloud-based platform requires a set of images to analyze and creates detections on top of them.
    > For now, these bounding boxes cannot just be produced anywhere else, because drawing them over images is not simple. Only our service can do that.

    What are the ways your platform can receive images?

    > So essentially we need a way to get images

    Here is the pipeline.
    1. The Server feeds video frames to a plugin.
    2. The plugin delivers a frame (or all frames) for processing, either locally or remotely (externally).
    3. The plugin gets object metadata back and assigns the timestamp of the frame the metadata belongs to.
    4. The plugin passes the object metadata along with the timestamp to the Server.
    5. The Server saves the metadata next to the video frame in the archive and sends them to the Desktop client.
    6. The Desktop client displays a bounding box and other attributes over the frame which the timestamp points to.

    You can develop an analytics plugin.
    The Server will feed the plugin with frames (step 1).
    Your plugin could deliver frames to your platform (step 2).
    How exactly the plugin delivers them depends on the answer to the previous question.

    > ...and then send the modified images to the VMS which contain the detections (encircling on top of the images, object detections, etc.).

    Usama, the Server does not accept modified images. What you need, in order for the Server to understand "boxes", is just to detect them in your platform and send the coordinates of the boxes back to the plugin (step 3), which in turn passes the coordinates to the Server (step 4).

  • Usama Ashraf

    Hey Andrey Terentyev

    The picture is much clearer to me now.


    The plugin can send the images to our external service via HTTP(S), or by sending an email (SMTP).
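
    For the HTTP(S) path, I'm picturing something roughly like this (a libcurl sketch; it assumes the frame was already encoded to JPEG elsewhere, and the URL/field names are just illustrative):

    #include <curl/curl.h>
    #include <vector>

    bool postFrame(const std::vector<unsigned char>& jpegBytes, const char* url)
    {
        CURL* curl = curl_easy_init();
        if (!curl)
            return false;

        // Build a multipart/form-data body with the image as one part.
        curl_mime* mime = curl_mime_init(curl);
        curl_mimepart* part = curl_mime_addpart(mime);
        curl_mime_name(part, "frame");
        curl_mime_filename(part, "frame.jpg");
        curl_mime_type(part, "image/jpeg");
        curl_mime_data(part, reinterpret_cast<const char*>(jpegBytes.data()), jpegBytes.size());

        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);

        const CURLcode result = curl_easy_perform(curl);
        curl_mime_free(mime);
        curl_easy_cleanup(curl);
        return result == CURLE_OK;
    }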

    1. I'm assuming the plugin can be developed in such a way that the user can configure certain connection details to communicate with our external service? As in, enter/edit the details in a UI?

    2. I'd really appreciate it if you could point us to exactly how the plugin would pass the bounding box coordinates to the Server, and what exactly the format is for these bounding boxes/coordinates.


    Thank you again.

  • Andrey Terentyev
    • Network Optix team

    Hi,

    1. Yes, that's possible.
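
    For example, the Engine manifest can declare a settings model that the Desktop client renders as an editable form; the values the user enters are then delivered to the plugin via setSettings(). A sketch paraphrased from the SDK samples (the item types and field names here are assumptions to verify against your SDK version):

    std::string Engine::manifestString() const
    {
        return /*suppress leading newline*/ 1 + (const char*) R"json(
    {
        "deviceAgentSettingsModel":
        {
            "type": "Settings",
            "items":
            [
                { "type": "TextField", "name": "serviceUrl", "caption": "External service URL" },
                { "type": "TextField", "name": "apiKey", "caption": "API key" }
            ]
        }
    }
    )json";
    }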

    2. Please have a look at the sample_analytics_plugin example in the Metadata SDK.

    Here are the methods in device_agent.cpp where the bounding box is generated and passed to the Server:

    bool DeviceAgent::pushUncompressedVideoFrame(const IUncompressedVideoFrame* videoFrame)

    Ptr<IMetadataPacket> DeviceAgent::generateObjectMetadataPacket()

    For more details, I strongly recommend reading this manual: https://support.networkoptix.com/hc/en-us/sections/360010787254-How-to-Create-a-Video-Analytics-Plugin
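
    For orientation, the object-metadata part of that sample looks roughly like this (paraphrased; helper names may differ slightly between SDK versions). Note that the bounding box coordinates are normalized to [0..1] relative to the frame:

    Ptr<IMetadataPacket> DeviceAgent::generateObjectMetadataPacket()
    {
        const auto objectMetadataPacket = makePtr<ObjectMetadataPacket>();

        // Bind the metadata to the frame it belongs to.
        objectMetadataPacket->setTimestampUs(m_lastVideoFrameTimestampUs);
        objectMetadataPacket->setDurationUs(0);

        const auto objectMetadata = makePtr<ObjectMetadata>();
        objectMetadata->setTypeId(kObjectTypeId); // object type declared in the manifest
        objectMetadata->setTrackId(m_trackId); // same track id across frames = same object

        // Top-left x, top-left y, width, height, all normalized to [0..1].
        objectMetadata->setBoundingBox(Rect(0.25F, 0.25F, 0.5F, 0.5F));

        objectMetadataPacket->addItem(objectMetadata.get());
        return objectMetadataPacket;
    }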

  • Usama Ashraf

    Hey Andrey Terentyev

    You mentioned that "The event designates an activity on a camera and can be configured in "Camera Rules...", i.e. in the "Event rules" dialog. Strictly speaking, a plugin does not listen for such events and knows nothing about them.
    But it is a good idea to let a plugin know about some of them.
    A plugin is provided with video frames by the Server and can do whatever it needs: detect objects, detect motion, etc."

    I'm getting images via pushUncompressedVideoFrame in the DeviceAgent, so per camera.
    The thing is that I need to send 3-6 images to our external remote HTTP service, which will analyze them together and check whether the activity in the images can actually be considered a true alarm or a false one.
    Of course, I don't want to send all the received frames to the external service.
    I was wondering what you meant by "letting a plugin know about some of them" (let's assume we're only interested in motion events for now). I would then send 3-6 frames for such an event to the external service.
    If the HTTP response says it's a TRUE alarm, it'll also send us bounding boxes, which I can then use to generate a metadata object and create an event.
    If it's false, we don't do anything.
    How can I get the frames in the plugin only for motion events?


    One approach I have in mind is to just avoid any kind of motion-event awareness in the plugin, and just listen to all the frames: using the frame timestamps, I maintain a list of the most recent 3-6 frames, where each frame is one second apart.
    When this list is full (reaches a length of 3 or 6), I just send the 3-6 frames to the external service, along with the device id, and go on from there.
    So most frames will be skipped entirely; we'll just keep one frame per second (see the sketch below).
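
    Roughly what I have in mind (a sketch; encodeToJpeg() and sendBatch() are hypothetical helpers, and kBatchSize would be 3-6):

    void DeviceAgent::maybeBufferFrame(const IUncompressedVideoFrame* videoFrame)
    {
        const int64_t nowUs = videoFrame->timestampUs();

        // Keep at most one frame per second.
        if (!m_bufferedFrames.empty() && nowUs - m_lastBufferedTimestampUs < 1000000)
            return;

        m_bufferedFrames.push_back(encodeToJpeg(videoFrame));
        m_lastBufferedTimestampUs = nowUs;

        // Once the batch is full, ship it to the external service and reset.
        if ((int) m_bufferedFrames.size() >= kBatchSize)
        {
            sendBatch(m_bufferedFrames, m_deviceId);
            m_bufferedFrames.clear();
        }
    }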

    What do you think about this?

  • Anton Babinov
    • Network Optix team

    > How can I get the frames in the plugin only for motion events?

    You can check each frame that you receive via pushCompressedVideoFrame()/pushUncompressedVideoFrame() for motion metadata. So the plugin will receive all frames, but you can ignore the non-motion ones. Here is an example:

    Implement a new function in your DeviceAgent class which checks a video frame for motion:

    bool DeviceAgent::isMotion(Ptr<IList<IMetadataPacket>> metadataPacketList)
    {
        // No metadata at all: nothing to check.
        if (!metadataPacketList)
            return false;

        const int metadataPacketCount = metadataPacketList->count();
        if (metadataPacketCount == 0)
            return false;

        for (int i = 0; i < metadataPacketCount; ++i)
        {
            const auto metadataPacket = metadataPacketList->at(i);
            if (!metadataPacket)
                continue;

            // Only motion packets are of interest here.
            const auto motionPacket = metadataPacket->queryInterface<IMotionMetadataPacket>();
            if (!motionPacket)
                continue;

            // Scan the motion grid; any active cell means motion.
            const int columnCount = motionPacket->columnCount();
            const int rowCount = motionPacket->rowCount();
            for (int column = 0; column < columnCount; ++column)
            {
                for (int row = 0; row < rowCount; ++row)
                {
                    if (motionPacket->isMotionAt(column, row))
                        return true;
                }
            }
        }
        return false;
    }

    Then add a check for motion in pushCompressedVideoFrame()/pushUncompressedVideoFrame(), depending on which function you use. Below is an example based on the stub analytics plugin:

    bool DeviceAgent::pushCompressedVideoFrame(const ICompressedVideoPacket* videoFrame)
    {
        // Sanity check: this callback should not fire if the Engine asked for
        // uncompressed frames in its manifest.
        if (m_engine->needUncompressedVideoFrames())
        {
            NX_PRINT << "ERROR: Received compressed video frame, contrary to manifest.";
            return false;
        }

        NX_OUTPUT << "Received compressed video frame, resolution: "
            << videoFrame->width() << "x" << videoFrame->height();

        // Process the frame only if its metadata contains motion.
        if (isMotion(videoFrame->metadataList()))
        {
            NX_OUTPUT << "Received compressed video frame, with motion";
            processVideoFrame(videoFrame, __func__);
            processFrameMotion(videoFrame->metadataList());
        }
        else
        {
            NX_OUTPUT << "Received compressed video frame, no motion";
        }
        return true;
    }

     

    > One approach I have in mind is to just avoid any kind of motion-event awareness in the plugin, and just listen to all the frames: using the frame timestamps, I maintain a list of the most recent 3-6 frames, where each frame is one second apart.

    What would be best for your external software? Would it be sufficient for it to receive 1 frame per second to do what it needs to do? The VMS Server uses a secondary low-resolution stream at 6 fps for motion detection.

  • Usama Ashraf

    Anton Babinov, you're a lifesaver! Thanks so much for this. I'll try the code you wrote to filter out the non-motion frames.

    And, yes, your question is valid: we don't want to put too much unnecessary load on the remote server.

    One more question, please: what other kinds of events can I do this kind of filtering for (I mean events other than motion detection)?

  • Anton Babinov
    • Network Optix team

    There is a streamTypeFilter flag which you can set in the Engine manifest to define which packets the Server should supply to the plugin. In addition to motion packets (IMotionMetadataPacket), you can get access to ICustomMetadataPacket, if any is provided by the camera. See the full description of the streamTypeFilter flag below, followed by a small manifest sketch:

    - `"streamTypeFilter"`: Flag set (String)

        A combination of zero or more of the following flags, separated with `|`, defining which kinds of streaming data the plugin will receive from the Server in IConsumingDeviceAgent::doPushDataPacket():

        - `compressedVideo` - Compressed video packets, as ICompressedVideoPacket.
        - `uncompressedVideo` - Uncompressed video frames, as IUncompressedVideoFrame.
        - `metadata` - Metadata that comes in the stream (e.g. RTSP) that goes from the Device to the Server, as ICustomMetadataPacket.
        - `motion` - Motion metadata retrieved by the Server's Motion Engine, as IMotionMetadataPacket.

        Optional; default value is empty.
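
    As a sketch, requesting compressed video plus the Server's motion metadata would look like this in the Engine manifest (using the same R"json() convention as the SDK samples):

    std::string Engine::manifestString() const
    {
        return /*suppress leading newline*/ 1 + (const char*) R"json(
    {
        "streamTypeFilter": "compressedVideo|motion"
    }
    )json";
    }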
