Enrich motion detection with Object Detection

Answered · 24 comments

  • Sergey Bystrov

    Martijn

    We will consider making an integration with it. The benefits are obvious. The issue is that it's a bit hardware-specific; a lot of people just have HD graphics.

    But again, it looks very cool from a demo standpoint (a sort of WOW factor) and is very beneficial for storage, so we will very seriously consider doing this.

  • Martijn Dierckx

    Was a feature request created in the backend?

     

    1. Please describe the benefits for us, the developer, you as the integrator, and the end-user? 

      NX: Claim you can optimise storage for users even further, by only recording events that matter. No more false positives. You can also send very specific and accurate alarms. 
      Integrator: /
      End-user: Not only a reliable/stable platform, but also a highly usable platform that is easy to integrate with

      Downside = bigger investment in hardware needed.

    2. Please describe the use case? How would it improve the use of Nx Witness and what gap would it fill? 

      Instead of recording on motion, you would only record when seeing a moving pre-selected object (car, person, backpack ...)

    3. Lastly, how many licenses would we sell extra if we add this feature? Or how many licenses wouldn't we sell at all if we don't have this feature?

      +15%

     

  • Miguel Câmara

     

    Please consider enriching the "Smart Search" feature: instead of just having a true/false in each "square" indicating whether there is motion or not, each square could carry tags describing the objects in it.

     

    Using YOLO or another object-classification deep-learning algorithm, you could populate each "square" with tags, so that we could do a Smart Search for, say, cars in a specific zone of the image and get immediate results, just as we already do now for simple motion.
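    A rough illustration of that idea follows. It is only a sketch, not how Nx Smart Search actually stores its data: the 44x32 grid size, the ultralytics package, and the file names are assumptions made purely for illustration.

      # Sketch only: tag each cell of a motion-style search grid with detected
      # object classes, using a YOLO model from the "ultralytics" package.
      # The 44x32 grid and the per-cell tag idea are illustrative assumptions,
      # not the actual Nx Smart Search data model.
      from collections import defaultdict
      import cv2
      from ultralytics import YOLO

      GRID_COLS, GRID_ROWS = 44, 32        # assumed grid resolution
      model = YOLO("yolov8n.pt")           # any pretrained detection model

      def tag_grid(frame):
          """Return {(col, row): set of class names} for one video frame."""
          h, w = frame.shape[:2]
          cell_w, cell_h = w / GRID_COLS, h / GRID_ROWS
          tags = defaultdict(set)
          for box in model(frame)[0].boxes:
              name = model.names[int(box.cls[0])]
              x1, y1, x2, y2 = map(float, box.xyxy[0])
              # mark every grid cell the bounding box overlaps
              for col in range(int(x1 // cell_w), min(GRID_COLS - 1, int(x2 // cell_w)) + 1):
                  for row in range(int(y1 // cell_h), min(GRID_ROWS - 1, int(y2 // cell_h)) + 1):
                      tags[(col, row)].add(name)
          return tags

      frame = cv2.imread("snapshot.jpg")   # e.g. a frame grabbed from the camera
      grid_tags = tag_grid(frame)
      # "Smart Search for cars in a zone" then becomes a simple lookup,
      # here over the left quarter of the grid:
      cars_on_left = [cell for cell, names in grid_tags.items()
                      if "car" in names and cell[0] < GRID_COLS // 4]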

     

     

  • Martijn Dierckx

    Keep me posted.
    I'm very curious whether this is going to work in a virtualized environment.
    But maybe it's possible to add an Intel Movidius stick to offload the computation?

  • Tagir Gadelshin

    Martijn Dierckx

    Hi!
    I know I may be late, but the plugin has been published as an example integration here: https://nxvms.com/integrations/101
    Its source code is also open and can be found on GitHub (https://github.com/networkoptix/nx_open_integrations/tree/master/cpp/vms_server_plugins).

    The same goes for the OpenCV plugin.

  • Martijn Dierckx

    Thanks

    But in the meantime I've integrated <ANOTHER_VMS> into my Nx setup; it detects objects and then stores the detections as bookmarks in Nx.

  • Norman - Nx Support

    Hi Martijn,

    Thank you for the feedback.

    Actually, starting from v4.0 there will be AI / deep-learning solutions available that work together with Nx Witness, built with the help of our Nx Meta video management development platform, which enables solutions like Darknet YOLO to be integrated into Nx Witness. The only thing is that someone needs to sign up for our Nx Meta platform and create the integration together with our development team.

     

  • Roman Inflianskas

    Martijn,

    At Nx we have done a lot of research and came to the conclusion that standard solutions do not run in real time on typical hardware (without a powerful graphics card). We are now developing a solution based on Intel's OpenVINO technology, which allows us to solve computer-vision problems in real time on Intel processors. The solution is still at the prototype stage and will be released with version 4.1/4.2. We will definitely write about the release.

  • Roman Inflianskas

    Martijn,

    What do you mean by "virtualized environment"? An OS running in a virtual machine like VirtualBox? If so, we've tested that: it runs roughly 30% slower (in our testing environment), but it works fine.

    As I understand the company's plans, Nx will first release a CPU-only plugin and then expand to other Intel computing devices, because people on ARM, for example, would certainly benefit a lot from the Intel Neural Compute Stick.

  • Martijn Dierckx

    Thanks for the info.
    Do you have any idea on the timeline?

  • Roman Inflianskas

    Martijn,

    Unfortunately, I have no idea, because this feature will not be included in 4.0 (we have already passed feature freeze), and release dates for future versions of Nx Witness are not yet determined. But we understand that this is a long-awaited feature and will release it as soon as possible.

  • Tagir Gadelshin

    Martijn Dierckx
    Great, thanks for the info!
    Can you elaborate a bit on how it is integrated? Do you only call the create-bookmark API when something is detected by them? As I understand it, they are able to discover cameras and obtain streams by themselves, so you don't need to use streams/frames from Nx, right? That would mean you need to write a C++ plugin, I suppose.

    But anyway, that's interesting as a potential way of extending the Nx solution for other users.

  • Martijn Dierckx

    The downside of my setup now is that I can only use the bookmark feature, which is far from optimal.

    It would be much more powerful if I were able to push metadata about the detected objects via HTTP.

    That way I could plug in any existing, well-working object-detection solution.

    Otherwise every solution always needs a custom C++ plugin, which is not flexible.

  • Martijn Dierckx

    As to how <ANOTHER_VMS> works:
    I feed it a 720p RTSP stream coming directly from the camera. All cameras are manually configured in a config file.
    Once it detects movement, it triggers its object-detection algorithms, and while it does, everything is published on MQTT.

    I've created a separate Node.js Docker container which reads the MQTT messages and translates them into calls to the Nx bookmark API (see the sketch below).
    In the config of that Node.js component I map the Nx camera IDs to the camera names of <ANOTHER_VMS>.

    So the chain is:
    1. <ANOTHER_VMS>
    2. My translation component
    3. Nx Witness
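    A rough sketch of such a translation component follows. It is only an illustration, written in Python rather than Node.js; the MQTT topic layout, the payload fields, the credentials, and the bookmark endpoint and its parameters are assumptions and should be replaced with whatever the <ANOTHER_VMS> documentation and the Nx Witness server API reference actually specify.

      # Sketch only: read detection messages from MQTT and turn them into
      # Nx Witness bookmarks. Topic layout, payload fields and the bookmark
      # endpoint below are placeholders, not a documented API.
      import json
      import time

      import paho.mqtt.client as mqtt   # pip install paho-mqtt
      import requests

      NX_SERVER = "https://127.0.0.1:7001"             # Nx Witness server
      NX_AUTH = ("admin", "password")                  # assumed credentials
      CAMERA_MAP = {"front_door": "nx-camera-id-1"}    # detector name -> Nx camera id

      def on_message(client, userdata, msg):
          """Translate one detection message into an Nx bookmark."""
          event = json.loads(msg.payload)
          camera_id = CAMERA_MAP.get(event.get("camera"))
          if camera_id is None:
              return
          requests.post(
              f"{NX_SERVER}/api/bookmarks/add",        # placeholder endpoint
              params={
                  "cameraId": camera_id,
                  "name": f"{event.get('label', 'object')} detected",
                  "startTimeMs": int(event.get("timestamp", time.time()) * 1000),
                  "durationMs": 10_000,
              },
              auth=NX_AUTH,
              verify=False,                            # self-signed server certificate
          )

      client = mqtt.Client()                           # paho-mqtt 1.x style constructor
      client.on_message = on_message
      client.connect("localhost", 1883)
      client.subscribe("detector/+/events")            # assumed topic layout
      client.loop_forever()

    A real component would also need reconnection handling and the exact bookmark call documented for the installed server version.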

  • Tagir Gadelshin

    Thanks for the clarification, Martijn Dierckx

    Yep, forcing developers to write C++ code is not flexible; we understand that. We are working on a REST metadata API that will allow those types of integrations. We hope to be able to release it within a year or two, but there is no solid ETA.

  • Martijn Dierckx

    In a year or two? What are you guys? An incumbent bank with 50-year-old IT systems?

    I'm afraid that if you don't get this out sooner, you'll lose your place in the NVMS market.

  • Permanently deleted user

    Oh, Martijn, I couldn't agree with you more. The problem is that we have several hundred other features in the backlog, which are also important and whose absence causes us to lose clients, and we can only do so many at a time.

  • Tagir Gadelshin

    Martijn Dierckx
    Yes, Aleksandr is right. I didn't mean that we will spend a year or two developing only this functionality; if we invested only in this feature, I think time-to-market would be months.
    But we are also investing in our internal processes, tech-debt elimination, and test automation right now, and that also takes time. We don't want to become an incumbent bank (even in 50 years :)).
    Not to mention the other important features we are developing in the meantime.

    We will do our best to release it as soon as we can, because we share your concerns about the market.

  • Michael Gruber

    I just stumbled in here, and have a question about motion and object classification:

    Martijn & Tagir: I would like to know whether either solution is not only able to recognize major objects (persons, cars) but maybe also able to identify minor objects that should never cause a motion event.

    For example: raindrops, snowflakes, fog ("Nebel" in German), flying or crawling insects in IR light, moving leaves, moving trees, leaves glinting in the sun, reflections, other plants moving in the wind, flying birds, small animals like cats.

    Many of these false-positive events flood the nightly stream of an outdoor camera, so it is almost impossible to find the true-positive events that should actually be detected and classified.

    For example:

    • persons walking through regions at times when nobody is allowed to be there
    • animals that shouldn't be there
    • cars that appear or disappear in parking lots, which should produce an event, be recorded in hi-res, and maybe trigger a reaction

    My question: is it possible to eliminate these described false-positive events

    • either with <ANOTHER_VMS>,
    • or with OpenVINO,
    • or with something already implemented in v4.1,
    • or planned in v4.2 (maybe in the near future) that you mentioned two years ago?

    Thanks Michael

  • Norman - Nx Support

    Hi Michael Gruber,

    It isn't possible with a single camera alone, since it cannot determine size.
    A spider close to the camera can appear very big, and an elephant very tiny when it is far away.

    AI can differentiate that without the need for expensive 3D cameras.

    There is a basic OpenVINO integration available that will reduce the number of false positives. You can check it out HERE.

     

  • Tagir Gadelshin

    Hi, all!

    Regarding this feature request, we are considering adding object detection to the Recording settings.

    This is only an early mockup; we may change things or leave it out of scope. The target ETA is also unknown at this moment.

    This solution also implies having some external analytics (it is not the native Nx analytics we were discussing earlier in this thread).

  • Michael Gruber

    Hi Tagir Gadelshin

    Thanks for your comment. I'm very curious how this will work, even in early-access or beta status. I would be interested in beta-testing this feature; since I'm not running it in production right now, it doesn't matter if anything goes wrong.

    I would like to be able to define object types, maybe by editing additional text files that describe the motion type.

    Yesterday was a windy day, so trees, leaves, grass and other moving structures that should by no means be detected as 3D objects produced a permanent red ribbon all day long.

    Last night was a rainy night, so raindrops produced a permanent red ribbon all night long.

    Are you considering removing those false alarms too, not only recognizing cars, persons and big animals?

    Is it planned to receive external events, like motion-detector signals from smart-home IR motion detectors? I have an area covered by an existing IR motion detector. Every time this detector signals motion, I want to send a signal to Nx Witness (maybe via MQTT or HTTP, whatever) so that several predefined cameras start recording and get a motion colour index (with different predefined individual colours).
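    One plausible way of doing something like that today is the Nx generic-event HTTP API combined with an event rule that starts recording on the selected cameras. The sketch below forwards a detector signal that way; the endpoint name is quoted from memory of that API, and the host, credentials, authentication scheme and source name are made up, so they should all be checked against the server API documentation.

      # Sketch only: raise an Nx Witness Generic Event when an external IR
      # motion detector fires. An event rule on the server can then start
      # recording on the chosen cameras. Endpoint, parameters and the
      # authentication scheme must be verified against the Nx server API docs.
      import requests

      NX_SERVER = "https://192.168.1.10:7001"
      NX_AUTH = ("admin", "password")

      def notify_nx(detector_name: str) -> None:
          """Send one generic event to the Nx server."""
          requests.get(
              f"{NX_SERVER}/api/createEvent",           # endpoint quoted from memory
              params={
                  "source": detector_name,              # e.g. "garden-ir-detector"
                  "caption": "External motion",
                  "description": "IR motion detector triggered",
              },
              auth=NX_AUTH,
              verify=False,                             # self-signed server certificate
          )

      # e.g. called from an MQTT or smart-home callback when the detector fires:
      notify_nx("garden-ir-detector")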

    I will stop describing this feature here, because the further I think it through, the more tiny feature details I come up with, and they wouldn't fit in here.

    Let me know a bit more about your steps towards the integration of object recognition, and whether it can be expanded to distinguish uninteresting movements (which should never produce alarms) from the movements that should be detected for surveillance purposes.

    Do you have a beta program? Do you have a developer program? Do you have a developer discussion section where these issues can be discussed? If yes, where do I join? Is this the purpose of Nx Meta? I didn't really understand that.

    Hi Norman - Nx Support

    I understand that 3D recognition needs more cameras. I had a glimpse at OpenVINO but didn't understand how to do a test install.
    Maybe object recognition is a second step for me. The first step would be to separate false/uninteresting motion from desired motion (desired in the sense of detection; burglars are not desired :-)).

    I think that frequently recurring small motions, like light reflections on leaves in a completely windless area, should never lead to a motion detection. Rain, snow, fog, flies, bees, and even birds might be detected quite well with one camera. The same goes for moving clouds, ...

    Sincerely, Michael

  • Tagir Gadelshin

    Michael Gruber

    Thanks for your input!
    Short term, we don't plan any improvements on this topic beyond the ones already mentioned. Long term we will add features, probably covering some of the ones you mentioned.

    About beta: we do have betas; they are announced here on this site, and you can download the beta version through my.networkoptix.com/#/download, tab "Beta". Currently 4.2 is in beta, but it doesn't have the features announced in this thread.

    Also, we have Nx Meta (our developer program). You can join the Meta Early Access Program to get new unstable builds; we release them primarily for third-party developers so they can adjust their integrations.

    Thanks

  • Rick Dunn

    I am running version 5.0.0.35745 Server and Client on Ubuntu and see the Camera Settings where Motion, Objects and Motion & Objects are located in the recording-schedule section, but I am only able to select Motion; the other selections are grayed out.

    I am asking here since this topic is close to what I would really like to see happen in this product offering.

    I run Home Assistant integrated with a Docker instance of <ANOTHER_VMS>, using an MQTT broker to connect the two systems. I was recently able to add the Google Coral AI TPU to <ANOTHER_VMS> and have no issues running 4 IP cameras with full object detection, versus the simple weighted motion-mask areas in Nx Witness.

    In Nx Witness most of my motion recordings are of spiders and night-time bugs flying through the detection areas, or, during the day, of wind-driven shadows in those areas.

    With <ANOTHER_VMS> and the Google Coral AI device I no longer have any night bugs or bouncing spider webs showing up as motion recordings. Nx Witness is becoming less used and less important to my overall use case without some sort of credible object detection. I wish I could paint a better picture, but many new open-source products like <ANOTHER_VMS> are beginning to replace non-object-detecting systems on the market.

    Hoping you all can take the initiative and integrate something like this TensorFlow-based object detection into your products soon.

