What purpose does AI-driven video analytics serve, and how does integrating with Nx Meta VMP help me and my users?
AI-driven computer vision/video analytics solutions serve two basic purposes:
- To augment video so operators can accomplish tasks more quickly and accurately
- To automate a system reaction or process entirely, removing the human operator from the loop
For Nx Meta VMP we can distill these two general concepts into something more specific:
- Video Augmentation - a Plugin captures metadata from video and displays that metadata in the Nx Desktop client (object bounding boxes and overlays on top of live or recorded video) to generate an augmented-reality view of video feeds. Plugins also generate AI-triggered Events and enable smarter search.
- Example Use Case: Provide an alert to an operator when an unidentified person has entered a secure area, put an overlay on the person on live/recorded video, and allow the operator to search for recorded video from other cameras that include the target individual.
- System Automation - we can use the Rules Engine in Nx Meta VMP to program the System to react without the need for operator participation.
- Example Use Case: Tony is identified by a Powered-by-Nx System with an integrated face recognition solution. The System automatically sends an HTTP Request to a third-party Access Control solution authenticating Tony, which opens a secured door to allow Tony entry.
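The automation step in this use case is, at its core, a single HTTP request fired by a Rule. Below is a minimal sketch of building such a request in Python; the endpoint URL, payload fields, and door identifier are hypothetical placeholders for illustration, not a real Nx or access-control vendor API:

```python
import json

def build_door_open_request(person: str, door: str) -> tuple[str, bytes]:
    """Build the HTTP request a rule's 'send HTTP Request' action could
    send to a hypothetical third-party access-control endpoint."""
    url = f"http://access-control.example/api/doors/{door}/open"  # placeholder URL
    body = json.dumps({
        "person": person,                    # identity from face recognition
        "reason": "face-recognition-match",  # illustrative field name
    }).encode()
    return url, body

url, body = build_door_open_request("Tony", "lobby-east")
print(url)  # http://access-control.example/api/doors/lobby-east/open
```

In a real deployment the request itself is composed and sent by the Rules Engine's HTTP action; this sketch only shows the shape of what crosses the wire.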
Nx Meta VMP provides two ways to achieve augmentation/automation with AI video analytics:
- Nx Meta VMP captures metadata (detected objects) from integrated computer vision/video analytics solutions and stores the metadata in a dedicated database on the Nx Server application.
- Nx Meta VMP's Metadata SDK provides the ability to create Custom Events that can be used by operators in the Rules Engine to create rules that automate System reactions.
Objects & Events Methods
The latest version of Nx Meta (v4.0 alpha) provides two methods for integrating intelligent video applications: Objects and Events.
- Objects - Nx Meta VMP:
  - accepts detected objects with bounding boxes and attributes
  - stores the entire history of detected objects in the database
  - visualizes detected objects and their attributes on live or recorded video
  - searches through all detected objects
  - shows a live feed of detected objects in the interface
- Events - Nx Meta VMP:
  - accepts information about specific events detected in analyzed video
  - stores events in the database
  - searches through the Events log
  - takes configurable actions for those events, for example:
    - show a notification
    - bookmark an event
    - send a request to a third-party service (to open a door or perform any other action)
    - send a command to Zapier (to trigger many other automations: https://zapier.com/)
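The configurable actions listed above amount to an event-to-action dispatch: each incoming event is routed to whichever reactions the user configured. A toy sketch of that idea in Python; the action names and the event dictionary's shape are illustrative, not the Nx Rules Engine's internal model:

```python
from typing import Callable

def show_notification(event: dict) -> str:
    """React by surfacing a notification to the operator."""
    return f"NOTIFY: {event['caption']}"

def bookmark_event(event: dict) -> str:
    """React by bookmarking the event on its source camera."""
    return f"BOOKMARK: {event['caption']} @ {event['camera']}"

# Registry of available reactions (illustrative names).
ACTIONS: dict[str, Callable[[dict], str]] = {
    "notification": show_notification,
    "bookmark": bookmark_event,
}

def react(event: dict, configured_actions: list[str]) -> list[str]:
    """Run every action the user configured for this event type."""
    return [ACTIONS[name](event) for name in configured_actions]

event = {"caption": "Loitering detected", "camera": "Lobby"}
print(react(event, ["notification", "bookmark"]))
```

The same pattern extends naturally to outbound reactions such as an HTTP request to a third-party service or a Zapier webhook.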
The Metadata (Analytics) SDK also provides additional features for deep integrations:
- Settings: Allows configuring the analytics solution inside the Nx Desktop client.
- Configure ‘global’ (VMS System-wide) settings for video analytics such as the license key
- Configure video analytics for a specific camera
- Context actions: Allows manipulation of detected objects.
- Add a name to a detected person
- Add a license plate number to a block list
- Mark the detection as incorrect to train AI
- Plugin events: Send diagnostic notifications to the user.
- Notify the user that the analytics service is unavailable
- Inform the user about an incorrect settings configuration
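Settings exposed by a Plugin are described to the client as a declarative data model. The sketch below shows what such a model could look like; the schema, item types, and field names here are assumptions for illustration only, not the Metadata SDK's exact format (consult the SDK documentation for the real schema):

```python
import json

# Assumed declarative settings model for a hypothetical analytics Plugin.
# "licenseKey" illustrates a 'global' (System-wide) setting; "detectPeople"
# illustrates a per-camera setting. Item types are illustrative.
SETTINGS_MODEL = {
    "type": "Settings",
    "items": [
        {
            "type": "TextField",
            "name": "licenseKey",
            "caption": "License Key",
            "defaultValue": "",
        },
        {
            "type": "CheckBox",
            "name": "detectPeople",
            "caption": "Detect people",
            "defaultValue": True,
        },
    ],
}

print(json.dumps(SETTINGS_MODEL, indent=2))
```

The Nx Desktop client renders a model like this as a settings form, so the Plugin author declares fields once and the UI is generated for them.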
Plugin Use in Nx Desktop
Enabling and Configuring a Plugin
Once a Plugin is installed on an Nx Server, the Nx Desktop Clients in a System show a new ‘Analytics’ tab in the Camera Settings dialog, where the user can enable or disable the Plugin for specific cameras and configure other camera-related settings. Once a Plugin is enabled for a camera, the Nx Server feeds video frames into the integrated video analytics engine for analysis.
Configuring System-Wide Settings for a Plugin
Users can find and modify the Plugin’s System-wide settings in the Analytics tab on the System Administration dialog.
Viewing Detected Objects in the Objects Tab of the Notifications Panel
Once a Plugin is enabled, users can see bounding boxes and previews for detected objects on live or recorded video by opening a camera in a Layout and selecting the Objects tab in the Notifications Panel.
Bonus: Context Actions
Context actions provided by a Plugin (e.g. enroll face) can be accessed by right-clicking a detected object in the Objects tab of the Notifications Panel.
Accessing Analytic Events
Once a Plugin is enabled, Plugin-defined analytics Events become available in the Event Rules Engine, and users can create and configure Rules using these Plugin Events to trigger System Actions.
If you have any questions related to this topic or you want to share your experience with other community members or our team, please visit and engage in our support community or reach out to your local reseller.