Overview
The VMS allows plugins to provide data about objects detected in the picture. An object is essentially a sequence of bounding-rectangle coordinates plus a set of custom tags.
The object database is searchable by area of interest (as with motion detection), by the types and tags provided by the plugin, by date/time, and by the camera on which the object appeared. Plugins can also inject analytics events into the VMS with a few basic attributes: date, time, and text tags.
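To make this data model concrete, here is a minimal sketch of what such an object record conceptually holds; the class and field names are illustrative assumptions, not the VMS's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BoundingBox:
    # Position of the object in one frame; relative (0.0..1.0) coordinates
    # are assumed here purely for illustration.
    timestamp_ms: int
    x: float
    y: float
    width: float
    height: float

@dataclass
class DetectedObject:
    # The type and tags are provided by the analytics plugin.
    object_type: str
    track: List[BoundingBox] = field(default_factory=list)    # sequence of rectangles over time
    attributes: Dict[str, str] = field(default_factory=dict)  # custom tags, e.g. {"color": "red"}
```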
Object metadata is stored partially in an SQL-based database (the file object_detection.sqlite in the storage root directory). The rest of the metadata is stored in a proprietary database (files in the subdirectory archive/metadata/<camera_mac>/YYYY/MM/analytics_detailed_*.bin).
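The SQL part can be inspected with any SQLite client. Below is a minimal read-only sketch that just lists the tables; the path is a placeholder, and no particular table layout is assumed:

```python
import sqlite3

# Placeholder path: substitute the storage root of your server.
db_path = "/path/to/storage/object_detection.sqlite"

# Open the database read-only so the running server is not disturbed.
con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
try:
    for (name,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ):
        print(name)
finally:
    con.close()
```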
Users can choose which storage location is used for metadata by going to the Server Settings menu, opening the Storage Management tab, and selecting Use to store analytics data.
What hardware did we use during our testing?
Since the VMS is used in a wide variety of network and hardware environments, each deployment has its own capacity considerations regarding object storage and access, so let us consider several common usage scenarios. To gather the figures below, we ran the Server application with the Stub Analytics plugin in a virtualized environment with the following specification:
- Host CPU: Intel Core i7-6800K, 3.4 GHz, 6 cores, 15M cache, VT-x and VT-d enabled
- Host chipset: Intel C610
- Host memory: 32 GB, DRAM Freq 1066.5 MHz
- Host HDD: WDC WD40EFRX-68N32N0
- Host OS: Windows 10 Home 10.0.18362 (Build 18362)
- Guest CPU: 6 cores
- Guest memory: 4 GB
- Guest OS: Ubuntu 18.04 LTS
How do I create an object?
Object creation is initiated by the analytics plugin and may be preceded by intense CPU usage, depending on how the plugin is implemented. With the Stub Analytics plugin there is almost no CPU overhead, but real systems have to be planned with the plugin's CPU and memory load in mind. These issues, however, are not directly related to the VMS itself.
Object creation implies write operations; the amount of data written depends on how long the object stays in the camera stream: the longer the duration, the more metadata is stored. For objects with an average duration of 3.3 seconds, it takes about 26 KB to store the metadata on the drive. This translates to roughly 7 KB/s of stream overhead, which is insignificant compared to the video stream itself.
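A back-of-the-envelope check of those figures, assuming one object is present in the stream at any given time:

```python
# Figures from the paragraph above.
AVG_OBJECT_DURATION_S = 3.3   # average object duration, seconds
METADATA_PER_OBJECT_KB = 26   # metadata written per object, KB

def metadata_overhead_kb_s(objects_per_second: float) -> float:
    """Metadata write rate in KB/s for a given detection rate."""
    return objects_per_second * METADATA_PER_OBJECT_KB

# One object present at a time: a new 3.3-second object starts
# as soon as the previous one ends.
rate = 1.0 / AVG_OBJECT_DURATION_S
print(round(metadata_overhead_kb_s(rate), 1))  # ~7.9 KB/s, the same order as the figure above
```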
How do I search through the database?
Object search is initiated primarily by the Desktop client. A user can specify a time frame on the timeline and other parameters on the right panel. Search results are available on the notifications panel and are also presented on the timeline as yellow chunks.
This scenario is more complicated, since it may trigger read requests across all of the System's metadata.
Searching through 2000 objects that belong to one camera within a 1-hour time frame causes the VMS Server to read about 3 MB of data from the analytics storage. The database indices usually allow for better performance, but for simplicity let us assume the I/O amount grows linearly with the time frame span, the average object intensity, and the camera count.
For example, if the object database is stored on a general-purpose HDD with a random-access read speed of 60 MB/s and we allow a search request latency of 1500 ms, the server can read about 90 MB within that budget, which gives a maximum search time frame of 30 hours for a single camera.
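The same estimate as a small sketch you can adapt to your own drive speed, latency budget, and object intensity; all parameters are the assumptions from the example above:

```python
READ_SPEED_MB_S = 60.0      # random-access read speed of the metadata drive
LATENCY_BUDGET_S = 1.5      # acceptable search request latency (1500 ms)
MB_PER_CAMERA_HOUR = 3.0    # ~3 MB read per camera per hour at ~2000 objects/hour

def max_search_hours(camera_count: int) -> float:
    """How many hours of archive can be searched within the latency budget."""
    readable_mb = READ_SPEED_MB_S * LATENCY_BUDGET_S
    return readable_mb / (MB_PER_CAMERA_HOUR * camera_count)

print(max_search_hours(1))   # 30.0 hours for a single camera
print(max_search_hours(10))  # 3.0 hours if ten cameras match the filter
```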
CPU performance does not affect this scenario significantly. In contrast, RAM availability may give a performance boost for searches with partly repeated criteria, since the OS caches recently read data. There are even greater performance gains if SSD drives are used to store object metadata. Due to their high latency and typically unstable throughput, remote drives (CIFS/NFS) do not perform well enough in this usage scenario.
Searching the object database in a system of merged servers
If a camera was moved from one server in a system to another, its metadata is stored on more than one server. As a result, the server to which the client is connected has to request metadata that fits the filter from all other servers.
Performance evaluation is an extremely complex problem here since it depends on how many servers are involved, how scattered the metadata is, and what the network performance between servers is. As in the previous scenario, the most probable bottleneck is metadata storage I/O throughput on random read operations.
If a remote server replies with tens of thousands of objects and the communication goes over a congested connection, the network may become a bottleneck as well. Additionally, always ensure you have stable network connectivity between servers to prevent false failover triggers and to improve throughput and responsiveness.
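As a rough illustration only, assume each involved server first reads its matching metadata from disk and then transfers it to the requesting server; the slowest server then dominates the response time. All figures here are hypothetical:

```python
def per_server_time_s(metadata_mb: float, disk_mb_s: float, link_mb_s: float) -> float:
    """Time for one server to read its matching metadata and send it over the network."""
    return metadata_mb / disk_mb_s + metadata_mb / link_mb_s

# Hypothetical servers: (matching metadata in MB, random-read speed MB/s, usable link MB/s).
servers = [
    (3.0, 60.0, 10.0),  # nearby server on a fast link
    (9.0, 60.0, 1.0),   # remote server with more matching metadata on a congested link
]

# The client-facing server has to wait for the slowest reply.
print(round(max(per_server_time_s(*s) for s in servers), 2))  # 9.15 s, dominated by the congested link
```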
Questions
If you have any questions related to this topic, or you want to share your experience with other community members or our team, please visit and engage in our support community or reach out to your local reseller.