System configuration. 3 servers, 100 cameras, best method for backing up to NAS for hive/failover? Skipping RAID?
Greetings,
I'm deploying to a new site, and I'm just about to configure the Synology NAS to handle the backup archive. Each server has a 2TB drive for ~1-3 days of archive. However, due to some unique needs, we need 3-6 months of archive. We run the streams pretty light, so we're fine with network bandwidth across a single NIC on the servers.
My basic question is: is it better to have each drive in the NAS as its own storage pool / volume, where I map each individual drive on every server?
The documentation for NX says that the write-to-archive process works this way:
- All suitable drives are written concurrently and according to a ratio the system calculates for their size. So, for example, if a single server has multiple sized hard drives Nx Witness will fill up each hard drive at the same rate to ensure that no single drive's system bus gets overloaded.
But I'm not 100% clear whether that is also true for how it handles backing up the archive, and I don't see any specifics on how multiple servers in a system should be configured for backup.
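As I understand the quoted documentation, the proportional-fill behavior amounts to something like this (my own illustration, not Nx code; the drive sizes are made up):

```python
# Illustration only: split archive writes across drives in proportion
# to capacity, as the Nx documentation describes for local storage.
# Hypothetical drive sizes; not actual Nx Witness code.

def write_ratios(drive_sizes_tb):
    """Return the fraction of incoming archive each drive receives."""
    total = sum(drive_sizes_tb)
    return [size / total for size in drive_sizes_tb]

drives = [2.0, 4.0, 4.0, 10.0]  # hypothetical drive capacities in TB
ratios = write_ratios(drives)

# Each drive fills at the same *relative* rate, so all drives reach
# capacity at roughly the same time and no single bus is overloaded.
for size, r in zip(drives, ratios):
    print(f"{size} TB drive receives {r:.0%} of writes")
```

If backup works the same way, a larger NAS volume would simply absorb a proportionally larger share of the backup stream.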
Here's a rough sketch of what I'm asking:
Questions:
- For the storage pool configuration, I know the overall benefits of RAID and the different types. We are ok if a single drive fails and we lose a rather random segment of the archive. I also know that some RAID / multi-drive configurations would allow a greater overall write speed... But, if NX is evenly distributing the archive backup across all available drives anyways, then that doesn't seem like a real advantage?
- Is it ok to set the same backup destination(s) for multiple servers? For failover to work properly, I'm assuming I need to set all available backup destinations on all servers. If not, what is the best way to configure things overall for redundancy for server failure, and ensuring the longest possible archive backup?
Hi Luke McFadden,
Regarding your questions:
My basic question is: is it better to have each drive in the NAS as its own storage pool / volume, where I map each individual drive on every server?
Yes, mapping each drive as its own storage pool has the advantage that Nx can manage the drives as if they were local drives, dividing the load across each drive as described in THIS support article.
With this in mind, the left configuration in the image is better than the right one.
For the storage pool configuration, I know the overall benefits of RAID and the different types. We are ok if a single drive fails and we lose a rather random segment of the archive. I also know that some RAID / multi-drive configurations would allow a greater overall write speed... But, if NX is evenly distributing the archive backup across all available drives anyways, then that doesn't seem like a real advantage?
I don't have a strict opinion about RAID or which RAID level to choose, but keep in mind that RAID often isn't fast enough to process the continuous load that occurs in a video system. For that reason, I configure my systems with the backup function of Nx Witness, keeping an identical copy of my main storage, only shorter, since I use a smaller drive for the backup.
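Working from the figures in the question (2 TB of local storage holding roughly 1-3 days per server), here is a rough sizing sketch for the 3-6 month backup target. This assumes the backup is a full copy of the archive at the same bitrate; substitute your real retention figures:

```python
# Back-of-envelope NAS capacity estimate from the numbers in the question.
# Assumes the backup is a full copy of the archive at the same bitrate.

local_tb = 2.0    # local archive per server (from the question)
local_days = 2.0  # midpoint of the stated 1-3 day retention (assumption)
servers = 3

tb_per_day_per_server = local_tb / local_days  # ~1 TB/day per server
for target_days in (90, 180):                  # roughly 3 and 6 months
    need_tb = tb_per_day_per_server * target_days * servers
    print(f"{target_days} days of backup for {servers} servers: ~{need_tb:.0f} TB")
```

Even at the low end this is a lot of capacity, which is worth confirming before choosing the NAS model and drive layout.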
Is it ok to set the same backup destination(s) for multiple servers? For failover to work properly, I'm assuming I need to set all available backup destinations on all servers. If not, what is the best way to configure things overall for redundancy for server failure, and ensuring the longest possible archive backup?
In theory, you can use a single external storage solution for multiple VMS servers, but keep in mind that the storage solution must have enough resources to handle the amount of data. Performance issues often occur because the storage isn't powerful enough to handle the continuous load, so it is recommended to consult the supplier of the storage devices for advice on which devices, and how many, fit your use case.
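As a rough way to sanity-check whether one NAS can absorb the combined load, estimate the aggregate write rate. The per-camera bitrate below is an assumption, not a figure from this thread:

```python
# Rough aggregate-throughput check for a shared backup target.
# The per-camera bitrate is an assumption; substitute your real streams.

cameras = 100
mbps_per_camera = 4.0  # assumed average stream bitrate in Mbit/s

total_mbps = cameras * mbps_per_camera
total_mbyte_s = total_mbps / 8  # convert Mbit/s to MB/s
print(f"~{total_mbps:.0f} Mbit/s ≈ {total_mbyte_s:.0f} MB/s sustained writes")
```

Compare that figure against the NAS vendor's sustained-write specification, with headroom for backup catch-up traffic after a server outage.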