Slow backup/restore speed with Pure
My ESXi host is on a 1 Gbit line and backs up over the internet to another host that is also on a 1 Gbit line.
I tried a couple of backup products before I found Pure. With Nakivo I could get a 400-600 Mbit transfer rate when doing backups; with Pure I get 40-60 Mbit instead.
Is there anything I can do to get the speed up, or is this something that needs to be done from Archiware's side?
Pure is running on the same Synology NAS that I ran Nakivo on, so the hardware should not be the issue.
Where are you storing your Pure backups? Please check this topic here, where another user experienced slow transfer rates due to an incorrect NFS version.
Another thing you can try is increasing the number of parallel backup jobs via the 'parallelism' setting under App Configuration -> Repository.
Pure is running on my NAS at home and stores the data directly on one of the volumes of the same NAS.
I do have 2 parallel jobs running at the same time, but each of them only manages the speed above, whereas Nakivo did 10 times that on the same hardware.
But if each of them manages 40-60 Mbit, how about increasing the number of jobs to 10? I am not sure how the Nakivo software works internally, but it is entirely possible it runs more than one job in parallel. If you can't resolve this issue, I will forward it to our NAS integration team and ask them to test the performance and make some recommendations.
They ran 1 backup of 1 VM at a time and got good speeds. Would be nice to see the same speeds with Pure.
Running too many jobs in parallel might have the opposite effect, as the disks in the NAS may become the bottleneck when too many writes go to different places at the same time.
2-4 in parallel should be OK, I think.
The best way to find out is to try. Increase the parallelism after each backup batch and see what performance you get. When you hit a disk bottleneck, go back to the last value you were happy with. Network transfer speed is something we will definitely work on and improve in the future, but in the meantime I am sure you can get higher speeds by adjusting the available settings.
I will try and see what happens.
One more thing I saw in this morning's report: the verification of the backups took odd amounts of time.
A small amount of backed-up data took longer to verify than a larger amount.
| VM | Size | Operation | Result | Start | Duration | Transferred | Message |
|---|---|---|---|---|---|---|---|
| mail1 | 38.93 GB | Backup | OK | 00:08 | 04 min 10 s | 0.966 GB | Operation completed successfully |
| | | Verify | OK | 01:46 | 153 min 39 s | - | Operation completed successfully |
| fw | 11.82 GB | Backup | OK | 00:02 | 02 min 36 s | 1.277 GB | Operation completed successfully |
| | | Verify | OK | 00:33 | 08 min 06 s | - | Operation completed successfully |
As you can see above, it looks odd: verifying a smaller amount of backed-up data should take less time than a larger amount. Or am I wrong about how this works?
Verification checks ALL of the data for a particular backup against the 'live' VMware data, so the verification time depends largely on the total virtual disk size of the VM. In your case, verification for 'mail1' did not just check the 0.966 GB of new data but the whole disk(s), which I assume are considerably larger than the ones used by the 'fw' VM.
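To put rough numbers on this, here is a quick sanity check of the effective read rates implied by the report above (a sketch only; the helper name is mine, and the backup sizes are merely a lower bound on the total disk size that verification actually reads):

```python
def mbit_per_s(gb, minutes, seconds):
    """Effective throughput in Mbit/s for `gb` gigabytes read in the
    given time. Assumes decimal units (1 GB = 1000 MB), matching the
    report's figures."""
    total_s = minutes * 60 + seconds
    return gb * 8 * 1000 / total_s

# Verification figures from the report above:
print(round(mbit_per_s(38.93, 153, 39), 1))  # mail1: ~33.8 Mbit/s
print(round(mbit_per_s(11.82, 8, 6), 1))     # fw:    ~194.6 Mbit/s
```

Since verification also reads from the live vSphere side, the mail1 figure in particular is in the same range as the 40-60 Mbit backup speeds discussed earlier in the thread.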
Ah, good to know. Now I understand. Thanks.
I updated to 8 parallel jobs and still only get a total of 40-60 Mbit, so more parallel jobs do not make it faster.
I have forwarded your issue to our NAS integration team and asked them to get back to you with the results of testing.
For reference, I run it on a Synology DS2415+.
Any news on the speed testing, and perhaps a fix?
Although the testing is not yet over, I can share what we have learned so far.
We managed to gain a small performance increase by tweaking some network transfer settings, such as forgoing the SSL encryption on the download stream, but the gains were not game-changing. When using network transfer to read data from vSphere, we are getting around 46 MB/s (megabytes, so ~385 Mbit per second). This seems to be the maximum for a single transfer stream for now. Using VMware's sample program to measure transfer speeds in the same environment (measuring only the read speed over the network and disregarding everything else), we get between 51 MB/s and 63 MB/s (408-504 Mbit).
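For readers converting between the units above: 1 MB/s is 8 Mbit/s in decimal units, or about 8.39 Mbit/s if the megabytes are actually binary (MiB). A small sketch (function names are mine; the "~385 Mbit" quoted for 46 MB/s matches the binary interpretation):

```python
def mbit_decimal(mb_per_s):
    """MB/s (1 MB = 10**6 bytes) to Mbit/s."""
    return mb_per_s * 8

def mbit_binary(mib_per_s):
    """MiB/s (1 MiB = 2**20 bytes) to Mbit/s (1 Mbit = 10**6 bits)."""
    return mib_per_s * 2**20 * 8 / 10**6

print(mbit_decimal(46))        # 368
print(round(mbit_binary(46)))  # 386 -- close to the ~385 quoted above
```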
A huge performance increase can be had simply by using the hotadd method, in which case we easily get 256 MB/s (2048 Mbit) with Pure deployed as a virtual appliance on an ESXi host that has direct access to the datastore where the VMs are kept. If at all possible, I would suggest considering deploying Pure as a VM on one of your hosts and using the Synology NAS purely for storage (connected to Pure over iSCSI or NFSv4). One half of the backup process will still use network transfer and the other half will be local, but for the network part, both iSCSI and NFS are much more performant protocols than the NBD implementation that VMware uses. In our testing above, Pure was storing backups on an iSCSI target sitting on an external device and added to the Pure VM as an RDM disk. That iSCSI target could just as easily have been on a Synology or a similar NAS device.
That all being said, your backup speeds seem unexpectedly low. I have seen such low speeds only twice over the last 10 days of testing, and both cases were exceptions caused by issues with our vSphere. Maybe you can monitor vSphere performance during Pure backups to see whether there are any unusual performance spikes. Also, I can send you an email with a download link for a slightly modified version of Pure that logs the transfer speed after each backup.
Thanks for the response.
Before I tested Pure, I also tested Nakivo and got really good backup performance from it.
The reason I decided not to use it was the high price, but I got 10 times the speed from it on the same setup that I get from Pure, which seems odd.
I only use this for my private stuff and some lab work I need to do, so cost is a big factor for me.