Images say more than a thousand words…
After tuning my new VMware ESXi bare-metal server a bit, I had the idea of comparing software RAID with an enterprise RAID controller. Basically, the performance is double. Of course the servers are different, but the benchmark results on both were exactly what I was expecting.
This server had a big problem that got fixed during the P2V (Physical to Virtual) migration: the NTFS volume had 17 million records while holding fewer than 50 thousand live files, and the MFT was close to 8 GB!! The cause was that one of the sites hosted here had more than 3 million files. I repartitioned the disk to separate the OS layer from the websites and hosted files, moving close to 4 million files to a new partition. The result is that the MFT size on the new partition is 3.2 GB, but in the NTFS architecture there is no way to clean up the old records; Microsoft recommends reinstalling the system. I should describe that in another post; let's focus on the performance.
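If you want to check how big your own MFT has grown, Windows ships a built-in tool for it. This is a minimal sketch assuming you run it from an elevated command prompt; replace C: with the volume you want to inspect.

```shell
# Query NTFS metadata for a volume, including the size of the Master File Table
fsutil fsinfo ntfsinfo C:
# In the output, the "Mft Valid Data Length" line shows the space currently
# consumed by the MFT -- on the volume described above it was close to 8 GB
```

Note that this only reports the size; as mentioned, NTFS never shrinks the MFT, so the number can only grow over the life of the volume.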
The next three images are from the new server, but with the cache off/on and with write-back / write-through. In almost any VMware forum you'll see posts from people complaining about poor performance when running their datastore on a directly attached RAID array. The performance problems come down to 3 or 4 settings that are only available on enterprise RAID cards.
The most important is enabling write-back: the controller caches writes in its on-board memory and tells the OS that the data has been successfully written to disk. The card then flushes that data from memory down to the disks, but the advantage is that ESXi isn't sitting there waiting for that to happen; it has moved on to other things by then. Write-back greatly improves random write operations, which are highly characteristic of a VM environment.
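As a concrete illustration, here is how the write-back setting looks on a Broadcom/LSI (MegaRAID) card using the storcli utility. The controller index (/c0) and virtual drive index (/v0) are assumptions for this sketch; check yours with `storcli show` first, and note that other vendors expose equivalent settings under different names.

```shell
# Show the current cache policy of virtual drive 0 on controller 0
storcli /c0/v0 show all

# Enable write-back caching. Only do this with a healthy BBU/CacheVault:
# on power loss, data still sitting in the controller cache is otherwise lost.
storcli /c0/v0 set wrcache=wb

# Related policies often tuned at the same time (optional):
storcli /c0/v0 set rdcache=ra       # read-ahead
storcli /c0/v0 set iopolicy=direct  # bypass cache for reads that miss it
```

The BBU caveat is the reason many cards default to write-through: the controller won't acknowledge a write until it is on disk, which is safer but is exactly the waiting that kills random-write performance under ESXi.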