I’ve been going back and forth on this issue for some time, and honestly I have no idea whether the vCenter telemetry is something to rely on. I’m seeing rather high storage latency on my VMs even though most of them are idle: only vCenter and a virtual firewall generate any real IOPS, 5 VMs are shut down, and the other 3 are Linux machines that sit idle 99% of the time. Even so, individual IOs can spike to 100 ms. Today I migrated a VM’s storage to another server and found that higher disk utilization actually reduces the latency on the host. How does that make any sense? The array is a P420 in RAID 10 with 4x 4TB 7k SAS HDDs.

Host latency:

https://preview.redd.it/cqvmy550ty1c1.png?width=986&format=png&auto=webp&s=f5823391eb6cd82cb9612b44aa2768087bf619e1
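Before fully distrusting the vCenter graphs, I figured it’s worth measuring latency independently from inside one of the idle Linux guests. Here’s a minimal sketch, assuming a Linux guest and a pre-created scratch file of a few hundred MB at /tmp/ioprobe.bin (hypothetical path); it times individual 4 KiB O_DIRECT reads so the page cache can’t mask the real device latency:

```python
# Sketch: time single 4 KiB direct reads from inside a Linux guest to
# cross-check vCenter's latency numbers. Assumes /tmp/ioprobe.bin exists
# (hypothetical path) and is at least a few MB, and that the filesystem
# supports O_DIRECT.
import mmap
import os
import random
import time

PATH = "/tmp/ioprobe.bin"   # hypothetical scratch file, create it first
BLOCK = 4096                # one 4 KiB read per sample, like a single IO
SAMPLES = 200

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)   # bypass the page cache
buf = mmap.mmap(-1, BLOCK)                      # page-aligned buffer, required by O_DIRECT

latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(0, size // BLOCK) * BLOCK
    start = time.perf_counter()
    os.preadv(fd, [buf], offset)                # one random aligned read, timed
    latencies.append((time.perf_counter() - start) * 1000)
os.close(fd)

latencies.sort()
print(f"median {latencies[len(latencies) // 2]:.1f} ms, "
      f"p99 {latencies[int(len(latencies) * 0.99)]:.1f} ms")
```

If the in-guest numbers roughly match the vCenter graph, the telemetry is probably fine and the array itself is the problem.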


I had the same issue, though with a 930-8i w/ 2GB cache. Honestly, performance sucked on all of my VMs. I reinstalled the server with Rocky Linux 8.8 and KVM using the same array, and performance was acceptable (the array was configured as an LVM volume). I then added an NVMe drive as an LVM cache, as sketched below, and performance was much better (good enough for my homelab). Too bad, since I really prefer VMware.
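For anyone wanting to replicate this, the cache attach boils down to a handful of LVM commands. A rough sketch, here wrapped in Python, assuming a volume group vg0, a data LV vmstore, and the NVMe at /dev/nvme0n1 (all hypothetical names, adjust to your layout):

```python
# Sketch of the LVM cache setup described above. Run as root; these
# commands modify your storage stack, so double-check device names first.
import subprocess

VG, DATA_LV, NVME = "vg0", "vmstore", "/dev/nvme0n1"

def run(*cmd):
    """Echo and run one LVM command, aborting on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("pvcreate", NVME)                                    # make the NVMe an LVM PV
run("vgextend", VG, NVME)                                # add it to the existing VG
run("lvcreate", "-L", "100G", "-n", "cache0", VG, NVME)  # carve a cache LV on the NVMe
run("lvconvert", "--type", "cache",                      # attach it as a dm-cache
    "--cachevol", f"{VG}/cache0", f"{VG}/{DATA_LV}")     # in front of the data LV
```

The cache size and writethrough/writeback mode are tunables; I just used defaults and it was already a big improvement.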

