Had an issue a few days ago on a VMware cluster, and was wondering if anyone here could help me understand some of the nuances of VMware and what it does with its datastores.

I've got a cluster of three host servers that each have ~250GB of local storage (unused), and all share two datastores. One datastore is 1TB of "capacity" storage on SATA disks on a SAN, and the other is 500GB of "performance" storage on a SAN with SAS drives behind SSD.

Not much has changed in my environment over the past few months, except for the underlying storage vendors. It is hosted storage at the datacenter, and the hosting provider has upgraded their SAN more than once. Each time, I have to migrate my VMs, which isn't a problem until I get to a PostgreSQL server. For that I have a MASTER/SLAVE relationship set up so I can fail over to the SLAVE server while I move the MASTER server; once done, I RSYNC the database directory and re-establish the MASTER/SLAVE relationship. I've done this three times now, so I've got some practice!

We have always ridden the storage close, around 10% free, to make efficient use of the $$, but last week the cluster froze up and ran out of space on the "capacity" datastore. The day before it crashed I had moved my last VM over to the new storage (which is a NetApp SAN) and had noted 127GB of disk space available. When it crashed I was RSYNCing between the MASTER & SLAVE servers, but that has become more or less routine at this point, this being the 3rd time I've done it. Swap space was not being used on the Linux VMs; the MASTER server has 24GB of RAM and the SLAVE has 12GB.

Question is: what could possibly have caused this much disk (127GB) to get eaten? The database being RSYNC'd was 99GB.

I asked my hosting provider about it and they explained that the new NetApp SAN does "zero-byte" snapshots to back up my data. Zero? Really? I'm skeptical about a snapshot being 0 bytes. The hosting provider tells me that the snapshots are taken and then copied to an additional (+5%) area of space that is not provisioned to me on the NetApp, so that none of my disk space is used for the snapshots. I've scoured my datastore looking for any temp files, or anything else out of the norm, and can't find anything. It sounds like a snapshot could have been taken just prior to the RSYNC operation and then ballooned out of control before the NetApp copied it off. It would be nice if a NetApp expert could explain this to me better.

Sorry to say this, but your hosting provider doesn't understand how a NetApp snapshot works. Let's say you have a 500GB volume with snapshots; by default NetApp will set aside 5% of that volume for snapshots. That 25GB is what NetApp calls the snapshot reserve. Now, a snapshot occurs at 13:00 - that's 0 bytes in size. Then you delete 100GB of files whilst having a clear-up. If the snapshot reserve is only 25GB, where does that disk space come from? It comes from the user space, the 475GB that you are paying for. What would happen is that the 25GB reserve would fill up and you'd lose a further 75GB of usable space.

They need to provide you with a full report of the snapshot sizes, retention times and volume layout. Also, it sounds like you're leveraging SnapMirror? That will also cost snap reserve space. Ask them to give you the output of a snap list command.
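For reference, the kind of output to ask for would come from commands along these lines, assuming the filer runs Data ONTAP 7-Mode (that's an assumption, and `vol1` is just a placeholder for whatever volume backs your datastore; a clustered ONTAP system uses different syntax):

```
# Per-snapshot sizes and ages for the volume backing the datastore
snap list vol1

# How much of the volume is set aside as snapshot reserve
snap reserve vol1

# Overall volume usage, including the .snapshot area, in human-readable form
df -h vol1

# The automatic snapshot schedule on that volume
snap sched vol1

# Whether SnapMirror relationships exist and how far behind they are
snapmirror status
```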
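On the VM side, one rough way to gauge how much data the RSYNC actually rewrites (rewritten and deleted blocks are exactly what a snapshot taken just before the copy would have to hold onto) is a dry run with statistics. The paths and host name here are only illustrative, not the original poster's actual layout:

```
# Estimate what rsync would transfer, without writing anything
rsync -a --dry-run --stats /var/lib/pgsql/data/ slave:/var/lib/pgsql/data/

# The "Total transferred file size" figure is a rough proxy for the churn
# that a pre-existing NetApp snapshot would pin on the volume
```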