This article will attempt to enlighten you about some of the benefits and drawbacks of using thin provisioning technology. The examples I'll give reference VMware, a leading virtualization vendor's hypervisor, but you can safely substitute just about any other virtualization technology (Hyper-V, Oracle VM, Xen, KVM) in its place.
Normally, when you provision storage and present it to a client, the entire amount is allocated immediately and becomes unusable by any other client; that chunk of disk is spoken for. The client then begins to use the storage and write data to it. The problem is that you almost never completely fill that storage, so there is some waste involved. Then along comes thin provisioning to save the day! Now we can identify a chunk of storage for use by a given client without having to pre-allocate all of it immediately. This allows us to do something called oversubscribing (or overprovisioning) our storage.
Let's look at a practical example: given a SAN with 1TB of capacity and thin provisioning enabled, you can allocate more than 1TB of total space to your clients. You could give client 1 a 500GB thin allocation, client 2 a 400GB thin allocation, and client 3 a 250GB thin allocation. For the math experts out there, that totals 1150GB, which is more than the 1TB we actually have. This works because we don't allocate all 500GB to client 1 right away; space is only consumed when data is actually written to disk. This gets us around the wasted space of a thick-provisioned scenario, but it opens the door to a more frightening possibility: running out of storage on the SAN itself. Follow that scenario to its unfortunate but inevitable conclusion: as clients 1, 2, and 3 write data and the SAN approaches capacity, you find yourself in a sticky situation. You either have to scale back usage from the client side (yeah, that's gonna happen), or you have to spend money and make your SAN bigger (see previous comment). Neither scenario is good.
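To make the arithmetic above concrete, here's a minimal sketch of the oversubscription math. The client names and numbers mirror the example in the text; everything else (variable names, the 90% warning threshold) is my own illustrative assumption, not any vendor's API or recommendation.

```python
SAN_CAPACITY_GB = 1024  # 1 TB of physical capacity

# Thin allocations promised to each client (logical sizes)
allocations_gb = {"client1": 500, "client2": 400, "client3": 250}

# Hypothetical amounts each client has actually written so far
written_gb = {"client1": 120, "client2": 80, "client3": 40}

promised = sum(allocations_gb.values())  # 1150 GB promised
used = sum(written_gb.values())          # 240 GB physically consumed

oversubscription_ratio = promised / SAN_CAPACITY_GB
print(f"Promised: {promised} GB ({oversubscription_ratio:.2f}x oversubscribed)")
print(f"Physically used: {used} GB of {SAN_CAPACITY_GB} GB")

# The danger: physical usage can creep toward capacity even though
# no single client has filled its allocation.
if used > 0.9 * SAN_CAPACITY_GB:
    print("WARNING: SAN nearly full -- add capacity or reclaim space!")
```

The key point the numbers show: the SAN is 1.12x oversubscribed the moment the allocations are handed out, even while only 240GB is physically in use.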
I used to consider myself firmly in the camp of "always thin provision" because of the space savings and the ability to overallocate. After living through a few scenarios like the one above, I decided to take a more pragmatic approach to thin provisioning. My approach today is this: if a volume needs to be artificially large and will stay very under-utilized, or if the volume has a low data change rate, then thin provisioning may be a good idea. If utilization is high and/or the data change rate is high, use thick provisioning. You're probably asking yourself right now: why does the data change rate matter? Allow me to explain in the next few paragraphs and the drawing below. (Note again, I'm using VMware as an example here, but this discussion is not specific to VMware.)
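My rule of thumb above can be sketched as a toy decision function. The thresholds here (50% utilization, 5% daily churn) are purely illustrative assumptions of mine, not vendor guidance; in practice you'd tune them to your environment.

```python
def provisioning_choice(utilization: float, daily_change_rate: float) -> str:
    """Both arguments are fractions of the volume size (0.0 to 1.0).

    Illustrative thresholds only: thin provision when the volume is
    under-utilized AND churns slowly; otherwise pre-allocate (thick).
    """
    if utilization < 0.5 and daily_change_rate < 0.05:
        return "thin"   # lots of free space, little churn: thin wins
    return "thick"      # heavy use or heavy churn: pre-allocate

# An artificially large volume that's 20% full and barely changes
print(provisioning_choice(0.20, 0.01))  # -> thin
# A busy volume, 70% full with 10% daily churn
print(provisioning_choice(0.70, 0.10))  # -> thick
# Note: low utilization but high churn still lands on thick,
# for the reason explained below.
print(provisioning_choice(0.20, 0.20))  # -> thick
```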
When you write to a thin provisioned volume and then delete that data from within the guest OS, neither the SAN nor the hypervisor knows you deleted anything. The SAN only knows that, at some point in the past, a block of data was written to. The fact that you delete the data on the client side doesn't matter to the SAN; it has no way of distinguishing real data being written from filesystem pointers being updated. The SAN is just not that smart, so it says, "OK, you wrote a chunk of data to sector 123. I'm going to keep that block allocated." Even if the write to sector 123 was just pointers being cleared by a delete operation inside the guest OS, the SAN says, "I have to keep that data." The result is that the volume "thickens" out from the SAN and hypervisor perspective. Think of it as a high water mark. In the third frame above, the yellow previously allocated blocks can be written to again by the OS, but the SAN and hypervisor still report those blocks as used because they have no visibility into what is actually stored in them. They only know data was written at one time; that data could be real, or all zeroes for all they know.
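The high-water-mark effect can be sketched in a few lines. This is a toy model of my own, not any real SAN's behavior: the guest OS tracks which blocks hold live data, while the SAN only tracks which blocks have ever been written. Deleting inside the guest shrinks the OS view, but the SAN's allocated count never goes down.

```python
class ThinVolume:
    """Toy model: two views of the same thin-provisioned volume."""

    def __init__(self, size_blocks: int):
        self.size_blocks = size_blocks
        self.ever_written = set()  # what the SAN sees as allocated
        self.live_blocks = set()   # what the guest OS sees as in use

    def write(self, block: int):
        self.ever_written.add(block)  # SAN allocates on first write, forever
        self.live_blocks.add(block)

    def delete(self, block: int):
        # The guest frees the block; the SAN has no idea and keeps it allocated.
        self.live_blocks.discard(block)

vol = ThinVolume(size_blocks=100)
for b in range(40):
    vol.write(b)      # guest writes 40 blocks
for b in range(30):
    vol.delete(b)     # guest deletes 30 of them

print(f"Guest sees {len(vol.live_blocks)} blocks in use")    # 10
print(f"SAN sees {len(vol.ever_written)} blocks allocated")  # 40: high water mark
```

This is exactly why the data change rate matters: a volume with heavy write-and-delete churn ratchets its high water mark upward until the thin volume is effectively thick, while consuming real capacity on the SAN the whole way.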