Spud Webb could reach great heights with a basketball in hand, and Danny Woodhead could weave up and down the field, both proving that sometimes smaller really is better. Deduplicating your production storage is also better, and when done correctly, faster.

How can this be, you may ask? If the array has to spend more time and overhead calculating hashes, updating tables, and generally manipulating bits and bytes, surely that would slow things down. Not when you use flash as an accelerator and separate the metadata from the data blocks. Keeping the hash tables and lookups on SSD takes the slowest part of the process off spinning disk entirely.

Faster processing also means you can squeeze more data into the fastest layer of your array: the cache. The more data you can fit there, the more data you can serve to your applications at extremely low latencies. Low latency means fast application response, and as I've said time and time again, fast application response means happy end users. Happy end users mean IT staff who get to focus on other things besides troubleshooting.

Combine a deduplication process that strips the slow parts away from spinning disk and turbocharges them with SSD, with an array that crams more of your active working set into cache, and you get a faster array. Smaller really is faster.
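To make the metadata/data split concrete, here's a minimal sketch of how inline deduplication works in principle. This is a toy illustration, not any vendor's implementation: incoming data is carved into fixed-size blocks, each block is hashed, and only unique blocks are stored. The hash index and per-volume block maps (the "metadata" an array would keep on flash) are deliberately kept in separate structures from the block data itself. The class name, block size, and structure are all hypothetical choices for illustration.

```python
import hashlib

BLOCK_SIZE = 4096  # a common 4 KiB block size; real arrays vary


class DedupStore:
    """Toy inline-deduplicating block store.

    The metadata (hash index and per-volume block maps) lives in
    structures separate from the unique data blocks -- the same
    separation that lets a real array keep metadata on fast SSD
    while the blocks themselves sit elsewhere.
    """

    def __init__(self):
        self.blocks = {}   # hash -> unique block bytes  (the data)
        self.volumes = {}  # volume name -> list of hashes (the metadata)

    def write(self, volume, data):
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            chunk = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.blocks:
                self.blocks[digest] = chunk  # first time seen: store it
            refs.append(digest)              # duplicate: metadata only
        self.volumes[volume] = refs

    def read(self, volume):
        # Reassemble the logical volume from its block references
        return b"".join(self.blocks[h] for h in self.volumes[volume])
```

Notice that a duplicate write costs only a hash computation and a metadata update, never a second copy of the block, which is exactly why putting that metadata on the fastest media pays off.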