Data storage has come a long way since the first hard-disk drive (HDD) hit the market in 1956. The IBM 350 Disk File was the size of two refrigerators, weighed more than a ton, had 50 spinning magnetic disks the size of pizza pans and could hold roughly 5MB of data. Today, an HDD capable of storing multiple terabytes weighs just a couple of pounds and will more or less fit in the palm of your hand.
However, disk technology has not kept pace with CPU technology. While storage density has improved dramatically, mechanical limits on rotational speed and head movement have kept access times largely flat. The result is a significant performance gap between storage and compute, leaving organizations struggling to keep up with rapid data growth and large-scale social, mobile, big data and cloud initiatives.
More and more organizations are incorporating solid-state drives (SSDs) into the storage environment to help address these issues. SSDs use flash memory instead of spinning magnetic disks to store and retrieve data. With no moving parts, SSDs eliminate the mechanical chokepoints of disk storage – enabling read/write response times that are orders of magnitude faster than the best HDDs.
Flash is particularly effective in virtualized environments in which multiple operating systems run on a single server. These mixed workloads present the storage array with a stream of random input/output (I/O) requests, which is difficult for spinning disks to handle. Disk heads must constantly seek back and forth across the platters looking for data, which adds precious milliseconds to every read and write and creates a huge performance bottleneck. This is known as the “I/O blender” effect.
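The cost of all that seeking can be sketched with a back-of-envelope latency model. The figures below (average seek time, spindle speed) are illustrative assumptions for a typical 15K RPM enterprise drive, not measurements from any specific product:

```python
# Back-of-envelope model of why random I/O hurts spinning disks.
# AVG_SEEK_MS and RPM are illustrative assumptions, not vendor specs.

AVG_SEEK_MS = 4.2                        # assumed average seek time
RPM = 15_000                             # assumed spindle speed
HALF_ROTATION_MS = (60_000 / RPM) / 2    # avg rotational latency = half a spin

def hdd_random_latency_ms():
    """Each random request pays a seek plus, on average, half a rotation."""
    return AVG_SEEK_MS + HALF_ROTATION_MS

def hdd_random_iops():
    """Upper bound on random I/Os per second for a single spindle."""
    return 1000 / hdd_random_latency_ms()

print(f"per-request latency: {hdd_random_latency_ms():.1f} ms")
print(f"random IOPS ceiling: {hdd_random_iops():.0f}")
```

Under these assumptions each random request costs over 6 ms, capping a single spindle at a few hundred IOPS – which is why the milliseconds described above add up so quickly when many virtual machines share one array.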
Organizations have typically tried to work around the I/O blender effect by adding more spindles, provisioning more disk space and creating storage silos to meet the different performance and capacity requirements of various applications. That approach is not sustainable. You waste storage capacity and physical space while increasing the management burden and the power required to spin all those disks; in the end, the cost per gigabyte of storage can double.
Flash memory, by contrast, excels at random I/O. With no platters to rotate and no heads to position, it offers essentially uniform access time to every data location, so it is as fast on random workloads as on sequential ones – producing huge gains in I/O operations per second (IOPS) compared to HDDs. A single flash drive can deliver tens of thousands of IOPS, the equivalent of an entire midrange disk array.
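The "one flash drive equals a midrange array" claim is simple spindle arithmetic. Both per-device IOPS figures below are illustrative assumptions (a 15K RPM drive on a fully random workload versus a single enterprise SSD), chosen only to show the scale of the gap:

```python
# Rough spindle-count arithmetic behind "one flash drive ~ a midrange array".
# Both per-device IOPS figures are illustrative assumptions.
import math

HDD_RANDOM_IOPS = 160      # assumed 15K RPM spindle, fully random workload
SSD_RANDOM_IOPS = 50_000   # assumed single enterprise flash drive

# How many spinning disks it would take to match one SSD on random I/O
spindles_needed = math.ceil(SSD_RANDOM_IOPS / HDD_RANDOM_IOPS)
print(f"spindles to match one SSD: {spindles_needed}")
```

Even with conservative SSD numbers, the ratio lands in the hundreds of spindles – the scale of a midrange array – which is why piling on disks to chase IOPS drives up cost, space and power so quickly.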
Additionally, flash saves on power, cooling and physical space, all of which are in short supply in many data centers. Per IOPS, flash uses around 600 times less energy than disk. Some data center operators have reported up to 90 percent reductions in power and cooling costs when replacing disk with flash.
With the price of usable flash storage capacity now comparable to that of high-performance disk drives, it is natural to explore the use of flash to accelerate the speed and agility of mainstream applications. From a performance perspective, all-flash arrays meet the most demanding requirements.
These purpose-built arrays, also known as solid-state arrays (SSAs), aggregate flash into one large pool and implement flash management services across the entire array. They generally include an operating system and accompanying data management software – deduplication, compression, replication, snapshots, thin provisioning, advanced data protection and storage scaling – all tuned specifically for flash media.