John Edwards
Contributing writer

Choosing between solid-state and hard-disk drives

News
Dec 02, 2019 | 8 mins
Data Center | Enterprise Storage

Before the last hard-disk drive spins down, IT teams need to navigate an HDD-to-SSD transition. Here's a look at current options and best practices.

Credit: monsitj / Getty Images

When it comes to venerable IT technologies, it’s hard to beat the hard-disk drive (HDD).

Introduced by IBM in 1956, the Model 350 RAMAC – larger than a home refrigerator – could store up to 3.75MB of data. While modern HDDs are smaller, faster, and capable of holding multiple terabytes of data, the underlying technology has changed little over the past 60-plus years.

HDDs and tape drives ruled the desktop and data-center storage world virtually unopposed until the past decade or so, when NAND flash solid-state drives (SSDs) matured to the point where they could not only rival or surpass HDDs in terms of capacity, speed and reliability, but also compete on cost for certain applications.

Most experts believe that SSDs are destined to eventually become the predominant storage technology. However, making the choice between SSDs and HDDs today remains far from clear cut. “There is no straightforward answer,” says Devesh Tiwari, an assistant professor of electrical and computer engineering at Northeastern University.

Tiwari advises IT leaders to consider a variety of issues before deciding on the appropriate technology for a particular storage application, including workload size and demand, latency and bandwidth needs, and storage architecture and infrastructure connectivity requirements. It’s also helpful to assess basic storage factors such as elasticity, reliability and availability, while understanding that conclusions made today may not hold true in the near future as SSD technology and pricing continue to evolve. “Nothing is constant; this space is evolving rapidly,” Tiwari says.

Different types of SSDs

A traditional HDD stores data on a high-speed rotating disc, known as a platter. As the platter spins, an arm equipped with a pair of magnetic heads (one for each side of the platter) moves over the surfaces to read or write data. Bits of data are organized into concentric, circular tracks. Each track is divided into smaller areas called sectors. Most hard drives use a stack of platters, mounted on a central spindle with a small gap in-between them. A sector map created by the HDD records which sectors have been used as well as those that remain free.
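For readers who like to see the arithmetic, the short sketch below shows how a drive’s raw capacity follows from that geometry. The platter, track and sector counts are purely illustrative assumptions, not the specifications of any real drive:

```python
# Rough illustration of how HDD capacity follows from its geometry.
# All numbers are hypothetical, chosen only to show the arithmetic;
# real drives vary sector counts per track across recording zones.

platters = 4                  # platters mounted on the spindle
surfaces = platters * 2       # one read/write head per side of each platter
tracks_per_surface = 500_000  # concentric tracks on each surface
sectors_per_track = 1_000     # simplification: real drives use zoned recording
bytes_per_sector = 4_096      # modern "Advanced Format" sector size

capacity_bytes = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(f"Approximate raw capacity: {capacity_bytes / 1e12:.1f} TB")  # ~16.4 TB
```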

Unlike an HDD, an SSD has no moving parts. Instead, data is written to and read from a substrate of interconnected flash memory chips. SSD manufacturers stack the memory chips in a grid to achieve varying densities. To keep stored data from being volatile, SSDs use floating-gate transistors (FGTs) to hold an electrical charge. This technique enables an SSD to retain stored data even when it’s not connected to a power source.

IT organizations can turn to several different types of SSDs, including:

  • SLC: Single-Level Cell SSDs store a single bit in each cell, an approach that aims to produce enhanced performance, endurance and accuracy. Pricier than most other flash memory options, SLC SSDs are widely used for an extensive range of mission-critical enterprise applications and storage services.
  • TLC: Less expensive than SLC is Triple-Level Cell NAND flash technology. Storing three bits per cell, TLC is typically used for applications with low performance and endurance requirements. The technology is best suited for read-intensive applications.
  • MLC: Multi-Level Cell SSDs, which store two bits per cell, are generally viewed as a consumer-grade technology. While stuffing more than one bit into a memory cell conserves space, the tradeoff is a shorter useful life and diminished reliability. MLC SSDs often find a home in desktop and notebook computers.
  • eMLC: Enterprise Multi-Level Cell technology aims to span the performance, reliability and price gap between SLC and MLC SSDs. While still storing two bits per cell, eMLC takes advantage of a controller that enhances reliability and performance by optimizing data placement, wear leveling and other key storage operations.
  • QLC: Quad-Level Cell technology supplies more capacity than SLC, MLC and TLC NAND SSDs, but not as much extra space as might be expected. While MLC doubled the per-cell capacity of SLC, and TLC added 50 percent over MLC, QLC supplies a relatively modest 33 percent boost over TLC (see the short calculation after this list). Still, QLC’s cost, density, speed and power efficiency attributes make it a strong choice for applications such as machine learning, data analytics and media streaming.

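The diminishing returns noted above fall directly out of the bits-per-cell counts. A quick sketch, where the cell counts are the only inputs and the rest is arithmetic:

```python
# Relative per-cell capacity gains across NAND flash generations.
bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

previous = None
for name, bits in bits_per_cell.items():
    if previous:
        prev_name, prev_bits = previous
        gain = (bits - prev_bits) / prev_bits * 100
        print(f"{name} stores {gain:.0f}% more bits per cell than {prev_name}")
    previous = (name, bits)
# Output: MLC +100% over SLC, TLC +50% over MLC, QLC +33% over TLC
```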
All types of SSDs fall into the category of “consumable media,” meaning they gradually wear out as data is written to the drive over and over. SSD failure is usually gradual, as individual cells fail and overall performance degrades, although sudden failure may occur as well. Many SSD manufacturers address the gradual failure issue, known as “wear-out,” by overprovisioning their products, building in slightly more flash memory than is actually claimed in product literature.

“All SSD manufacturers provide an endurance rating called ‘drive writes per day’ (DWPD), which corresponds to their expected use case,” says Paul von-Stamwitz, a senior storage architect at Fujitsu Solutions Lab. Read-intensive drives, for instance, can be used in applications that have a light write workload and will therefore have a lower DWPD rating than mixed-use drives. “As long as the workload matches the DWPD rating, the SSDs should easily last throughout the warranty period,” von-Stamwitz notes.
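To see how a DWPD rating translates into a usable lifespan, the rough sketch below converts the rating into a total-terabytes-written budget and compares it with a daily write workload. The capacity, rating and workload figures are hypothetical examples, not vendor numbers:

```python
# Convert a DWPD (drive writes per day) rating into total writes allowed
# over the warranty period, then compare it with an expected workload.
# Capacity, rating and workload below are hypothetical examples.

capacity_tb = 3.84          # usable drive capacity in TB
dwpd = 1.0                  # vendor endurance rating
warranty_years = 5

tbw_budget = capacity_tb * dwpd * warranty_years * 365   # total TB written allowed
daily_writes_tb = 2.5                                    # expected daily writes

years_to_wear_out = tbw_budget / (daily_writes_tb * 365)
print(f"Endurance budget: {tbw_budget:,.0f} TBW")
print(f"Projected life at this workload: {years_to_wear_out:.1f} years")
```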

Most enterprise SSDs are based on TLC technology, primarily because it costs less than SLC and MLC flash. TLC SSDs are typically used for routine read tasks and light-duty write operations. QLC SSDs, featuring a low DWPD rating that’s counterbalanced by density, speed and power efficiency advantages, are frequently applied to high-performance, read-intensive applications. Meanwhile, a growing number of IT organizations seeking higher performance are turning to SSDs based on 3D XPoint, an emerging class of non-volatile storage and memory devices that are faster and more durable than NAND flash. “These drives are suitable for specific applications that require consistent, ultra-low latency performance such as real-time analytics,” von-Stamwitz explains.

SSD performance vs. cost

Generally speaking, SSDs outperform HDDs across the board. Having no moving parts makes SSDs inherently more reliable and energy efficient. “A RAID-5 SSD volume may be as reliable as a RAID-6 hard-disk volume, and rebuilds are faster with SSDs,” von-Stamwitz says. “In addition to reducing energy cost, the extra performance may allow for a reduced footprint in the data center, since fewer SSDs can produce the same number of IOPS (input/output operations per second) as hard disks,” he adds.
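The footprint argument is easy to sanity-check with rough numbers. The sketch below estimates how many drives of each type are needed to reach a target random-read IOPS figure; the per-drive IOPS values are order-of-magnitude assumptions, not benchmark results:

```python
# Estimate how many drives are needed to reach a target random-read IOPS.
# Per-drive figures are rough, illustrative assumptions, not benchmarks.
import math

target_iops = 200_000
iops_per_hdd = 180        # typical order of magnitude for a 10K RPM HDD
iops_per_ssd = 90_000     # typical order of magnitude for an enterprise SSD

hdds_needed = math.ceil(target_iops / iops_per_hdd)
ssds_needed = math.ceil(target_iops / iops_per_ssd)

print(f"HDDs needed: {hdds_needed}")   # ~1,112
print(f"SSDs needed: {ssds_needed}")   # 3
```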

Yet SSDs aren’t always the best choice for every enterprise data-storage function. “The determining factor should be what you’re doing with them,” advises Matthew Tonelson, operations engineer for The St. Paul Group, a Baltimore-based data reporting and connectivity solution provider. “An SSD would be an unnecessary cost to store old files that will rarely be used, due to high cost and low use,” he explains. “Also, if there are a high amount of writes to the drive, an SSD will need to be replaced often, leading to more cost.”

Cost is almost always a critical factor when deciding whether to use an HDD or SSD for a specific storage task, particularly since an SSD is currently four to five times more expensive than a comparable HDD, says Shailesh Kumar Davey, vice president and director of engineering for IT management software developer ManageEngine. “Fortunately, costs are falling, and newer technologies like 3D XPoint offer a better price-performance ratio,” he says.

Perhaps the largest caveat associated with SSDs, other than cost and long-term wear issues, is the technology’s tendency to occasionally fail without warning. “When a traditional HDD fails, there’s usually a warning period of slower-than-normal performance,” says Steve Buchanan, a support technician at Limestone Networks, a Dallas-based data-center services provider. On the other hand, an SSD “can crumble with zero warning unless properly monitored with software,” he notes.
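The monitoring Buchanan mentions can be as simple as polling SMART wear data. The sketch below assumes smartmontools is installed and an NVMe drive that reports a “Percentage Used” field; attribute names vary by drive and interface, so the device path and the parsing here are illustrative assumptions to adapt per device:

```python
# Minimal sketch of SSD wear monitoring: poll SMART health data via
# smartctl and flag drives approaching their rated endurance.
# Assumes smartmontools is installed; NVMe drives typically report a
# "Percentage Used" field, but attribute names vary by drive and interface.
import re
import subprocess

def ssd_percentage_used(device: str) -> int | None:
    """Return the NVMe 'Percentage Used' value, or None if not found."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    match = re.search(r"Percentage Used:\s+(\d+)%", out)
    return int(match.group(1)) if match else None

used = ssd_percentage_used("/dev/nvme0")   # hypothetical device path
if used is not None and used >= 80:
    print(f"Warning: SSD wear at {used}% of rated endurance")
```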

Moving from HDDs to SSDs

The best way to begin planning a modern storage system deployment is by first determining how the array will ultimately be used. “If you’re going to be delivering large files to many users at once, then the more SSDs you can run the better,” Buchanan suggests. “If, on the other hand, you need a safe ‘file and forget’ solution for old documents you might only need occasionally, then focus on using mostly traditional HDDs.”

Tiwari agrees. “If you want fast response time, are mostly read-heavy and have deep pockets, SSD might be the first option you want to explore,” he says. “If your data is going to be sitting for days and months before getting accessed, and requires really large amounts of capacity, look the HDD way.”

It’s also important to understand, however, that if current pricing trends continue, HDDs may soon be heading out the data-center door for good. “The price of SSDs has come down drastically in the past few years, so much so that many data centers have migrated or are planning to migrate … to all solid-state drives,” von-Stamwitz says. “Generally, the only reason not to move from HDDs to SSDs is the cost, and it is becoming less and less an issue considering all the benefits of SSDs.”