OPINION

Dell Takes the Long View With Hyper Scale Computing

Technologically inclined businesses and other organizations have long enjoyed what I call IT "trickle down": Continuing, rapid development results in the mainstreaming of hardware, software and services that were originally unthinkably expensive and specialized. This doesn't mean that higher-end products ever disappear.

In fact, the baseline performance of enterprise solutions typically ratchets ever upward. However, the effective result everywhere else is to put what were once radically powerful and unaffordable tools into the hands of most any business.

Supercomputing, along with associated high-performance and technical computing, shows how this works. These technologies were once relegated almost entirely to well-financed government and university labs and major enterprise data centers. The continual evolution of Intel's x86 microprocessor architecture, along with complementary clustering, grid and virtualization technologies, has made x86 the dominant force it is in modern supercomputing today.

The latest Top500.org list of the world's best supercomputers, published in June, provides clear evidence of this. The top-rated supercomputer, the "Sequoia" installation at the DOE's Lawrence Livermore National Laboratory, is an IBM BlueGene/Q system based on the company's Power architecture, as are three of the list's other top 10 systems. However, five of the top 10 utilize Intel Xeon or AMD Opteron processors. More importantly, out of all the systems on the latest Top500.org list, nearly 87 percent (434) are x86-based.

Moreover, the users of these systems have changed significantly. In 1993, when Top500.org began collecting supercomputer statistics, fewer than a third of the systems were being used in industrial settings. Today, more than half of the world's fastest computers are being used by enterprises. Notable changes have also occurred elsewhere, as highly scalable and affordable x86-based technologies have taken supercomputing and HPC deep into the commercial market.

There’s Something About Dell

What does any of this have to do with Dell's new C8000 Series? Quite simply, these new solutions are designed to extend the company's already substantial hyper scale computing portfolio into new areas.

Dell launched its Data Center Solutions (DCS) group in 2007 to focus on the emerging commercial hyper scale market, and the company has done very well overall. IDC's analysis of FY2011 worldwide server sales revenues placed Dell firmly in first place in Density Optimized (IDC's term for hyper scale) system revenues with a 45.2 percent share (HP was a distant second with a mere 15.5 percent). While the segment's FY2011 revenues totaled less than US$2 billion (compared to the worldwide x86 server market's $34.4 billion), IDC said that demand for Density Optimized systems grew by a robust 33.8 percent in FY2011, compared to just 7.7 percent for x86 solutions.

Dell means for the new C8000 Series to expand its leadership position by using highly configurable, flexibly deployable solutions to widen the pool of hyper scale use cases and potential customers. Along with typical HPC, Web 2.0 and hosting applications, the C8000 Series can also support both parallel processing-intensive scientific visualization workloads and the high-volume storage demands of Big Data applications.

The new systems also take full advantage of Dell's innovative work in fresh air cooling, which allows servers to be deployed without costly air conditioning systems or cooling upgrades. In addition, they can be placed in nontraditional settings, including Dell's Modular Data Center infrastructures. That means Dell's C8000 Series is likely to find fans among a variety of organizations, including new and even smaller companies investigating the hyper scale market.

The new Dell systems should also pique the interest of longtime HPC and technical computing players. In fact, the Texas Advanced Computing Center (TACC) is an early advocate of the C8000 Series and is basing its upcoming petascale Stampede installation on "several thousand PowerEdge C8000 servers with GPUs to help speed scientific discovery." When it opens for business in 2013, Stampede will qualify as the most powerful system in the National Science Foundation's eXtreme Digital (XD) program, with a peak performance of 10 petaflops, 272 terabytes of total memory and 14 petabytes of disk storage.

Far-Sighted Strategy

So how big a deal is Dell's C8000 Series? Some will suggest that the small size of the hyper scale market (at least compared to general purpose server opportunities) makes any effort small potatoes. That may be true in today's dollars but makes less sense looking ahead. Several of the C8000 Series' use cases (hosting, Web 2.0 and Big Data, in particular) are growing rapidly, and interest in commercial HPC and scientific computing applications is also robust.

Given the development of these markets over the past half-decade and the promise of their continuing growth, Dell's 2007 entry into hyper scale solutions looks extremely far-sighted. In light of the company's longstanding investments in that effort, its resulting leadership position is hardly a surprise. The new C8000 Series shows that Dell continues to look forward, developing solutions its customers will need tomorrow but can also put to good use today.

Charles King

E-Commerce Times columnist Charles King is principal analyst for Pund-IT, an IT industry consultancy that emphasizes understanding technology and product evolution, and interpreting the effects these changes will have on business customers and the greater IT marketplace. Though Pund-IT provides consulting and other services to technology vendors, the opinions expressed in this commentary are King's alone.
