Where now for storage? Dell EMC, NetApp and HPE

by Emma

In this first article of two, we look at Dell, HPE, and NetApp. All three are making a big play for the cloud, with Dell EMC and HPE’s consumption models very prominent, and NetApp – the only pure-play storage supplier of the three – noticeably vocal about areas such as containers.

IDC forecasts that more than 50% of core datacentre infrastructure and 75% of edge infrastructure will be sold as a service by 2024.


That trend is driven by the cloud and its as-a-service model, and it potentially hits storage suppliers hard, given their historical dependence on the sale of hardware products.

So, we have seen the big players in storage adapt to the new world by offering consumption-based purchasing models via the cloud, on-premise deployments, and hybrid modes that straddle the two.

But that’s not the only trend. There is also a shift towards the edge and towards analytics-based IT activities, sometimes combined, as well as the rise of containers as a rapidly scalable method of application deployment.

And, of course, the storage array is not dead yet. But it is – for primary storage use cases at least – almost always flash-based and nearly always available with NVMe for very high performance. Elsewhere, flash is making inroads even in secondary use cases, mainly via the latest, bulk storage-focussed flash generation, QLC NAND.

The most prominent storage players reflect these trends in ways shaped by their history, size, and reach in IT and beyond. Here we look at Dell, HPE, and NetApp.

Dell EMC

Dell’s big push is towards making everything in IT infrastructure available as a service. That’s not to say there have been no storage hardware developments. There have. But if you had to characterize Dell Technologies’ main thrust, it’s summed up by Project Apex.

Project Apex was launched last summer at the virtual version of its annual shindig. It offers customers an Opex consumption model for Dell Power-branded products via local datacentre, edge, and cloud.

Project Apex services are coming online this year, starting with storage-as-a-service and Dell EMC storage. Further Project Apex rollouts will include hyper-converged infrastructure, Dell PowerEdge servers, PowerOne networking, and eventually workstations and laptops.

Having said all that, last year, Dell EMC did launch its new PowerStore midrange array, PowerScale NAS, PowerFlex software-defined storage, and rugged versions of its VxRail hyper-converged infrastructure and PowerEdge XE2420 server. Later, the XE7100 storage server came along, targeted at real-time analytics in the hybrid cloud.

Via Dell’s web portal – Cloud Console – customers can order IT resources for delivery on-premises as a service. Customers specify the type of storage, capacity, performance, SLAs, and pricing requirements in the Cloud Console.

Dell EMC also has the SC and PS Series storage arrays – formerly Compellent and EqualLogic – on its books.

NetApp

Like others, NetApp has struggled with customers migrating from on-premises storage to the cloud. NetApp’s strategy, therefore, centers on offering its storage software as cloud-native services. These subscription services include NetApp Cloud Volumes on AWS and Google Cloud Platform and Azure NetApp Files.

NetApp also sells Cloud Volumes OnTap through AWS. This runs as an Amazon EC2 instance deployed from an Amazon Machine Image (AMI) and uses Amazon Elastic Block Store (EBS) to serve as the equivalent of an on-premises OnTap storage node.
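For a sense of the AWS building blocks underneath such a node, the sketch below uses the boto3 Python SDK to create an EBS volume and attach it to a running EC2 instance – the kind of raw capacity a Cloud Volumes OnTap instance aggregates. It is illustrative only and is not NetApp’s deployment tooling; the region, availability zone, volume size and instance ID are placeholders.

```python
# Minimal sketch of the AWS primitives behind a cloud storage node:
# an EC2 instance with EBS volumes attached as backing capacity.
# Not NetApp tooling; region, zone, size, instance ID and device name
# are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a general-purpose SSD volume in the instance's availability zone.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,          # GiB, placeholder capacity
    VolumeType="gp3",
)

# Wait for the volume to become available, then attach it to the instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```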

Meanwhile, NetApp launched Project Astra last April. Centered on a containerized version of OnTap, it is a data management service for managing, protecting and moving Kubernetes containerized workloads in the public cloud and on-prem.

NetApp made other moves into containerization in 2020. NetApp Spot Storage and Spot Ocean abstract the compute and cloud storage needed to run a Kubernetes farm. At the same time, Spot’s continuous optimization platform combines analytics and automation, and brokers cloud pricing to help organizations control costs.

Earlier container-focussed work included NetApp’s Trident open-source driver for provisioning container storage. Last year, NetApp also bought Talon Storage, which brought global file caching and data sync capabilities, and CloudJumper, which provides virtual desktops for customers.
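To make the Trident idea concrete, here is a minimal sketch using the official Kubernetes Python client to register a Trident-backed StorageClass and request a volume from it. It assumes Trident’s CSI driver (provisioner name csi.trident.netapp.io) is already installed in the cluster; the class name, backend parameter and capacity are illustrative rather than NetApp-documented defaults.

```python
# Minimal sketch, assuming the `kubernetes` Python client and a cluster
# where NetApp's Trident CSI driver is already installed. The StorageClass
# name, backend parameter and capacity below are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

# Define a StorageClass that delegates volume provisioning to Trident.
sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="trident-nas"),
    provisioner="csi.trident.netapp.io",
    parameters={"backendType": "ontap-nas"},  # assumed backend selector
)
client.StorageV1Api().create_storage_class(body=sc)

# Request a volume; Trident provisions it dynamically on the NetApp backend.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="trident-nas",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

In practice most teams would apply the equivalent YAML manifests, but the flow is the same: the StorageClass points Kubernetes at Trident, and Trident provisions the volume on NetApp storage when the claim is created.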

NetApp introduced Keystone as the consumption model for its hardware storage products in 2019. Customers commit to a minimum storage capacity and timeframe and select from three performance levels and service offerings, such as files, blocks, or objects. NetApp installs and supports the equipment.

NetApp’s FAS array line-up expanded in 2020 with the FAS500f high-capacity model outfitted with quad-level cell (QLC) NAND solid-state drives (SSDs). Compared with previous NAND generations, QLC flash offers lower endurance and performance, with the trade-off being a lower cost per gigabyte.

HPE

HPE – with its GreenLake consumption model – has said it wants its entire portfolio to be available as a service by 2022. It also places importance on activities at the edge, as well as in containers and analytics.

In his keynote at the 2020 Discover event, CEO Antonio Neri said new HPE services would address customers’ needs to adapt edge and on-prem workloads to work with the cloud.

New elements in the portfolio include the Ezmeral Container Platform and Ezmeral ML Ops, delivered as cloud services through GreenLake.

HPE also unveiled HPE Cloud Volumes Backup, which converts proprietary backup data sets to a standard data format. That allows multiple data sets drawn from backup to be available in one form to secondary workloads – including analytics – running from the public cloud.

Before the 2020 event, HPE had also launched the Primera storage platform, possibly replacing its 3PAR range. Primera is an all-flash array positioned as Tier 0 enterprise storage. It uses HPE InfoSight to provide an intelligent storage platform that incorporates AI and machine learning to predict and prevent storage disruptions.

HPE Primera uses custom chips to enable massively parallel transport of data across dedicated PCI Express lanes. It is equipped to support NVMe flash and persistent memory, suited to demanding workloads such as AI training on massive data sets.

Elsewhere HPE made its InfoSight predictive analytics resource management capabilities available on its HPE SimpliVity hyper-converged infrastructure platform.

On the container front, HPE has its Container Platform, which combines the supplier’s BlueData and MapR acquisitions with an open-source Kubernetes layer. BlueData provides persistent data stores that can support stateful legacy applications, while MapR is a distributed file system.
