My trials with and observations of PureStorage’s All Flash Array

Wow – this looks lean! That’s the first impression you get when you log on to the FlashArray’s console, and rightly so. It is ridiculously easy to get going: there are no RAID levels to worry about (as with most, if not all, flash arrays); you just carve out LUNs, assign them to your hosts and away you go (a rough provisioning sketch follows the list below). I was in a meeting with Pure’s CTO, Michael Cornwell, and he said they wanted the console to be so simple that a six year old could create LUNs and assign them to a host or group of hosts. My experience was exactly that. Now to the details of my tests with the product. Note these tests were not meant to stress the array, but rather to check its:

  • resiliency and HA capabilities;
  • non-disruptive upgrade capability; and
  • ease of administration and single point of management.
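
As a quick illustration of how lean the provisioning workflow is, below is a minimal sketch using the purestorage Python REST client (pip install purestorage). The array address, API token, WWN, volume and host names are all hypothetical, and the method names should be checked against the client’s documentation; I did all of this through the GUI, so treat this purely as the same idea expressed in code.

```python
# Minimal sketch: carve out a LUN and present it to an ESXi host.
# Assumes the 'purestorage' REST client; the array address, API token,
# WWN and object names below are hypothetical.
import purestorage

array = purestorage.FlashArray("flasharray.example.local",
                               api_token="YOUR-API-TOKEN")

# Create a 2 TB volume (the datastore size used in these tests).
array.create_volume("vdi-datastore-01", "2T")

# Register an ESXi host by its FC WWN and connect the volume to it.
array.create_host("esx-blade-01", wwnlist=["21:00:00:24:FF:00:00:01"])
array.connect_host("esx-blade-01", "vdi-datastore-01")
```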

Test cluster:

5 x HP ProLiant BL685c G7 blades running ESXi 5.5 with FC HBAs. Each host had 4 sockets with 16 cores each, for a total of 64 logical processors per host.

Test array:

Pure’s FlashArray FA-420 (upgraded to 450 during these tests) running firmware 4.0.12 (upgraded to 4.0.15 during these tests)

Test datastores:

55 x 2 TB VMFS 5 datastores.

Test workloads:

2,000 fully loaded Windows 7 64-bit Citrix VDI desktops, similar to the ones used for production VDI workloads. Some of these ran a synthetic workload generated by IOmeter: 4 KB blocks, randomized, 85% writes and 15% reads, as is typical of a VDI workload. The IOmeter workload was kicked off by a script after boot-up.
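
For reference, here is a minimal Python sketch of the same access pattern: random 4 KB I/Os at roughly 85% writes and 15% reads against a scratch file. This is not the IOmeter profile itself, just a self-contained illustration of the workload shape; the file path, file size and I/O count are made up.

```python
# Rough stand-in for the IOmeter profile used in these tests:
# random 4 KB I/Os, ~85% writes / 15% reads, against a scratch file.
# The path, file size and I/O count are hypothetical.
import os
import random

BLOCK_SIZE = 4 * 1024        # 4 KB blocks
FILE_SIZE = 1 * 1024**3      # 1 GiB scratch file
WRITE_RATIO = 0.85           # 85% writes, 15% reads
IO_COUNT = 100_000
PATH = "scratch.dat"

# Pre-allocate the scratch file.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

payload = os.urandom(BLOCK_SIZE)
blocks = FILE_SIZE // BLOCK_SIZE

with open(PATH, "r+b") as f:
    for _ in range(IO_COUNT):
        f.seek(random.randrange(blocks) * BLOCK_SIZE)
        if random.random() < WRITE_RATIO:
            f.write(payload)       # random 4 KB write
        else:
            f.read(BLOCK_SIZE)     # random 4 KB read
```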

Tests performed with active workloads running (in some instances I also kicked off bootstorms while the tests were being performed, for some added stress):

  • Controller disconnected: event log entries for path failure were noticed in ESXi, with no effect on the VDI workloads. The dip you see in the red boxes is the web server switching over to the redundant controller.

[Screenshot: Pure1]

  • Up to 5 flash disk failures: in total, 5 flash disks were pulled out. Initially, 2 disks were pulled at once; the remaining 3 were pulled just after the array had recovered from the mimicked disk failure. The rebuild was super quick, with no visible loss of IO. The NVRAM cards were also pulled out of the controllers (one at a time, for obvious reasons), with no visible effect.

[Screenshot: Pure2]

  • To demonstrate the non-disruptive code upgrade ability of the array, the following tests were conducted:

Code upgrade:

[Screenshot: Pure4]

During the code upgrade from 4.0.9 to 4.0.15, there was no loss of connectivity to the array. The red boxes show an absence of information simply because no data was being collected during that window. ESXi continued to run with no errors or event log entries. Note this wasn’t a major firmware upgrade, just a minor one; I was advised major upgrades behave just the same.

Controller 1 upgrade:
[Screenshot: Pure6]

No issues were detected in the VDI desktops when the controllers were upgraded from the FA-420 to the FA-450 model. To add rigour to this test, a bootstorm was kicked off to load the array further. IO switched from one controller to the other seamlessly. The blanked section you see is the missing IO on the controller being upgraded.

Disk shelf addition:

[Screenshot: Pure7]

A disk shelf was added non-disruptively to the array, doubling the capacity from 6.37 TB to 12.74 TB. As before, a bootstorm was added to the equation. No issues were experienced; it was a truly non-disruptive addition. The highlight here was that I, as the VM admin, didn’t have to do anything at all. Sweet!

  • vSphere integration:

PureStorage allows for vSphere integration by installing a plugin in the associated vCenter Server. The following dialog box was used to configure vCenter for monitoring and managing the array from a single pane of glass.

[Screenshot: Pure18]

The plugin was enabled in the Web Client, resulting in datastores showing up as below:

[Screenshot: Pure12]
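
If you want to verify the same thing outside the Web Client, a short script against the vCenter API works too. The sketch below uses the pyVmomi library; the vCenter address, credentials and the "pure-" naming prefix are my own assumptions, not something the plugin provides.

```python
# Minimal sketch: list datastores visible to vCenter with their capacity.
# vCenter address, credentials and the "pure-" name prefix are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; use proper certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    if ds.name.startswith("pure-"):       # hypothetical naming convention
        cap_tb = ds.summary.capacity / 1024**4
        free_tb = ds.summary.freeSpace / 1024**4
        print(f"{ds.name}: {cap_tb:.2f} TB capacity, {free_tb:.2f} TB free")

view.DestroyView()
Disconnect(si)
```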

Cody Hosterman has done a stellar job of demonstrating Pure’s integration here. I recommend checking this out for more information.

Summary:

As shown above, a number of tests were conducted and completed without a hiccup. You may notice the latency being very slightly higher; this is likely due to the bootstorm that was generated, and you’ll notice the latency settled down post-bootstorm. Notice the bandwidth achieved! Again, it’s about the satisfaction of the end users; don’t get hung up on latency figures. I was grazing about 100k IOPS with the array all but saying ‘is something happening at all?!’

My observations:

With Pure I got:

  • ease of management via the integration with vSphere (vCenter and SRM). You don’t have to be a SAN expert to manage the box, and you wouldn’t even need a SAN team if you went 100% Pure in your environment.
  • non-disruptive upgrades (something that’s glaringly missing from XtremIO’s Xbrick offering; no use beating a dead horse here, though they’ve adequately compensated by offering to baby-sit the upgrade).
  • outstanding HA capabilities that mitigate the risk of data loss. These features, coupled with replication and snapshot capabilities, drive the risk down further.
  • top notch support. The engineers I worked with had storage knowledge coming out of their ears.
  • COST. At the end of the day, all AFAs do the same thing, and cost is one major influencing factor for the people with the keys to the money chest. Some people go, ‘How on earth do they keep their price down?!’ From what I’ve gathered during my time with the array, Pure’s offering is a fraction of the cost of an XtremIO Xbrick of the same capacity. I’ll agree they use MLC flash disks instead of SLCs, and this does drive the price down, but how many units/drives have actually failed? Apparently, only 5. I read this article by a good friend, Paul Meehan; look at the comments section for some frankly amazing stats. Additionally, Pure use variable-length block deduplication, which helps achieve a greater deduplication ratio (though it does take longer to do so; you have to make a trade-off somewhere).
  • intelligent data reduction technology. The large reduction (more than 100 to 1) you see in my tests is not a true indication, since most of the VDI desktops were identical; the actual reduction would be more like the 10 to 1 Pure claim for a more typical production workload (see the rough sketch after this list).
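
To make that reduction-ratio point a little more concrete, here is a toy Python sketch of block-level deduplication: chunk the data, hash the chunks, and compare logical bytes written against unique bytes stored. It is deliberately simplified (fixed 4 KB chunks rather than Pure’s variable-length chunking, no compression, and made-up data), so the output is only illustrative of why near-identical desktop images dedupe so dramatically.

```python
# Toy illustration of why identical VDI images produce huge reduction ratios.
# Fixed 4 KB chunking + hashing; Pure's real engine uses variable-length
# chunks plus compression, so this only shows the general idea.
import hashlib
import os

CHUNK = 4 * 1024

def reduction_ratio(datasets):
    """Return logical bytes written divided by unique bytes stored."""
    unique = set()
    logical = 0
    for data in datasets:
        for i in range(0, len(data), CHUNK):
            block = data[i:i + CHUNK]
            logical += len(block)
            unique.add(hashlib.sha1(block).digest())
    physical = len(unique) * CHUNK
    return logical / physical

# Hypothetical example: 100 desktops cloned from one 1 MiB "golden image",
# each with a small amount of unique data written on top.
golden = os.urandom(1024 * 1024)
desktops = [golden + os.urandom(8 * 1024) for _ in range(100)]

print(f"reduction ratio ~ {reduction_ratio(desktops):.0f} to 1")
```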

I’ll be back with another (and more comprehensive) post about Pure’s AFAs, with real workloads and real end users, if/when I get the chance.

Disclaimer – I don’t work for Pure or endorse their product(s); you decide whether their gear is right for you. These are only my observations.

4 Comments

  1. Manny,

    Thanks for sharing your experience with Pure Storage. I’m a huge fan of transparency and sharing, as it helps customers better understand real-world capabilities. I look forward to the comprehensive follow-up.

    — Cheers,
    v

  2. man the time you invest into your blog Manny makes my blogs look like farts 😛

    by-the-way; love the article. 🙂

    • Cheers mate, well I reckon one needs to put in decent time and effort into anything worth having and a blog’s one of those things. There are tons of people who do a far better job at this though! Look at Chris Wahl’s and Derek Seaman’s blogs for example, bursting at the seams with great information.
