Taking in NetApp Insight 2018
We sent a few of our smart guys to check out NetApp’s 2018 Insight Conference in Las Vegas this week, and we are hearing great things! With plenty of takeaways from the event, here is some of what Jeff Hoeft, NetApp Storage Consultant, and Adam Thompson, Account Executive, had to say about their time there:
Jeff Hoeft – NetApp Storage Consultant:
There were numerous sessions around DevOps and its capabilities at Insight this year. Last year there was only one PowerShell class, so it’s refreshing to see more sessions added; it’s also a good indication of what we’ve been suspecting: DevOps is making quite a footprint in the market. In fact, this year there were four different sessions, ranging from beginner to advanced, dedicated to testing DevOps methodologies.
Other topics discussed included Kubernetes, Trident (on-demand provisioning of NetApp storage to containers), and GitHub-hosted scripts, which came up in numerous HCI discussions.
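For context on what Trident’s on-demand provisioning looks like in practice, here is a rough sketch (not from any specific session) of requesting storage from Kubernetes with the Python client. Trident watches for claims against its storage classes and provisions a matching NetApp volume behind the scenes; the storage class name, claim name, and namespace below are hypothetical placeholders.

```python
# Illustrative sketch only: ask Kubernetes for storage via a PersistentVolumeClaim.
# A Trident-backed storage class means Trident provisions a NetApp volume on demand.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at the cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="oracle-data"),          # hypothetical claim name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ontap-gold",                       # hypothetical Trident-backed class
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```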
After attending a couple of the first sessions, it is apparent that many people are interested in exploring, or have already implemented, PowerShell-related solutions in their own environments. From a discussion I had with an associate, I can see how we could provide some scripted capabilities (new volumes/LUNs, volume moves, etc.) when we implement a new NetApp system, which could help a migration effort or get the client up and running while they learn the platform.
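The sessions themselves centered on PowerShell, but as a rough illustration of the same idea, here is a minimal Python sketch that creates a volume through ONTAP’s REST interface. The cluster address, credentials, SVM, aggregate, and volume names are all placeholders, and a real script would add error handling and certificate verification.

```python
# Rough illustration (not the PowerShell shown at Insight): create a volume
# via the ONTAP REST API. All names, addresses, and credentials are placeholders.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical cluster management LIF
AUTH = ("admin", "password")                   # placeholder credentials

payload = {
    "name": "migration_vol01",
    "svm": {"name": "svm1"},
    "aggregates": [{"name": "aggr1"}],
    "size": 500 * 1024**3,                     # 500 GiB, expressed in bytes
}

resp = requests.post(
    f"{CLUSTER}/api/storage/volumes",
    json=payload,
    auth=AUTH,
    verify=False,                              # many lab clusters use self-signed certs
)
resp.raise_for_status()
print(resp.json())
```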
I also attended a couple of other discussions around NetApp HCI, where we went through running database workloads and data protection (specifically with Oracle). The data protection (SnapMirror replication from the HCI Element OS storage to a NetApp ONTAP FAS) seemed to work best when using APIs to configure and enable replication between the platforms. The database workload discussion focused largely on the QoS, or “performance tiering,” capabilities of Element OS and the ability to change those values on the fly.
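To make the on-the-fly QoS change concrete, here is a minimal sketch of adjusting a volume’s IOPS limits through the Element JSON-RPC API. The management VIP, credentials, volume ID, API version, and IOPS values are placeholders, not figures from the session.

```python
# Minimal sketch: change QoS ("performance tiering") on an Element volume on the
# fly via the JSON-RPC API. VIP, credentials, volume ID, and IOPS are placeholders.
import requests

MVIP = "https://element-mvip.example.com"      # hypothetical cluster management VIP
AUTH = ("admin", "password")                   # placeholder credentials

request_body = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,
        "qos": {"minIOPS": 1000, "maxIOPS": 8000, "burstIOPS": 12000},
    },
    "id": 1,
}

resp = requests.post(
    f"{MVIP}/json-rpc/10.0", json=request_body, auth=AUTH, verify=False
)
resp.raise_for_status()
print(resp.json())
```

Because the change takes effect without moving data or remounting anything, workloads such as an Oracle database can be throttled up or down as their performance tier changes.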
It’s obvious that NetApp wants their DataFabric to be the cloud enablement vehicle for clients, and data is central to their messaging. Their four-year vision around the DataFabric started with tiering data to S3, and today’s demonstration of the capabilities of their Cloud Central platform showed a much broader portfolio. By installing physical NetApp storage in Amazon, Google, and Azure, they have positioned themselves to give clients the flexibility to select where volumes should live and how data can be accessed.
Interestingly, there was almost no mention of hardware in Tuesday’s keynote. Every storage platform and cloud provider has the ability to store data; what NetApp’s DataFabric provides is the ability to eliminate the isolated storage islands that tend to pop up across the different cloud providers, so that data is available wherever it is needed.
I think the Cloud Central platform was undersold while they were demonstrating its capabilities. There were some really interesting integration points from Cloud Central to a couple of different places, specifically the ability to pull inventory from Active IQ, the current evolution of AutoSupport, so those systems can be “added to the fabric” (I am not sure what all that entails right now; I will find out more later…). They also demonstrated the ability to deploy a Kubernetes cluster and a cloud volume on a NetApp HCI from Cloud Central. In my opinion, those capabilities will start to get NetApp HCI significantly more traction in the hyperconverged space, and I am excited for what the future holds.
Adam Thompson – Account Executive:
It is very clear that NetApp is a leader in helping clients leverage their data wherever it lives. They believe that unleashing the full potential of your data is critical to business success now, as well as in the future. NetApp’s vision is to provide a common data management platform anywhere your data resides. In my opinion, NetApp sees most customers leveraging flash on-prem for traditional workloads, and then extending their data out to the cloud for archive, test, development, or new applications.
I sat in a couple of sessions which showed NetApp features that have been around for years (Snapshots, FlexClones, etc.) used with today’s new services like Docker, Kubernetes, and Jenkins. These combinations are designed to help clients spend less time setting up new solutions, with less infrastructure. As a non-technical guy who doesn’t quite understand how all this stuff works, it is amazing to me how much time NetApp technical teams are spending proving out frameworks for customers with today’s new technologies and services.
The enthusiasm around what we learned at Insight this year is contagious, and everyone at Evolving Solutions is excited to get to work on many of the things we learned. With such a focus on data, it’s hard not to geek out about the prospect of getting to do some really cool things going forward. Stay tuned to our social channels as we start to uncover some of these things and report back on what we find out!