
Essential Resilience: Drive Outcomes like a Cloud Native

  • 05 June 2020
  • Enterprise Modernisation

Moving your data to the cloud doesn’t have to be tricky or scary. Following a simple plan of the “5 S’s” will get you there in no time. Learn the smart ways to move and link data to scale your business and work with the cloud.

Kamala Dasika, Head of Partner Marketing, VMware Tanzu Ecosystem, and Tiffany Jernigan, Developer Advocate at VMware, explain how companies have achieved resilience and compelling business outcomes through cloud native transformation and platforms.

Speakers

Kamala Dasika, Head of Cloud Platform Technology Partner Marketing, VMware
Tiffany Jernigan, Senior Developer Advocate, VMware

Introduction

Kamala:
Hi there. Welcome to our session, Essential Resilience: Drive Outcomes Like a Cloud Native. I'm Kamala Dasika and I focus on modern applications and ecosystem marketing at VMware, specifically on our VMware Tanzu portfolio.

Tiffany:
Hi, I'm Tiffany Jernigan. I am a senior developer advocate, specifically focused on Kubernetes.

Kamala:
So we've talked quite a bit about the connected customer experience today and how important it is. In fact, a Forrester Consulting Thought Leadership paper that collected over 614 survey responses and conducted several interviews across multiple industries found that an overwhelming number of IT leaders, 88% in fact, agree that improving the app portfolio improves the customer experience, and that the customer experience itself is directly tied to revenue growth. So with every business today being a digital business, this calls for companies to combine their engineering and service efforts to create strong feedback loops between these two teams. Now one chief financial officer in fact said that their end users care so much about responsiveness that roughly 20 to 30 million dollars in revenue was at risk if they didn't respond with better technology. Fortunately, many IT departments are rising to the occasion.

No doubt many of the folks attending today have seen the 2019 DORA State of DevOps report. The comparison over the last two years clearly shows growth in the number of elite performers and a reduction in the low performers, which means that the industry's velocity of software delivery as a whole is up. There are a couple of other happy takeaways here as well, the most notable of which is that excellence can be learned. You just have to know what the right things are and do them.

Secrets of Elite Performers

(00:02:28.09)

So what are the elite performers actually getting right? They're outperforming on a few different dimensions. One is deployment frequency, which is how often organizations are deploying code to end users. They have really low lead times for changes going from non-production environments to production environments. They're able to restore service very quickly in case they have incidents or outages, and their change failure rate is pretty low. That is, they have fewer of those "d'oh" or "oops" moments as part of their deployments. So what does this actually mean in economic terms? Here are the results, on the right side, of the Forrester Total Economic Impact study over the first three years of folks investing in the Cloud-native approach. It showed that customers can actually expect to accumulate pretty significant savings, up to a 140% return on their investment, not including intangible benefits like employee morale, hiring, and retention, all of the soft things that actually are fairly strategic for your business.

Now at the center of all these things that are happening are three big changes: the methodology by which software is being released, the architecture of the application itself, and the platform. For the scope of this particular session, we're going to mostly focus on the platform part and only slightly touch on the other two. Now the platform actually sits at the center of the developer teams as well as the operator teams, and helps them adopt the architectural and operational changes required to be shipping continuously. It makes doing the right things easier and more consistent by becoming the natural framework, or the contract, between the app and the runtime, and between the platform and the infrastructure, and it essentially facilitates the shared tools, processes, and vocabulary that simplify the collaboration between the developers and the operations teams. So it acts as an enabler of the DevOps culture. One of the great privileges we've had over the last decade is to be a part, in our own small way, of these great transformations that companies have set out to do. And it's given us a chance to observe up close, across hundreds of customers, and discover some patterns for success.

(00:05:32.00)

So let's take a look at two companies, both in financial services. One is one of the largest credit card issuers in the United States, and the other is a U.S. insurance and financial services company doing home and auto loans and other financial services. Both are customers that we worked on together with our hosts today, Kin + Carta, and I thought it would be interesting to share these. So first of all, if you are running a homegrown or legacy platform, include it in your scope. In the case of this client, they had success building consumer-facing applications around the edges, and we did that pretty well. But then they connected them to their existing legacy stack, and one never knows what sort of technical debt actually lurks there. Once they pulled that thread, it turned out to be a setback for them. So it turned out that the right way to do this was to stand up a series of functionally aligned Cloud-native data services, and slowly pick away at the old system using, you know, tools like the strangler pattern. And by doing that, they insulated the system from the demands and the strains of all of the high velocity changes that were going to happen to the applications they deployed on it.

Second, don't be afraid to change the old processes. With the changes to the application architecture that we're going to have to accommodate, and the automation that the platform offers, there may no longer be a need to force every change through every single step, control gate, and governance checkpoint that was designed before, especially if it used to be manual. So as part of this change process, to really help people out of their old habits, establish a culture of learning by doing. For example, we have the notion of a dojo, where you create a time and a place to practice shared responsibility, which makes the process real for everybody. Next, start by delivering something, and then make improvements as needed. Starting with a manageable scope helps to orient work around a specific outcome. And this one's really important: embrace CI/CD. Most companies have adopted this kind of automation in pockets, so having a platform that supports it will help in broader adoption of Cloud-native practices. Otherwise, you'll be stuck with some groups asking for exceptions, and that's never good.

Last tip: Cloud-native architecture offers a lot of benefits, but it does introduce some operational complexity, in that it creates more things to deploy and also monitor. Having the ability to instrument applications and the platform with the necessary observability toolset is an important part of building your system up for resilience. One of the things that companies typically struggle with is metrics, and the language to describe how all these changes tie to business outcomes. We've been using this easy framework that we call the five S's, which Cloud-natives build for resilience: speed, security, stability, scalability, and savings. And to me, these are great proxies for the different types of business resilience.

Take, for example, speed. This is basically the ability of the business to adapt to changes in customer demand. We're often told by clients that they can usually tell they're on the right track when they see that adoption is up. So how are they able to tell? In the case of a mobile app, for example, you would be able to see that downloads have increased and that the number of uninstalls has decreased, which means that people are actually keeping the app because they're getting value from it. In the dimension of security, it's important to note that high performing organizations do not sacrifice security for agility. In fact, they invest in improving both. And scalability is another type of resilience, which enables businesses to adapt to increased demand from customers and/or take advantage of multi-cloud design patterns. This is a pretty common trend that we are seeing. With stability, we have the more traditional type of resilience, which translates to things like better uptime and improved customer experience. One of our clients actually reported that their improved uptime allowed them to be available more, and therefore onboard more credit card customers, because users encountered fewer problems when accessing the application. And finally, we have savings, which helps with the financial resilience of a company. So now that we've covered those in general, I'm going to hand off to Tiffany, who will go into how Kubernetes specifically improves resilience.

Speed and Security for Cloud Deployment

(00:11:32.06)

Tiffany:
Thanks, Kamala. So if we think back to seven or more years ago, when we thought about applications, we thought of monoliths. And then Docker came along and basically changed the space with their version of containers. These monoliths were then broken down into smaller components that ran in separate containers. And as people started using more containers, it became hard to manage all of them by themselves. And that's where Kubernetes came in. And along with Kubernetes later came tools and services that were designed to work in conjunction with it. So now let's take a look at the five S's with respect to Kubernetes.

First let's take a look at speed. So by breaking down your applications into these individual and smaller components, it becomes faster to make updates, faster to patch bugs, and therefore faster to create new releases. Kubernetes ends up taking on much of the heavy lifting, and supporting tools and managed services contribute even further, so more time and focus can actually be spent on the applications themselves. For instance, if you're using Tanzu Application Service for Kubernetes (Cloud Foundry for Kubernetes) or Tanzu Kubernetes Grid, this can result in faster deployments. And also with Kubernetes, you can request resources on demand. And with managed services, you can also provide this self-service environment with resources such as virtual machines.
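To make the resources-on-demand idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The app name, labels, and image are hypothetical, but the `resources` stanza is the standard way a container declares what it needs: the scheduler reserves the requests, and the limits cap usage at runtime.

```yaml
# Hypothetical example: a small API service declaring its resource needs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2                      # two copies of the component
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.2.3   # placeholder image
          resources:
            requests:              # reserved for scheduling
              cpu: 100m
              memory: 128Mi
            limits:                # hard caps at runtime
              cpu: 500m
              memory: 256Mi
```

Applying this with `kubectl apply -f deployment.yaml` requests the resources on demand; scaling later is just a change to `replicas` or a `kubectl scale` command.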

Next let's talk about security. So also by breaking apart these applications into separate containers, there can be a separation of concerns. Kubernetes has out-of-the-box security features which you can utilize, such as secrets and policies, and there are many tools and services that work in conjunction with Kubernetes. For instance, certain users can be given limited access, or there can be container image scanning. And with containers, each container has its own image, which includes the operating system, so you can ensure that there isn't a virus or some sort of bug before your application gets out there. When it comes to versioning, Kubernetes only supports the current version and the two previous versions at a given time, and new security patches can then be backported to these other supported versions. By supporting only a few versions at a time, users end up upgrading to versions with the latest fixes and updates, so the applications they're running, and Kubernetes itself, are more secure. Additionally, the path for upgrading to new versions is a well-tested path with CI/CD built in, which results in a lower chance of bugs, and therefore upgrades are safer. CI/CD is equally useful for working on and deploying your own applications. With CI/CD, you can build in a bunch of different steps along the way and ensure that everything you're doing happens the same way every single time. And additionally, since Kubernetes is open source, you have many, many more eyes on the code, which results in a reduction in the number of bugs, because a lot more people have the opportunity to go and check for issues.
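As a sketch of the secrets and limited access mentioned above (all names, the user, and the credential values are hypothetical), a Secret keeps credentials out of the container image, and an RBAC Role plus RoleBinding grants one user read-only access to pods in a single namespace:

```yaml
# Hypothetical example: credentials stored as a Secret, not baked into the image.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: default
type: Opaque
stringData:
  username: app-user
  password: change-me              # placeholder value
---
# Read-only access to pods in this namespace...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# ...granted to a single (hypothetical) user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The user bound here can list and watch pods but cannot modify anything, which is the kind of limited access described in the talk.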

Scalability and Stability for Cloud Deployment

(00:15:14.01)

Okay. So next let's look at scalability. So Kubernetes provides the ability to quickly scale up your applications. You can easily go from having just a few containers to having hundreds or thousands, and this can be done based on a set of requirements that you have or on certain triggers, basically things that you can set up. You can also run your applications across multiple public cloud providers, or you can run them on-premises, or both. And this is where Tanzu Kubernetes Grid can come in. So now we have stability. Basically, to follow on from scalability, you can increase the stability of your application by running multiple copies of the application components, as well as deploying them in a highly available way. So you don't have to worry about a specific container going down, or even a specific data center going down, because it won't take down your entire application with it. You can also utilize resource types that will restart your application components if any of them go down. You can also update your backend applications in Kubernetes itself using things such as blue-green deployments and rolling updates, and this is another place where CI/CD becomes very beneficial. There are also many tools out there that you can use for metrics, to ensure that you're consistently getting the metrics you're expecting, to set up alarms, and so on. Some of these tools are Wavefront and Prometheus, which is open source. And continuing on the path of open source, with Kubernetes being open source, having many maintainers and community members involved in maintaining the code results in greater stability of the code.
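The scaling triggers and rolling updates described above can be sketched like this (names, image, and thresholds are hypothetical): the Deployment keeps multiple replicas and replaces them gradually on updates, while a HorizontalPodAutoscaler adjusts the replica count based on CPU utilization.

```yaml
# Hypothetical example: multiple replicas, rolled out gradually.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # multiple copies for availability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # never lose more than one copy mid-update
      maxSurge: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.0   # placeholder image
          resources:
            requests:
              cpu: 250m            # needed so CPU utilization can be computed
---
# Scale from 3 up to 100 replicas when average CPU passes 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Because the Deployment controller restarts failed pods and the autoscaler reacts to load, the same two objects illustrate both the scalability trigger and the stability properties mentioned in the talk.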

And lastly, let's take a look at savings. Using Kubernetes versus a self-created and managed platform results in savings of developer time and potentially cost. Managed Kubernetes services additionally save developer time and can result in overall cost savings. These platforms and services can also help with improved resource utilization, which again can increase your cost savings. There are many other tools and services that work in conjunction with Kubernetes that help to speed up development time. Based on a study that Dimensional Research conducted with a few hundred Kubernetes users, users improved resource utilization by 56% and reduced public cloud cost by 33%. The full report will be linked on our Twitter pages.

So Kelsey Hightower said that Kubernetes is a platform for building other platforms: if you're a developer building your own platform, an App Engine, Cloud Foundry, or Heroku clone, then Kubernetes is for you. And we agree with Kelsey. A lot of our developers really like the deployment experience with Cloud Foundry, and this is how we've chosen to present our portfolio to our customers: providing customers with the best of both worlds by running Tanzu Application Service on Tanzu Kubernetes Grid. Our portfolio is also modular and can be deployed independently if that's the customer's preferred path. Customers can also deploy on-premises or in the public cloud of their choice.

Finally, if you'd like to learn more, here's some resources. If you'd like a primer on containers or on Kubernetes, check out the courses on Kube Academy. And the other links will also be shared on Twitter afterward. Thank you for attending our talk today. And if you have any questions, you know how to find us.
