Since I last wrote about it, DevOps has become widely adopted and continues to evolve, alongside new technologies directly influenced by it and by the culture it has helped shape. Even though many organizations still struggle to realise the potential of DevOps due to a lack of expertise and/or cultural challenges, it has indisputably gone mainstream: a movement away from how we used to build and deliver products and services into the world, towards a better way of working together.
As a result, traditional hardware platforms, the hypervisors and virtual machines of yesterday, are giving way to serverless architectures and other more efficient, cost-effective strategies for managing the complexity of the hidden machinery behind apps, websites and other data-driven products. Will this continued shift lead to even better overall collaboration?
Deploying even the simplest functions in these environments requires crucial decisions across disciplines, including development, operations, security and even finance. At a minimum, settings such as the amount of available memory and the timeout (the time budget for a function invocation) influence cost. Beyond these, the team has to decide who and what has access, through which means and protocols and for how long, setting expectations about the function's lifespan: how long will it remain viable and useful?
These decisions all factor into modern pay-for-what-you-use cost structures, which are based on the memory allocated to a function and the time it spends executing, which in turn is linked to the underlying platform the function runs on (a container or a more traditional platform, such as a computing instance). Whatever the case, memory is like fuel. Burn more fuel, burn more processing power, burn more throughput, burn more money. The whole team needs to chime in on the burn rate.
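The burn rate above can be sketched in a few lines. This is a minimal cost model, not any provider's actual pricing: the per-GB-second rate and the function names are illustrative assumptions, but the shape (memory × duration × rate) matches how pay-for-what-you-use billing typically works.

```python
# Illustrative serverless cost model. The rate below is a hypothetical
# per-GB-second price, not any specific provider's published pricing.

def invocation_cost(memory_mb: int, duration_ms: int,
                    rate_per_gb_second: float = 0.0000166667) -> float:
    """Cost in dollars of one function invocation: GB-seconds x rate."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * rate_per_gb_second

# Doubling the allocated memory doubles the per-invocation burn rate,
# which is why memory and timeout settings are cross-team decisions.
cost_small = invocation_cost(memory_mb=128, duration_ms=500)
cost_large = invocation_cost(memory_mb=256, duration_ms=500)
```

Multiply the per-invocation figure by expected monthly invocations and the tradeoff between budget and performance becomes concrete very quickly.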
With cost effectiveness riding on optimal configurations of these architectures, it’s crucial to adjust settings based on a balance of budget vs. required performance. If you’re thinking, “Why does it always have to be a tradeoff?” you’re spot on. That’s where metrics come in.
There are more useful tools than ever for providing insight into the performance of infrastructure, and they are essential to operating these architectures because, without them, costs are more difficult to contain than they are on more traditional, less flexible and soon-to-be-obsolete architectures. Using them well, however, makes these next-gen architectures faster to provision, more scalable and secure, higher performing and more cost effective, especially because metrics are available from the moment resources are provisioned.
There are other upsides, too, including insight into operational considerations such as capacity planning, performance monitoring and logging. That means more opportunity for the team to collaborate, as more information on their work becomes available in short order.
Better security, anyone? Better business continuity? Traditionally, security and resiliency in software delivery were addressed as an afterthought, if at all. Serverless architectures provide environment parity, which increases consistency and predictability in the workloads of developers, ops and security, and by extension that consistency and predictability also facilitates better security.
Unikernel architectures, for example, eliminate overhead: stripped-down stacks, free of burdensome operating system bloat, minimize the attack surface available to anyone inclined to exploit it.
Considerable complexity is built into the Linux kernel to keep users safe from other users, but also to keep apps safe from users and other apps. That means a lot of added machinery, such as permission checks: remnants of an era when it was necessary, on larger systems, to delineate and segregate all the apps and users working on the same hardware.
This complexity gives the Linux kernel a larger attack surface than is strictly necessary. Historically, to minimize that risk, operations teams would patch the kernel, potentially creating compatibility issues for the development teams writing code on the platform. Tracking the interactions between the two teams is a painful, tedious process that creates issues such as outages and worse. Team flame wars. Ugh.
These new building blocks, containerized, eliminate these challenges and offer more powerful automation capabilities for creating immutable infrastructure. In a containerized environment, security problems aren't handled by modifying something that is already running. Instead, whole new containers are deployed from images that have already been vetted, which radically minimizes the chances of introducing bugs, compatibility issues and flaws into production. Even malicious code installed in a containerized environment will simply be destroyed when images are updated.
This means security is handled pro-actively, as functions are deployed, rather than reactively, because each function carries an associated security policy: the right policy for that function's job, based on the principle of least privilege. For example, a function that only needs to query a specific database table requires permission to query that one thing and nothing else.
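A least-privilege policy like the one described above can be sketched as data plus a check. Everything here is hypothetical: the policy schema is loosely modeled on IAM-style statements, and the table name, region and account number are invented for illustration only.

```python
# Hypothetical least-privilege policy for a function whose only job is
# querying one table. Names, ARN and schema are illustrative, not real.
read_orders_policy = {
    "Effect": "Allow",
    "Action": ["db:Query"],  # query only: no writes, no deletes, no admin
    "Resource": "arn:example:db:us-east-1:123456789012:table/Orders",
}

def is_allowed(policy: dict, action: str, resource: str) -> bool:
    """Minimal check: does the policy explicitly grant this action
    on this exact resource? Anything not granted is denied."""
    return (policy["Effect"] == "Allow"
            and action in policy["Action"]
            and policy["Resource"] == resource)

orders = "arn:example:db:us-east-1:123456789012:table/Orders"
can_query = is_allowed(read_orders_policy, "db:Query", orders)
can_delete = is_allowed(read_orders_policy, "db:DeleteItem", orders)
```

The design choice worth noting is default-deny: the function can do exactly what its policy grants and nothing more, so a compromised function's blast radius is limited to that one query permission.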
It’s self-evident that serverless models are making security more a part of the development process rather than deferring until operations teams get involved when it’s typically too late to address uncovered issues easily or properly. This shift in culture has given rise to DevSecOps principles.
These great new principles look something like this:
- Customer-focused (internal, too, not just external)
- Scalability (how obstacles to deployment are removed while still achieving compliance)
- Objective criteria (what’s best for the collective vs. one team or discipline)
- Pro-active hunting vs. reactive responding (the new architecture allows for this)
- Continuous detection and response (pro-active vs. reactive)
Three cheers for DevSecOps.
More Adaptable Environments
Being able to spin up as many environments as needed, whenever they’re needed, offers exciting new possibilities. What if each member of the dev team has their own environment in the cloud? Maybe each feature being developed is deployed into a dedicated environment so it can be demoed independently without any impact to the larger system as a whole. These separate environments can even live on separate providers, creating a degree of segregation that’s only been imagined, until now.
Rise of the Generalists
Generalists with broad skillsets and good familiarity with cloud platforms can achieve more, and do it much quicker, than specialists locked into a single, more traditional method of working. Many development and operational activities can be integrated into the same cycles, and costly handoffs to segmented internal or external resources can be eliminated altogether.
Ideally, the whole team should participate in delivering a feature, including operating it in production. This is the best way to make sure the team is incentivised to produce quality software that is operable from day one.
Old Challenges, New Outcomes
While many companies are still talking about bringing dev and ops closer together, still struggling to establish some form of DevOps culture and practice, their efforts will be bolstered by this latest shift. The serverless approach offers a new strategy for creating a culture of rapid business value delivery and operational stability, while offering new strategies for containing costs and, the best part, uniting teams better than in the past. Serverless workflows are the embodiment of DevOps cultures, where technologies, methods and tools are forged to be ready for production right from the start, with nods to each discipline involved. No small thing.
New Challenges, New Outcomes
While it’s true that only a few organizations are mature enough in their current DevOps cultures to welcome the brave new world of serverless right away, and while it is also true that these ways of working are indeed brave and still immature, they offer the organizations that have fallen behind a chance to cover lost ground quickly. Doing so requires a lot of work to identify, define and address the new challenges it presents to each, unique culture, but will be worthwhile for new and improved outcomes.
The most common challenge to overcome? Organizations open and willing to build a serverless practice may do so using their existing processes and structures, losing any speed they could have gained. Realising this change in its entirety is key. Not a small task.
Even when done right, the companies that succeed will most likely have to go back to the drawing board and redefine not only the way they deliver products and services but also the way they sell them. Is this a good problem to have, though? Definitely.
The next iterations of DevOps include better integration of security culture and serverless architectures and workflows that are designed, out-of-the-box, for rapid delivery of business value. Continuous improvement and learning have clear potential, as the rise of DevSecOps in earnest, to further elevate the cultural shift that began with DevOps, even in organizations that have struggled to achieve it so far.