Microservices have been the main buzzword in web application architecture for a good few years now. The model resides at the far end of a spectrum that starts with fully monolithic architecture, and goes through various degrees of modularity and service orientation before ending up with a swarm of microservices tied together by application programming interfaces (APIs). As with most things in tech, microservices are no silver bullet (though they often end up being a golden hammer), and using a highly distributed architecture needs to be carefully considered in terms of cost, performance, scalability — and security.
Microservices in vogue, monolithic applications in the news
The original motivation behind building software from loosely coupled modules rather than having a single lump of code for the entire application hasn’t changed much since the dawn of software engineering. Modular software is easier to develop, maintain, test, extend, and reuse, and the combination of cloud deployments with agile development quickly made these benefits even more attractive. Especially for start-ups, small teams armed only with laptops can now take advantage of microservices to quickly build innovative software with no up-front investment. For containerized cloud-native applications, microservices have become the go-to architecture, with cloud service providers quickly moving to supply ready-to-use environments and functionality for building highly distributed software.
But for all their benefits, full microservice architectures are a specialized solution that may not fit every problem. This was brought to light recently by an internal case study from Amazon, where going with microservices by default proved to be exactly the wrong solution. Due to the type and sheer intensity of calls to microservices in a stream monitoring scenario, a distributed architecture based on serverless AWS components turned out to be slow, costly, and impossible to scale to the required level. Moving to a monolithic application resulted in 90% lower cloud costs and improved performance overall in that specific use case.
This story comes at a time when more and more organizations are rethinking their cloud strategy and seriously weighing up all the pros and cons of cloud-based versus on-premises deployments. Because microservices are more or less synonymous with containerized cloud infrastructures, pulling anything back on-prem is closely related to turning the architecture dial in a more monolithic direction. While the cloud re-evaluation tends to be driven primarily by cost, your choice of application architecture also has serious implications for security.
Monoliths: Harder to update but easier to secure
Having a single monolithic app that puts your whole codebase in one place has traditionally been seen as a bad engineering practice. In particular, any change typically requires a rebuild of the entire application, even if only a single line of code was modified. Depending on the internal structure, monoliths can also be hard to maintain (because any change risks breaking existing functionality) and prone to bloat (because it can be quicker and safer to add a new feature than to adapt or remove an existing one). But in terms of security, and in particular security testing, a monolith can be far easier to secure than a more distributed application.
For a start, the attack surface exposed by a monolith is usually smaller than in architectures where each microservice represents a separate target. Less distributed applications also tend to have fewer external dependencies, making it easier to know what to test. In a monolith, all the business logic and communication between functions is handled internally rather than in a flood of API calls exchanged between the application front-end and the back-end services. A monolith is also an easier target to define and lock down when setting up data protection, network security, and runtime protection tools such as a web application firewall (WAF).
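To make the attack-surface difference concrete, here is a minimal Python sketch contrasting the two styles. The service name and URL are hypothetical, and the discount logic is a placeholder; the point is only that the monolith keeps the call in-process, while the microservice version adds a network endpoint that must be secured and monitored.

```python
import json
import urllib.request

def get_discount(user_id: int) -> float:
    """Placeholder business logic living inside the monolith."""
    return 0.1 if user_id % 2 == 0 else 0.0

def place_order_monolith(user_id: int, amount: float) -> float:
    # Monolith: a plain in-process function call. Nothing crosses the
    # network, so there is no extra endpoint for an attacker to probe.
    discount = get_discount(user_id)
    return amount * (1 - discount)

def place_order_microservice(user_id: int, amount: float) -> float:
    # Microservices: the same logic becomes an HTTP call to a separate
    # service. The URL below is hypothetical; every such internal endpoint
    # is additional attack surface that needs authentication, encryption,
    # and monitoring.
    url = f"http://discount-service.internal/api/v1/discount/{user_id}"
    with urllib.request.urlopen(url) as resp:  # extra network hop
        discount = json.load(resp)["discount"]
    return amount * (1 - discount)
```

The functional result is identical either way; what changes is how much of the plumbing is exposed to the outside world.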
For security testing, a monolith means you have most or all of your source code available (often all using the same tech stack and programming language) and can use the whole array of security tools, starting with static application security testing (SAST) and software composition analysis (SCA). You can also run dynamic application security testing (DAST) at multiple stages of the application lifecycle, from the first builds to the final production application. One downside of a monolithic architecture is that the debugging and remediation process for security issues can be slower than for a microservice application, where it’s easier to isolate and fix vulnerabilities in one specific service (assuming your development team has access to its source code).
Microservice security caveats and advantages
The advantages of microservices need to be weighed against the many potential security headaches of having an attack surface spread across dozens, if not hundreds, of independent services. On an infrastructure security level, each individual service often runs in its own cloud container, with specific service instances orchestrated using Kubernetes or a similar solution, making cloud security and visibility a crucial consideration. Because all the application components communicate via API calls, ensuring API security is also paramount both for operations and testing, with authentication being a particular concern. With each service a separate target, monitoring is another weak spot, as attackers can potentially go after individual services without raising alarms that the application is under attack.
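Because each service must independently authenticate its callers, some form of per-request verification ends up in every service. The sketch below is a toy illustration using Python's standard library, with a hard-coded shared secret standing in for a real scheme such as mTLS or signed JWTs; the service names and secret are assumptions for the example.

```python
import hashlib
import hmac

# Assumption for illustration only: in practice this secret would be
# injected per service via a secrets manager, never hard-coded.
SHARED_SECRET = b"demo-secret-not-for-production"

def sign_request(service_name: str, payload: bytes) -> str:
    """Caller attaches an HMAC tag covering its identity and the payload."""
    message = service_name.encode() + b"|" + payload
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(service_name: str, payload: bytes, tag: str) -> bool:
    """Receiver recomputes the tag; compare_digest resists timing attacks."""
    expected = sign_request(service_name, payload)
    return hmac.compare_digest(expected, tag)
```

Multiply this by dozens or hundreds of services and it becomes clear why API authentication, key distribution, and per-service monitoring dominate microservice security discussions.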
With cloud-native applications, it is easy (and common) to rely on third-party dependencies and services that you don’t control and can only test from the outside in, limiting your security testing options for those components to dynamic methods (DAST and manual penetration testing). Coupled with the fact that highly distributed applications can use multiple programming languages and technologies, this makes code analysis tools like SAST and SCA of limited use — even if you control and can check your core codebase, it won’t be enough to cover the entire attack surface.
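When a component can only be tested from the outside, checks are limited to what its responses reveal. As a hedged illustration of that outside-in perspective, the toy function below flags missing security headers in a captured HTTP response; the header list is a small assumed subset, not a complete policy, and real DAST tools go far deeper than this.

```python
# A small assumed subset of response headers an external scan might expect.
EXPECTED_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
}

def missing_security_headers(response_headers: dict) -> set:
    """Return expected security headers absent from a response.

    Header names are case-insensitive per the HTTP spec, so normalize
    before comparing (title() capitalizes after each hyphen).
    """
    present = {name.title() for name in response_headers}
    return {header for header in EXPECTED_HEADERS if header not in present}
```

Note that a check like this needs no source code at all, which is exactly why dynamic methods remain viable for third-party services where SAST and SCA cannot reach.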
As already mentioned, one upside with microservices is that isolating a security issue to a specific service can be easier than with a monolithic application. Even if you can’t immediately fix the underlying vulnerability, it’s easier to firewall or outright block the service until remediation is possible. Because services are generally self-contained, there is also less risk of a full system compromise if one application component is breached.
Reliable application security testing regardless of architecture
The relative ease and speed of creating and modifying web applications also extend to changes in architecture and deployment models. But while the software development and infrastructure aspects of such projects are usually well understood and manageable, security can get left behind. For example, existing investments in SAST tools and workflows may be invalidated if an application is migrated to a different technology stack and programming language, or if a development organization that previously relied heavily on SAST to test its monolithic apps starts building with microservices and finds that it doesn't have a DAST solution to test the entire API-driven environment.
In principle, DAST solutions have the advantage of being able to scan web applications and APIs regardless of the underlying technologies and architectures, but in practice, they can vary widely in accuracy and practical usefulness. For example, many organizations rely on cloud-based vulnerability scanners for the DAST part of securing their cloud applications. When they decide to bring some of their software on-premises, they often realize the product they were using is cloud-only and won't work on-prem. Similar surprises can await when moving in the microservice direction, with companies realizing too late that they now need to scan a multitude of APIs that their existing tools can't handle.
Protecting your investment in DAST regardless of the architecture or deployment model is a crucial part of the value that Invicti brings with its SaaS and on-prem offering. As the industry’s most mature and accurate DAST solution, Invicti allows organizations to build application security testing into their DevOps pipelines to run scans automatically and deliver reliable vulnerability reports directly into issue trackers without time-consuming manual verification. And all this for both cloud-based and on-premises applications, with extensive support for API security testing out-of-the-box.
As applications are constantly updated to incorporate new technologies and their architectures adapted to match, keeping them all secure with Invicti, at least, requires no additional juggling skills.