In large environments, security controls based on packet filtering, such as firewalls and ACLs on network devices, often face an unfortunate dilemma: there’s a gap between the parties who understand the communication needs of an application (say: the application owners) and the parties implementing the actual security enforcement (e.g. the firewall ops team). These groups also have different motivations: “it has to work” (see RFC 1925, rule 1 😉) for the former versus “it has to be secure = fulfill certain security objectives” for the latter. This gap can manifest in many socio-technical ways, which is why ‘firewall rule management’ has been the subject of many discussions over recent years. In another post which I wrote a few years ago I stated that going for the upper-right quadrant in the following diagram usually requires high operational effort (which can actually produce the opposite outcome due to added process complexity), a high level of automation, accepting trade-offs, or a combination of these.
That’s why several organizations are considering another approach or have already started deploying it. Here I’ll call it ‘self-service ACLs’, and it can be summarized as follows:
- move the enforcement function of packet filters towards the hosts (e.g. via iptables running locally, or a rule set running on a network device ‘just in front’ of a group of hosts, e.g. a VPC).
- provide a nice web-based management interface to these rules
- store all rules in a centralized database
- allow application teams themselves to manage the rules. Besides the technical decentralization (or, to put it into more familiar lingo from the networking space: ‘disaggregation’), this constitutes the main paradigm shift.
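To make the model a bit more concrete, here’s a minimal sketch of how a rule record from the central database might be rendered into an iptables command on the enforcing host. The schema, field names, and values are purely illustrative, not taken from any real product:

```python
# Minimal sketch: render one rule record (as it might be stored in the
# central database) into an iptables command line for the enforcing host.
# Schema and field names are hypothetical, for illustration only.

def render_iptables(rule: dict) -> str:
    """Translate one rule record into an `iptables -A` invocation."""
    action = "ACCEPT" if rule["action"] == "allow" else "DROP"
    return (
        f"iptables -A INPUT"
        f" -s {rule['source']}"
        f" -p {rule['protocol']}"
        f" --dport {rule['port']}"
        f" -j {action}"
    )

rule = {
    "owner": "app-team-payments",  # the team managing this rule (self-service)
    "source": "10.1.2.0/24",
    "protocol": "tcp",
    "port": 5432,
    "action": "allow",
}
print(render_iptables(rule))
# iptables -A INPUT -s 10.1.2.0/24 -p tcp --dport 5432 -j ACCEPT
```

Note the `owner` field: in the self-service model, ownership metadata like this is exactly what ties a rule back to the application team managing it.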
The underlying idea is simple: “let the owners of an asset/a service handle what they need, in a flexible manner”, without all those organizational or process-induced gaps, and it seems like a good way to solve the issues I laid out above.
Alas – as so often with simple ideas that seemingly solve complex problems – there are some often-overlooked pitfalls. These are going to be the subject of this post.
Quick disclaimer re: terminology: I’ll use the terms ‘rules’, ‘firewall rules’, and ‘ACLs’ interchangeably. Just think of rules as being part of a larger rule set, implementing packet filtering based on the traditional approach of sources, destinations, and services, with the former represented by IP addresses/ranges and the latter by protocols or ports in some notation.
Let’s start with looking at the lifecycle/dimensions of such a rule:
- (1) a ‘management’ step/function, like the creation of the rule (by some party) or the modification of the rule (by some party)
- (2) the actual enforcement function of the rule
- (3) logging (of certain enforcement events, e.g. dropping a packet)
- (4) analysis of a rule (e.g. as an intellectual exercise performed in certain life situations 😉 or as a contributing element to metrics)
- (5) troubleshooting network communication flows, which often involves functions (3) and (4).
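Functions (3) and (4) can be illustrated with a small sketch: assuming rules log via iptables’ LOG target with a per-rule `--log-prefix`, one can aggregate drop events per rule from the kernel log. The `RULE-<id>:` prefix scheme and the sample log lines are assumptions for illustration, not a fixed standard:

```python
import re
from collections import Counter

# Sketch for functions (3) and (4): aggregate drop events per rule from
# kernel log lines produced by iptables' LOG target. The "RULE-<id>:"
# log-prefix convention is hypothetical.

LOG_LINE = re.compile(r"(?P<prefix>RULE-\d+):.*SRC=(?P<src>\S+).*DPT=(?P<dpt>\d+)")

def drop_counts(lines):
    """Return a Counter of dropped packets keyed by rule id."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            counts[m.group("prefix")] += 1
    return counts

sample = [
    "kernel: RULE-42: IN=eth0 SRC=203.0.113.7 DST=10.1.2.3 PROTO=TCP DPT=22",
    "kernel: RULE-42: IN=eth0 SRC=203.0.113.9 DST=10.1.2.3 PROTO=TCP DPT=22",
    "kernel: RULE-7: IN=eth0 SRC=198.51.100.4 DST=10.1.2.3 PROTO=UDP DPT=53",
]
print(drop_counts(sample))  # RULE-42 was hit twice, RULE-7 once
```

Trivial as it is, this kind of aggregation is exactly what both troubleshooting (5) and rule metrics (4) build upon – which is why the question of who can run it, and over which logs, matters.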
In the ‘traditional model’ most of these were performed by the same party (the ‘firewall ops team’), but the self-service model induces significant changes here. The expected benefit centers on moving (1) into the hands of the app owners, who hold the contextual intelligence, and (2) topologically towards the assets actually needing the protection. Sadly, this also brings changes to the other functions, with some interesting effects. Let’s look at two affected functions/lifecycle elements: logging and review.
In infosec circles there’s an old adage ‘each security layer should provide logging’. Let’s assume the log files are still written to a central place (this is what most organizations do, for a variety of reasons, and I for one think that this makes sense). This can create interesting situations:
- The new owners of the rules will, somewhat legitimately, think that they own the logs, too. (“These are our rules, we manage them, and we should be able to see what’s happening”).
- How do they then get access to the (centralized) log files?
- More importantly: how do they get access in a tenant-proper way? (You don’t want the database team to be able to see the log files of the authentication servers, do you?)
I’ve yet to see an organization which has solved this problem in a way that fulfills the requirements of the different parties. So one might have to accept some trade-off here (e.g. the loss of visibility into log files for one of the involved parties).
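To illustrate what ‘tenant-proper’ could mean in practice, here’s a deliberately simplistic sketch that filters the central log by rule ownership before exposing it to a team. The ownership mapping and the `RULE-<id>` prefix convention are purely hypothetical; a real deployment would enforce this in the log platform’s access layer, not in an ad-hoc filter:

```python
# Sketch of tenant-scoped log access: each team only sees log lines for
# rules it owns. The ownership mapping and "RULE-<id>" prefix convention
# are hypothetical illustrations.

RULE_OWNERS = {          # rule id -> owning team (from the central rule DB)
    "RULE-42": "db-team",
    "RULE-7": "auth-team",
}

def logs_for_team(lines, team):
    """Yield only the log lines belonging to rules owned by `team`."""
    for line in lines:
        rule_id = line.split(":", 1)[0].strip()
        if RULE_OWNERS.get(rule_id) == team:
            yield line

central_log = [
    "RULE-42: DROP SRC=203.0.113.7 DPT=5432",
    "RULE-7: DROP SRC=198.51.100.4 DPT=389",
]
print(list(logs_for_team(central_log, "db-team")))
# ['RULE-42: DROP SRC=203.0.113.7 DPT=5432']
```

Even this toy version shows the hard part: the filter is only as trustworthy as the ownership data in the central database, and someone still has to own (and review) that mapping.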
A similar conflict of interest arises in the context of rule review. Can one reasonably expect the party whose main interest is essentially ‘it has to work’ to perform a review of rules against corporate policy, PCI requirements, or the like? Again, this is an inherent dilemma only solvable by a high degree of collaboration (while self-service ACLs are often supposed to reduce the need for collaboration). On the other hand, rule review might be a bit of a dysfunctional process in some organizations anyway, as this recent Twitter poll seems to imply 😉
General Paradigm Shift
Finally, one should keep in mind that the introduction of self-service ACLs can mean a cultural shift for application teams: from opening tickets (which includes a partial transfer of responsibility) to taking rules (= security controls) into their own hands (which in turn also requires developing security skills & practice). Not all app teams might be happy with that; especially those running core applications with high availability requirements might be a bit risk-averse in this context ;-).
tl;dr: While self-service ACLs can address long-existing process-level deficiencies in some organizations, they might well introduce new ones. Understanding the demarcations of the individual functions within a rule’s lifecycle, and the incentives of the different involved parties, will be crucial for a successful deployment of the approach.