Managing Code: Pull Requests & Amending The Plan
You could think of this as something of a sequel to managing infrastructure, where I mentioned source control as a mechanism to manage code - which is perhaps the most important piece of a software project's infrastructure. What I didn't talk about was the process that surrounds the introduction of new code into a target release.
There is a procedural norm in software development whereby individuals review one another's code changes before those changes are incorporated into the current source of truth for the relevant target release. This review is also where various compliance checks, including ones related to the shape of the code, normally happen.
When using Git as the source control primitive of choice, this code review process is the evaluation of a pull request. The changes under review have historically been presented in terms of differences - diffs - that will be applied to the computer files associated with a project.
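To make that concrete, here is a minimal sketch, using Python's standard difflib and a hypothetical file `greeting.py`, of the kind of line-oriented difference a pull request presents:

```python
import difflib

# Hypothetical before/after contents of a single file under review.
before = [
    "def greet(name):\n",
    "    return 'Hello ' + name\n",
]
after = [
    "def greet(name):\n",
    "    return f'Hello, {name}!'\n",
]

# A pull request presents changes as line-oriented differences,
# in essentially this unified-diff format.
diff = "".join(difflib.unified_diff(before, after,
                                    fromfile="a/greeting.py",
                                    tofile="b/greeting.py"))
print(diff)
```

Review tools largely render this same unified-diff shape: removed lines prefixed with `-`, added lines with `+`, grouped into hunks per file.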
It's helpful to understand the limits of this process. As developers we're interested in maintaining the integrity of The Plan, but this review process effectively presents amendments in terms of an abstraction - code - and furthermore organises the changes in a way that obscures the chronological sequence of events. There are a few drivers for this. One is that the processes that take code and turn it into a digital experience require our code to be structured according to particular rules.
Within those rules there is still a wide field of possible organisations.
Another driver for obscuring The Plan is reuse - it is quite common for 'the same' (or very similar) information to be accessed at multiple points and presented in different ways; for example, the logistics of accessing a summary of a recipe and the entire recipe are quite likely to have a lot in common. It is generally considered wasteful to keep two copies of overlapping instructions, and if the overlap is expected to be stable over time, the safest way to ensure that both are always changed simultaneously is to bind them through reuse. This is a classic source of issues (and therefore of future pull requests) when the maintainers of digital factories and the visionaries of the digital product are not aligned on what the overlap between two experiential primitives actually is.
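As a sketch of that recipe example (all names and data here are hypothetical), both the summary view and the full view lean on a single shared accessor, so a change to the underlying record reaches both simultaneously:

```python
# Hypothetical recipe store; the overlap between the two views is
# bound together through the shared get_recipe accessor.
RECIPES = {
    "pancakes": {
        "title": "Pancakes",
        "ingredients": ["flour", "milk", "eggs"],
        "steps": ["mix", "fry"],
    }
}

def get_recipe(slug):
    # Single source of truth for fetching a recipe record.
    return RECIPES[slug]

def recipe_summary(slug):
    # The summary view reuses the same record as the full view.
    recipe = get_recipe(slug)
    return {"title": recipe["title"],
            "ingredient_count": len(recipe["ingredients"])}

def recipe_full(slug):
    # The full view returns the entire record.
    return get_recipe(slug)
```

The risk described above is exactly here: if the product vision later demands that summaries diverge from the full recipe in some way the shared accessor cannot express, the binding itself becomes the source of future pull requests.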
The larger the volume of changes in a code review is, the more challenging it is to make reliability guarantees about how well they reflect The Plan. So if we were to dramatically increase the size of our team, or if some technological revolution caused an explosion in the production of code, we could also see an explosion in our exposure to various risks through this process of amending The Plan.
How do we manage such risks?
One thing developers do to mitigate such risks is to exercise discipline in various ways, such as keeping pull requests relatively small and avoiding uncommon technical primitives so that the code is as easy to navigate as possible. These help, but the most valuable piece of project infrastructure for reliability, by a long way, is the testing infrastructure that runs when the code review begins, or whenever the code changes in response to remarks made during the review.
The goal is to detect broken promises, or at least to detect unanticipated behaviour change even if it is ultimately deemed acceptable - in which case the test would change rather than the code under review. Automated tests can represent handshake agreements at either the digital factory layer or the digital product layer. Adding new automated tests is great practice for any pull request, and it is also often the first thing to face neglect when there is pressure to prioritise velocity over other project success metrics. The value of tests is really its own story, but they add a lot of value to the process of code review by mitigating the inherent weakness of reviewing changes to files as opposed to changes to The Plan.
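A minimal sketch of such a handshake agreement (the function and figures are hypothetical): the test pins down a promised behaviour, and any change that breaks the promise surfaces during review rather than after release:

```python
def apply_discount(price, rate):
    # Hypothetical product-layer logic under review.
    return round(price * (1 - rate), 2)

def test_apply_discount_promise():
    # The handshake: 10% off 20.00 must be 18.00. If this assertion
    # fails, either the code is wrong or the promise itself has changed,
    # in which case the test (not the code) should be amended.
    assert apply_discount(20.00, 0.10) == 18.00

test_apply_discount_promise()
```

Run as part of the review pipeline, a suite of these assertions turns the abstract question "does this diff still honour The Plan?" into something a machine can check on every amendment.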
Until next time.