I’ve noticed two philosophies in the software world. Both deal with the idea of responsibility and ownership of bugs and mistakes. The first focuses on the human element of software. It sees software as a human act: a person makes it, and what it does when it runs is a reflection of that person. Its success or failure says something about them: they are good or bad. Competent or incompetent. They succeeded or they “fucked up.”
The other philosophy focuses on the software itself. It sees software not only as a means to a business end but also as a framework in which to act. It sees software as a social scaffolding. It’s a force to accomplish a goal, but it’s also a safety net for those working toward that goal. So when things go wrong, the question isn’t “who fucked up” but “why aren’t we as a group being ‘safe’?” I put safe in quotes because it’s important that the group defines what its safety scaffolding looks like. What is it that they want to protect against? How far is everyone willing to go to pursue it?
Think of a time before mountain roads had safety mechanisms like guard rails, road markings, or traffic signs. Looking back at that time, it would seem ridiculous to categorically blame a particular party for an accident. Instead we would ask, “Why wasn’t the safety infrastructure in place? Why wasn’t there a guard rail? Why wasn’t there a sharp-turn sign?” Why shouldn’t software employ the same sort of safety mechanisms? Most languages and ecosystems have, to one degree or another, mechanisms to help prevent accidents. There exist design patterns and tools that can eliminate whole classes of errors.
A few examples of software’s guard rails:

- Avoiding global namespaces with modules and dependency management.
- Type checking at the value or type level.
- Unit testing stateless functions.
- Organizing code into understandable groups: functions, classes, modules, files, etc.
- Isolating code from other code that could potentially conflict.
- Static code analyzers to catch common bugs and syntax errors.
- Build tools to automate development and production tasks.
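To make a couple of those guard rails concrete, here's a minimal sketch in Python combining two of them: type annotations (which a static analyzer such as mypy can check) and a unit test for a stateless function. The `parse_port` function and its values are hypothetical, just an illustration of how these mechanisms catch mistakes before they become incidents.

```python
def parse_port(value: str) -> int:
    """Parse a port number, rejecting values outside the valid range.

    Type annotations let a static checker flag callers that pass the
    wrong type; the range check turns a silent bad value into a loud error.
    """
    port = int(value)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port


def test_parse_port() -> None:
    # Because parse_port is stateless, it can be tested in isolation:
    # the same input always produces the same result.
    assert parse_port("8080") == 8080
    try:
        parse_port("99999")
    except ValueError:
        pass  # expected: out-of-range ports are rejected
    else:
        raise AssertionError("expected ValueError for out-of-range port")


test_parse_port()
```

The point isn't this particular function; it's that the guard rail lives in the system (the type checker and the test suite run on every change) rather than in any one person's vigilance.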
I would put myself in the latter philosophical group. That’s not to say I don’t believe in personal responsibility and caution. Those are important, but when you start dealing with large-scale software, the human element cannot compensate for a system’s weaknesses. When software gets complex, no one human or group can manage it. At that point you absolutely have to employ extra infrastructure to aid in its ongoing development. You have to have tools to ensure safety and enforce rules. I believe putting that burden on individuals or groups is not fair, not productive, and not a sustainable long-term solution for software development.