Alternatives To Automation: Amplification, Complexity, and Control
In Stop Hitting Yourself: How Automation Is Hurting You I talked a lot of shit about automation. So what can we do to avoid toil and error if we cannot automate? Well, we can automate, but that's coming later. First, let's discuss the middle ground.
This post is part of a series on automation, the result of many months of research and reading. I may adjust these posts as my research grows. If you've got comments, leave feedback at the bottom!
- Stop Hitting Yourself: How Automation Is Hurting You
- Alternatives to Automation: Amplification, Complexity, and Control
- Improving Automation: Working Together
Thanks to Arijit Mukherji, Franklin Hu, Jay Shirley, Rajesh Raman, and Sam Boyer for their feedback and reviews of these posts.
What Is Complexity?
I have a lot to learn about complexity. I’m still researching, but I really enjoy David D. Woods’ take in Coping With Complexity, summarized by me:
Complexity isn’t a thing, it is a situation composed of a world, an agent, and a representation. The world is dynamic, interconnected, uncertain, and costly. The agent is one or more humans and… other stuff like computers. The representation is whatever those agents use to understand and manipulate the world.1
Complexity can be expensive. It makes things slow, error-prone, and frustrating. My primary interest is in complexity’s impact on the operator.
As the complexity of a system increases, the accuracy of any single agent’s own model of that system decreases rapidly.2
The Goal Is Control
In many of the sources I’ve cited in this series, the discussion is ultimately about control. The systems we operate hum along, doing their thing, within a desired performance envelope. This envelope is defined by what our customers are willing to accept, what we’re willing to pay, and how well the operation works.
Now and then things go wrong and, like a plane blown off course, we steer, throttle up, or however a plane works — I am not a pilot — to get things pointed in the right direction.
Amplifying Human Ability
Sticking to airplanes, the human pilot exerts control by operating the various control surfaces on the wings and such. Apparently, I’m still not a pilot. Originally these surfaces were operated with cables and pulleys and required raw human strength. As airplanes got faster, more strength was needed and hydraulics amplified human strength. Some modern planes use fly-by-wire to replace all those heavy hydraulics with wires and actuators. Regardless, the human is still in control.3 This is just one example of how humans have engineered increased capabilities to exert control. But it’s a cool one because planes are rad.
Before deciding to remove the human from the loop you should consider amplifying the human’s effort. Take Terraform or Puppet as examples. Both allow a single user to define a desired outcome and effect change across tens of thousands of machines in a fraction of the time.4 Critically, these and other tools of this shape keep the human “in the loop”: deciding on action, monitoring progress, and responding to error. They can still cause incidents — believe me, I’ve done it plenty — but the human is more likely to be present, aware, and up to speed.
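To make that “in the loop” shape concrete, here’s a minimal sketch in Python of the plan-confirm-apply pattern these tools share. Everything here is hypothetical: `fetch_current_state` and `apply_change` are stand-ins for real infrastructure calls, not any actual Terraform or Puppet API.

```python
# Minimal sketch of the plan/confirm/apply pattern: compute the diff
# between desired and current state, show it, and act only when the
# operator says so. fetch_current_state and apply_change are
# hypothetical stand-ins for real infrastructure calls.

def fetch_current_state() -> dict:
    # Stand-in: a real tool would query the infrastructure here.
    return {"web": 4, "worker": 2}

def apply_change(service: str, count: int) -> None:
    # Stand-in: a real tool would call a cloud or orchestrator API here.
    print(f"scaled {service} to {count} instances")

def plan(desired: dict, current: dict) -> dict:
    """Return only the changes needed to move current toward desired."""
    return {svc: n for svc, n in desired.items() if current.get(svc) != n}

def run(desired: dict) -> None:
    changes = plan(desired, fetch_current_state())
    if not changes:
        print("nothing to do")
        return
    for svc, n in changes.items():
        print(f"would scale {svc} -> {n}")
    # The human stays in the loop: they review the plan and decide.
    if input("apply these changes? [y/N] ").strip().lower() == "y":
        for svc, n in changes.items():
            apply_change(svc, n)

run({"web": 6, "worker": 2})
```

The tool amplifies (one declaration fans out to many machines), but the decision to act stays with the operator.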
One way to deal with complexity is not to add it. The gist of my argument in Stop Hitting Yourself: How Automation Is Hurting You is that creating automation adds complexity.5 Before we add this complexity we could instead amplify the human’s abilities of strength, perception, or attention.
Autoscaler Example
In Stop Hitting Yourself: How Automation Is Hurting You we talked about automating an autoscaler. Before removing the human, what if we tried making a small tool that speeds up, simplifies, or otherwise improves this process? Maybe a small command line tool with best practices as defaults that allows multiple systems to be scaled all at once?
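As a rough sketch of what that might look like, here’s a hypothetical command line tool in Python. The flags, defaults, and `set_instance_count` helper are all invented for illustration; the point is that smart defaults and guard rails live in the tool while the human still chooses the action.

```python
# Hypothetical "scale" CLI: scale several services at once, with a
# best-practice ceiling baked in as a default. set_instance_count is
# a stand-in for a real cloud or orchestrator API.
import argparse

DEFAULT_MAX = 20  # invented "best practice" ceiling; override with --max

def set_instance_count(service: str, count: int) -> None:
    # Stand-in: a real tool would call the provider's scaling API here.
    print(f"{service}: scaled to {count}")

def main() -> None:
    parser = argparse.ArgumentParser(description="Scale several services at once.")
    parser.add_argument("services", nargs="+", help="services to scale")
    parser.add_argument("--count", type=int, required=True,
                        help="target instance count")
    parser.add_argument("--max", type=int, default=DEFAULT_MAX,
                        help="safety ceiling (default: %(default)s)")
    args = parser.parse_args()

    if args.count > args.max:
        parser.error(f"refusing to scale past {args.max}; pass --max to override")

    for service in args.services:
        set_instance_count(service, args.count)

if __name__ == "__main__":
    main()
```

One reviewed invocation, something like `scale web api worker --count 12`, now does what used to take a person one console session per system.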
If we had done this we could keep the human involved. We could amplify their effort as well as code in a few smart defaults. Maybe the human can even submit patches or, at a minimum, we can interview them to get some feedback to improve further.
You may recall the Law of Stretched Systems6 from the last post. The additional time gained by leveraging our tool will undoubtedly result in our humans doing more things. The difference is that we’ve not cast the work aside for our humans to forget about; we’ve merely made it easier. Furthermore, we’re good on the Law of Requisite Variety because our tool can continue to rely on the human to cover all of the things we’ve left out. The tool is a supplement to an adaptive human.
Tools Can Be Complex Too
The creation of such a tool will obviously require some investment. It will require consideration of best practices, repercussions, and outcomes. The adoption of any technology — tool or automation — fundamentally changes the job, the problems, and the future of the operation.7
The difference I am advocating here is to include the human in the use of the tool, versus automation which replaces the human. When we remove the human we remove the only adaptive, safety-creating8 component and replace it, if at all, with a poorly coded analog. When we amplify, we retain the best human parts.
Amplify Before You Automate
There are other ways to deal with complexity such as increasing reliability to the point that it’s mostly ignored, or embracing complexity’s representation. Those are topics for another day, though.
Amplification via tools allows us to increase output while decreasing, or at least holding constant, human effort. We retain the best traits of humans while reducing toil.
References
1. David D. Woods, Coping with Complexity: The Psychology of Human Behavior in Complex Systems.
2. Richard I. Cook, M.D., Above the Line, Below the Line.
3. Yes, there are autopilots, which are automation, but that’s supplemental and the subject of many more papers.
4. Both also have their faults, but they obviously allow people to control huge fleets of machines.
5. Lisanne Bainbridge, Ironies of Automation.
6. Woods, Hollnagel, Joint Cognitive Systems: Patterns in Cognitive Systems Engineering.
7. Woods, Dekker, Anticipating the Effects of Technological Change: A New Era of Dynamics for Human Factors.
8. Richard Cook, How Complex Systems Fail.