John (a real engineer, but not his real name) sat in his office staring at his workstation monitor. John’s door was closed. It was always closed.
The software company where John and I worked was founded with a great deal of respect for engineering talent. We understood what many companies in our industry did not – that the biggest asset of any technology company is its people. Our campus was designed with that principle in mind. We provided real offices for all of our engineers – with real walls and real wooden doors that locked. Most of the offices also featured large exterior windows with nice views of the campus and the surrounding countryside. Adjacent to each office door was a narrow window between the hallway and the office. This window brought the outside light into the hallway, and also allowed a limited view of the inside of each engineer’s office.
John’s did not.
Strategically positioned at about eye level in John’s hallway window were printed copies of several memos to “all employees” and “all technical staff” from company management. To the casual observer, these might be seen as John’s helpful attempt to assist in disseminating useful company policy to the staff. To those of us who worked with John, however, the primary purpose of the memos was obvious. John wanted to block the view of all but the most determined passerby. If you wanted to look into John’s office, you had to either be seven-foot-two, or you had to stand in the hallway bent over to about waist level – peering under the column of memos.
Those who took the additional time to actually read and consider this carefully curated collection of corporate communications in context might get a glimpse of something even more complex going on in that software development team. The memos painted a subtle picture of what John believed were management’s deliberate efforts to exert non-traditional control over their staff. John’s window display of memos showed us that, in addition to being a brilliant software engineer, John was a devout conspiracy theorist.
John did his engineering work on ultra-complex in-memory data structures used for transforming control and data-flow graphs into netlists of logic elements within the confines of an emotional tapestry whose subtle interconnections dwarfed those of the nodes and edges his 32-bit pointers addressed. John knew They were watching him – monitoring his work – keeping track of his progress. Sometimes, They even slipped in and made subtle changes in his code – too small for anyone but an expert programmer like John to notice. They were always one step ahead of John – waiting patiently for him to overcome each technical hurdle, then quietly harvesting his discovery for their own nefarious purposes. They would then quietly interfere with his work – redirecting him just enough to send him careening down some unproductive logical cul-de-sac where he languished for weeks, unable to see his way clear to continued progress on the project.
Paralyzed by paranoia.
I was John’s engineering manager, so I was responsible for the success of the project on which he worked. John was a genius. He held a PhD from one of the most prestigious technical universities in the world. He had years of experience in the successful development and deployment of electronic design automation software – tackling some of the most complex technical challenges of his era. I considered myself an expert in this field, and I could only hang with John for the first few minutes of any in-depth conversation about what he was doing on our project. The degree to which he immersed himself in the code he was writing was staggering, and apparently that immersion carried with it emotional consequences that John was not equipped to handle. His work would grind to a halt and he would sit in his office literally for weeks on end – coding and re-coding the same routine – always claiming that he’d hit some sort of roadblock or suffered some externally imposed setback. Occasionally, he would confide his suspicions to me (although much of the time, he clearly believed me to be part of the conspiracy). “I think this isn’t working because They really want us to move to an object-oriented structure. They can see that I’m writing traditional procedure-based code, and They’re just going to keep sneaking in at night and screwing me up until I give up and do the whole data structure over in the way They want it.”
In my twenty-plus years managing engineering and software development teams, John was one of the more extreme examples of conspiracy-prone engineers I encountered. Eventually, we had to move him out of the team to another assignment, and we recommended emotional counseling through the company’s employee assistance program. John declined the help, and he maintained throughout the process that we were manipulating him and his work for some nefarious purpose.
If we were a cog in the gears of some sinister conspiracy machine, I was never in on the plan. I continued to manage the software project for several more years after John’s departure. We eventually got a product out the door that more-or-less worked, but that didn’t generate much excitement in the market. Like many well-intentioned ideas, ours was a bit ahead of its time, and the insanely complicated technology we had developed came with so many limitations and problems that it was impractical for anything but very specialized use. The part that John had developed was eventually completely replaced – ironically, by a much simpler data structure that did happen to be object-oriented.
I’ve often wondered why so many engineers, the very people whose training demands a rational and skeptical view of the world, the professionals whose work most strictly depends on an honest view of cause and effect, are so disproportionately prone to conspiracy paranoia. As engineers, we are taught to evaluate evidence objectively, to be data-driven, and to maintain a balanced view of the compromises, strengths, and vulnerabilities that we design into our systems. Why, then, do we so often fall victim to wild, unsupportable theories involving evildoers in the night – whose black-helicopter-flying, supercomputer-having, IQ-enhanced engineers are always several steps ahead of us mere mortals – technically and psychologically manipulating us as pawns in their incomprehensibly sophisticated games against some unseen and equally awesome foe?
Perhaps the same innate creativity that enables the best engineers among us to imagine out-of-the-box solutions to everyday problems also has a dark side that conjures up conspiratorial demons. Maybe it’s just plain hubris. When faced with challenges we cannot overcome, our only face-saving explanation is that some superior intellect is quietly blocking our path to success. It actually requires a good measure of hubris just to believe that we are the targets of conspirators. We have to believe that we are so important, and that the work we are doing is so unique and interesting, that somebody would be willing to dedicate an entire team of clever, intelligent agents to watch our every move and infiltrate our work.
Or maybe it’s the flimflam man.
It seems that conspiracy theories are often encountered in diametrically opposed pairings with outrageous claims of technical achievement. The engineer who has developed the 200 MPG carburetor is unable to bring it to market because the oil companies, fearing that demand for their product would plummet, are quietly sabotaging his project. The medical breakthrough that would end cancer is suppressed because drug companies would lose billions in revenues from treatment. The list goes on and on. Almost any time we hear a claim about a new, unrealistically disruptive technology, there seems to magically appear a conspiracy theory about a well-oiled machine that seeks to prevent its deployment. The flimflam man is a good friend to the conspiratorial adversary.
It is odd, however, that these sinister forces don’t manifest themselves more effectively against plain old, realistically disruptive technologies. Entire industries have been eradicated by new products that rendered their core value obsolete, while these allegedly elegant machines of malicious enterprise sat idly by and watched. Did the demise of these corporate giants somehow not warrant the aid of the conspiratorial community?
In John’s case, though, I don’t believe that foul play was at work, nor were we the flimflam man. Our project was complex and challenging, but doable. Even though our goal was to produce a disruptive technology (albeit in a very narrow application area), we didn’t fail because of any outside force that I could ever detect. Sure, we may have been up against an adversary too clever for any of our engineers except John to notice. If so, They only partially succeeded in Their goal, as the technology we developed is (two decades later) now in common use. I am convinced, however, that our failure to achieve our project goals is best characterized by Hanlon’s Razor: “Never attribute to malice that which is adequately explained by stupidity.”