Generative Code and the FSD Dilemma
It works… until it doesn’t. And then what?

The biggest challenge with Generative Code isn’t really about making the technology work.
It’s about how humans interact with it.
The Problem
The fundamental problem is that when a system works well most of the time, people tend to trust it too much. Which in turn leads to a dangerous feedback loop: the better an autonomous system performs, the worse humans become at supervising it.
It sounds like a paradox, but it’s true. Humans are notoriously bad at passive monitoring. If you’re forced to pay attention to something that rarely requires intervention, your mind naturally starts to drift.
Pilots in highly automated cockpits, for example, have been known to zone out, sometimes missing critical warnings because they weren’t actively engaged.
This has become so common that the aviation industry has coined a term for it: “automation-induced errors”.
The same thing happens with drivers using advanced driver-assistance systems (ADAS). If the car handles 99% of situations flawlessly, the driver’s reflexes and attention degrade over time, making them less capable of reacting in the rare but critical moments when human intervention is required.
This creates a tricky design problem. If the system is too unreliable, people won’t use it.
But if it’s reliable yet imperfect, people will trust it too much and fail to intervene when needed.
Tesla’s Full Self-Driving Dilemma
The most notorious example of this, of course, is Tesla’s “Full Self-Driving” (FSD) system.
Despite its name, FSD is currently classified as a Level 2 driver-assistance system, meaning it can handle some driving functions but requires active driver supervision and intervention.
In other words, it works well most of the time…
But as illustrated earlier, when drivers trust it too much, they may react too late when the system makes a mistake.