The Tooling Didn't Fix the Instinct

In 2008, we didn't have Jira. We had a whiteboard, index cards, and magnets with people's faces on them.

The first card for a site rebuild at IPC Media said: "User wants to play a game on the website." That was it. No epic. No acceptance criteria essay. A card and a conversation.

The UX designer immediately wanted to redesign the whole site. The developer wanted to start with the database. The card stopped both of those conversations before they started. What do we actually need to do right now? Make it possible to play a game. Everything else can wait.

We launched on time.

What the constraints were actually doing

The index card forced clarity by limiting what you could write down. You couldn't document your way into scope creep. You couldn't generate a hundred tickets and call it planning.

The conversation that followed was where the real work happened. Two or three people standing at a whiteboard asking: what does this actually mean? What does someone need to do? What are we building first?

The card wasn't a specification. It was a starting point. A prompt for a conversation that then defined the work. That's a meaningful distinction.

When you have to physically write something on a card and stick it to a wall, you make choices. You decide what matters enough to put up there. You expose your assumptions early, because there's nothing to hide behind.

What happened when the tooling got richer

Jira didn't change the instinct to build too much, too soon. It gave that instinct somewhere to live.

Now you can create 200 tickets before a single line of code is written. You can write acceptance criteria that read like a contract. You can run refinement sessions, planning poker, and backlog grooming, and still end up building the database before you've established whether anyone wants to play the game.

The process became the product. Teams got good at working Jira rather than delivering software.

The tooling doesn't cause this. The instinct is older than any software. But richer tooling can make it invisible for longer. You look busy. The board is full. Progress looks like it's happening. The fact that you're three sprints in and still haven't shipped anything users can touch is easy to miss when the backlog is groomed and the ceremonies are all attended.

AI-generated tickets make this worse, not better

Now you can generate tickets from a prompt. Describe a feature, and a tool will break it into stories, sub-tasks, and acceptance criteria in thirty seconds.

That sounds useful. In practice, it tends to produce volume without judgement. Fifty tickets for something that probably needed twelve. A backlog that looks thorough and is actually just large.

The instinct to over-specify hasn't been addressed. It's been accelerated.

The thing that still works

The discipline that made the index card effective was not the card itself. It was the constraint it imposed, and the conversation it forced.

Before any ticket gets written, you need someone asking: what is the smallest thing that would prove this works? Who actually needs to do it? What are we explicitly not doing yet?

Those questions are harder than writing tickets. They require judgement, not output. They can't be generated. And they matter more than the format your work tracking lives in.

Teams that do this well tend not to care much about which tool they use. Teams that skip it tend to struggle regardless of how sophisticated their setup is.

The tooling didn't fix the instinct. It probably never will. That part is still a human job.