
Some users forgive broken things. Others leave.

24 April 2026 · 5 min read

A founder shipped a rough beta to technical ops leaders and got 15 bug reports in a week. Another shipped the same category of product to regional facility managers and got silence. Your audience's tolerance for half-built is a context variable, not a B2B/B2C one.

Editorial illustration for "Some users forgive broken things. Others leave." — Marga Haus Perspectives

A founder I advise shipped the beta of their B2B SaaS last spring. Two hundred waitlist signups, mostly technical ops leaders at mid-market companies, people who spend their day inside a terminal. She shipped rough. A known bug in the onboarding flow. A placeholder on the settings page. A 'coming soon' label on CSV export. Within seven days she had 40 active users, 15 bug reports in a shared Notion doc, 8 feature requests, and three warm intros from users to their procurement leads. The rough beta was a conversation starter.

Another founder I advise shipped a structurally similar product to a very different audience. Compliance software for aged-care facility managers at regional sites in Queensland. Same category of bug in onboarding. Same placeholder. Within seven days: two users, zero bug reports, one email saying 'this doesn't work' followed by ninety days of silence. The waitlist kept growing. The conversion rate kept dropping. She thought the problem was positioning. The problem was that her audience could not bridge between what the product intended and what it actually did.

The variable is not B2B versus B2C. It is not SaaS versus legacy. It is what I think of as cognitive bridge capacity: how much the user is willing and able to mentally complete your product when it is not yet complete. Some users carry you across your rough edges. Some do not. Knowing which kind you sold to is the difference between shipping rough and learning, or shipping rough and disappearing.

Who bridges, who doesn't

The highest-bridging audiences are people who live in software. Engineers who debug their IDE on a Sunday. Product managers who file Figma tickets by reflex. Technical ops leads who sit on the Hacker News front page during coffee. Early-adopter consumers who pay for beta access. They forgive bugs. They infer intent from ambiguous labels. They screenshot errors and send a reproduction. They are participating in the build. A rough beta is a conversation they want to join.

The lowest-bridging audiences are people whose job is not software. Field technicians. Facility managers. Medical staff. Truck drivers. Regional government workers. Older consumer demographics. They have neither the tolerance nor the mental model to paper over broken flows. A placeholder reads as abandonment. A confusing state reads as broken. They do not send bug reports. They leave and do not come back, and your waitlist conversion drops without a diagnosis.

This is industry-agnostic, and it is a matter of context, not identity. A mining supervisor on a field tablet in gloves under high-glare sun is a zero-bridge user. The same person, at home on a Saturday picking a new laptop, is a high-bridge user. Context sets the threshold, not the intelligence of the buyer.

Figure: Cognitive bridge capacity by audience

High bridge

  • Engineers, product managers, technical ops
  • Early-adopter consumers, Hacker News regulars
  • Infer intent from ambiguous labels
  • Screenshot bugs, send reproduction steps
  • Rough beta is a conversation they want in on

Low bridge

  • Field technicians, facility managers, medical staff
  • Regional operators, older demographics
  • Placeholder reads as abandonment
  • Do not send bug reports — they just leave
  • Rough beta is a broken product, full stop


Two audiences, two definitions of done

I have shipped software to six distinct ICPs across my time at Accenture and on my own ventures. The two furthest ends of that range are instructive, because the variable that set 'done' was bridge capacity, not polish.

On one end: hi-vis mining field crews on open-cut sites in the Pilbara. Thirty-second attention windows between radio calls. Gloved hands. The cost of bridging was physical: removing a glove to tap a recessed button, squinting at a loading shimmer in direct sun. The 'done' bar was a two-tap form that worked through a screen protector, and a haptic confirmation you could feel through leather. Animations were a liability; a loading shimmer read as a broken device.

On the other end: board-level executives reviewing capital allocation on a seven-figure decision. Their bridge capacity for understanding the model was infinite: tab through, drill down, read the methodology, push back on an assumption. Their bridge capacity for errors in the numbers was zero. Two decimal places instead of three was the same signal as a broken product. The 'done' bar on that side was a single-page summary whose totals tied to a signed-off model, with tabs the CFO could argue into.

Both shipped. Both landed. They were unrecognisable as the same category of work. The variable in each case was not polish. It was what the user was willing to complete on our behalf.

The founder's two mistakes

The first mistake is to assume your audience bridges like you do. Founders are extreme high-bridge users by selection. They ship something broken, see it mostly work, ship the next thing, read the feedback of three users who look like them, and conclude the product is landing. If the real audience is low-bridge, the founder has shipped something that failed silently, to users who never came back. The waitlist grows. The conversion rate does not.

The second mistake is the inverse. The audience is participatory and wants a conversation, and the founder spends three weeks polishing a micro-interaction nobody asked for. The customer wanted a phone number and a bug tracker. They got a confetti animation. The feedback loop the buyer valued was never opened because the product felt too finished to argue with.

What to do before you ship

Five conversations with real prospective users. Not a survey. Not a research panel. Ask five questions, in this order:

  • What tool did you use before this?
  • When did you last feel confident using a piece of software at work?
  • What did it do that worked for you?
  • What made you stop using the last thing you tried?
  • What is the slowest part of your day we should be replacing?

The pattern in the answers is the bridge. If they describe the last software they abandoned in detail (what was wrong, why they gave up), they are high-bridge, and you can ship rough and iterate with them. If they say they just stopped using it and do not know why, they are low-bridge. They will do the same thing to you, quietly, and your analytics will not explain it.

Your product is not judged on what it does. It is judged on what your user is willing to complete. That threshold varies more than most founders assume, and it does not correlate with intelligence or income. It correlates with context.

Found this useful?

Thirty minutes. Free. No prep needed.

If the diagnosis is clear without me, you go do it. If not, we talk about the sprint. Either way, the first call takes 30 minutes and costs nothing.

Book the call
