There’s a special kind of dread that comes from getting a text from a random mobile number, claiming to be your hospital and asking for your personal details.

It doesn’t say which hospital.
It doesn’t identify itself.
It just says: “Hi! Please reply with your full name, date of birth, address, and email.”

Naturally, I assumed it was a scam.

But no. This wasn’t a con artist in a foreign call centre.
This was an official, government-backed initiative.
An actual public health service, running an actual SMS campaign, using random burner-style numbers to collect sensitive personal data, with no sender verification, no pre-warning, and no branding.

Because nothing builds trust like a cryptic text asking for your identity and your trust — in that order.


How Real Systems Become Indistinguishable from Scams

Let’s break this down like we’re doing a forensic UX audit of a crime scene. Because, functionally, that’s what it was.

❌ Step 1: Use Generic Numbers

The messages came from a +6421… number. That’s a standard NZ mobile prefix. Could be your cousin. Could be a scammer. Could be your hospital, apparently.

This is like your bank texting you from “Steve” and asking for your credit card number.
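
For contrast, here’s roughly what a trustworthy send looks like. This is a minimal sketch using Twilio’s Python SDK as one example provider — the campaign’s actual tooling is unknown to me — and it assumes the destination carrier supports alphanumeric sender IDs (support and registration requirements vary by country). “XYZHealth” and the credentials are placeholders.

```python
# Minimal sketch with Twilio's Python SDK (pip install twilio).
# Assumptions: the provider and destination carrier support alphanumeric
# sender IDs, and "XYZHealth" stands in for a real, registered sender name.
import os

from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

message = client.messages.create(
    from_="XYZHealth",  # a branded sender ID, not an anonymous +6421... number
    to="+64211234567",  # placeholder recipient
    body=(
        "XYZ Health: we're updating patient contact records. "
        "Please sign in to your patient portal to confirm your details. "
        "We will never ask for personal information by text reply."
    ),
)
print(message.sid)
```

One line of configuration, and the message arrives with a name instead of a number.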

❌ Step 2: Include No Identifying Information

No logo. No name. No “Hi, this is XYZ Health.” Just an open-ended prompt asking for personal info.

At this point, we’re actively training the public to ignore legitimate outreach — because there are no visible cues of legitimacy. And that’s not a trust problem. That’s a design problem.

❌ Step 3: Go Straight for the Data

This campaign asked for full name, date of birth, address, and email — in one go, via SMS!

No staged verification. No secure follow-up link. No context about what it’s for. Just a blanket request that would make your average phishing syndicate nod in approval.

There was no prior warning. No email. No patient portal notification. Just: surprise, we’re texting you now, please verify your life story.

In technical terms, this is called a consent anti-pattern. In human terms, it’s called what the hell were you thinking?
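
Here’s the shape of the safer alternative: the SMS carries nothing but a short-lived, single-use link into a portal the patient already knows, and the actual data collection happens behind authentication. A minimal sketch in Python — portal.example.health is a hypothetical domain, and the in-memory dictionary stands in for a real database.

```python
import secrets
import time

# In-memory store for illustration only; a real service would persist
# tokens server-side and rate-limit redemption attempts.
_tokens: dict[str, tuple[str, float]] = {}

def make_verification_link(patient_id: str) -> str:
    """Issue a short-lived, single-use portal link instead of asking for PII by reply."""
    token = secrets.token_urlsafe(32)  # unguessable 256-bit token
    _tokens[token] = (patient_id, time.time() + 48 * 3600)  # valid for 48 hours
    return f"https://portal.example.health/verify?token={token}"

def redeem_token(token: str) -> str | None:
    """Return the patient ID exactly once, or None if unknown or expired."""
    entry = _tokens.pop(token, None)  # pop makes the token single-use
    if entry is None or entry[1] < time.time():
        return None
    return entry[0]
```

The point is the pattern, not the library: no sensitive data ever travels over SMS in either direction.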


This Wasn’t Just a UX Fail. It Was a Systemic One.

This is the kind of decision that gets made when systems are designed around processing, not people. When the metric is response rate, not reputational harm.

A few likely quotes from the internal planning session:

“We need to reach a lot of patients quickly.”
“SMS is cheap.”
“We’ll just buy a pool of numbers and blast them out.”
“What do you mean, ‘trust signals’?”

No one in the room stopped and asked: “What will this feel like to the person receiving it?”

And that’s how good intentions become bad patterns.


We Are Normalising Insecure Behaviour

The worst part isn’t that this happened once.
The worst part is that now, thousands of people have been conditioned to believe that this is how official comms work.

Next time they get a text like this — from a scammer — they’ll be more likely to respond. Because the bar has been lowered. By design.

This isn’t just a UX issue. It’s public safety negligence through interface design.


What They Could Have Done (And Still Could)

This could’ve been avoided with basic hygiene:

  • Verified sender ID via branded short code
  • Advance notice through trusted channels
  • Staged data collection, not all at once
  • Links to secure portals, not replies via SMS
  • Public campaign to build awareness first

This doesn’t require AI. Or blockchain. Or six-figure consulting. It just requires someone — anyone — in the room to ask, “Will this look like a scam?”
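
To make the “staged, not all at once” point concrete, the outreach can be modelled as a small pipeline where each message does one job and nothing sensitive is ever requested over SMS itself. A hypothetical sketch — none of these step names come from the actual campaign.

```python
# Hypothetical staging of the outreach; the step names are illustrative,
# not the service's actual pipeline.
OUTREACH_STEPS = [
    "notify",   # advance notice through a channel the patient already trusts
    "invite",   # branded SMS linking into the existing patient portal
    "verify",   # patient authenticates; identity is checked against records
    "collect",  # only the details that are actually missing get requested
]

def next_step(completed: set[str]) -> str | None:
    """Return the first outreach step this patient hasn't completed yet."""
    for step in OUTREACH_STEPS:
        if step not in completed:
            return step
    return None  # nothing left to do; no follow-up message is sent
```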


Closing Thought

When legitimate services look indistinguishable from criminal activity, we don’t just confuse people.
We erode the very trust we rely on to keep public systems functioning.

This was preventable. It was predictable. And it will happen again — unless we stop treating humans like a flaky API and start designing for how people actually behave.

Just because it wasn’t a scam doesn’t mean it wasn’t a failure.

[Image: a frustrated developer holding their phone, staring at a suspicious SMS.]

Looks like a scam. Feels like a scam. But surprise: it’s real.