The Secret Work Revolution No One Is Leading
- Erin Eatough
- Oct 7
- 3 min read
Why employees are secretly using AI — and what leaders can do about it.

Right now, millions of employees are working side-by-side with machines their leaders pretend aren’t there.
In meeting notes, emails, and decks across every industry, AI is already ghostwriting the modern workplace. And yet, most of that work lives in the shadows — unseen, unacknowledged, and in many cases, actively hidden.
Our research conducted in the fall of 2025 with QuestionPro, surveying 2,000 full-time employees across the U.S. and Europe, revealed something astonishing: more than half of workers (54%) say they use AI for most of their jobs. And nearly seven in ten (68%) admit they conceal it from their bosses, peers, or IT.
We call this workshifting — the quiet transfer of tasks to machines that leaders don’t even know are working for them.
From Shadow IT to Shadow Work
This isn’t the first time technology has outpaced governance. In the 2000s, employees built unsanctioned spreadsheets and shared drives to get things done faster than corporate systems allowed.
The difference now is scale — and psychology.
People aren’t hiding AI because they think it’s wrong. They’re hiding it because they don’t feel safe being honest about what work now is.
58% of employees say they conceal AI use out of fear — fear of being labeled lazy, fear of losing credibility, fear that their value will be attributed to a tool instead of to them.
That fear isn’t an HR issue. It’s a system design issue. It’s what happens when an organization’s psychological ergonomics — the way its systems fit or fight human motivation — breaks down.
The Consequences of Concealment
The hidden middle of work is expanding fast, and it’s not benign. When people use AI in the shadows, three things happen:
- Risk becomes invisible. Leaders can’t govern what they can’t see.
- Fairness erodes. Recognition flows to whoever is most visible, not to whoever is most valuable.
- Trust corrodes. People spend energy managing appearances instead of sharing insight.
And because executives often respond with control — stricter rules, more monitoring — the behavior goes even deeper underground. The more they tighten control, the more concealment becomes adaptive. The loop reinforces itself.
The real risk isn’t misuse of AI — it’s the concealment of it.
Design, Not Discipline
The answer isn’t stricter policy. It’s smarter design.
You can’t command trust; you have to architect it.
At Fractional Insights, we study the psychological ergonomics of work — how systems either create friction or flow between human intention and organizational reality.
Applied to AI, that means creating conditions where the most productive behavior (responsible, transparent AI use) is also the most natural one.
Three design principles make that possible:
1. Transparency by Example
Executives must narrate their own learning curve with AI — not just endorse it from a podium. Curiosity is contagious.
2. Measurement that Reveals, Not Penalizes
Track both AI workload share (how much work AI is doing) and concealment rate (how much of it remains hidden). What you measure becomes governable.
3. Recognition that Teaches
Celebrate employees who share how they partnered with AI to create value. Recognition is the cultural signal that replaces fear with learning.
These aren’t soft culture moves. They’re system interventions that reduce fear, reduce blind spots, and reduce inequity — turning AI from a hidden risk into a managed advantage.
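The two metrics named in principle 2 are simple ratios over survey responses. Here is a minimal sketch of how they could be computed; the field names (`uses_ai_for_most_work`, `discloses_ai_use`) are illustrative assumptions, not the actual Fractional Insights instrument.

```python
from dataclasses import dataclass

@dataclass
class Response:
    # Hypothetical survey fields, for illustration only.
    uses_ai_for_most_work: bool   # respondent says AI does most of their job
    discloses_ai_use: bool        # respondent is open about that use

def ai_workload_share(responses):
    """Share of respondents reporting that AI does most of their work."""
    return sum(r.uses_ai_for_most_work for r in responses) / len(responses)

def concealment_rate(responses):
    """Among AI users, the share who hide that use."""
    users = [r for r in responses if r.uses_ai_for_most_work]
    if not users:
        return 0.0
    return sum(not r.discloses_ai_use for r in users) / len(users)
```

Tracking both numbers together is what makes the hidden middle visible: workload share shows how much work has shifted to machines, and concealment rate shows how much of that shift leadership cannot yet see.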
The Hidden Workforce Within the Workforce
The “AI transformation” isn’t coming — it’s already underway, silently, at the task level. Most organizations are just the last to know.
The irony is that the same employees leaders fear will misuse AI are already the ones teaching themselves to use it responsibly. What they need isn’t another policy; it’s permission.
The question isn’t whether AI belongs in the workplace. It’s whether we’ll design workplaces where honesty does.
Survey conducted by Fractional Insights, sponsored by QuestionPro (2025).
Learn how Fractional Insights helps leaders design psychologically ergonomic systems for responsible AI adoption and measurable performance outcomes.