Workslop, Junk Learning, and the Future of Teams
Generative AI is increasingly in use across organisations of all types, and early studies are beginning to show the trade-offs between productivity gains and the toll on cognitive ability. While most conversations fixate on productivity, a story is surfacing about cognitive decline and, critically, its impact on organisational culture and cohesion. Generative AI will shape more than productivity: it could quietly erode how our people think, learn, and work together. How do you respond as a business leader?
The Experts
I believe discussion of AI's cognitive impact, and how businesses should address it, will feature heavily over the coming years; already we're seeing some terms come to the surface:
Cognitive Offloading: the habit, which took hold around the time of Google, of no longer needing to remember as much as before
Junk Learning: quick answers that don't sink in; surface learning with little depth
Workslop: uncritically generated content that causes colleague confusion and wastes time
Cognitive Laziness: uncritically accepting answers, taking whatever the machine says at face value
All these terms point to what is now being reported: AI has a cognitive impact, much as Google did and still does, but it appears more insidious. It encourages laziness and, contrary to claims of productivity improvements across organisations, may actually be causing productivity losses.
A recent MIT study [insert link] illustrates this: when AI was used as a replacement for human thought, tasks were completed 60% faster but with a 32% drop in cognitive load (the mental effort involved in learning). Just as telling, 83% of people who used AI to assist their writing couldn't recall what they had just written.
For leaders, the question becomes: is speed today worth shallower cognitive capability tomorrow? This points to a potential loss of institutional knowledge and weaker long-term capability. It isn't theoretical; the evidence is already out there. Trust is built on the quality of the output, which depends not on the tool itself but on how people choose to use it.
Predictably, this impact isn't limited to knowledge workers in offices; it extends to healthcare professionals too, with one study suggesting doctors are losing the ability to spot cancer [insert link]. Admittedly it was a small observational study rather than a randomised trial, but it is striking enough to demand attention.
Even AI leaders are voicing concerns about the impact on learning, coining the term Junk (Food) Learning [insert link]. I'm sure many of us are guilty of uploading a document or research paper, asking for a summary, reading it, and believing we understand. We likely don't.
⁉️ Leader Listenings
Listen to Jack Clark on The News Agents: part 1, part 2 (disclaimer, I’ve not listened to part 2)
Pilot v Passenger
On 22 September 2025, a study was announced via the Harvard Business Review on the impact of AI-generated workslop. Let's highlight some key stats to consider:
Respondents stated that just over 15% of the work they received from colleagues could be classified as workslop, equating to 1.3 hours per employee per day
Respondents felt annoyed (53%), confused (38%) and offended (22%) when receiving workslop
34% are ‘telling on’ their colleagues to teammates and managers; once a colleague has sent workslop, how much of their work can you trust?
32% are less likely to work with someone who has sent them workslop previously
If 15% of work is classed as workslop: what kind of AI users are creating it?
Pilots are people with high agency and optimism who use AI to enhance their work. This is one of the internal principles of Underfold: use AI as a sidekick and let my own creativity speak first.
When I write, I act as the pilot: I gather sources, draft myself, and only then use ChatGPT as an editor.
Passengers outsource their thinking and send on unrefined output; they use AI (controversially) to avoid work. Sloppy or lazy work has always existed: there have always been, and always will be, people who take shortcuts to ease their workload, potentially for their own gain.
We can already see this play out on social media, where AI-generated posts are called out for lacking authenticity or meaning. Inside organisations, the same pattern risks eroding trust and cohesion unless leaders act.
⁉️ Leader Readings
Read the Harvard Business Review article on how ‘AI-Generated “Workslop” Is Destroying Productivity’
The Leadership Challenge Ahead
Think of AI governance less like cybersecurity’s external shield, and more like an internal trust-building exercise. Where cyber protects from outsiders, AI governance protects culture from within. Who might need to address this issue?
This isn't just an issue for business leaders. Talent teams may need to ensure new joiners receive greater training, both to improve AI literacy and to avoid picking up poor habits. And for coaches of high-performing teams: how will you guard against cognitive offloading?
There are no ‘easy’ answers to this topic, but some ideas are starting to bounce around the zeitgeist.
Three things to try:
Become an organisation of Pilots; HBR “Recommit to Collaboration”
Why: Pilots engage with their work and use AI as their assistant. That’s where the value is
Do: Talk openly with your teams about what “pilot behaviour” looks like. Creativity first, AI second
Result: Less workslop, fewer frustrations, and a culture where people feel good about the work they’re producing
Invest in Training
Why: People can only use AI as well as they understand it. Without training, shortcuts and bad habits creep in
Do: Offer practical training — not just the “what AI can do,” but also the “when and why to use it.”
Result: AI used with intention, not blind trust
Acknowledge the frustration
Why: Workslop is already here, and pretending it isn’t will only damage trust
Do: Create space for people to talk about their experience with AI outputs. Listen, and adjust when issues are called out
Result: More trust, more openness, and a stronger sense of shared standards
This subject is only going to grow. As more evidence of uncritical AI use emerges, and as pushback builds, leaders who act early may gain a competitive advantage. Generative AI may help us work faster, but leadership will decide whether it makes us better, or less than we were.