The Transparency Paradox

The internet, and especially LinkedIn, is awash with mediocre, point-of-view-free, "it's not this; it's that" generated "content". We all know it; we've all seen it, every day. The first problem is that we've become aware of it: we can see the patterns in the text, the flow, the words that are chosen (how many people are "thrilled" on LinkedIn, compared to, say, real life?), the simple lack of humanity.

The second, and more important, problem is that people are unhappy, genuinely unhappy, with generated content being forced upon them and with mediocre content becoming the norm; people are asking for transparency, authenticity, and humanity.

How do we know this? It's everywhere, and everyone is talking about it; just look at LinkedIn comments, at people calling out the stories, the lack of tone, the lack of, for want of a better word, content.

That pattern alone tells a story, but there's harder evidence too: brands have already faced backlash after deploying AI-generated content:

  • Selkie faced customer backlash to AI-generated artwork

  • Skechers received criticism for an AI-generated print ad

  • Coca-Cola saw mixed reception to their AI remake of "Holidays Are Coming"

  • J.Crew sparked "AI slop" controversy on Instagram

  • Guess in Vogue generated debate over AI "models" replacing human talent

The pattern is forming, and people are going to continue to call it out. But, as a new study shows, when companies are transparent about their AI use, that too can erode trust.

The Humanity Gap

AI as a tool is fantastic; it enables a lot of people to create something "new". But in some ways that output doesn't feel as "new" as it should.

As explored in The Mirror Trap, AI cannot create the genuinely new; it cannot create new thought. It can see patterns in existing data and recreate what has already been created. We all saw that ability when everyone applied the Studio Ghibli style to every photo for about a week.

This, I think, is where the tension comes from: people can sense that content is AI-generated without quite being able to put their finger on why. The root cause may be the missing humanity and the missing "newness".

This is just an extension of the workslop I recently wrote about: obviously generated content feels sloppy. And then come the ramifications. Deloitte recently had to partially refund the Australian Government for a report filled with AI-generated errors, including fabricated academic references and false quotes.

Typically I'd say the way to address this is through transparency; I've certainly been saying that transparent communication leads to trust, and I still stand by that opinion, despite the new study. Even OpenAI recognises this limitation: they're now hiring content managers to ensure authenticity, a clear signal that the technology alone isn't enough. But solving this requires more than simply disclosing AI use; it starts with understanding where the humanity is in your business.

The Transparency Trap

So here's the paradox: people want authenticity and transparency, but research shows that when companies disclose AI use, trust actually decreases rather than increases. We're caught between two competing demands: be honest about using AI, but know that honesty might damage trust more than silence would.

I, and others, believe there is still a way forward: provide the transparency, but pair it with something that assuages the concern, a value proposition. And the value is the humanity of it all.

Where in the process is the humanity? Where does the human undertake, for example, the higher-order thinking or the creative ideation? Where has AI removed the burden of a sticky, time-consuming process? This, I think, is what people are asking for: responsible, thoughtful use of AI. Industry leaders are recognising this too, focusing on where AI handles time-consuming work while humans drive strategy and judgement. So this is what transparency communication may need: emphasise the human element.

This is something I've exemplified throughout my journey of building Underfold: every article carries the same call-out, "Authored by John; ChatGPT 5 acting as an editor". This article is slightly different; I've moved to Claude for this workflow.

Why did I do this from the very beginning? Simply to get ahead of what I thought was coming down the pipeline: a study proving that people stop paying attention to content when they know it's generated by AI. I wanted to call out from the start that I was using AI as part of the process, while highlighting the human in that process; at Underfold, the creative process is human throughout. From conversations, I've had only positive feedback so far.

Leader Actions

Map the Humanity

Why: You can't communicate where humans add value if you don't know where they're actually involved in your AI-assisted processes.

Do:

  • Audit your current AI use cases (even simple ones like using ChatGPT for emails)

  • For each use case, identify the human value and call out the AI assistance

  • Document this as a simple map: "Human writes brief based on client conversation → AI expands into full proposal → Human edits for voice and accuracy."

Result: You'll have a clear story to tell about human involvement, rather than vague claims about "human oversight."

Reframe Your Disclosure

Why: Simply saying "AI was used here" triggers distrust. But explaining the human role can build confidence.

Do:

  • Replace generic AI disclosures with specific human contributions

  • Generic example: "This content was created with AI assistance"

  • Specific example: "Research and analysis by [Name]; AI used for initial drafts; all conclusions and recommendations reviewed and refined by our team"

Result: Stakeholders see where expertise lives and can assess quality accordingly.

Create a Feedback Loop for Published Content

Why: The brands that faced backlash didn't lack AI capabilities; they lacked early warning systems. By the time leadership noticed, the damage was done.

Do:

  • Assign someone to monitor comments and responses on AI-disclosed content (LinkedIn posts, articles, client communications) for the first 48 hours after publication

  • Look for specific signals: questions about quality, requests for clarification on what AI did vs. what humans did, comparisons to competitors' approaches

  • Create a simple internal report: What was published? What feedback appeared? What does it tell us about stakeholder concerns?

  • Use these insights to refine your disclosure language and process mapping (Actions 1 & 2)

Result: You catch concerns before they become crises. You learn what your specific audience needs to hear about your AI use. Your transparency evolves based on real feedback, not assumptions.

AI isn't the enemy of authenticity. Generic disclosure is. Silence is. Flooding the world with thoughtless, generated content is.

But AI used thoughtfully, with humans driving strategy and creativity, with clear communication about who did what? That's not a threat to trust. It's capacity. It's a small team delivering like a large one. It's freeing people from the sticky processes to focus on what actually requires human judgement.

The transparency paradox resolves when we stop defending AI use and start demonstrating human value. Map where that value lives. Show it clearly. Listen to how people respond. The businesses that get this right won't be the ones avoiding AI or hiding it. They'll be the ones making the humanity impossible to miss.

That's when you can truly Trust Your AI: not because you've disclosed it, but because you've built something worth trusting.
