The AI Trust Gradient

As AI continues to disrupt our culture and the world of work, it’s increasingly important to understand not only what it can and can’t do, but also what it should and shouldn’t be used for.

I’ve recently become a little more active on LinkedIn, having not spent much time there for a while. It was always one of the cheesier social networks, but AI has been used to take it to another level.

Among the usual flurry of people prancing about trying to impress each other this week was one prominent AI influencer proudly proclaiming that he now uses AI to write his blog posts. It even ‘knows his voice.’

This kind of suggestion is common in AI discourse but betrays a fundamental misunderstanding of what your ‘work’ is really about.

Knowing your voice and thinking like you are two completely different things.

AI doesn’t actually ‘think’ in the manner of a human at all. It is not sentient; it does not engage with tension; it does not synthesise in the true sense of the word, that is, bring something new into existence.

On some level we all know this. It’s why creative output made with AI just feels soulless, unless a human has taken a dominant role in the creative process, using AI as a means of conveying a pure inspiration rather than expecting AI to ‘create.’ Creating is not what it does. It ‘generates,’ but it does not truly synthesise or sublate in the philosophical or psychological sense. Only a mind can do that, and a well-tuned mind can also know the difference. If AI could do that, it wouldn’t be artificial and hence wouldn’t be AI.

At the extreme end of this is something we can all recognise: the ubiquitous phenomenon of ‘AI slop.’ Still, much of the response to this blight on our culture from AI proponents amounts to arguments along the lines of “you just wait, it’s going to get better.” Yes, it will, but such arguments fundamentally misunderstand the problem and, if anything, may just exacerbate it.

The problem is not that AI isn’t yet convincing enough. The problem is whether it’s being used in the right way. That is: whether humans are being freed up to do human things and machines rightly allocated to machine things, or whether we’re just using machines to avoid what we ourselves should really be doing.

There are many tasks that are exclusively the domain of humans and will never leave that domain. At the heart of this is the need to engage in tension and consciously usher in the future. An AI simply cannot do this. Machine learning simply cannot do this. What it can do, and do very well, is generate and recombine from what already exists.

Which is also what makes it a profoundly useful and enabling tool. Navigate its challenges well and it truly becomes freedom tech, but only where it enables humans to be more human. Where humans use it to avoid the task of being human, it will produce the worst kind of so-called art, deception and abuse.

One of the most useful applications of AI is in opening up access to truth and knowledge, in both personal and organisational contexts. Who can really knock the use of AI for assisting with learning and opening up access to knowledge, provided that knowledge is true and accurately conveyed?

My own experience working for and alongside large organisations showed me just how much life is wasted simply trying to access current information and data, and that when you do get an answer to an apparently benign and straightforward question, you often cannot even trust it.

I’m currently building a solution to exactly this kind of problem—including an AI guide for the Hestia community based on MiC frameworks. More on that soon.

While much is also being said about the dangers and challenges of AI adoption (I touch on some of them here), there is one particular danger I think isn’t receiving enough attention, and that is what I think of as the ‘AI trust gradient.’

This danger is about ‘AI trust’ because it arises from the dishonest misallocation of AI resources to human tasks and the attempt to pass off AI’s generation as one’s own; and it’s a ‘gradient’ because it rests on a temporary imbalance of perceptions and tension that must eventually reconcile.

Attempts to exploit the AI trust gradient are also pretty much ubiquitous now, and the above LinkedIn experience wasn’t my only source of inspiration for this week’s post.

Anyone who runs a business and has their email address on a list somewhere will be receiving countless automated emails by now. Automated communication isn’t necessarily a problem. However, the majority of them open with brazen lies like “I’m reaching out to you personally” and “I couldn’t help but notice that…”, apparently hoping that you still won’t realise.

I don’t respond to 99% of them. The exception is where I feel there is alignment between what the sender is selling and doing and what they’re communicating, and where there’s reason to believe they aren’t being fundamentally dishonest or implicitly contradicting themselves.

One reached out this week offering a certificate service for my Know Yourself Programme, and I decided to respond. I heard back very quickly but did not reply for 48 hours. Had they not then followed up again saying “Max, did I lose you?”, I might have continued the conversation they wanted. But by then it was clear this communication had originated with someone who was not only not respecting my freedom, but using AI to posture as themselves.

So I thought I’d have some fun and respond by asking the question directly, at which point whatever I was talking to made a show of coming clean. Whether the response I then received came from a real person was beside the point: the point was that I now had no way to know for sure, nor any reason to trust them.

The problem here, again, was not the use of AI in itself. It was the pretence that I was speaking to a real person when they knew full well I wasn’t. It was a disconnect between what they were willing to give and what they expected in return: trying to engage my humanity while offering none, only an illusion of it, and yet somehow simultaneously expecting me to want to work with them. Another example of someone attempting to remain ignorant of their own contradictions by dumping them onto others.

Exploiting trust is nothing new in business. As with every transformative new technology that comes on the scene, genuine progress is countered to a large extent by those who use the tools for deception and personal power. The supposed ‘benefit’ here comes not from actual productivity gains but from other people not knowing what you’re doing. Now, with AI, it’s very important to know the difference and, I think, to adopt a new kind of etiquette that will help us use this technology responsibly going forward.

This is the AI trust gradient. It can alternatively be thought of as the ‘AI productivity-performance gradient,’ with ‘performance’ in the sense of ‘performing’ for others, to convince them of a false narrative. There’s nothing wrong with being productive, and there’s nothing wrong with performing. The problem is when you attempt to do one under the guise of the other. What a lot of people are calling ‘AI productivity’ right now is really ‘AI performance’: using AI covertly while pretending that you aren’t, and attempting to capitalise on the naiveté of those who don’t realise.

In digital marketing, this is the most recent iteration of what is often questionably described as “doing what’s working now.” There’s a place for operating in a way that meets the world as it’s configured today. Nothing wrong with that. But that’s different from attempting to exploit a temporary productivity-performance gradient, one that must necessarily collapse when, and because, people wise up to what is really happening.

It is surely ironic that, while many are claiming (largely erroneously) that AI is not enhancing productivity at all, a chunk of whatever productivity is being measured is attributable not to genuine gains but to exploitation of the productivity-performance gradient. As this technology’s evolution continues to play out, the real productivity gains will stay while the smoke and mirrors will have to dissipate. The test of whether your AI use is legitimate is whether it can survive disclosure, or whether it depends on concealment.

I’m not suggesting that everything AI-generated from now on needs to be marked as such. Where this becomes relevant, and a danger area, is where authorship matters intrinsically to the task at hand (i.e. where task misallocation is a possibility) or where people are being led to believe the work is human.

Where the task is not intended to be a personal expression or a human synthesis, there’s less risk of confusion and more value in using AI. In tasks like technical report writing, preparing presentation slides, coding, and any time you need to simply and efficiently convey information, AI is a boon, and there is a case for using it liberally without prominent disclosure. In some of these use cases, AI use is already pretty much the norm. Still, some people aren’t aware, and disclosure here should at the very least not harm you if your output and aims have substance.

Where everyone is already on the same page about the use of AI, it is not so necessary to make a great deal of noise about the tool. Coding in most contexts is a prime example. In some forms of media going forward, not least blockbuster movies, use of AI will also become standard, and no one will be deceived if it is not stated explicitly outside of a film’s credits. But attempting to portray an AI-generated character as a real human being in a paid YouTube ad? That won’t cut it.

It’s important, though, not to let any of this become a rationalisation for what is really concealment. What matters, if genuine progress and your own development are things you value (which they are, whether you realise it or not), is that no one is being misled. Including yourself.

As for whether using AI to write your blog post is actually an ‘advantage,’ the suggestion misses the point of what creative output is really for. You don’t think, write and create merely to convey ideas, and not only for other people either. I write these blog posts, for example, not only for others but for myself as well: to help myself think and know what I think. I don’t use AI to write these posts, but I do use it to review them, suggest areas for improvement and test alternative phrasings.

Only a terrible creator would seriously consider using AI to replace their creative work. And its current inability in this department is not merely a hurdle we’ll get over as the technology improves, because this isn’t what the technology is or what it’s for, and suggesting as much betrays a misunderstanding of both the tech and one’s art.

To portray creative and authoritative writing with AI as some kind of productivity ‘hack’ is to tacitly admit a lack of confidence in your own ability to create, and an unwillingness to do the difficult human work of synthesis. You might as well generate an image in Midjourney and say “look, I painted this.”

It suggests that external output and performance are all that matter, and that there is nothing inward to be gained.

The cover image was generated in Midjourney—in case anyone didn’t already know.