Rather than something to be feared, AI is just that dysfunctional kid we never liked

The excitement around AI, from both business and finance, is as frothy as it is palpable. My LinkedIn wall, that repository of the most commercially obvious, cannot go two posts without a reference to unemployed lawyers or the NVidia share price. Much like the coming of the Internet, the true market will follow the bubble as surely as this bubble has followed the uneducated speculation. Yet I see far fewer posts from actual operators about AI’s current uses or its direction of travel, due no doubt to the fact that we are still at the nascent stage.

Yet even at this early stage, I would offer a few groundless, non-technical observations about where this is heading, based partly on a recent comment made to me that we are heading away from ‘data science’ and towards ‘content science’. I stand to be proved wrong, of course, but I feel some of the underlying truths will be difficult to challenge. Perhaps in three years’ time it will all look very different, but I will put these down for the record.

1. AI is the student, not the teacher 

To some this may seem obvious, but the reality of GPT and its B2B and B2C offshoots is that they remain permanently there to be taught by humans.

AI will be able to do huge amounts of work for us in the future, no doubt. We are told that these systems will replace lawyers and accountants, for instance, and that the workplace will look different. I would agree with this: they are future colleagues – but the question is which colleague will they be? The nature of the technology suggests the role they will perform is basically that of the new graduate trainee, or possibly even the intern. They are there, and a useful resource for sure, but they will not produce finished products until you give them feedback, repeatedly. After a while you will wish they could at least fetch the coffee for you. Within our lifetimes, they will never have their own office from which to lord it over the saps outside and disappear off to play golf at 2.00pm on a Thursday.

2. AI is a pathological liar

It has been well noted that AI tends to ‘lie’, and many wonder how and why a logical entity would do this. But of course the vast majority of generative AI’s raw material is content that is already out there, and that content has been written by fallible human hands.

In the same way that ‘the Internet’ lies – or rather, is full of untruths, because those producing content are not all well-intentioned – so too do the likes of ChatGPT. Machine learning can only summarise existing knowledge, which is at best imperfect and at worst malicious. This is a poor kid born of a dysfunctional, almost sociopathic family, so what chance does it really have of being a good boy and living a normal adult life later? AI will find itself divorced, with estranged children and alimony of its own, in decades to come. Arguably, the power of AI will actually magnify lies and illogic through its distributive power; the calculation error of a second-rate accountant in 2006 will now become a permanent fixture within our reservoir of collective knowledge, when it need not have done before.

3. AI will reduce innovation 

More controversial than the two previous points is that the net effect of AI, itself an ‘innovation’ driving sky-high share prices, will be to dampen innovation wherever it is used – certainly unless visionary leaders manage it very well.

Referring again to the processes which everyone believes will be the first to be automated: even while people are stunned by the ability of AI to produce a legal opinion that formerly took a highly paid lawyer days, that opinion will be decidedly unexciting. It can, after all, only be the output of decades of other lawyers’ opinions. As a student, AI is that kid who does not really think but is only able to regurgitate the books he happens to have read whilst sat in the corner of the library alone, friendless. Possibly he does this because of his parents’ acrimonious divorce, I don’t know. Either way, because that regurgitated homework is accepted as a decent B+, the room for original thinking is limited. In fact, it basically means the quality of all work will only ever be a B+.

Optimistically, of course, AI frees us up to spend more time on productive, innovative thinking. But be realistic: when robots are cleaning your house, how many people will use that unexpectedly free Saturday afternoon to read Nietzsche or climb Everest, and how many will instead sit with a bag of crisps, rewatching old Netflix series and shouting “they were on a break!” at Rachel? (Note: guilty as charged.) Again, given the power of AI’s distribution, this shift from data to content will actually amplify this ‘anti-innovation’. When we sit down to raw data, we are forced to think; when we sit down to semi-finished content, we will not.

*********

These three issues – that AI is a child, that it comes from a broken home, and that it only studies other people’s books – all link to the end point: the limitations of AI mean that while the world of work may change radically, human innovators will be disproportionately rewarded then, just as they are now. Lots of processes will be automated, but what will not be automated includes not only painting, music and fiction writing, but even areas such as marketing and advertising. I have friends who know how to sell, and sell hard. They already get the best of the opportunities in business today, and this will be even more pronounced tomorrow. The lesson is: where human beings work at things which are truly ‘human’, rather than at jobs where they are poor substitutes for machines (such as road sweepers, or boy bands), they will always find a layer of economic reward above that of the machines which serve us. Good jobs aren’t dead.

Perhaps therefore the scariest thing about AI is the mirror it is holding up to our own faults and shortcomings. It is preserving and multiplying all of our past errors and sentiments, most of which we would rather forget, like a scene out of Monty Python.

And like that self-destructive black-sheep cousin, or that girlfriend who only likes guys who are bad for her, AI does not learn from its mistakes, however much people point them out. Because from its agnostic standpoint, AI will tell you there is no right and wrong, no good or bad – just the content.

And whose content is that? Yours.
