I know, I know, straight after starting this whole thing, I went on vacation. And how I went on vacation. Anyway I neglected this for a couple of weeks. But I did my work and in the next couple of days you will receive a heavy dose of three mails / posts.
The topics are
why predictions on the endgame of AI are currently futile and thus, why so many false prophets are around.
how I compare this to making Internet predictions in 1995 (which was useless) vs. 2000 (not useless) and finally
what does this mean, how can we responsibly act as a human or a company in those times.
Without further ado, here we go!
The Golden Age of Wrong Prophets
There are a lot of experts out there with a lot of predictions and advice. Where will AI go? How will it change employment or not? How does it change your role in your company? Etc.
What I am saying is the following:
Those who think they know everything, know nothing.
It’s just not the time for certainties, as uncomfortable as it may be. But thinking in certainties will lead you in the wrong direction. To explain a bit further, I’ll later go back in time, way back … to 1995.
My point is that viewed in parallel to the “development of the internet” it’s 1995, not 2000 - or even 2005 - and that means: we know nothing yet. Especially the prophets. So better not listen to them.
To also make this a bit of constructive and not simply “we have no clue, so duck and cover”, I’ll also mention what good practices are in times of “we know nothing”.
Let’s go for the ride!
It drives me crazy
It’s one thing that a ton of people that can read the letters “A” and “I” are now all of a sudden experts in AI (even though they don’t know the source of hallucinations in LLMs or why they can’t go away (at all)). Worse, there are also experts that give you generalized advice on how your role, your job will change. And what, following their advice, accordingly you have to do and change. Some of them are happy to sell you a freshly curated “AI for X” program. Some are simply out for reach and influence. The LinkedIn post hastily run through an AI of course. Automated probably. You sense it. No actual knowledge required. Where there is a gold rush (although as with any gold rush - and this is just a gold rush for some - the crooks are not far off the trek to the mines).
Disclaimer: AI is great! This is not to criticize AI as such, but the social systems around it. I love AI, I use it all the time. This text is simply to give context and understanding to the hype around. To be critical of AI and embracing the current state of AI and using it is not mutually exclusive. I know it’s tough to get the nuances across and this not to read “AI sucks” - but I genuinely embrace AI.
State of Predictions
It’s wild, to say the least. A little zoom into the range of predictions floating around:
As a Product Manager you do not have a chance to get another job without being proficient in vibe coding.
There will be no more Product Managers in n years
The Product Manager will become a technical role and morph with the developer role
We will have 99% unemployment by 2030.
As any other innovation before, AI will create new jobs and creative opportunities.
If you don’t invest now, it will be too late.
The bubble is near.
SaaS, especially B2B: From “finally it works” to “the SaaS model is over, everyone can easily build their own applications”.
it goes on and on …
Anecdote on experts: Before Nate Silver changed his business model from Baseball career/value predictions to politics and election prediction, he studied if experts actually give good predictions. He watched hundreds of interviews and analyzed them in hindsight and the basic outcome was: asking an expert is like throwing a dice and pundits have no better rate of successful predictions than non experts. That told him everything he needed to know to dare the jump into politics. At least he had data and a model that worked in one sector. And that was before the “LinkedIn-I am-an-expert-since-two-months” area.
This should give you some background on the reliability of the predictions on AI and what I documented above shows the “dice” element: A lot of those statements are mutually exclusive. They can not all be true at the same time.
Interlude: Limitations of the technology
While trying to squeeze the lemon, most pundits casually forget to mention the current state of technology:
LLMs, the backbone of the current wave, have the following unsolvable issues or “features”:
Unsolvable issues / features:
Hallucinations: LLMs make stuff up, basically to fill the void in the probability distribution. This is the result of the core architecture of LLMs and can not be solved. (Yes, I know about the paper that on first look reads like this could be solved (and by the right experts it is also cited as such), but all the paper describes is how hallucinations can be better mitigated.)
Non-Determinism: LLMs inherently produce non-deterministic output and this need not be seen as a problem, depending on the use case. Guidelines might be: the more open ended a task is (writing copy, storytelling, emails, creating images) non-determinism is potentially a feature. The more regulated the use case or context is, the bigger the issue: regulated environments, medical analysis (the great success cases you hear about in medical analysis, diagnosis etc. are in general not LLMs but specific, highly specialized more classical ML/AI solutions, etc.) If you are a bit more of an expert, you might have heard about the temperature setting in LLMs which dials in the “entropy” aka the level of non-determinism of LLMs. But setting it to zero, does not mean determinism.
Hallucinations and Non-Determinism: A huge part of the appeal of LLMs is actually based on those two properties. LLMs without hallucinations and on determinism would very, very often answer with “I don’t know”. While these properties are not built in intentionally, what they provide is the core of the general applicability of LLMs and as such are the basis of their success: anyone can ask anything and get some kind of a satisfying answer. The more non-determinism and hallucinations, the less there would be mass appeal and a sensation of “this helps” and surprise and anthropomorphism (”oh, this talks like a human”). It is a business decision regarding mass appeal to intentionally make use of hallucinations and non-determinism on a certain level, but both can not be reduced to zero.)
Prompt injection: It is very easy to influence an AI’s output in the prompt. You have certainly realized that AIs tend to approve of your opinion. It goes further. You can of course “dictate” or at least heavily influence a certain output. Cases are documented where applicants hide a message to potential LLMs used in the screening process to prioritize their CV and put it closer to the top of the stack. While this is a relatively harmless example, you can easily imagine scenarios which are not harmless at all.
Training data leakage
Bias inheritance
Energy costs
Copyright issues
… the list goes on and on.
Go ahead and use AI to your will, but know what you are using and if you want to bet your existence or company based on these boundary conditions.
Based on this, the experts cited are not that much to blame for the fuzziness of statements. We can very well blame them for blasting out their opinions as facts, though.
And that’s why the wild predictions about AI today remind me a lot of the internet’s early years. To see why, let’s rewind to 1995 … in the next part … hope to see you there.