Wrath of the AI-Zilla
I have seen a strange phenomenon on LinkedIn over the past two years. Ever since generative AI seemingly popped out of nowhere in 2022, a pervasive fear has plagued many folks in the industry, largely due to an inadequate understanding of the technology. GenAI was hardly the first AI product that companies were dabbling with. Earlier waves of artificial intelligence gave us prediction engines, recommendation engines and insight engines, all part of narrow AI and covering fairly simple use cases. The advent of GenAI, essentially a cross between deep learning and #nlp, created the first AI product powerful enough to revolutionise things for industries and governments.
Ever since GenAI products launched and began to evolve, many industry folks across the organisational hierarchy have found it difficult to adapt to this meteoric change. Now, with DeepSeek AI showing the world that powerful models can also be built with less capital and cheaper hardware, things are getting seriously interesting. Every time I speak with an industry leader I can sense their concerns around AI. In fact, many senior leaders in product, operations, marketing and sales find it difficult to understand the technology, let alone identify use cases for it in their respective organisations.
So I find three different types of folks on LinkedIn and social media. There are barely 1% of folks who actually have the competence to build powerful foundation models, and products around them, that can solve a particular problem effectively. These folks understand AI inside and out and have a comprehensive understanding of how it impacts industries across the globe. They are the silent or loud change makers who are actually building things around #AI and are the ones progressing fastest toward #AGI.
Then you have the second category of folks who might not be the most competent scientists or engineers but have somehow mastered several different AI products. They understand these products inside and out and use them to solve everyday problems that businesses or consumers face. They have a conceptual understanding of how the products work but are disconnected from the implementation details. In fact, that very abstraction lets them use these tools so extensively that they can now do almost anything with them. This group makes up, say, just 5% to 7%.
The third category contains the vast majority of folks, namely students, professionals and industry leaders. A large number have never written code in their lives and so do not really understand the nuts and bolts of these technologies. Professionals who were comfortable doing their jobs for years with the bare minimum of upskilling, and who grew organically into positions deeply involved in people management, are the ones most affected by this meteoric change. No doubt GenAI has rendered a lot of roles meaningless, and with the growing sophistication of the technology we are moving toward AGI.

For the novice (a funny choice of word considering I too am a novice), there are three broad categories of AI: ANI or artificial narrow intelligence, AGI or artificial general intelligence, and ASI or artificial superintelligence. ANI involves using certain AI algorithms to solve simple problems; the use cases don't extend beyond a certain point. Then we have AGI, where AI becomes as smart as humans or biological intelligence and starts doing everything we do effectively; GenAI is perhaps the first step toward that. ASI is where AI becomes far smarter than biological intelligence, and that is pretty much in the realm of science fiction for now.
What the GenAI explosion has done is create an economic model where human workers can be replaced by synthetic agents that work tirelessly around the clock and are more economically viable for the organisation. Imagine the time it would take a developer to code one component, create unit tests and get their hands dirty debugging as and when needed, and a developer like that would charge a premium. Today Devin AI or other AI agents can do those jobs at a fraction of the cost. So it will become more and more economical for companies to let engineers go and keep a lean structure where the majority of the work is assigned to agents under the supervision of a few competent engineers. That will compress the fat OPEX most companies carried and make them more and more profitable. In the agentic economy that is gradually kicking in, you'll find more and more jobs being replaced by agents. In this dynamic environment, where we are slowly heading toward AGI, a lot of folks seem lost. I recently spoke to a few industry leaders, and it seems AI is still a puzzle for most folks in the industry. Most people in the aforementioned third category are finding it difficult to see how they can adapt to this change.
For instance, earlier a product manager without a technology background could still operate as long as they understood customer pain points effectively. Now, if you want to become an AI product manager, the metamorphosis is only possible if you have an innate understanding of how AI works. Unless you have built a small model yourself, you will not understand many of the algorithms, libraries or APIs used extensively in the AI product development lifecycle. Be it a 0-to-1 product or a 1-to-n product, a comprehensive understanding of the technological aspects of AI is required before one delves into solving business use cases with it. I have spoken to hundreds of students and professionals over the past few months, and I see a sense of fear that has crept in. The fear might be a combination of the global recession, which led organisations to downsize a lot of folks after the pandemic, and partly of AI coming up in a big way.
At this juncture, all this large group of people are doing is reading news, books and blogs about AI and listening to podcasts about AI, but not many are actually doing anything to understand the technology from scratch. The reason is that it takes a lot of hard work to understand data science and AI: one needs a solid grasp of the fundamentals of math, including statistics, probability, linear algebra, calculus and coordinate geometry. Unless you understand the math behind the algorithms, chances are you will have no clue whether what you are doing is right or wrong. So one would have to delve deep into a lot of math and programming to be able to build models, or to use and train existing ones. Check out the wonderful podcast where Aravind Srinivas, the CEO of Perplexity, tells Lex Fridman what it takes to build the best search engine. So essentially the product management role will become insanely technical, and product managers who are not well versed in technology will not have a job sooner or later.
To conclude, as time passes we will see a deeply unequal world paved by AI and corresponding technologies, where the experts in and power users of the technology will draw huge economic incentives while the rest survive on meagre financial means, mostly at the disposal of the small minority who control all the resources. From a democratic system we will enter the worst oligarchic model in the history of mankind. When most people are rendered jobless, governments will eventually resort to universal basic income. Earlier I was a bit skeptical about the damaging impacts of AI, but now I am fully convinced that AI will reshape human history forever. Maybe Harari will have to write another one of those bestsellers soon enough, or who knows, perhaps Skynet will decimate the human population. Whatever the future might hold for us, it all started in 2022.