When people talk about “Artificial Intelligence” today, in 2025, they are probably talking about “neural nets.” A neural net is actually nothing like the network of biological neurons you’d find in an animal such as a fruit fly; it is a mathematical black-box function running on a computer which takes some input and produces some output in response. More specifically still, when “AI” is mentioned today what’s being referred to is probably a chatbot built on a Large Language Model (LLM), such as ChatGPT from OpenAI.
A great way to understand LLMs is to watch this video by Andrej Karpathy: Deep Dive into LLMs like ChatGPT
Once a deep learning model is “trained” (programmed, more like - see the video linked above) on its raw data input it works very fast, like our “lizard brain.” I experienced my lizard brain first-hand on a walk in the hills above Lake Como. My eyes saw a snake at the side of the path, and before it had registered with my conscious mind my body had got an instant dose of adrenaline and leapt several feet into the air and down the hill. That wasn’t just a central nervous system reflex like a reaction to pain: it was image processing in some primitive part of my brain that didn’t involve thought. I was in the air before I was aware of the snake, and only as I landed back on my feet did I realise why my body had taken to the air. Similarly, a neural net can respond immediately, and it does not think in the way we use that word when we talk about human thought.
“Reinforcement learning” is a way of programming a neural net, and AI salesmen call models programmed using RL “thinking” or “reasoning” models. But a neural net doesn’t actually reason or think. Logical inference steps known as “chains of thought” are included in the training data and so appear in the probabilistic token chains the model generates as output. However, the system is still doing token sequence prediction, not logic. For a good explanation of what’s going on, read this article by Sebastian Raschka.
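To make that concrete, here is a toy sketch of token sequence prediction. The probability table is hand-written and made up, standing in for the billions of learned weights in a real model, and it conditions only on the last token rather than the whole context. The “reasoning” text comes out because it is the most probable continuation, not because any inference was performed.

```python
import random

# Toy next-token table: for each (last) token, the possible next tokens and
# their probabilities. A real LLM computes these numbers with a huge neural
# net conditioned on the whole context, but the generation loop is the same
# idea: pick a likely next token, append it, repeat.
NEXT_TOKEN = {
    "Q: 2+2= A:": [("Let's", 0.9), ("4", 0.1)],
    "Let's":      [("think", 1.0)],
    "think":      [("step", 1.0)],
    "step":       [("by", 1.0)],
    "by":         [("step.", 1.0)],
    "step.":      [("4", 0.95), ("5", 0.05)],
}

def generate(prompt, max_tokens=10):
    """Emit tokens one at a time: no logic, just stored probabilities."""
    key = prompt
    output = []
    for _ in range(max_tokens):
        choices = NEXT_TOKEN.get(key)
        if not choices:
            break
        tokens, weights = zip(*choices)
        key = random.choices(tokens, weights=weights)[0]
        output.append(key)
    return " ".join(output)

print(generate("Q: 2+2= A:"))
# Typically prints: Let's think step by step. 4
# It looks like reasoning, but it is only a probable sequence of tokens.
```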
Neural nets are very useful systems and can do things that humans struggle to do well. For example, they are good for:
Playing chess or other games like Go.
Detecting faults on a railway line by listening to the trains running over the tracks (a toy sketch of such a classifier follows this list).
Finding tumours in scans.
Summarising and organising tens of thousands of Chinese government policy documents.
Flying a coordinated swarm of drones.
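As a sketch of what one of these narrow-domain systems might look like, here is a minimal fault classifier in the style of the railway example. Everything here is an assumption for illustration: I’m assuming PyTorch as the framework, 64 pre-extracted audio features per recording, and random stand-in data rather than real labelled recordings.

```python
import torch
from torch import nn

# Hypothetical narrow-domain classifier: 64 audio features from a track-side
# microphone in, probability of "fault" out. Architecture and numbers are
# illustrative, not from any real railway system.
model = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake training data standing in for labelled recordings (1 = fault heard).
features = torch.randn(512, 64)
labels = torch.randint(0, 2, (512, 1)).float()

for epoch in range(20):
    optimiser.zero_grad()
    predictions = model(features)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimiser.step()

# Once trained, inference is a single fast forward pass - the software
# equivalent of the reflex described above.
with torch.no_grad():
    print(model(torch.randn(1, 64)))  # e.g. tensor([[0.48]])
```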
Of the examples above, only the Chinese government policy analysis would use an LLM. The other systems use deep learning in neural nets, but not learning about words. Though neural nets are very good in specific narrow domains, they are no good for any sort of art, because all they are able to do is reproduce and average out the real artworks they have been trained on - see part one of this video from Pillar of Garbage: Better a Pig than a Fascist
Neural nets are very useful data processing systems but they are not intelligent. It has been said that today’s neural nets have the intelligence of a cat, but from what I have seen so far I think it’s more like that of an insect, or perhaps a lizard. Of course a lizard can’t write a computer program, and an LLM can emit a computer program given the right prompt. So is an LLM more intelligent? A neural net works like a reflex action in an animal: it takes an input, which might be a prompt to write code to solve some problem, and outputs a small computer program. But it didn’t use any intelligence in the human sense to do that: it generated the code from the code in its training data, now encoded in its deep learning neural net.
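Seen from the programming side, the whole interaction really is just text in, text out. Here is a minimal sketch assuming the openai Python client; the model name is a placeholder, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# One prompt in, one block of generated text out - no dialogue with a mind,
# just a pass through the trained network. Model name is a placeholder.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
# Prints a plausible function assembled from patterns in the training data.
```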
I don’t mean to diminish the utility of neural networks and LLMs. They are useful for all sorts of specific domains. An LLM which is trained by a trustworthy organisation on data of interest would be useful. Generative models which are well trained on software development languages and good code examples are useful as coding assistants. Such tools will make weak devs better and might help good devs be a bit more efficient when used right. However, I don’t think an LLM will ever be able to build non-trivial, secure, maintainable and efficient systems without human expertise, as discussed in this fun video from ThePrimeTime: Its Finally Over For Devs (again, fr fr ong)
In today’s AI discussion actual intelligence, in the sense the word was used in English until 2020, is referred to as Artificial General Intelligence (AGI). OpenAI salesmen say it’s just around the corner and that the models simply need to be bigger to be better, but adding more words and more “training” to an LLM will never make it intelligent. An LLM has no logical inference, no model of the universe, no perception and no self. Without any of those there can be no actual intelligence.
So, given there is no present danger of any actual AI, what is the biggest danger of today’s “AI”? Propaganda & misinformation? Human workers becoming more efficient using “AI” tools and causing some unemployment as a result? Personally I am most worried about the effect on the stock market when the AI bubble bursts.