Think About “AI”

When people talk about “Artificial Intelligence” today, in 2025, they are usually talking about neural networks. Despite the name, a neural network is nothing like the network of biological neurons you’d find in an animal such as a fruit fly; it is a mathematical black-box function running on a computer, which takes some input and produces some output in response. More specifically still, when “AI” is mentioned today what’s usually being referred to is a chatbot built on a Large Language Model (LLM), such as ChatGPT from OpenAI.
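To make the “black box function” idea concrete, here is a minimal sketch in Python. The sizes and weights are arbitrary placeholders (a random, untrained toy, not any real model); the point is only that the whole thing is a fixed function from input numbers to output numbers.

```python
import numpy as np

# A toy two-layer neural network: just matrix multiplies and a non-linearity.
# The weights are random placeholders, not a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # layer 1 weights and biases
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # layer 2 weights and biases

def neural_net(x):
    """Map a 3-number input to a 2-number output."""
    hidden = np.maximum(0, W1 @ x + b1)   # ReLU activation
    return W2 @ hidden + b2

print(neural_net(np.array([1.0, 0.5, -0.2])))  # some output numbers: the 'black box' in action
```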

A great way to understand LLMs is to watch this video by Andrej Karpathy: Deep Dive into LLMs like ChatGPT

Once a deep learning model has been “trained” (“programmed” is closer to the truth; see the video linked above) on its raw input data, it works very fast, like our “lizard brain”, but nothing like our conscious brain. We aren’t really aware of the lizard brain: humans walk, for example, without thinking about it. Like the lizard brain, a neural net can respond immediately, but it does not think in the sense we mean when we talk about human thought.
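To illustrate what “trained” and “works very fast” mean in practice, here is a minimal sketch using a toy one-weight model rather than an LLM: training is a slow tweak-and-check loop over the data, while responding afterwards is a single pass of plain arithmetic, with no thought involved.

```python
import numpy as np

# Toy data: learn y = 2*x from examples (a stand-in for the 'raw data input').
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs

w = 0.0  # the single 'weight' the model will learn

# Training: many slow iterations nudging the weight to reduce the error.
for _ in range(1000):
    predictions = w * xs
    gradient = 2.0 * np.mean((predictions - ys) * xs)  # derivative of mean squared error
    w -= 0.01 * gradient                               # small step downhill

# Inference: once trained, responding is a single fast multiplication.
print(w * 5.0)  # ~10.0, produced with no 'thinking', just arithmetic
```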

“Reinforcement learning” (RL) is another way of programming a neural net, and AI salesmen call models programmed with RL “thinking” or “reasoning” models. These models can ingest sequences of logic-like steps known as “chains of thought”, but a neural net that can string chains of thought together doesn’t actually reason or think.
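The core of reinforcement learning can be sketched in a few lines. This toy example (a three-option bandit, nowhere near the scale or machinery of a real “reasoning” model) shows the mechanism the salesmen are pointing at: outputs that earn a reward become more likely, and nothing resembling reasoning happens anywhere.

```python
import numpy as np

# Toy reinforcement learning: reward-weighted adjustment of numbers, nothing more.
rng = np.random.default_rng(0)
logits = np.zeros(3)                 # scores for three candidate outputs
correct = 2                          # the output the reward signal favours

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax over candidates
    choice = rng.choice(3, p=probs)                 # sample an output
    reward = 1.0 if choice == correct else 0.0      # reward only the 'right' output
    grad = -probs
    grad[choice] += 1.0
    logits += 0.1 * reward * grad                   # nudge towards rewarded outputs

print(np.round(np.exp(logits) / np.exp(logits).sum(), 3))  # probability mass shifts to index 2
```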

Neural nets are very useful systems and can do things that humans struggle to do well. For example, neural nets are good for:

Of the examples above, only Chinese government policy analysis would use an LLM. State-of-the-art AI isn’t blithering LLMs; it’s in Precise Autonomous Mass (PAM) weapons like Helsing drones. See also this video from 14:00, and this one, also from 14:00.

Though neural nets are very good at some specific tasks, they are no good for any sort of art, because all they can do is plagiarise and average out the data they have been trained on; see part one of this video from Pillar of Garbage: Better a Pig than a Fascist

I don’t mean to diminish the utility of neural networks and LLMs; they are useful for all sorts of specific tasks. An LLM trained by a trustworthy organisation on data of interest would be genuinely useful. I imagine generative models would be really useful for fuzzing in cyber attacks. Generative models well trained on programming languages and good code examples are useful as coding assistants: such tools can regenerate simple apps similar to what they have ingested and can produce code snippets. They can make weak devs better and might help good devs be a bit more efficient. However, I don’t think an LLM will ever be able to generate non-trivial, secure, maintainable and efficient systems without expert humans in control, as discussed in this fun video from ThePrimeTime: Its Finally Over For Devs (again, fr fr ong)
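The fuzzing use mentioned above is easy to sketch. In this toy version the “generative model” is replaced by a crude random string generator, and the target is a deliberately fragile, made-up parser; a real setup would point model-generated inputs at real software and collect the crashes.

```python
import random
import string

def fragile_parser(text: str) -> int:
    """A deliberately buggy stand-in for real software under test."""
    key, _, value = text.partition("=")
    return len(key) // int(value)          # crashes on non-numeric or zero values

def random_input(rng: random.Random) -> str:
    """Crude input generator; a generative model could propose far more realistic inputs."""
    chars = string.ascii_letters + string.digits + "=;"
    return "".join(rng.choice(chars) for _ in range(rng.randint(1, 12)))

rng = random.Random(0)
crashes = []
for _ in range(1000):
    candidate = random_input(rng)
    try:
        fragile_parser(candidate)
    except Exception as error:             # a crash means the fuzzer found a weak spot
        crashes.append((candidate, repr(error)))

print(f"{len(crashes)} crashing inputs found, e.g. {crashes[:3]}")
```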

In today’s AI discussion, actual intelligence, in the sense the word carried in English until about 2020, is referred to as Artificial General Intelligence (AGI). OpenAI salesmen say it’s just around the corner and that the models simply need to be bigger to be better, but adding more words and more “training” to an LLM will never make it intelligent. An LLM has no logical inference, no model of the universe, no perception and no self, and without any of those there can be no actual intelligence. Today’s robots, on the other hand, are an insect-like start.

So, given there is no present danger of any actual AI, what is the biggest danger of today’s “AI”? Propaganda and misinformation? Human workers becoming more efficient using “AI” tools and causing some unemployment as a result? Personally, I am most worried about the effect on stock markets when the world realises OpenAI is the biggest ever investment scam and the AI bubble bursts.


Website generated using pandoc from this source