Friday, February 6, 2026

Bang average!

All of LLM usage is regression to the mean. From everything I've read about how LLMs work, the models have predictive text as their foundation: they're a higher-end autocomplete, putting down what would usually come next in a sequence. GPT stands for "Generative Pre-trained Transformer". The first word, "Generative", means it creates responses from the data sets it was pretrained on, not from true real-world knowledge. The friendly neighbourhood assistant that OpenAI et al. are trying to sell everyone sounds human but "knows" nothing! It mines its data sets for patterns that approximate sensible answers to the prompts we type into the boxes, with a friendly personality layered on top, depending on how it's coded. (Which, incidentally, is one reason GPT-5 received mixed reviews: some users thought it had become too terse or unfriendly!) And to return to the generative aspect, that's also why an LLM tends to hallucinate. Making up text is its raison d'être; it can't stop doing what it's built to do. It can't say "I don't know" the way a human could, since it has no memory bank or real-world experience, only its training data. If that data lacks the depth to build a coherent answer, it tends to make something up to compensate.
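
To make the autocomplete idea concrete, here's a toy sketch in Python. It "trains" by counting which word follows which in a ten-word corpus, then generates text by always picking the most common continuation. Real LLMs are transformers over subword tokens, not bigram counters, and the corpus here is invented for the example; only the predict-append-repeat loop is the point.

    import random
    from collections import Counter, defaultdict

    # "Training": count which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(word, length=6):
        """Autocomplete loop: predict the next word, append, repeat."""
        out = [word]
        for _ in range(length):
            followers = counts.get(word)
            if followers:
                # Regression to the mean: the most common continuation wins.
                word = followers.most_common(1)[0][0]
            else:
                # No data for this context, but the loop must emit *something*,
                # so it guesses -- a crude analogue of hallucination.
                word = random.choice(corpus)
            out.append(word)
        return " ".join(out)

    print(generate("the"))   # -> "the cat sat on the cat sat" (it loops on the average)
    print(generate("fish"))  # "fish" ends the corpus: no data, so it guesses anyway

Note what the toy model cannot do: stay silent. When it hits a context it has never seen, it emits something anyway, because generating is all it does. That, in miniature, is the hallucination problem.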

My point is, making up text by averaging over data isn't innovation. It's creative plagiarism, not creation. Out-of-the-box or creative thinking is not the forte of an averager. And if more and more creators stop writing their own text and lean on LLMs instead, won't everyone start sounding exactly the same?

Can an LLM generate creative solutions to problems? Can it be an aid to critical thinking, an assistant you can bounce ideas off to test whether they will work or not? Can a scientific researcher use it to generate the crucial ideas that move the boundary of human knowledge? I believe the answer is a resounding "No". What LLMs can do is help humans become better hacks! They can help organise, automate simple routine work, and create templates for emails and reports. But crucially, only someone without conscientiousness or regard for their reputation would send LLM-generated material to clients, or publish it, without reviewing and cross-checking. And for all the hype over AI replacing humans, would a manager trust a non-human agent to take on the responsibility of a job? I doubt it. The reason? That line at the bottom of every LLM chat box today: "AI-generated content may be incorrect"!
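
A minimal sketch of the only workflow I'd trust, in Python. Everything here is hypothetical: draft_with_llm is a made-up stand-in for whatever model API you actually call. The point is the mandatory human checkpoint before anything goes out.

    def draft_with_llm(prompt: str) -> str:
        # Placeholder: imagine this calls your LLM of choice.
        return f"Dear client,\n\n<draft reply to: {prompt}>\n\nRegards"

    def send_to_client(text: str) -> None:
        print("SENT:\n" + text)

    def routine_reply(prompt: str) -> None:
        draft = draft_with_llm(prompt)
        print("DRAFT (AI-generated content may be incorrect):\n" + draft)
        # The crucial step: a human reviews and edits before anything
        # leaves the building.
        if input("Approve and send? [y/N] ").strip().lower() == "y":
            send_to_client(draft)
        else:
            print("Held for human rewrite.")

    routine_reply("status update on the Q3 report")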
