20 Comments

Good parallel there, Holly. As a semi-retired engineer with 43 years in tech, I find it troubling when all manner of otherwise well-educated folks start stirring up fear over how AI "thinks," insisting it will become "self-aware" at some point and that we should be enacting all kinds of laws, etc.

The problem with fear-mongering is that it elicits unwarranted actions. This is twofold: first, people begin to think of LLMs and 'AI' as truly reasoning appliances; second, government LOVES to get its hoary paws into anything, and fear provides the requisite smokescreen to enact cumbersome and unnecessary restrictions that prop up more intrusive and ever-expanding authority.

There are truly massive gaps in the public and scientific understanding of what 'consciousness' really is and what accounts for 'reasoning' and 'thinking'; I think you did well to touch on those. Personally, I think more distinctions are required: folks need to come to the understanding that our ability to think, reason, and come to understandings can never be replicated by LLMs or so-called AI.

I think the biggest threat "AI" poses lies in the attribution of more "respect" to it than it rightly deserves. This tends to drive the gullible into using its 'answers' to make important decisions (and take commensurate actions) without applying true critical-thinking skills.

author

Thank you! And yes, good Lord, keep the government as far away as possible.


Excellent post. One of the corollaries of LLMs being gigantic predictive text engines (and it doesn't matter whether they predict just the next word or the next paragraph) is that they are inherently prone to hallucination. They just make things up as they go along. Often what they make up turns out to be true, or true enough to be useful, because they are regurgitating things they were trained on that were true, but sometimes they will simply create something that looks plausible but is wrong.
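To make the "predictive text engine" point concrete, here is a deliberately tiny toy in Python: a bigram model that picks a plausible next word from simple counts. A real LLM is vastly more sophisticated, but the shape of the idea (generate a likely continuation, whether or not it happens to be true) is the same.

```python
# Toy illustration only; nothing like a real transformer-based LLM.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which words follow which in the "training" text.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8):
    """Repeatedly pick a word that plausibly follows the previous one."""
    word, output = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        word = random.choice(options)  # plausible, but never checked for truth
        output.append(word)
    return " ".join(output)

print(generate("the"))  # fluent-looking nonsense is a perfectly valid output here
```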

That means that blindly trusting the output of an LLM is extremely dangerous, as I wrote earlier this year: https://ombreolivier.substack.com/p/llm-considered-harmful?r=7yrqz

I'm about to edit that post to add a link to this one as a primer on how LLMs work.


Nicely done, Holly. I find myself constantly explaining to people that LLMs (which, as you rightly say, have become ubiquitously referred to as "AI") are still computer algorithms that require logic and data and training. They are not reasoning. They are not sentient. They are gathering, sorting, and returning their best algorithmic guess. Thanks for this great write-up!

author

Thank you for reading!!


One of the most useful pieces you’ve written. I’ve struggled to explain this to friends and colleagues before, and you’ve done a much better job. Perhaps as a follow-on: a piece on how machine learning is not instantly synonymous with LLMs, and some of the ML/AI processes that are a little more black-box.

author

Thanks! I’m going to consider that. Random Forest, maybe.
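For the curious: a random forest is a good example of "classic" machine learning that has nothing to do with language. Here is a minimal, purely illustrative sketch using scikit-learn's built-in iris dataset.

```python
# Minimal sketch of non-LLM machine learning: a random forest on a tiny tabular dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # flower measurements and species labels, no text anywhere
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)         # "training" here means fitting many decision trees
print(model.score(X_test, y_test))  # accuracy on held-out data
```

The model never sees a word of English; it just partitions numbers, which is a big part of what ML/AI actually does outside the current LLM spotlight.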


Thanks, Holly! Great explanation.

You ask how we use GPT in homeschooling…

I don’t.

Using so many “shortcuts” (or outsourcing our heuristics?) concerns me, especially in the formative years of teaching our little ones how to train their brains to think.

Yes, it’s inevitable that LLMs are becoming a powerful factor in how humans interact with each other. It concerns me how widespread use will change our individual and social ways of doing things, and our brains’ ability to stay strong in certain areas. A small analogy: as writing and reading became more common, we lost our ability (and, yes, our pressing need) to memorize entire books’ worth of material, or even to remember in which field we planted the wheat or the barley.

How has this loss of memory changed us in ways we don’t even understand?

How will LLMs change us? I think we will see the answer unfold over the next 10-100 years.

But for now, I’ll stick to good old-fashioned books and human conversations as much as possible. 😉

author

Good for you! I approve of this approach.


Very clear and concise summary, thank you very much for sharing it. I particularly like your points about why LLMs stink at math 🧮 (which has to be the weirdest thing on Earth) and why the biases are so obvious (which perhaps goes back to your other points about bad programming and indifferent programmers?). Would it be fair to say that LLMs are, at best, HAL 9s instead of the HAL 9000s that some people are making them out to be?

And I really admire the geometry bits!


Thanks for this. For readers who find it TL;DR, I have a shorter, less comprehensive version: https://frank-hood.com/2024/01/10/fear-and-envy-of-ai/.

Also, very good point about people mistaking LLMs for all of AI just because LLMs are all the rage right now. I even find myself falling into that trap occasionally. I think LLMs are a good teaching point to show young folks that Wikipedia and news organizations are frequently the same thing, not least because the LLMs are trained on them in an iterative feedback loop. I trained in history back in the old days, when a huge distinction was made between primary, secondary, and tertiary sources and their reliability. Do they still teach history that way? Do they still teach history at all?


The feedback loop is a real problem for search engines, and potentially for their users, as search engines get polluted with LLM-generated false information.

author

It's going to get worse. Since places like Reddit and Twitter are about half bots, we now have bots training LLMs. Lovely.


I refer you to my comment earlier regarding LLMs being untrustworthy.


Your last point got me thinking. There's definitely a valuable distinction to be made between "the process of tuning model weights from data" and "additional context provided as input to the model" (which I believe is how personalization works?). But informally, outside ML jargon, we often refer to helping someone adjust to a new context as "training". For example, new employees at a company undergo "induction training" to familiarize them with how things work there. I also often refer to influencing the recommendations apps like Spotify and YouTube show me as "training my algorithm", and despite the abuse of terminology, it doesn't seem to cause confusion (that I've noticed). But, like you say, sometimes the lack of precision here introduces confusion, and I don't have any better words for this, so maybe the best way forward is to tighten up what "training" means.
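To make that distinction concrete, here is a toy contrast in Python (made-up names and numbers, not any vendor's actual API). In ML-sense "training," the model's weights actually change; in prompt-based personalization, the weights stay fixed and only the input text changes.

```python
# Illustrative contrast only; both pieces are toys, not a real LLM or vendor API.

# 1) "Training" in the ML sense: a gradient step that changes the model's weight.
def training_step(weight, x, y_true, learning_rate=0.1):
    """One gradient-descent step for a toy model y = weight * x with squared error."""
    y_pred = weight * x
    gradient = 2 * (y_pred - y_true) * x       # derivative of (y_pred - y_true)**2 w.r.t. weight
    return weight - learning_rate * gradient   # the weight itself is updated

# 2) "Personalizing" via context: the weights never change; only the prompt does.
def personalized_prompt(user_notes, question):
    return f"Things to remember about this user: {user_notes}\n\nQuestion: {question}"

w = 0.0
for _ in range(50):
    w = training_step(w, x=2.0, y_true=4.0)    # converges toward weight = 2.0
print(round(w, 3))

print(personalized_prompt("prefers metric units", "How tall is Mount Everest?"))
```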

Great summary, by the way! Keeping it in mind for next time I need to send someone an explainer.

author

Thank you!


Truly valuable AI does exist, but it typically isn't publicly visible yet; it's doing things like drug formula discovery. I believe the COVID vaccines were developed using such tools, so we aren't seeing the true value of AI publicly yet. As with the examples you gave, these systems still need a human to sanity-check their outputs.

One key thing to remember: if a service is "free," you are the product. All the people using these systems are actually doing the training work for Google and the other providers.


Pretty awesome explanation.

You have a talent for this, which is very appreciated.

author

Thank you!!
