Discussion about this post

Michael Carter:

Good parallel there, Holly. As a semi-retired engineer with 43 years in tech, I find it troubling when all manner of otherwise well-educated folks start stirring up fear over how AI "thinks," insisting it will become "self-aware" at some point and that we should be enacting all kinds of laws in response.

The problem with fear-mongering is that it elicits unwarranted actions. The harm is twofold: first, people begin to think of LLMs and 'AI' as truly reasoning appliances; second, government loves to get its hoary paws into anything, and fear provides the requisite smoke screen to enact cumbersome and unnecessary restrictions that prop up more intrusive and ever-expanding authority.

There are truly massive gaps in the public and scientific understanding of what 'consciousness' really is and what accounts for 'reasoning' and 'thinking' -- I think you did well to touch on those. Personally, I think more distinctions are required: folks need to come to the understanding that our ability to think, reason, and arrive at understanding can never be replicated by LLMs or so-called AI.

I think the biggest threat "AI" poses lies in attributing more "respect" to it than it rightly deserves. That tends to drive the gullible into using its 'answers' to make important decisions (and take commensurate actions) without applying true critical thinking skills.

WZ:

Nicely done, Holly. I find myself constantly explaining to people that LLMs (which, as you rightly say, have become ubiquitously referred to as AI) are still computer algorithms that require logic, data, and training. They are not reasoning. They are not sentient. They are gathering, sorting, and returning their best algorithmic guess. Thanks for this great write-up!
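The "gathering, sorting, and returning a best guess" description can be made concrete with a deliberately tiny sketch. This is not how a real LLM is built (those use neural networks over billions of parameters), just a hypothetical bigram model that illustrates the same principle: count patterns in training data, then return the statistically most likely continuation. No reasoning is involved at any step.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Gather next-word counts for each word in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1  # record how often nxt follows prev
    return counts

def predict(counts: dict, word: str):
    """Sort candidates by frequency and return the single best guess."""
    if word not in counts:
        return None  # never seen this word: no guess possible
    return counts[word].most_common(1)[0][0]

# Toy "training run" on a nine-word corpus.
model = train("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # "cat" follows "the" most often in the data
```

The model's "answer" is just a frequency lookup; scaling the same pattern-completion idea up by many orders of magnitude is what makes modern LLMs fluent, but it does not change the underlying nature of the operation.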

15 more comments...