"AI" is just a fancy filter
That doesn't mean it isn't useful – but does it really warrant adding it to every new and old product? Here's a great perspective on AGI, whether it's possible, and how "AI" is actually useful.
Even if a technology does use AI, the haphazard inclusion of AI does not make it automatically better. If duct tape suddenly became a trendy technology, startups would start slapping duct tape on their keyboards just so they could say “made with duct tape,” and large corporations would start putting little pieces of duct tape on existing products so they could say “now made with duct tape.” This doesn’t make duct tape any less useful, but it does make it difficult to discern duct tape’s actual utility.1
I love that analogy.
I think it’s spot-on, too. There’s been so much chatter and hype about “AI” that the term is losing all meaning. Artificial intelligence is a very broad category — sort of like medicine. When your foot hurts, you may say “I’m going to see the doctor,” but what you really mean is “I’m going to go see a podiatrist.” We make assumptions in language, but often those assumptions get us into trouble.
The author, Daniel Warfield, goes on to add:
You can make pretty much anything out of duct tape if you really wanted to, but because it’s so loose it has a tendency to fall apart if you try to make big things with it.
Also very apt, in the context of “AI.”
Daniel’s article, AGI Is Not Possible, is excellent. I highly recommend it. He explains what “AI” really is, and what we can do with it — and what its limitations are. He does a great job using some simple analogies about how machine learning, deep learning, and large language models operate. In the process, you’ll get a much better understanding of the limits that artificial intelligence bumps into.
Daniel’s key point in this article is that AI is nowhere near as developed or capable as human intelligence. AI is a mimic, not a creator; it isn’t capable of independent thought. In fact, AI isn’t capable of thought at all. I think there’s one detail Daniel could have added to his article to really drive this home. He writes that AI is just “filters which turn your input into some output.” He adds that he often gets responses such as, “well isn’t that all people do, they respond to stimuli, they react to the world around them. Maybe that [AI] is just as smart as a person.”
To which I say this: You can train a human to recognize a walrus by showing them one walrus. That’s it. Our amazingly complex brains will then extrapolate and figure out what a walrus is.
There is no AI that can do that. You have to train AI models on huge amounts of data. You have to show the AI hundreds or thousands of different walruses, and then a lot of “not walruses.” And once you’ve done that — all you get is a machine that might be reasonably accurate at comparing a new image to its existing catalog of walruses and “not walruses,” and telling you, “yes, there is a significant probability that this matrix of pixels is a walrus.”
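That catalog-comparison idea can be sketched in a few lines. This is a deliberately naive toy — a nearest-neighbor comparison over tiny made-up “images” with invented names like `walrus_probability` — not a real vision model, but it captures the shape of the process: score a new pixel matrix against stored examples and report a probability, with no understanding of what a walrus *is*.

```python
# Toy sketch of "compare a new image to a catalog of walruses and
# not-walruses, then report a probability." Illustrative only:
# real classifiers learn features rather than storing raw examples.
import math

def similarity(a, b):
    """Cosine similarity between two flattened pixel vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def walrus_probability(image, walruses, not_walruses):
    """Score the image against both catalogs; softmax the best scores."""
    best_yes = max(similarity(image, w) for w in walruses)
    best_no = max(similarity(image, n) for n in not_walruses)
    e_yes, e_no = math.exp(best_yes), math.exp(best_no)
    return e_yes / (e_yes + e_no)  # a number between 0 and 1

# Tiny fake 2x2 "images," flattened to four pixel values each.
walruses = [[0.9, 0.8, 0.1, 0.2], [0.8, 0.9, 0.2, 0.1]]
not_walruses = [[0.1, 0.2, 0.9, 0.8], [0.2, 0.1, 0.8, 0.9]]

new_image = [0.85, 0.9, 0.15, 0.1]  # resembles the walrus examples
p = walrus_probability(new_image, walruses, not_walruses)
print(f"probability this matrix of pixels is a walrus: {p:.2f}")
```

Note what’s missing: the function never extrapolates from a single example the way a person does — its answer is only ever as good as the pile of labeled examples it was handed.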
Daniel Warfield, AGI Is Not Possible, Medium, April 14, 2024.