Synecdoche, AI
I’m pretty skeptical about the current hype around machine learning, large language models, and generative AI. There are thoughtful critics of all this. Ed Zitron, for example, writes good essays. I like Ted Chiang’s articles for The New Yorker, which are less topical and more broadly conceptual:
- Why A.I. Isn’t Going to Make Art
- ChatGPT Is a Blurry JPEG of the Web
- Why Computers Won’t Make Themselves Smarter
There’s something else that bugs me about the current hoopla. Something small but noticeable. The use of ChatGPT, or of whatever other generative AI tool is in fashion today, as synonymous with “A.I.” For instance, I just looked at a news feed, and there’s this headline from CNN: “AI is getting better at thinking like a person. Nvidia says its upgraded platform makes it even better.”
Some people recognize that these tools are not intelligent, so the term A.G.I. is now thrown around instead. On one hand, I’m glad of the tacit admission that generative A.I. is not real intelligence. On the other hand, I rage at the salesmanship. Since the current tools are now effectively synonymous with A.I., the pie just gets enlarged with a bigger term. Semantic inflation devalues words.
Describing A.I. as Large Language Models and Machine Learning is about as meaningful as describing mathematics as differential geometry plus combinatorics.
Many of the predictions about the future from Sam Altman or Ray Kurzweil or other opinion makers and shakers sound something like “In the next few years, mathematics is going to advance at such a pace that soon, all of physics will be solved, and then physics will be over.”
No doubt, this objection proves my narrow-mindedness. See, Nvidia’s upgraded platform will keep making generative A.I. “even better” until all of mathematics is solved, and then of course physics will follow.
I, for one, don’t expect Moore’s law to make good on these wild predictions.