
Synecdoche, AI

I’m pretty skeptical about the hype around today’s machine learning, large language models, and generative A.I.
To be clear, I’m not skeptical about A.I. in general as a discipline. Nor am I denying that the current tools can produce impressive demos and have an economic impact. But I think these tools have been oversold. The A.I. race we see now is, in my opinion, a modern version of Tulip Mania.

There are thoughtful critics of the current hype. For example, check Ed Zitron’s site [1]. I also like Ted Chiang’s New Yorker articles [2, 3, 4], which are less topical and more broadly conceptual.

There’s also a great article from Rodney Brooks, published after this post: Parallels between Generative AI and Humanoid Robots [5]. Brooks has real credentials, having been a pioneer of robotics and AI at MIT for many years. The full article is worth reading, but the conclusions are especially impressive. He says both generative AI and humanoid robots may make the lives of humans better, but:

it will not be at the physical scale or short timescale that proponents imagine. People will come to regret how much capital they have spent in these pursuits, both at existing companies and in start ups.

There’s something else that bugs me about the current A.I. propaganda. Something small but noticeable: the widespread use of “A.I.” as a synonym for ChatGPT and whatever other generative AI tools are in fashion today. For instance, I just looked at a news feed, and there’s this headline from CNN: “AI is getting better at thinking like a person. Nvidia says its upgraded platform makes it even better.”

Some people recognize that these tools are not intelligent, so the term A.G.I. (Artificial General Intelligence) is now thrown around. On one hand, I’m glad of the tacit admission that generative A.I. is not real intelligence. On the other hand, I rage at the salesmanship. Since the current tools have become effectively synonymous with A.I., a bigger term is needed to grow the pie. Semantic inflation devalues words.

Describing A.I. as Large Language Models and Machine Learning is about as meaningful as describing mathematics as differential geometry plus combinatorics.

Many of the predictions about the future from Sam Altman or Ray Kurzweil or other opinion makers and shakers read something like: “In the next few years, mathematics is going to advance at such a pace that soon all of physics will be solved, and then physics will be over.”
No doubt, this objection proves my narrow-mindedness. See, Nvidia’s upgraded platform will keep making generative A.I. “even better” until all of mathematics is solved, and then of course all of physics too.

I, for one, don’t expect to see Moore’s law make good on these wild predictions.