Lost Among Notes


Synecdoche, AI

I’m pretty skeptical about the hype around today’s machine learning, large language models, and generative AI.
To be clear, I’m not skeptical about AI in general as a discipline with a bright future. I believe that one day there will be programs that deserve to be called intelligent. I see many people wielding arguments against the possibility of real AI, arguments that range from invoking Gödel’s incompleteness theorem to invoking some quantum-mechanical magic that supposedly makes human brains possible and could be replicated only by god.
I’ve never been convinced by those arguments.

Back to the current hype: there are thoughtful critics of it. Ed Zitron, for example, writes good essays. I also like Ted Chiang’s articles for The New Yorker, which are less topical and more broadly conceptual.

There’s something else that bugs me about the current AI propaganda. Something small but noticeable: the way ChatGPT and the other generative AI tools in fashion today are treated as synonymous with “A.I.” For instance, I just looked at a news feed, and there’s this headline from CNN: AI is getting better at thinking like a person. Nvidia says its upgraded platform makes it even better.

Some people recognize that these tools are not intelligent, so the term A.G.I. is now thrown around. On one hand I’m glad of the tacit admission that generative A.I. is not real intelligence. On the other hand I rage at the salesmanship. Since the current tools have become effectively synonymous with A.I., a bigger term is needed to grow the pie. Semantic inflation devalues words.

Describing A.I. as Large Language Models and Machine Learning is about as meaningful as describing mathematics as differential geometry plus combinatorics.

Many of the predictions about the future from Sam Altman, Ray Kurzweil, and other opinion makers and shakers read something like “In the next few years, mathematics is going to advance at such a pace that soon all of physics will be solved, and then physics will be over.”
No doubt this objection only proves my narrow-mindedness. See, Nvidia’s upgraded platform will keep making generative A.I. “even better” until all of mathematics is solved, and then of course so will physics be.

I for one don’t expect to see Moore’s law making good on the wild predictions.