Presumably bullshit.
And yet:
This kind of nonsense is what’s feasible with an AI everyone agrees is “not really intelligent.” No model thinks about what code does, but you can ask it what code does, and it will try to tell you. You can also describe what the code is supposed to be doing, and it will try to make the code do that thing. Looping that might turn GetAnAlbumCover into GetAnalBumCover. But the result should return an anal bum cover.
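For the record, that loop is about this simple. Here’s a rough sketch in Python; `llm()` is a made-up stand-in for whatever chat-completion call you actually have, not any real API, and the prompts are just illustrative:

```python
def llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a model, return its text reply.

    Stand-in for whatever chat-completion API you'd really use.
    """
    raise NotImplementedError("wire this up to a model of your choice")


def describe(code: str) -> str:
    # "What's this code supposed to do?"
    return llm(f"In plain English, what is this function supposed to do?\n\n{code}")


def regenerate(description: str) -> str:
    # "Write code that does that thing."
    return llm(f"Write a Python function that does the following:\n\n{description}")


code = """
def GetAnAlbumCover(artist: str, title: str) -> bytes:
    ...
"""

# Round-trip it a few times. Names and details drift
# (GetAnAlbumCover -> GetAnalBumCover), but every pass is the model's
# best attempt at "make the code do the thing I just described."
for _ in range(3):
    code = regenerate(describe(code))

print(code)
```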
“What’s supposed to be happening here?” “Is that what’s happening here?” and “What would make that happen here?” are all questions a neural network could answer. Any functionality can be approximated. Any functionality. LLMs almost kinda sorta do it already, and they’re only approximating “What’s the next word?” It is fucking bonkers that these models are so flexible.
Is this satire? If so, it’s very subtle.