Around half of people are worried they'll lose their job to AI. And they're right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more safely than humans, and make accurate medical diagnoses. And it's set to continue improving rapidly. But what's less appreciated is that, while AI drives down the value of skills it can do, it drives up the value of skills it can't, because those skills become the bottlenecks to further automation (for a while at least).
That is the point where I stopped reading.
Yes, the author of this article should worry about AI, because AI is indeed quite effective at writing nonsense articles like this one. But AI is nowhere near replacing real specialists. And it isn't a question of quantity; it's a fundamental question of how modern "AIs" work. As long as those principles don't change, AIs won't be able to do any job that requires logic and stable, repeatable results.
It can complete coding tasks. But that's not the same as replacing a developer, just as cutting wood doesn't make me a carpenter and soldering a wire doesn't make me an electrician. I wish the AI crowd understood that.
It can complete coding tasks, but not well AND unsupervised. To get it to do something well, I need to tell it what it did wrong over four or five iterations.
80,000 Hours are the same cultists from LessWrong/EA who believe the singularity is coming any day now, and they're also the core of the people trying to build their imagined machine god at OpenAI and Anthropic.
It's all very much expected. Verbose nonsense is their speciality, and they were producing it long before chatbots were a thing.