Odd, no matter how many people keep insisting it’s a scam and it doesn’t work, it nevertheless keeps on working when I use it.
Maybe they’re not using it right.
You’re not a detail-oriented person, are you?
“doesn’t work” doesn’t mean the AI literally does not produce any output or do anything, it means it has so many flaws it’s just a fundamentally bad technology to be using.
And don’t worry, I’ve got sources.
LLMs still routinely hallucinate, and even implementations used by AI safety researchers can’t help but automatically wipe email inboxes without permission. They atrophy your brain the longer you use them, create both general and emotional dependency, and deskill you at your job. They produce content rated worse by humans and by the AI models searching for trustworthy sources alike. And to top it all off, scaling laws are already failing to improve AI models enough to fix these problems, companies aren’t seeing returns, the economy has gained essentially nothing from AI investment, usage, and growth, and public perception among the people most affected by AI keeps getting worse while the people financially incentivized to keep building it insist it’s going to get better, all while datacenters accelerate global warming and LLMs keep killing people.
I don’t know about you, but I’d rather not support a technology that makes you get fundamentally worse at most cognitive tasks, damages the planet, burns money that could otherwise go to something more valuable, all while randomly killing mentally vulnerable people.
that’s odd i use it daily and it works fine
The doctors who used it daily said it worked fine, and it did. Then those doctors became 20% less capable at identifying tumors in their patients.
The Meta AI security researcher literally said, and I quote: “It’s been working with my non-important email very well so far and gained my trust on email tasks” when asked why she’d given it access to her primary email, where it subsequently started trashing her whole inbox.
All of the participants in the cognitive debt paper’s research had the AI actually produce the results they were looking for, but they all became less mentally capable as a result.
And when a woman in South Korea killed two men using advice given to her by ChatGPT, it worked fine for her, didn’t it?
That’s not to say your use of AI makes you a murderer. Far from it. But we have quite well documented evidence of LLMs simply making people dumber. You are not an exception to that, unless your brain biologically operates entirely differently from everyone else’s.
When you use neurons less, the connections between them become weaker, and fewer new connections get made. When you offload work to something else, like an LLM, you stop training your brain to get better, and you let parts of it slowly die.
Using AI is like using a hydraulic robot to bench press for you. You’re going to move the weights, but your muscle mass ain’t growing.
The more you outsource the very act of thinking to a chatbot, the more your brain comes to rely on that chatbot to think as well as it used to. And when that chatbot regularly hallucinates faulty answers and logic, ignores best practices, implements solutions inefficiently, and gets things wrong, your brain is not improving as a result.
This doesn’t mean you should never use AI. I use it to automatically clean up the transcriptions of my voice notes sometimes, and all that does is save me time from correcting the output of the text I just spoke. It’s genuinely helpful, and doesn’t meaningfully deskill me in any way. But if I used it to try and do everything for me, not only would it have made a ton of mistakes, but I’d then be even less capable of fixing them.
But still, it does deskill you at that task, lest we forget. So if that was a meaningful task at which you wanted to stay adept, you would lose that meaningful skill. AI consistently deskills us at everything we ask it to do instead of doing it ourselves. Anything we are not doing, we are getting worse at doing.
Nobody needs to hear about your ai girlfriend.
I have a theory that supporters of genAI or LLMs are lonely angry neets who just want a sense of control in their radically tumultuous lives.
care to weigh in on my theory of when AI started to help out with this moment in your life?
there’s like 100 million+ users of ai, that’s a lot of neets
the overwhelming majority of chatbot users run on average about 5 prompts per week, or less than one prompt per day, according to OpenAI’s own usage stats.
thanks for the random stat! 👍
“Why are they pushing AI? Nobody wants this!” Meanwhile chatgpt.com is the fifth-most-visited website in the world.
But I suppose people can just wrap themselves in a social media bubble where anyone who says something positive about AI gets downvoted through the floor, and then their view of the world gets curated to look a bit more like how they want it to be.
the overwhelming majority of chatbot users run on average about 5 prompts per week, or less than one prompt per day, according to OpenAI’s own usage stats.
Okay. Not sure the relevance, though. They’re not forced to use it, they choose to go to that site and write those prompts because they want to.
There’s a big difference between having a website that you can choose to engage with and having LLMs jammed into your device’s operating system or programming IDE, where you have to jump through hoops just to disable them. Or jammed into your email, where you’re told your emails are going to be used in training, and if you don’t want that you have to turn off all the smart features, including the ones that aren’t LLM-based.
There are certain use cases I’d be open to, but at least give me a choice when it’s deployed: whether it’s on or off, what it has access to, and an easy way to change those settings.
Right. The website that people choose to engage with shows that people are choosing to engage with AI without being forced to. It shows that the demand for AI is organic and real. Lots of people want to use AI.
Of course they do. People want comfort and AI as it is marketed is the ultimate comfort. Doesn’t change the harm it does at all, but lots of people are eager to dismiss the harm as long as their comfort is assured.
yeah exactly, I also love their ‘it produces NOTHING but GARBAGE’, as if I can’t see exactly what it’s producing every time I make a query which I do multiple times a day 🤯
You’re right, it’s mostly garbage output. I eventually get to a moderately usable answer a lot of the time, but more often than not I have to constantly tweak the prompt or tell it to follow the goddamned system prompts I give it, and it still feeds me obvious bullshit in a quarter to half of its responses.
Maybe you’re working in a common area where the AI doesn’t have to work hard to give you good outputs, but the AI is trash for the tasks I give it.
Yeah, I have deep reservations about the various AI companies, the environmental impacts of the industry, and many of the other issues that people are bringing up here. And, I have still found a few very practical uses.
My partner was fighting with their insurance company about getting reimbursed for several thousand dollars of medical expenses. After a couple of rounds of rejections I had them send me the paperwork, insurance information, and rejection letters and then asked ChatGPT what we should say to get them to reimburse us. It came up with a letter that had the right legal mumbo jumbo to convince the insurance company to agree and pay us. Yes, I could have hired a lawyer, but the legal fees would have eaten up most of the money. And I guess I could have gone to law school, gotten a specialization in insurance law, and figured it out myself. But that also would have cost more time and money.
I still think “AI” is overhyped and has a lot of ethical issues, but there are also some very practical uses.
Thanks for being the example for us