LLMs do not possess the ability to reason over the information they are fed. They convert it to numbers and perform arithmetic on it. Augmenting them with scripts won’t change the fundamental nature of how they work.
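To be concrete about what “convert it to numbers and perform arithmetic” means, here’s a toy sketch: made-up vocabulary and random weights, nowhere near a real model’s scale, but the same kind of operations.

```
# Toy illustration only: text -> integer ids -> vectors -> matrix arithmetic.
import numpy as np

vocab = {"papers": 0, "are": 1, "good": 2, "bad": 3}
ids = np.array([vocab[w] for w in "papers are good".split()])  # the text, as numbers

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))  # one vector per token
weights = rng.normal(size=(8, len(vocab)))     # a single stand-in "layer"

hidden = embeddings[ids]           # look the numbers up
logits = hidden @ weights          # do arithmetic on them
probs = np.exp(logits[-1]) / np.exp(logits[-1]).sum()  # softmax over the next token
print(probs)  # that distribution is all the "reasoning" there is
```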
They take information and regurgitate it. There is no analytical capability present that lets them distinguish a minor aside from the main points. They can just as easily combine several separate facts into a single point, or phrase things so that a footnote carries as much weight as the main subject.
Hiding the actual workings behind silly marketing buzzwords serves to sensationalise what these things actually do. It feeds the AI hysteria and further muddles the discussion around them. It’s why laymen think these models are basically magic and buy into the idea that they’re somehow going to solve all our problems.
I love machine learning. It is, and has historically been, a fantastic tool for plenty of tasks, but it isn’t magic.
If I implement a script to automate database migrations during application deployment, I could definitely market that as Deployment-Ready Database Optimisations or some other BS term, but that doesn’t make it more than a simple automation.
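To show how little would sit behind a name like that, here’s the entire hypothetical product, sketched with sqlite and made-up paths just to keep it self-contained:

```
# "Deployment-Ready Database Optimisations", in full: during deploy, apply any
# .sql files in migrations/ that haven't been applied yet.
# (Hypothetical paths and table name; sqlite used only to keep the sketch runnable.)
import sqlite3
from pathlib import Path

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS applied_migrations (name TEXT PRIMARY KEY)")

applied = {row[0] for row in conn.execute("SELECT name FROM applied_migrations")}

for migration in sorted(Path("migrations").glob("*.sql")):
    if migration.name in applied:
        continue
    conn.executescript(migration.read_text())  # run the migration
    conn.execute("INSERT INTO applied_migrations VALUES (?)", (migration.name,))
    conn.commit()
    print(f"applied {migration.name}")
```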
LLMs do not possess the ability to reason over the information they are fed.
Ah, yes, I forgot that if an LLM has no conscious ability to reason, then we shouldn’t have any terminology to describe the general process it’s using to create an output. Case closed. I’m glad you’ve enlightened us about how useful jargon isn’t actually useful. Data goes in, data goes out; you can’t explain that.
That isn’t what I said. You’re doing a pretty good LLM impression yourself.
I hate it when people use unnecessary terms to describe something.
Hiding the actual workings behind silly marketing buzzwords serves to sensationalise what these things actually do.
That is why I hate marketing buzzwords.
Having an LLM process the output of a search over a repository of scientific papers isn’t going to automatically make the output useful or accurate. Papers aren’t necessarily high quality just because they’ve been published; just look at the garbage that Lisa Littman, Kenneth Zucker, and their ilk have shat out over the decades.
An LLM, no matter how many scripts or cleverly written prompts you augment it with, will never be able to differentiate good science from bad, and will give just as much credence to garbage papers as to genuinely good ones. That’s a problem even before “hallucinations” enter the picture.
Edit: I think the overall idea of the site is awesome; knowledge should be freely available. I just don’t see the value an LLM adds. I only see problems with it.
Having an LLM process the output of a search over a repository of scientific papers isn’t going to automatically make the output useful or accurate. Papers aren’t necessarily high quality just because they’ve been published.
For someone who likes to get riled up about people not responding to “what you said”, this whole tangent about the accuracy of RAG and the fact that scientific papers aren’t automatically 100% reliable is pretty hilarious. Literally nobody was arguing that RAG makes the output “automatically useful or accurate” or that published papers are “necessarily high-quality”.
You’re genuinely acting like you’re taking issue with terminology describing a process because that process isn’t perfect. “RAG” adequately describes a general technique for improving the accuracy of an LLM’s response to a query, and all you’re doing now is pissing and moaning that “um, just because it’s published doesn’t mean it’s high-quality”, which has categorical fuck-all to do with the usefulness of the term.
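For anyone following along, the general shape of what the term names is roughly this; a minimal sketch with a toy keyword-overlap retriever and a stand-in model call, not any particular product’s implementation:

```
# Minimal retrieval-augmented generation sketch: retrieve relevant documents,
# put them in the prompt, ask the model to answer from them.
papers = {
    "paper_1.txt": "Abstract of one published paper...",
    "paper_2.txt": "Abstract of another published paper...",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(papers,
                    key=lambda name: len(q & set(papers[name].lower().split())),
                    reverse=True)
    return [papers[name] for name in ranked[:k]]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; swap in whatever API you actually use."""
    return f"[model response to a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    prompt = f"Answer using only the sources below.\n\nSources:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("are these papers any good?"))
```

That retrieve-then-prompt pattern is all the term claims to name; nobody said it vouches for the quality of what gets retrieved.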
We’ll continue to use it, and you’re welcome to continue being annoyed by it.
PS: I write material that LLMs are trained on as a hobby; sorry if it annoys you that my writing style is coincidentally similar.