• jballs@sh.itjust.works · ↑16 · 14 days ago

      That’s exactly it. Here’s a quote from him in the article. Dude is so uninformed that he thinks AI is doing amazing stuff, but doesn’t understand that experts realize AI is full of shit.

      “I pinged Elon on at some point. I’m just like, dude, if I’m doing this and I’m super amateur hour physics enthusiast, like what about all those PhD students and postdocs that are super legit using this tool?” Kalanick said.

      • vzqq@lemmy.blahaj.zone · ↑6 · edited · 14 days ago

        This PhD mostly uses it to summarize emails from the administration. It does a shit job, but it frees up time for more science so who cares.

        The real irony is that the administration probably used AI to write the emails in the first place. The mails have gotten significantly longer and less dense, and the grammar has gotten better.

        Begun this AI arms race has.

      • shalafi@lemmy.world · ↑4 ↓1 · 14 days ago

        Out of context (and I didn’t read the rest), that sounds reasonable.

        “If my dumbass is learning and finding, what about actual pros?!”

          • jballs@sh.itjust.works · ↑2 · 14 days ago

            “Turns out there are 319 letters in the alphabet and 16 Rs! When the experts get a hold of this, they’re going to be blown away!”

        • Mniot@programming.dev · ↑5 · 14 days ago

          Lots of things seem reasonable if you skip the context and critical reasoning. It’s good to keep some past examples of this that personally bother you in your back pocket. Then you have them as an antidote for the examples that don’t bother you.

    • Kirp123@lemmy.world · ↑28 · 15 days ago

      It’s exactly what I was thinking. They should let the AI build a spaceship and all get into it. It would be the greatest achievement in human history… when it blows up and kills all of them.

    • real_squids@sopuli.xyz · ↑6 · 15 days ago

      If you think about it, they’ve been doing that for a while with experimental life-extending stuff. Of course, now they’re a bit more likely not to die, with modern medicine being so good.

    • MBM@lemmings.world · ↑3 · 14 days ago

      I just hope that in the process they don’t ruin the world for the rest of us

  • Nikls94@lemmy.world · ↑13 · 14 days ago

    LLMs: hallucinate like that guy from school who took every drug under the moon.

    Actual specially trained AI: finds new particles, cures for viruses, stars, methods…

    But the latter doesn’t tell you its findings in words; it answers in whatever representation you used to feed it the data in the first place, like numbers and code.

    • Eq0@literature.cafe · ↑8 · 14 days ago

      Just to build on this and give some more unasked-for info:

      All of AI is a fancy-dancy interpolation algorithm. Mostly too fancy for us to understand how it works.

      LLMs use that interpolation to predict the next words in sentences. With enough complexity, you get ChatGPT.

      Other AIs still just interpolate from known data, so they can only point toward conclusions that look reasonable given that data. Those hypotheses then still need to be studied and tested.
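
      A toy illustration of that “predict the next word from data you’ve already seen” idea, in Python (a made-up bigram counter, nothing like a real transformer, just to show the interpolation flavour):

          from collections import Counter, defaultdict

          # Toy next-word predictor: count which word follows which in some
          # training text, then "predict" by picking the most frequent follower.
          # Real LLMs swap the counting for a huge neural network, but the shape
          # of the job is the same: context in, likely next word out.
          def train(text):
              follows = defaultdict(Counter)
              words = text.lower().split()
              for cur, nxt in zip(words, words[1:]):
                  follows[cur][nxt] += 1
              return follows

          def predict_next(follows, word):
              options = follows.get(word.lower())
              return options.most_common(1)[0][0] if options else None

          model = train("the cat sat on the mat and the cat slept")
          print(predict_next(model, "the"))  # -> 'cat' (seen twice, vs 'mat' once)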

  • Armand1@lemmy.world · ↑14 ↓3 · 14 days ago

    LLMs are like Trump government appointees:

    • They hallucinate like they’re on drugs
    • They repeat whatever they’ve seen on the internet
    • They are easily manipulated
    • They have never thought about a single thing in their lives

    Ergo, they cannot and will not ever discover anything new.

    • CompassRed@discuss.tchncs.de · ↑3 ↓1 · 14 days ago

      LLMs have already discovered new proofs for math problems that were previously unsolved. Granted, this hasn’t been done with a commercially available model as far as I know, but you are technically wrong to say they will never discover anything new.

  • fckreddit@lemmy.ml · ↑6 · 14 days ago

    One of the reasons they give for it is: physicists use LLMs in their workflows, so LLMs must be close to making physics discoveries themselves.

    Clearly, these statements are meant to hype up the AI bubble even more.

  • A7thStone@lemmy.world · ↑5 · 14 days ago

    The worst part is that some useful things could come from it, because we’re hurtling towards infinite monkeys. But it’ll only be by pure happenstance, and unless they’re lucky enough to randomly hit a really great breakthrough, it still won’t be worth the massive resources they’ve wasted.

  • Gyroplast@pawb.social · ↑4 · 14 days ago

    Bah, humbug! In my days we used a rubber ducky, IF WE HAD ONE, or just the stick we were beaten with for using too many precious CPU cycles, and we were FINE!

  • Aelorius@jlai.lu · ↑1 · 14 days ago

    Actually, AlphaEvolve already did it. It discovered new algorithms that improve the computational efficiency of matrix multiplication for the first time in 50 years, among a lot of other things. It’s using a custom version of Gemini.
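
    For context on the “50 years” part: the usual baseline is Strassen’s 1969 trick, which multiplies 2×2 matrices with 7 scalar multiplications instead of the naive 8, and AlphaEvolve-style systems search for constructions that shave such multiplication counts down further. A minimal Python sketch of Strassen’s 2×2 case, purely illustrative and not anything from AlphaEvolve itself:

        # Strassen (1969): multiply two 2x2 matrices with 7 multiplications
        # instead of the naive 8. Applied recursively to block matrices, this
        # gives the sub-cubic algorithms that newer searches try to improve on.
        def strassen_2x2(A, B):
            (a, b), (c, d) = A
            (e, f), (g, h) = B

            p1 = a * (f - h)
            p2 = (a + b) * h
            p3 = (c + d) * e
            p4 = d * (g - e)
            p5 = (a + d) * (e + h)
            p6 = (b - d) * (g + h)
            p7 = (a - c) * (e + f)

            return [[p5 + p4 - p2 + p6, p1 + p2],
                    [p3 + p4, p1 + p5 - p3 - p7]]

        print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]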

  • Asswardbackaddict@lemmy.world · ↑1 · edited · 13 days ago

    I’m running really promising (infant) simulations on my computer. The void is a concept, not a physical reality (my hypothesis, which has no evidence as of yet), and that actually leads to a sort of “bounce” or reactive (rather than active) physical law. The word salad sorters aren’t going to dismantle our false premises. Might as well write letters asking Santa for scientific advancement.

  • jsomae@lemmy.ml · ↑1 · 14 days ago

    OpenAI’s new model was able to solve 5 out of 6 problems (a gold medal) on the 2025 International Math Olympiad. I am very surprised by this result, though I don’t see any evidence of foul play.