• IrateAnteater@sh.itjust.works
    2 days ago

    From what I understand of the sales brochure, these types of “AI” that are trained on highly curated data are far less prone to hallucinations.

    • sobchak@programming.dev
      2 days ago

      I doubt it’s fine-tuned; it’s more likely just one of the open-weight LLMs with RAG. I’ve done similar things, and they don’t work as well as I’d like: the most relevant chunks of text aren’t always ranked highest (i.e. they don’t always have the smallest embedding distance to the query), and the models still hallucinate sometimes.
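
      For anyone unfamiliar with the retrieval step being described: RAG embeds the query and the document chunks, ranks chunks by embedding distance, and stuffs the top hits into the prompt. A toy sketch of that ranking (bag-of-words counts standing in for a real learned embedding model; the chunk texts and function names here are made up for illustration):

      ```python
      import math
      import re
      from collections import Counter

      def embed(text):
          # Toy embedding: word counts. Real RAG uses a learned embedding model.
          return Counter(re.findall(r"[a-z]+", text.lower()))

      def cosine_distance(a, b):
          # Smaller distance = more similar; this is the ranking signal.
          dot = sum(a[t] * b[t] for t in a)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return 1.0 - dot / (na * nb) if na and nb else 1.0

      def retrieve(query, chunks, k=2):
          # Return the k chunks closest to the query in embedding space.
          q = embed(query)
          return sorted(chunks, key=lambda c: cosine_distance(q, embed(c)))[:k]

      chunks = [
          "The reactor manual covers shutdown procedures.",
          "Payroll is processed on the last Friday of the month.",
          "Shutdown procedures require operator sign-off.",
      ]
      top = retrieve("what are the shutdown procedures", chunks)
      # top chunks would then be pasted into the LLM prompt as context
      ```

      The failure mode mentioned above is exactly this sorting step: if the truly relevant chunk doesn’t land in the top k, the model never sees it and is more likely to make something up.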