• sobchak@programming.dev
    2 days ago

    I doubt it’s fine-tuned; it’s more likely one of the open-weight LLMs with RAG. I’ve done similar things, and they don’t work as well as I’d like: the most relevant chunks of text aren’t always the ones ranked highest (i.e., with the smallest embedding distance to the query), and the models still hallucinate sometimes.
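
    For anyone unfamiliar, the retrieval step being described is roughly this: embed the query and each chunk, then rank chunks by similarity. A minimal toy sketch (using a bag-of-words stand-in for a real embedding model, so the names and "embedding" here are purely illustrative):

    ```python
    import math
    from collections import Counter

    def embed(text):
        # Toy bag-of-words "embedding"; real RAG uses a learned embedding model
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def rank_chunks(query, chunks):
        # Higher cosine similarity == smaller embedding distance
        q = embed(query)
        return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)

    chunks = [
        "The model weights are open and can be downloaded.",
        "Fine-tuning updates the weights on new data.",
        "RAG retrieves relevant text chunks and feeds them to the model.",
    ]
    print(rank_chunks("how does RAG retrieve chunks", chunks)[0])
    ```

    The failure mode mentioned above is exactly that this ranking is only a proxy: the top-scoring chunk isn’t guaranteed to be the one that actually answers the question.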