It’s probably their own search/RAG backend, or at least their configuration of some open source project.
And that’s the important part. Get the article retrieval right, and the LLM performance isn’t that important; they could self-host Qwen 27B or something and it’d work fine.
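To illustrate the "retrieval is the hard part" point: the retrieval step is just scoring documents against a query and handing the top hits to whatever model you have. A minimal sketch with a from-scratch BM25 scorer over a hypothetical mini-corpus (the corpus, query, and function names are all illustrative, not anything from their actual stack):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with BM25 -- the retrieval step
    that determines answer quality before any LLM is involved."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter()  # document frequency per term
    for d in tokenized:
        df.update(set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    return scores

# Hypothetical mini-corpus: if retrieval surfaces the right article,
# any decent self-hosted model can answer from it.
docs = [
    "How to configure nginx as a reverse proxy",
    "Qwen model release notes and benchmarks",
    "Setting up a reverse proxy with nginx and TLS",
]
scores = bm25_scores("nginx reverse proxy", docs)
best = max(range(len(docs)), key=scores.__getitem__)  # index of top article
```

The top-scored article is what gets stuffed into the LLM's context; everything downstream is just summarization, which is why the model choice matters so much less than the index.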