• Jrockwar@feddit.uk · 18 days ago

    That’s because it doesn’t learn; it’s a snapshot of its training data, frozen in time.

    I like Perplexity (a lot) because instead of using its training data to answer your question, it uses your question to craft web searches, gather content, and summarise it into a response. It’s like a student who uses their knowledge to look up the answer in the books, instead of trying to answer from memory whether or not they actually know the answer.
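    The search-then-summarise loop described here can be sketched roughly like this. All function names are hypothetical stand-ins (this is not Perplexity's actual implementation); in a real system the stubs would call an LLM and a search API.

```python
# Sketch of a search-augmented answering loop: rewrite the question
# into searches, fetch content, then summarise *from that content*
# rather than from frozen training data. Every function below is a
# hypothetical stand-in, not a real API.

def craft_queries(question: str) -> list[str]:
    # Stand-in for an LLM call that rewrites the question as web searches.
    return [question, f"{question} explained"]

def gather(queries: list[str]) -> list[str]:
    # Stand-in for fetching pages and extracting their text.
    return [f"page text retrieved for: {q}" for q in queries]

def summarise(question: str, documents: list[str]) -> str:
    # Stand-in for an LLM call that answers from the retrieved
    # documents, citing them, instead of answering from memory.
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(documents))
    return f"Answer to {question!r}, grounded in:\n{context}"

def answer(question: str) -> str:
    queries = craft_queries(question)
    documents = gather(queries)
    return summarise(question, documents)

print(answer("how does motor speed in a coffee grinder affect taste?"))
```

    The point of the structure is that hallucination risk shifts from "does the model remember this fact?" to "did the search return relevant pages?", which is why obscure questions tend to fare better.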

    It’s not perfect; it does hallucinate from time to time, but rarely enough that I use it far more than regular web searches at this point. I can throw quite obscure questions at it and it will dig up the answer for me.

    As someone with ADHD and a somewhat compulsive need to understand random facts (e.g. “I need to know right now how the motor speed in a coffee grinder affects the taste of the coffee”), this is an absolute godsend.

    I’m not affiliated with them or anything, and if anything better comes my way I’ll happily ditch it. But for now I really enjoy it.

    • Nougat@fedia.io · 18 days ago

      … it uses your data to craft web searches, gather content, and summarise it into a response.

      GPT-4o does this, too.

      • Jrockwar@feddit.uk · 18 days ago

        Then that might not be the model the previous poster is talking about, because I have to press Perplexity really hard to get it to hallucinate. Search-augmented LLMs are pretty neat.

    • Zerlyna@lemmy.world · 18 days ago

      Yes, but I’m saying that snapshot from September is incorrect. Why is that? Is it rigged?