• FourWaveforms@lemm.ee · 17 points · 1 day ago

    Unfortunately, these articles are often written by people who don’t know enough to realize they’re missing important nuances.

    • datalowe@lemmy.world · 9 points · 14 hours ago

      It also doesn’t help that AI companies deliberately use language that makes their models seem more human-like and cogent. Saying that the model, e.g., “thinks” in “conceptual spaces” is misleading, imo. It exploits our innate tendency to anthropomorphize, which I guess is very fitting for a company with that name.

      On this point I can highly recommend this open-access article, which is also written in accessible language: https://link.springer.com/article/10.1007/s10676-024-09775-5 (the authors also appear on an episode of the Better Offline podcast)

      • FourWaveforms@lemm.ee · 1 point · 2 hours ago

        I can’t weigh in on whether LLMs think until someone tells me what it means to think. It’s too easy to rely on an understanding of that word that comes only from its typical use alongside other words.