AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 1 day ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
240 comments · cross-posted to: [email protected]
El Barto@lemmy.world · 14 hours ago
LLMs deal with tokens. Essentially, predicting a series of bytes. Humans do much, much, much, much, much, much, much more than that.
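To make the "predicting a series of bytes" idea concrete, here is a toy sketch: a byte-level bigram model that generates text one byte at a time. The names (`train_bigram`, `predict_next`) and the tiny corpus are invented for illustration; real LLMs use learned subword tokenizers and transformer networks rather than frequency tables, so this only mimics the autoregressive prediction loop, not the model itself.

```python
# Toy illustration of "predicting a series of bytes": a bigram model over raw bytes.
# This is NOT how any production LLM works; it only sketches next-byte prediction.
from collections import Counter, defaultdict

def train_bigram(data: bytes) -> dict[int, Counter]:
    """Count, for each byte, which byte tends to follow it."""
    counts: dict[int, Counter] = defaultdict(Counter)
    for prev, nxt in zip(data, data[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict[int, Counter], prev: int) -> int:
    """Return the byte most often seen after `prev` (greedy decoding)."""
    followers = counts.get(prev)
    if not followers:
        return ord(" ")  # fallback when this context was never observed
    return followers.most_common(1)[0][0]

corpus = b"the cat sat on the mat. the cat ate."  # hypothetical training data
model = train_bigram(corpus)

# Generate a short continuation one byte at a time, like autoregressive decoding.
out = bytearray(b"th")
for _ in range(10):
    out.append(predict_next(model, out[-1]))
print(out.decode(errors="replace"))
```

Running it prints a short, repetitive continuation of "th" built purely from byte-frequency statistics, which is the sense in which a next-token predictor "just" continues a sequence.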
Zexks@lemmy.world · 31 minutes ago
No. They don’t. We just call them proteins.