

What a piece of shit.
He’s Cartman. They’re all Cartman.
Going for a shadow run to the grocery store. Don’t forget to pick up a decker to bypass the ice.
What I’ll never fully understand is the ostensibly capable knowledge workers who believe the shit Trump says. Our web admin hurts my heart. “Dude, I know you know how to fact check.” In their case, it’s just Christianity.
Gippity suggests the only word that starts with “febu” is febuxostat.
Can’t go home again. Can’t step in the same river twice.
To run Resolve properly, you apparently have to run DaVinci’s flavor of Rocky Linux 8.6. If you’re doing other things with that machine, this may be undesirable. And as far as I know, there’s no equivalent for After Effects.
I made a smartass comment earlier comparing AI to fire, but it’s really my favorite metaphor for it - and it extends to this issue. Depending on how you define it, fire seems to meet the requirements for being alive. It tends to come up in the same conversations that question whether a virus is alive. I think it’s fair to think of LLMs (particularly the current implementations) as intelligent - just in the same way we think of fire or a virus as alive. Having many of the characteristics of it, but being a step removed.
Deepseek took the training of foundation models from a billionaire’s game to a millionaire’s game. If Elon wants an AI monopoly, it’ll have to be done through litigation. Which, ya know, they’re also trying.
Flames burn and smoke asphyxiates, perfectly highlighting why relying on fire is a bad idea.
o1/o3 use a smaller model to summarize the reasoning, but they don’t show the actual CoT generation the way deepseek does.
It was a free o1/o3 equivalent at a time when there were only paid options. But in the short interim, Google’s made their reasoning model free to use.
Plus, deepseek doesn’t hide its internal monologue the way o1/o3 do. It’s fun to watch it go back and forth with itself.
True, but they also released a paper that detailed their training methods. Is the paper sufficiently detailed such that others could reproduce those methods? Beats me.
My final semester in American Sign Language was “Sex, Drugs, and Profanity,” and most of the signs are just exactly what you’d guess. (I held on to those textbooks.) Plus, facial expressions are a big part of the grammar of the language. I don’t recognize this scene, but assuming it’s from a comedy - it’s probably also not far off from accurate.
To run the 671B parameter R1, my napkin math was something like 3/4 of a million dollars in hardware. But that (plus the much lower training cost) made this a millionaire’s game rather than a billionaire’s. Plus the distillations do seem better than anything else we have at the smaller sizes at the moment. That said, I’m more looking forward to the first use of deepseek’s methods with google’s Titan architectures.
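For what it’s worth, one way that kind of napkin math might go. Every number here (FP16 weights, 80 GB per GPU, $30k per card, a 1.5× server overhead multiplier) is an illustrative assumption, not something stated above:

```python
# Rough cost sketch for hosting the full 671B-parameter R1 locally.
# All prices and the FP16 assumption are illustrative guesses.

params = 671e9             # parameter count
bytes_per_param = 2        # FP16 weights
weight_bytes = params * bytes_per_param   # ~1.34 TB of VRAM just for weights

gpu_vram = 80e9            # e.g. an 80 GB datacenter GPU
gpus_needed = -(-weight_bytes // gpu_vram)  # ceiling division -> 17 GPUs

price_per_gpu = 30_000     # assumed street price per GPU
server_overhead = 1.5      # chassis, CPUs, RAM, networking (assumed multiplier)

total = gpus_needed * price_per_gpu * server_overhead
print(f"{int(gpus_needed)} GPUs, roughly ${total:,.0f}")
# -> 17 GPUs, roughly $765,000
```

With those made-up inputs it lands around $765k, the same ballpark as the three-quarters-of-a-million figure; swap in your own prices and quantization and the answer moves accordingly.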
They’re just setting up a locally run ChatGPT for the US National Laboratories to use. Much like all the software we already use, it’s probably airgapped from launch systems.
Batman The Animated Series was drawn this way.