

Cryptography nerd
Fediverse accounts:
[email protected] (main)
[email protected]
[email protected]
Bluesky: natanael.bsky.social
I was going to post the whitespace programming language but this wins
Malbolge
He wanted to be a warlord but all he’s got is a dull edge
The Nyquist-Shannon sampling theorem isn’t subjective, it’s physics.
Your example isn’t great because it’s about misconceptions about the eye, not about physical limits. The physical limits for transparency are real and absolute, not subjective. The eye can perceive quick flashes of objects that take less than a thousandth of a second. The reason we rarely go above 120 Hz for monitors (other than cost) is that differences in continuous movement can barely be perceived, so it’s rarely worth it.
We know where the upper limits of perception are. The difference typically lies in the encoder / decoder or the physical setup, not in the information a good codec is able to embed at that bitrate.
Why use lossless for that when transparent lossy compression already does that with so much less bandwidth?
Opus is indistinguishable from lossless at 192 Kbps. Lossless needs roughly 800 - 1400 Kbps. That’s a savings of roughly 4x to 7x with the exact same perceived quality (rough numbers sketched below).
Your wireless antenna often draws more energy in proportion to bandwidth use than the decoder chip does, so using high quality lossy even gives you better battery life, on top of also being more tolerant to radio noise (easier to add error correction) and having better latency (less time needed to send each audio packet). And you can even get better range with equivalent radio chips due to needing less bandwidth!
You only need lossless for editing or as a source for transcoding; there’s no need for it when just listening to media
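A rough back-of-the-envelope sketch of the numbers above. The 192 Kbps and 800 - 1400 Kbps figures are from the comment; the 2 Mbps usable-link rate and 20 ms frame size are purely illustrative assumptions, not specs of any particular Bluetooth profile:

```python
# Bandwidth ratio: lossless vs high-bitrate Opus, using the figures from the comment.
opus_kbps = 192
for lossless_kbps in (800, 1400):
    ratio = lossless_kbps / opus_kbps
    print(f"{lossless_kbps} Kbps lossless is {ratio:.1f}x the bandwidth of {opus_kbps} Kbps Opus")

# Airtime per 20 ms audio frame, ignoring protocol overhead, on a hypothetical
# 2 Mbps usable radio link (illustrative only). Less data per frame means the
# radio finishes transmitting sooner, which is where the battery life and
# latency argument comes from.
link_bps = 2_000_000
frame_s = 0.020
for kbps in (192, 800, 1400):
    frame_bits = kbps * 1000 * frame_s
    airtime_ms = frame_bits / link_bps * 1000
    print(f"{kbps} Kbps -> {airtime_ms:.1f} ms of airtime per 20 ms frame")
```

With those assumed numbers the lossy stream occupies the radio for roughly 2 ms out of every 20 ms frame, versus 8 - 14 ms for lossless, which is the headroom that makes error correction and retransmission easier.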
That’s more than a codec question, that’s a Bluetooth audio profile question. Bluetooth LE Audio should support higher quality (including with Opus)
Nobody needs lossless over Bluetooth
Edit: plenty of downvotes by people who have never done ABX tests comparing high quality lossy against lossless
At high bitrate lossy you literally can’t distinguish it. There’s math to prove it:
https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
At 44.1 kHz / 16 bit, encoded at 192 Kbps or more with good encoders, your ear literally can’t physically discern the difference
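A quick sanity check of those figures, taking roughly 20 kHz as the usual upper limit of human hearing and using the standard quantization-noise approximation for bit depth:

```latex
f_s \ge 2 f_{\max}
\quad\Rightarrow\quad
44.1~\mathrm{kHz} > 2 \times 20~\mathrm{kHz} = 40~\mathrm{kHz}
\qquad
\mathrm{DR} \approx 6.02 \times 16 + 1.76 \approx 98~\mathrm{dB}
```

So the sample rate covers the full audible band per Nyquist-Shannon, and 16-bit quantization gives around 98 dB of dynamic range, more than typical listening environments can make use of.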
Opus! It’s a merge of a codec designed for speech (from Skype!) with one designed for high quality audio by Xiph (same people who made OGG/Vorbis).
It needs some more work on latency, though: by default it prefers bigger frames than Bluetooth packets like. But I’ve seen there’s work on standardizing a version that fits Bluetooth, and Google even has it implemented on Pixel devices now.
Fully free codec!
A drum roll can’t break an airplane window
Higher end controllers, yes. Often with integrated video encoding circuits to reduce the data volume to send to the main processor.
Technically yes but also no.
Synchronized reading is hard when the pixel count is high. At some point it’s hard enough to pull all the data through the controller quickly, so you need either multiple circuits, or one circuit that reads a section of pixels at a time (row by row = rolling shutter effect).
Some of this is processing limits in the internal controller of the sensor, but it’s also timing, signal routing, and synchronized readout for a massive number of pixel sensors. It’s literally tens of millions of triplets of RGB detectors which have to be read 60 times per second, and basic color correction has to happen right in the controller before the main CPU / GPU gets the image stream (rough numbers sketched below).
At some point you even get cooling issues, and need a cooling system behind the sensor.
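To put very rough numbers on that readout problem. The sample count, bit depth and channel count here are illustrative assumptions, not any particular sensor’s spec:

```python
# Raw readout bandwidth estimate for the sensor discussion above.
samples_per_frame = 50_000_000  # "tens of millions" of detector readings per frame (assumption)
fps = 60                        # frame rate from the comment
bits_per_sample = 12            # common raw ADC depth (assumption)

bits_per_second = samples_per_frame * bits_per_sample * fps
print(f"{bits_per_second / 1e9:.1f} Gbit/s raw off the sensor")  # ~36 Gbit/s

# Even split across e.g. 8 parallel readout channels (assumption), each channel
# still has to move several Gbit/s, which is why readout gets sectioned into
# rows or blocks instead of happening all at once.
channels = 8
print(f"{bits_per_second / channels / 1e9:.1f} Gbit/s per channel with {channels} channels")
```

Tens of gigabits per second before any compression is why the heavy lifting has to start inside the sensor package itself.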
Don’t forget the SD card reader, also PCIe
Catnip
It’s building out federation and you can already host your own account and control your own data and run your own feeds. The investors don’t have control of the company (public benefit corporation).
The only difference between it and Mastodon in terms of scraping is that scraping public data is a bit easier. Nothing about Mastodon makes that scraping difficult, it’s just more annoying to do. The company itself is not doing AI BS.
Client software (browsers, etc) would need to resolve it, just like mailto:
It certainly should happen, but it’s not likely because it takes too much momentum
https://www.timesofisrael.com/for-years-netanyahu-propped-up-hamas-now-its-blown-up-in-our-faces/
Strange how their attempts to establish peace through other groups were undermined by Netanyahu, who literally helped Hamas get cash
Local NAS, local security cameras, in-house streaming, LAN multiplayer, local torrent-like data sharing (FYI, Windows Update and more use the local network to share updates between computers by default, so an update gets downloaded once and then shared internally)
Mostly empty oceans
It’s sliced through the center, rotating the angle of the cut 360 degrees as it goes around the ring, cutting it into two halves which interlock
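A minimal sketch of that cut, treating the bagel as an idealized torus with made-up dimensions. This only illustrates the "rotate the cut a full turn as you go around" idea, it’s not a carving guide:

```python
import math

R, r = 4.0, 1.5  # major (ring) radius and tube radius, illustrative values only

def cut_segment(theta, t):
    """Point on the cut surface: theta = angle around the ring, t in [-r, r]."""
    # centre of the tube cross-section at this angle around the ring
    cx, cy, cz = R * math.cos(theta), R * math.sin(theta), 0.0
    # the cut is a line through that centre; its direction tilts from "radial"
    # toward "vertical" and back, completing one full 360 degree rotation in the
    # cross-section plane per trip around the ring, which is what links the halves
    dx = math.cos(theta) * math.cos(theta)   # cos(theta) * radial_x
    dy = math.cos(theta) * math.sin(theta)   # cos(theta) * radial_y
    dz = math.sin(theta)                     # sin(theta) * vertical
    return (cx + t * dx, cy + t * dy, cz + t * dz)

# sample a few points along the outer edge of the cut
for k in range(5):
    theta = 2 * math.pi * k / 4
    print([round(c, 2) for c in cut_segment(theta, r)])
```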