The bug is most likely a consequence of a default Cloudflare config. Cloudflare (in its default config) pushes a CAPTCHA to all user agents other than Tor Browser that arrive over Tor. That would of course cause the #Lemmy javascript to go apeshit.
Better or worse depends on who you ask.
I boycott Cloudflare and I avoid it. Some CF hosts are configured to whitelist Tor so we don’t encounter a block screen or captcha. For me that is actually worse because I could inadvertently interact with a CF website without knowing about the CF MitM. I want to be blocked by Cloudflare because it helps me avoid those sites.
The CF onion (IIUC) cuts out the exit node which is good. But CF is still a MitM so for me that’s useless.
Some users might not care that CF has a view on all their packets - they just don’t want to be blocked. So for them the onion is a bonus.
W.r.t CSAM, CF is pro-CSAM. When a CF customer was hosting CSAM, a whistleblower informed Cloudflare. Instead of taking action against the CSAM host, CF doxxed the ID of the whistleblower to the CSAM host admin, who then published the ID details so the users would retaliate against the whistleblower. (more details)
There is no way to “disable” Cloudflare if an instance has chosen to use it. It sits between you and the server for all traffic.
Some people use CF DNS and keep the CF proxy disabled by default. They set it to only switch on the CF proxy if the load reaches an unmanageable level. This keeps the mitm off most of the time. But users who are wise to CF will still avoid the site because it still carries the risk of a spontaneous & unpredictable mitm.
wow… that is terrible. You should not have had to dig for such a simple limitation. All this fancy javascript and it failed to do a simple field-length check.
emphasis mine:
Anti-nuclear is like anti-GMO and anti-vax: pure ignorance, and fear of that which they don’t understand.
First of all anti- #GMO stances are often derived from anti-Bayer-Monsanto stances. There is no transparency about whether Monsanto is in the supply chain of any given thing you buy, so boycotting GMO is as accurate as ethical consumers can get to boycotting Monsanto. It would either require pure ignorance or distaste for humanity to support that company with its pernicious history and intent to eventually take control over the world’s food supply.
Then there’s the anti-GMO-tech camp (which is what you had in mind). You have people who are anti-all-GMO and those who are anti-risky-GMO. It’s pure technological ignorance to regard all GMO as equally safe or equally unsafe. GMO is an umbrella of many techniques. Some of those techniques are as low-risk as cross-breeding that can happen in nature. Other invasive techniques are extremely risky & experimental. You’re wiser if you separate the different GMO techniques and accept the low-risk ones while condemning the foolishly risky approaches at the hands of a profit-driven corporation taking every shortcut it can get away with.
So in short:
I really cannot stand that phrase because it’s commonly used as poor rationale for not favoring a superior approach. Both sides of the debate are pushing for what they consider optimum, not “perfection”.
In the case at hand, I’m on the pro-nuclear side of this. But I would hope I could make a better argument than to claim my opponent is advocating an “impossible perfection”.
Ah, well if the front page is 80kb that might explain it. Apparently there’s just some really heavy text, especially if each subsequent page is anywhere near 80kb.
thanks for the tip. Yes, I can see that there is an attempt to load images, but there is a little “prohibited” icon, so I’m not sure what that really means. If images are disabled in the browser settings, then there should not even be an attempt to fetch them. I wonder if javascript is bypassing the config and fetching the images, but the browser is simply blocking them from display.
Glad to see they are tagged. It could evolve more but the tags are the most important thing.
I think this project has some tools that might automate that:
https://0xacab.org/dCF/deCloudflare
They ID and track every website that joins #Cloudflare. It’s a huge effort but those guys are on top of it. A script could check the list of domains against their list. There is also this service (from the same devs) which does some checks:
https://karma.crimeflare.eu.org:1984/api/is/cloudflare/html/
but caveat: if a non-CF domain (e.g. example.tld) has a CF host (e.g. somehost.example.tld), that tool will return YES for the whole domain.
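A script to automate this could check candidate domains against a local copy of the deCloudflare project’s domain list. This is only a sketch: the one-domain-per-line file format is an assumption (check the repo’s actual layout), and note it deliberately checks parent zones, which inherits the same caveat described above.

```python
# Sketch: check whether a domain appears in a locally downloaded copy of
# the deCloudflare project's domain list (https://0xacab.org/dCF/deCloudflare).
# Assumption: the list is one domain per line; "#" lines are comments.

def load_cf_domains(path):
    """Load a one-domain-per-line list into a set, skipping blanks/comments."""
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

def is_cloudflared(domain, cf_domains):
    """True if the domain or any parent zone is on the Cloudflare list.

    Checking parent zones matters because a listed zone like example.tld
    covers hosts such as somehost.example.tld.  Caveat (as noted above):
    a listed host does not necessarily mean the whole zone is Cloudflared.
    """
    parts = domain.lower().split(".")
    return any(".".join(parts[i:]) in cf_domains
               for i in range(len(parts) - 1))
```

For a live check instead of a local list, the karma.crimeflare.eu.org service above could be queried, subject to the same whole-domain caveat.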
> Manually adjusting availability is a can of worms that I don’t want to open
I would suggest not bothering with any complex math: do the calculation as you normally do, but if a site is on Cloudflare, cap the calculated figure at 98%. Probably most (if not all) CF sites would be at 100% anyway, so they would just be reduced by 2%. It would need to be explained somewhere, though; the beauty of that is it would help inform people that the CF walled garden is excluding people. Cloudflare’s harm perpetuates to a large extent because people are unaware that it’s an exclusive walled garden that marginalizes people.
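The suggested cap is trivial to implement. A minimal sketch, where the 98% cap and the `cloudflared` flag are the assumptions described above:

```python
# Sketch of the suggested cap: compute availability as usual, then clamp
# Cloudflared sites to 98% to reflect the users they exclude.
# The 2% reduction is the heuristic proposed above, not a measured figure.

def adjusted_availability(raw_pct, cloudflared, cap=98.0):
    """Return the availability figure to display, capping CF-proxied sites."""
    return min(raw_pct, cap) if cloudflared else raw_pct
```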
> If the message is edited for typos/grammatical errors, then there’s really no need for a notification as the message displays the posted time in italics (e.g., ✏ 9 hours ago).
I’m not sure of the relevance of the posted time in this scenario, but indeed I agree, in principle, that typos need not generate an update notice.
> If the message is so reworked as to say something else, “Bob” (your example) should do the right thing and post a new, separate reply to “Alice” in the same thread, donchathink?
This requires Bob to care whether Alice gets the update. Bob might care more about the aesthetics, readability, and the risk that misinfo could be taken out of context if not corrected in the very same msg where the misinfo occurred. If I discover something I posted contained some misinfo, my top concern is propagation of the misinfo. If I post a reply below it saying “actually, i was wrong, … etc”, there are readers who would stop reading just short of the correction msg. Someone could also screenshot the misinfo & either deliberately or accidentally omit Bob’s correction. So it’s only sensible to correct misinfo directly where it occurred.
> I get what you’re saying though, that there should be some real integrity toward post/reply history, like diff maybe.
It would be interesting to see exactly what Mastodon does… whether it has an algorithm that tries to separate typo/grammar fixes from more substantive edits. I don’t frequently get notices on Mastodon when someone updates a status that mentions me, so I somewhat suspect it’s only for significant edits.
(update) one simple approach would be to detect when a strikethrough is added. Though it wouldn’t catch all cases.
> So let me get this straight… Bob does something no one else does
Straight away you don’t have it straight. Edits happen. The mere possibility of edits in fact encourages authors to produce ½-baked drafts in the 1st place knowing that they can always edit.
> edit messages on somewhere no one else goes, adding significant content to something no one sees
Not sure what drives this logic. If no one goes there, the post/comment is unlikely to happen in the 1st place. And with no interaction in the thread, refinements are even less likely. If you don’t have at least two people participating in a thread, there are no notifications to speak of.
> and then Bob wants to spam the world about the update with notification?
Bob wants to take no action at all and let a smart system handle notifications as needed. So your attempt to “get this straight” got everything crooked. Furthermore, your proposed solution is more aligned with Bob pushing “spam”: Bob’s new & separate msg forces a notification, as the platform has no way of distinguishing an update from a new msg. Thus it would be treated like a new msg and a notice would be sent.
> Also, in this context, this wouldn’t be a bug, but rather a feature request
One man’s bug is another man’s feature. Luckily bugs and feature requests are handled in the same venue so it’s a red herring.
> a feature that no one is asking for
Certainly not true anymore.
> and doesn’t make the software better
One man’s bug is another man’s feature.
> except to those that doesn’t follow social norms yet still demands to get into others’ inboxes.
You’ve misunderstood where the demand is coming from. It’s not the author; it’s the recipient. Someone posted a useful reply to Alice, Alice read it, marked it as read, & then Bob made a useful update. Alice did not receive notice of the update. This “demand” comes from the recipient (Alice), not Bob the author. The update was for the recipient’s benefit, not the author’s. It’s purely incidental that Alice discovered an update had happened, because #Lemmy was not smart enough to notify her of the update (unlike Mastodon, which is quite a bit more mature).
> Instead, the appropriate behaviour is to not allow Bob to make edits after sometime (which many softwares have such feature for)
That’d be fair enough, but it would not have helped in this case where the edit happened the same day.
> and/or make edit logs visible (also a common feature)
You’re imposing too much manual labor on humans. Machines are here to work for us not the other way around.
> such that people who doesn’t follow expected norms
The norms adapt to the software. When the software does an extra service for people, they abandon norms that attempt to compensate for a feature poor system. And rightly so.
Heh… the funny irony here is that you actually missed my update to the OP, which says:
“For comparison, note that Mastodon (at least some versions) notify you upon edits of msgs that you were previously notified on.”
That’s of course a different scenario since crossposts don’t update (which could be a separate interesting discussion). But funny nonetheless because you missed an update while saying that tools should not be improved in favor of social / cultural change. I guess you should have thought to read the OP and compare it for changes (the social solution) :)
> that’s kind of how things have been since pretty much early 2000s if not earlier.
We can dispense with any sort of “conventional wisdom” in the course of moving forward with improvements.
Very specifically, the comment that inspired my post was someone posting misinformation, then going back and adding a s̶t̶r̶i̶k̶e̶t̶h̶r̶o̶u̶g̶h̶ and highlighting their correction in red text. No correction would be more readable than that. The problem with your proposal is that the misinformation is left there, persistently misinforming. It can then be taken out of context (e.g. someone screensnaps the misinfo & uses it against the author). There’s also the problem that readers often do not read a whole thread top to bottom. This is proven by the number of votes (up or down), which appear in high numbers on top comments and drop dramatically after ~3 or so replies. You might argue that the post can be deleted, but that creates a problem of responses losing context, and it creates confusion as people wonder “didn’t person X say Y?”
> Are you sure it’s not just hiding the images but still loading them for layout purposes?
No, in fact this is my concern. It’s hard to know considering javascript is in the mix.
> Is this mobile Firefox or desktop Firefox?
I should have been more clear: it’s actually Tor Browser on the desktop. I said Firefox to simplify, but then it occurred to me that it could be relevant. Although I think the Tor overhead is negligible.
I meant to say if you vote on the /comment/ you are replying to… which is apparently captured in a github bug report already.
Cloudflared services like ani.social are getting a “100%” availability stat. The site may be up, but it is unavailable (denying availability) to something like ~1-3% of the population 100% of the time. So in principle it should never be able to achieve the 100% availability stat.
I understand it would be quite difficult to calculate an availability figure that accounts for access restrictions to marginalized groups, because apart from Cloudflare you would not have a practical way of knowing how firewalls are configured. But one thing you could (and should) do is mark the known walled gardens in some way. E.g. put a “🌩” next to Cloudflare sites and warn people that they are not open access sites.
The lestat.org availability listing is like a competition that actually gives a perception advantage to services that exclude people, thus rewarding them for compromising availability. I would also subtract ~2% for all CF sites as a general rule, simply because you know a CF site is not 100% available to everyone. They do not deserve that 100% trophy, nor is it accurate.
Your timeline is backwards. The account compromise was July 10; the DoS attack came after that (July 15th). There is also no chatter of any kind about any attacks prior to July 10th.
I’d just like to know what your solution to DDOS and other bad actors is if it’s not cloudflare.
First of all, DDoS from Tor is rarely successful because the Tor network itself does not have the bandwidth, with so few exit nodes. But if you nonetheless have an attack from Tor, you stand up an onion host and forward all Tor traffic from the clearnet site to the onion site. Then, regardless of where the attack is coming from, on the clearnet side there are various tar-pitting techniques to use on high-volume suspect traffic. You can also stand up a few VPS servers and load-balance them, similar to what Cloudflare does, without selling everyone else’s soul to the US tech-giant devil.
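The tar-pitting idea can be sketched in a few lines: rather than blocking suspect high-volume traffic outright, you slow it down in proportion to its request rate. The window, threshold, and delay values here are made-up illustrative numbers, not a tuned configuration.

```python
# Minimal tar-pit sketch (illustrative only): track per-IP request times
# and stall responses from IPs exceeding a rate threshold, instead of
# blocking them outright.  All constants are arbitrary examples.

import time
from collections import defaultdict, deque

WINDOW = 10.0      # seconds of request history to keep per IP
THRESHOLD = 50     # requests per window before tar-pitting kicks in
MAX_DELAY = 5.0    # cap on the added delay, in seconds

hits = defaultdict(deque)

def tarpit_delay(ip, now=None):
    """Return how long to stall this request; grows with the request rate."""
    now = time.monotonic() if now is None else now
    q = hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:   # drop timestamps outside the window
        q.popleft()
    excess = len(q) - THRESHOLD
    return 0.0 if excess <= 0 else min(MAX_DELAY, 0.1 * excess)
```

A real deployment would do this at the reverse proxy rather than in application code, but the principle is the same: the attacker’s connections get slower as their volume rises, while normal users are unaffected.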
on something cloudflare already does extremely well.
CF does the job very poorly. The problem is you’re discounting availability to all users as a criterion. You might say #SpamHaus solves the spam problem “very well” if you neglect the fact that no one can run their own home server on a residential IP anymore, and that it’s apparently okay for mail to traverse the likes of Google & MS. A good anti-spam tool detects spam without falsely shit-canning ham. This is why SpamHaus and Cloudflare do a poor job: they marginalize whole communities and treat their ham as spam.
A walled garden means there’s actual barriers to entry. Cloudflare isn’t a barrier to entry unless you’re planning to attack an instance
Yes to your first statement. Your 2nd statement is nonsense: the pic in the OP proves I hit a barrier to entry without “planning an attack”.
or are using something like ToR
Tor users are only one legit community that Cloudflare marginalizes. People in impoverished areas have to use cheap ISPs who issue CGNAT IP addresses, which CF is also hostile toward. CF is also bot-hostile, which includes hostility toward beneficial bots as well as non-bots who appear as bots to CF’s crude detection (e.g. text browsers).
If that’s true then why are there reports of the attack bringing them down on July 15th?
Which git repo hosts the article doesn’t matter; that project is mirrored on a ½ dozen other repos. Did you follow the links in the citations? The article is well cited, but sometimes the links go stale (or become Cloudflared). If you had trouble reaching the cited sources, please let me know & I’ll get the author to fix it. Or you can file a bug report in the issues tab.