@djc i mean really, is there anywhere else?
@djc i’ll keep that in mind next time i need a plumber!
from “Confiscate Their Money”, by #HamiltonNolan https://www.hamiltonnolan.com/p/confiscate-their-money ht @scott
the “broken windows fallacy” is indeed a fallacy, but let the windowpane lobby gain a lot of influence and you’ll find policy developing to encourage just this kind of “growth”.
@carolannie You have solid values.
if i were a politician, i’d be the Congressman from Dadjokia. I’d be like, “If you elect me, I can’t promise to work miracles. I’ll work YOU-ricles!”
@djc you are the power behind the power behind the throne!
Do you have a personal relationship with your local government?
That is, do you personally know your representative to local government or policymaking executives (mayor, city manager)?
Do you physically attend and meaningfully participate in local government meetings?
Any of the above would suffice.
“The kleptocrats aren’t just stealing money. They’re stealing democracy” by @anneapplebaum https://www.ft.com/content/0876ef7a-bf88-463e-b8ca-bd9b4a11665c
@admitsWrongIfProven i’m not sure bunker life would be all that superior to the alternative, at least not for very long.
In the 1950s bunkers were a middle-class neurosis, but now they’re an upscale luxury.
@sqrtminusone @GuerillaOntologist The fact that WhatsApp is (at least in theory) secure helps immunize Zuckerberg from ownership. If a platform’s security is so weak that every state security service has everything they want all the time, there’s no need to own the principal. If it’s so secure the principal has no access, same. When the principal can get access but unaffiliated security services cannot, that’s when the principal becomes a very desirable target. 1/
@sqrtminusone @GuerillaOntologist All that said, I wouldn’t presume WhatsApp is secure from US state intelligence gathering, even though the label promises it should be. I expect Zuckerberg is owned, in that way. But Meta is a bureaucratic behemoth in a way Telegram is not. The person of the principal (as opposed to other sources of access) is arguably more relevant for Telegram. 2/
@sqrtminusone @GuerillaOntologist Telegram is arguably pretty unique in the scale and intelligence value of what it hosts, combined with how personally it is controlled. 3/
@sqrtminusone @GuerillaOntologist All that said, I’m not affirmatively arguing this is what happened. I do think it facially plausible, but of course the most likely thing is that a state action is just what it claims and appears to be, not some conspiracy.
I do think, from the outside, both might be plausible here. Thus the poll, “just asking questions!” /fin
@louis I think we actually have to think about that, in an application-specific way. If a library makes a book available, is the library liable? Potentially yes, but we put pretty wide bumpers around that, because we see the harms of censoring books to be a greater hazard than making them available, and so choose a direction to err. 1/
@louis I suspect that for search engines, we’d make a quite similar choice, but we would apply more scrutiny to, say, TikTok. Customized content on a large platform is “complicated” relative to simple mass broadcast, but the scale of potential harms can be similar, and I’m not sure why we’d want complications to become exonerations. Are the benefits of these institutions so great that we want to bear more harms? That’s a value judgment we get to collectively make. 2/
@louis That said, we did deal with these issues, with television and film, and the bar to liability was pretty high. Getting rid of the blanket shield in Section 230 doesn’t mean any little thing will get you sued. Here’s a law review lamenting how hard prosecuting very similar events was during the 1980s. There was no Section 230. Courts still tended to err on the side of not chilling speech. 3/ https://digital.sandiego.edu/cgi/viewcontent.cgi?article=1659&context=sdlr
@louis ( Are you old enough to remember this event? I don’t know if there was ultimately any liability. https://archive.is/xxoYY ) /fin
@louis @matthewstoller I think you are responsible for the algorithms you deploy and the forums you provide. I don’t want to see internet forums, particularly small ones, disappear, so I’d include some safe harbors, but they’d be narrow and tailored to smaller-scale operators, for whom the scale and probability of potential harms are mechanically lower. For large forums, it’s like 80s network TV. You have to be careful about what you broadcast.
i really dislike it when my internet acquaintances die. please don’t.
@louis @matthewstoller Section 230 is and has long been a very polarized issue! @mmasnick is very much on one side of it. A bit more ambivalently perhaps, but I’m on the other side. https://www.theatlantic.com/ideas/archive/2021/01/trump-fighting-section-230-wrong-reason/617497/ 1/
@louis @matthewstoller @mmasnick When Section 230 was passed and the early caselaw turned it broad and impenetrable, the issue was mostly what you might call “negative moderation”. Refraining from distributing things you think bad shouldn’t make you responsible, as happened perversely to Prodigy. 2/
@louis @matthewstoller @mmasnick But I think we as a society are coming to a decision that so broad an immunity is untenable for “positive moderation”, for what you choose to amplify. We all agree that moderation choices, positive or negative, are themselves 1st Amendment protected expressive speech. We all agree that refusing to carry something you think bad shouldn’t recruit new liability for what you allow relative to not moderating at all. 3/
@louis @matthewstoller @mmasnick But for content you choose to highlight or amplify in ways that go beyond some “neutral” presentation, and certainly for things you are paid to amplify, I think it now exceedingly likely that the immunity will be clipped, to some degree. 4/
@louis @matthewstoller @mmasnick Whether that’s good or bad, in my view, will depend upon details. There are obviously terrible devils in details of what “neutral” might mean, or “positive vs negative moderation”. 5/
@louis @matthewstoller @mmasnick I don’t think the fully expansive Section 230 status quo is politically sustainable, for the good reason that it’s bad policy. Publisher and distributor liability exist in other contexts for good reasons, and those reasons apply to algorithmic mass-audience publishers at least as much as they apply to traditional ones. 6/
@louis @matthewstoller @mmasnick The very expansive interpretation of Section 230 that has obtained since the 1990s was based on a supposition that internet experiments could be utopian, and we wanted to err on the side of protecting rather than disciplining and potentially discouraging them. The results of that interpretation are in, no longer an experiment, and not utopian. We as a public are revisiting our courts’ earlier choices. /fin
The blanket immunity courts have interpreted onto Section 230 is growing threadbare. cf @matthewstoller https://www.thebignewsletter.com/p/judges-rule-big-techs-free-ride-on
@sqrtminusone @GuerillaOntologist My understanding is he has plaintext access to nearly everything that goes across Telegram if he wants it. In that circumstance, there’s arguably no country and no security service and no amount of money that can protect you. You are too valuable for a state actor not to own. If that’s right, the question before you isn’t how to protect yourself from state actors, but which state actor to be protected (and owned) by.
@GuerillaOntologist I think that’s a fair view! I do see others expressing the more cloak and dagger view! I honestly don’t have a view, but I was curious what others thought.
@_dm right back atcha, agreed!