Saturday Morning Breakfast Cereal - Hop

Hovertext:
Of course you don't need AI. You can just crunch the numbers in excel.
Video from Reddit shows what could go wrong when you try to pet a—looks like a Humboldt—squid.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Hovertext:
They quietly cruise into a carwash and try to put his body in the trash.
At least some of this is coming to light:
Doublespeed, a startup backed by Andreessen Horowitz (a16z) that uses a phone farm to manage at least hundreds of AI-generated social media accounts and promote products, has been hacked. The hack reveals what products the AI-generated accounts are promoting, often without the required disclosure that these are advertisements, and allowed the hacker to take control of more than 1,000 smartphones that power the company.
The hacker, who asked for anonymity because he feared retaliation from the company, said he reported the vulnerability to Doublespeed on October 31. At the time of writing, the hacker said he still has access to the company’s backend, including the phone farm itself.
Slashdot thread.
The scene: it’s a secret three-way, upstairs at the party. 2 men, one woman, 2 minutes, one orgasm. To hear more salacious deets, tune in. We’ve heard of speed dating…but this is a whole other level. Do you have a tale to tell? Write it up and send it in: Q@Savage.Love Do you have a … Read More »
The post After Action Report #9 appeared first on Dan Savage.
Like I said last week… I’m taking a break from Struggle Session until after the new year. But I’m gonna keep sharing a question from a reader that isn’t — for reasons of length or timing — going to make it into the column. I was calling these letters “GangBangs,” since the whole gang gets … Read More »
The post Struggle Session: Not Getting Married Today appeared first on Dan Savage.
I’m sure there’s a story here:
Sources say the man had tailgated his way <https://uk.news.yahoo.com/fury-passengers-major-london-aiport-152235291.html> through to security screening and passed security, meaning he was not detected carrying any banned items.
The man deceived the BA check-in agent by posing as a family member who had their passports and boarding passes inspected in the usual way.

Hovertext:
We could build this NOW. The future of damaging children is HERE.

Hovertext:
The anti-Susan pamphleteering outside the house is also a difficulty their marriage has to work through.
For two days in September, Afghanistan had no internet. No satellite failed; no cable was cut. This was a deliberate outage, mandated by the Taliban government. It followed a more localized shutdown two weeks prior, reportedly instituted “to prevent immoral activities.” No additional explanation was given. The timing couldn’t have been worse: communities still reeling from a major earthquake lost emergency communications, flights were grounded, and banking was interrupted. Afghanistan’s blackout is part of a wider pattern. Just since the end of September, there were also major nationwide internet shutdowns in Tanzania and Cameroon, and significant regional shutdowns in Pakistan and Nigeria. In all cases but one, authorities offered no official justification or acknowledgment, leaving millions unable to access information, contact loved ones, or express themselves through moments of crisis, elections, and protests.
The frequency of deliberate internet shutdowns has skyrocketed since the first notable example in Egypt in 2011. Together with our colleagues at the digital rights organisation Access Now and the #KeepItOn coalition, we’ve tracked 296 deliberate internet shutdowns in 54 countries in 2024, and at least 244 more in 2025 so far.
This is more than an inconvenience. The internet has become an essential piece of infrastructure, affecting how we live, work, and get our information. It’s also a major enabler of human rights, and turning off the internet can worsen or conceal a spectrum of abuses. These shutdowns silence societies, and they’re getting more and more common.
Shutdowns can be local or national, partial or total. In total blackouts, like those in Afghanistan or Tanzania, nothing works. But shutdowns are often targeted more granularly. Cellphone internet can be blocked but not broadband. Specific news sites, social media platforms, and messaging systems can be blocked while overall network access is left unaffected—as when Brazil shut off X (formerly Twitter) in 2024. Sometimes bandwidth is simply throttled, making everything slow and unreliable.
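To make those distinctions concrete, here is a minimal sketch (my illustration, not anything from the essay) of how one might tell the three failure modes apart from inside a country: probe a handful of endpoints, note which ones answer and how long they take, and compare the pattern. The hostnames and the five-second "slow" threshold are arbitrary placeholders; real measurement projects such as OONI do this far more rigorously.

```python
# Illustrative only: a tiny probe that times HTTPS fetches to a few endpoints
# and guesses whether the pattern looks like a total outage, targeted blocking,
# or throttling. Hostnames and thresholds are placeholders, not measurement science.
import time
import urllib.error
import urllib.request

PROBES = {
    "general web": "https://example.com",
    "reference site": "https://www.wikipedia.org",
    "social platform": "https://www.reddit.com",
}

SLOW_SECONDS = 5.0  # arbitrary "this feels throttled" threshold


def probe(url: str, timeout: float = 10.0) -> tuple[bool, float]:
    """Return (reachable, elapsed_seconds) for one HTTPS fetch."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read(1024)  # pull a little data to exercise the path
        return True, time.monotonic() - start
    except urllib.error.HTTPError:
        # The server answered, even if with an error status: the network path is up.
        return True, time.monotonic() - start
    except Exception:
        # Timeouts, resets, DNS failures: treat as unreachable.
        return False, time.monotonic() - start


def classify(results: dict[str, tuple[bool, float]]) -> str:
    reachable = [ok for ok, _ in results.values() if ok]
    if not reachable:
        return "nothing reachable: consistent with a total blackout (or a local failure)"
    if len(reachable) < len(results):
        return "only some endpoints reachable: consistent with targeted blocking"
    if max(elapsed for _, elapsed in results.values()) > SLOW_SECONDS:
        return "everything reachable but slow: possible throttling"
    return "no obvious interference"


if __name__ == "__main__":
    results = {name: probe(url) for name, url in PROBES.items()}
    for name, (ok, elapsed) in results.items():
        print(f"{name:16} reachable={ok} time={elapsed:.1f}s")
    print(classify(results))
```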
Sometimes, internet shutdowns are used in political or military operations. In recent years, Russia and Ukraine have shut off parts of each other’s internet, and Israel has repeatedly shut off Palestinians’ internet in Gaza. Shutdowns of this type happened 25 times in 2024, affecting people in 13 countries.
Reasons for the shutdowns are as varied as the countries that perpetrate them. General information control is just one. Shutdowns often come in response to political unrest, as governments try to prevent people from organizing and getting information; Panama had a regional shutdown this summer in response to protests. Or during elections, as opposition parties utilize the internet to mobilize supporters and communicate strategy. Belarusian president Alyaksandr Lukashenko, who has ruled since 1994, reportedly disabled the internet during elections earlier this year, following a similar move in 2020. But they can also be more banal. Access Now documented countries disabling parts of the internet during student exam periods at least 16 times in 2024, including Algeria, Iraq, Jordan, Kenya, and India.
Iran’s shutdowns in 2022 and June of this year are good examples of highly sophisticated efforts, with layers of shutdowns that end up forcing people off the global internet and onto Iran’s surveilled, censored national intranet. India, meanwhile, has been the world shutdown leader for many years, with 855 distinct incidents. Myanmar is second with 149, followed by Pakistan and then Iran. All of this information is available on Access Now’s digital dashboard, where you can see breakdowns by region, country, type, geographic extent, and time.
There was a slight decline in shutdowns during the early years of the pandemic, but they have increased sharply since then. The reasons are varied, but a lot can be attributed to the rise in protest movements related to economic hardship and corruption, and general democratic backsliding and instability. In many countries today, shutdowns are a knee-jerk response to any form of unrest or protest, no matter how small.
A country’s ability to shut down the internet depends a lot on its infrastructure. In the US, for example, shutdowns would be hard to enforce. As we saw when discussions about a potential TikTok ban ramped up two years ago, the complex and multifaceted nature of our internet makes it very difficult to achieve. However, as we’ve seen with total nationwide shutdowns around the world, the ripple effects in all aspects of life are immense. (Remember the effects of just a small outage—CrowdStrike in 2024—which crippled 8.5 million computers and cancelled 2,200 flights in the US alone?)
The more centralized the internet infrastructure, the easier it is to implement a shutdown. If a country has just one cellphone provider, or only two fiber optic cables connecting the nation to the rest of the world, shutting them down is easy.
Shutdowns are not only more common, but they’ve also become more harmful. Unlike in years past, when the internet was a nice option to have, or perhaps when internet penetration rates were significantly lower across the Global South, today the internet is an essential piece of societal infrastructure for the majority of the world’s population.
Access Now has long maintained that denying people access to the internet is a human rights violation, and has collected harrowing stories from places like Tigray in Ethiopia, Uganda, Annobon in Equatorial Guinea, and Iran. The internet is an essential tool for a spectrum of rights, including freedom of expression and assembly. Shutdowns make documenting ongoing human rights abuses and atrocities more difficult or impossible. They also affect people’s daily lives, businesses, healthcare, education, finances, security, and safety, depending on the context. Shutdowns in conflict zones are particularly damaging, as they impact the ability of humanitarian actors to deliver aid and make it harder for people to find safe evacuation routes and civilian corridors.
Defenses on the ground are slim. Depending on the country and the type of shutdown, there can be workarounds. Everything, from VPNs to mesh networks to Starlink terminals to foreign SIM cards near borders, has been used with varying degrees of success. The tech-savvy sometimes have other options. But for most everyone in society, no internet means no internet—and all the effects of that loss.
The international community plays an important role in shaping how internet shutdowns are understood and addressed. World bodies have recognized that reliable internet access is an essential service, and could put more pressure on governments to keep the internet on in conflict-affected areas. But while international condemnation has worked in some cases (Mauritius and South Sudan are two recent examples), countries seem to be learning from each other, resulting in both more shutdowns and new countries perpetrating them.
There’s still time to reverse the trend, if that’s what we want to do. Ultimately, the question comes down to whether or not governments will enshrine both a right to access information and freedom of expression in law and in practice. Keeping the internet on is a norm, but the trajectory from a single internet shutdown in 2011 to 2,000 blackouts 15 years later demonstrates how embedded the practice has become. The implications of that shift are still unfolding, but they reach far beyond the moment the screen goes dark.
This essay was written with Zach Rosson, and originally appeared in Gizmodo.
Meta, the company controlled by babyfash Trump fan Mark Zuckerberg, the board of which thought it was reasonable to support Fascist politicians in the hope of avoiding regulation, and whose Facebook service has or had a “17-strike” policy for known sex trafficking accounts and not only doesn’t remove fraud posts but charges known fraud operations higher rates for their ads, puked this vile mixture of plagiarism, artist’s blood, and AI sludge posing as photography onto BART station walls in San Francisco:

And of course it’s shit. Of course it’s shit. Holy gods, it is such hot garbage, and I’m not even talking about the implied higher situational awareness of someone wearing an AI PHONE ON THEIR FACE over people looking down at their regular phones
tho’ that’s a pretty fuckin’ hot take for them to have right there too, I have to say
I’m talking about the raw clownery of this image. Holy hell. Let’s zoom in on one of the insults to imagery:

And I’m not even mentioning the ghost in the room, by which I mean the four ghosts in this one particular rendered room:

And I have to ask:
HOW CAN ALL THIS STILL BE THIS SHITTY AND PASS MUSTER FOR THEM? HOW?
Christ it’s so insultingly bad. It’s infuriatingly bad. As a photography substitute, as AI-generated Not Art. It’s… it’s like it’s Anti-art, an opposite of art that mocks the real, that imitates while degrading both itself and its opposite.
Anybody can make bad art. I’ve made plenty. Also some good art.
But it takes real work to make anti-art.
And that’s what makes me want to fucking scream.
We all know how monstrously wealthy Fuckerberg is. How much money he and his company have. How he could jerk off with thousand dollar bills, wipe himself clean, and burn the dirties the rest of his wretched life and not even notice the difference.
So when you see that they’d rather put out this slapdash, revolting, uncaring – no, sneering – insult of a render than pay a photographer and a few models a few bucks for an afternoon photo shoot, what’s that say?
It’s not the money. He has all the money. All of it. Well, him, and the other TESCREAL fascists.
I think… I think I have to think… that it’s a matter of principle for them. A sick principle, but a principle nonetheless. It has to be, because otherwise it makes no. goddamn. sense.
I literally have to conclude that they hate art, and even more, hate artists. They have to, to consider this better. It must be principle for them to not care about artistic creative work, to not pay artistic workers. It has to be principle to hold all that in contempt, to say, “see? We just steal everything you’ve ever done, throw it into our churn machine, and then rub out our own version in half an hour to show you’re not any better than us. And you can’t do shit about it.”
They’ve made it clear that they’d rather spew this kind of rancid splatter, this metaphorical scrawl of shit, urine, blood, and theft across the walls of a city than break that principle.
And they’ll enjoy it.
I used to think, once upon a time, that Syndrome from The Incredibles was a little too on the nose, a little too pointed, maybe – dare I say it – a little too cartoonish for even a cartoon.
I’m starting to think maybe he wasn’t on the nose enough.
But that’s flippant, and maybe a little too easy.
What I really feel is that… I’m finally starting to understand – really understand, at a gut level – what Hayao Miyazaki meant when he called AI “art” an insult to life itself.
Because, well, almost anything can be art. Art is an observation and an intent, as much as anything else, and handing that mantle to something which has no awareness, no observation, no actual knowledge of meaning, no ability to opine, no personhood at all, a chum machine with less actual awareness than a housefly maggot…
…how could that be anything less than an insult to life, itself?
It took me a while to understand, Hayao. But I think I’ve finally got there.
Posted via Solarbird{y|z|yz}, Collected.

Hovertext:
Anyone complaining about the math just needs bigger or smaller pasta.
My partner is still in love with her ex. For ten years she believed him to be the love of her life. It’s been two years since they split and there’s no possibility of them getting back together because he betrayed her in the most despicable possible way. It was the kind of betrayal that … Read More »
The post Good Working Order appeared first on Dan Savage.
A married woman and her husband agreed to open the marriage up, with clear rules and boundaries. Guess what! He flagrantly broke the rules. Now, she is considering sleeping with someone else as a form of revenge. Too toxic? A divorced woman is still very good friends with her husband. She’s so close with him, … Read More »
The post When Subs Date Subs appeared first on Dan Savage.
New report: “The Party’s AI: How China’s New AI Systems are Reshaping Human Rights.” From a summary article:
China is already the world’s largest exporter of AI-powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. By exposing the full scope of China’s AI-driven control apparatus, this report presents clear, evidence-based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI-enabled repression and human rights violations, and China’s growing efforts to project that repression beyond its borders.
The report focuses on four areas where the CCP has expanded its use of advanced AI systems most rapidly between 2023 and 2025: multimodal censorship of politically sensitive images; AI’s integration into the criminal justice pipeline; the industrialisation of online information control; and the use of AI-enabled platforms by Chinese companies operating abroad. Examined together, those cases show how new AI capabilities are being embedded across domains that strengthen the CCP’s ability to shape information, behaviour and economic outcomes at home and overseas.
Because China’s AI ecosystem is evolving rapidly and unevenly across sectors, we have focused on domains where significant changes took place between 2023 and 2025, where new evidence became available, or where human rights risks accelerated. Those areas do not represent the full range of AI applications in China but are the most revealing of how the CCP is integrating AI technologies into its political control apparatus.
News article.
Cast your mind back to May of this year: Congress was in the throes of debate over the massive budget bill. Amidst the many seismic provisions, Senator Ted Cruz dropped a ticking time bomb of tech policy: a ten-year moratorium on the ability of states to regulate artificial intelligence. To many, this was catastrophic. The few massive AI companies seem to be swallowing our economy whole: their energy demands are overriding household needs, their data demands are overriding creators’ copyright, and their products are triggering mass unemployment as well as new types of clinical psychoses. In a moment where Congress is seemingly unable to act to pass any meaningful consumer protections or market regulations, why would we hamstring the one entity evidently capable of doing so—the states? States that have already enacted consumer protections and other AI regulations, like California, and those actively debating them, like Massachusetts, were alarmed. Seventeen Republican governors wrote a letter decrying the idea, and it was ultimately killed in a rare vote of bipartisan near-unanimity.
The idea is back. Before Thanksgiving, a House Republican leader suggested they might slip it into the annual defense spending bill. Then, a draft document leaked outlining the Trump administration’s intent to enforce the state regulatory ban through executive powers. An outpouring of opposition (including from some Republican state leaders) beat back that notion for a few weeks, but on Monday, Trump posted on social media that the promised Executive Order is indeed coming soon. That would put a growing cohort of states, including California and New York, as well as Republican strongholds like Utah and Texas, in jeopardy.
The constellation of motivations behind this proposal is clear: conservative ideology, cash, and China.
The intellectual argument in favor of the moratorium is that “freedom”-killing state regulation on AI would create a patchwork that would be difficult for AI companies to comply with, which would slow the pace of innovation needed to win an AI arms race with China. AI companies and their investors have been aggressively peddling this narrative for years now, and are increasingly backing it with exorbitant lobbying dollars. It’s a handy argument, useful not only to kill regulatory constraints, but also—companies hope—to win federal bailouts and energy subsidies.
Citizens should parse that argument from their own point of view, not Big Tech’s. Preventing states from regulating AI means that those companies get to tell Washington what they want, but your state representatives are powerless to represent your own interests. Which freedom is more important to you: the freedom for a few near-monopolies to profit from AI, or the freedom for you and your neighbors to demand protections from its abuses?
There is an element of this that is more partisan than ideological. Vice President J.D. Vance argued that federal preemption is needed to prevent “progressive” states from controlling AI’s future. This is an indicator of creeping polarization, where Democrats decry the monopolism, bias, and harms attendant to corporate AI and Republicans reflexively take the opposite side. It doesn’t help that some in the parties also have direct financial interests in the AI supply chain.
But this does not need to be a partisan wedge issue: both Democrats and Republicans have strong reasons to support state-level AI legislation. Everyone shares an interest in protecting consumers from harm created by Big Tech companies. In leading the charge to kill Cruz’s initial AI moratorium proposal, Republican Senator Marsha Blackburn explained that “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives … we can’t block states from making laws that protect their citizens.” More recently, Florida Governor Ron DeSantis has said he wants to regulate AI in his state.
The often-heard complaint that it is hard to comply with a patchwork of state regulations rings hollow. Pretty much every other consumer-facing industry has managed to deal with local regulation—automobiles, children’s toys, food, and drugs—and those regulations have been effective consumer protections. The AI industry includes some of the most valuable companies globally and has demonstrated the ability to comply with differing regulations around the world, including the EU’s AI and data privacy regulations, which are substantially more onerous than those so far adopted by US states. If we can’t leverage state regulatory power to shape the AI industry, to what industry could it possibly apply?
The regulatory superpower that states have here is not size and force, but rather speed and locality. We need the “laboratories of democracy” to experiment with different types of regulation that fit the specific needs and interests of their constituents and evolve responsively to the concerns they raise, especially in a consequential and rapidly changing area such as AI.
We should embrace the ability of regulation to be a driver—not a limiter—of innovation. Regulations don’t restrict companies from building better products or making more profit; they help channel that innovation in specific ways that protect the public interest. Drug safety regulations don’t prevent pharma companies from inventing drugs; they force them to invent drugs that are safe and efficacious. States can direct private innovation to serve the public.
But, most importantly, regulations are needed to prevent the most dangerous impact of AI today: the concentration of power associated with trillion-dollar AI companies and the power-amplifying technologies they are producing. We outline the specific ways that the use of AI in governance can disrupt existing balances of power, and how to steer those applications towards more equitable balances, in our new book, Rewiring Democracy. In the nearly complete absence of Congressional action on AI over the years it has swept the world’s attention, it has become clear that states are the only effective policy levers we have against that concentration of power.
Instead of impeding states from regulating AI, the federal government should support them to drive AI innovation. If proponents of a moratorium worry that the private sector won’t deliver what they think is needed to compete in the new global economy, then we should engage government to help generate AI innovations that serve the public and solve the problems most important to people. Following the lead of countries like Switzerland, France, and Singapore, the US could invest in developing and deploying AI models designed as public goods: transparent, open, and useful for tasks in public administration and governance.
Maybe you don’t trust the federal government to build or operate an AI tool that acts in the public interest? We don’t either. States are a much better place for this innovation to happen because they are closer to the people, they are charged with delivering most government services, they are better aligned with local political sentiments, and they have earned greater trust. They’re where we can test, iterate, compare, and contrast regulatory approaches that could inform eventual and better federal policy. And, while the costs of training and operating performant AI tools like large language models have declined precipitously, the federal government can play a valuable role here in funding cash-strapped states to lead this kind of innovation.
This essay was written with Nathan E. Sanders, and originally appeared in Gizmodo.
EDITED TO ADD: Trump signed an executive order banning state-level AI regulations hours after this was published. This is not going to be the last word on the subject.