• [RISKS] (no subject)

    From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Sat Jan 11 19:16:17 2025
    Risks Digest 34.52

    RISKS-LIST: Risks-Forum Digest Saturday 11 January 2025 Volume 34 : Issue 52

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.52>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    10 killed and dozens injured in pickup-truck attack on New Orleans crowd
    (Lauren Weinstein)
    'Fundamentally wrong': Self-driving Tesla steers Calif. tech
    founder onto train tracks (SFGate)
    Driver accidentally disconnects autopilot, crashes car
    (Lars-Henrik Eriksson)
    Driver in Las Vegas Cybertruck explosion used ChatGPT to plan
    blast, authorities say (NBC News)
    It's not just Tesla. Vehicles amass huge troves of possibly
    sensitive data. (WashPost)
    Tech allows Big Auto to evolve into Big Brother
    (LA Times via Jim Geissman)
    Wrong turn from GPS leaves car abandoned on Colorado ski run (9news.com)
    A Waymo robotaxi and a Serve delivery robot collided in Los Angeles
    (TechCrunch)
    Waymo robotaxis can make walking across the street a game of chicken
    (The Washington Post)
    Trifecta of articles in *LA Times* about cars (Steve Bacher)
    LA Sheriff outage (LA Times)
    Eutelsat resolves OneWeb leap year software glitch
    after two-day outage (SpaceNews)
    Traffic lights will have a fourth color in 2025
    (ecoticias via Steve Bacher)
    FAA chief: Boeing must shift focus to safety over profit
    (LA Times)
    ARRL hit with ransomware (ARRL)
    Taiwan Suspects China of Latest Undersea Cable Attack
    (Tom Nicholson)
    The Memecoin Shenanigans Are Just Getting Started (WiReD)
    Apple to pay $95M to settle lawsuit accusing Siri of
    eavesdropping (CBC)
    Meta Getting Rid of Fact Checkers (Clare Duffy)
    Huge problems with axing fact-checkers, Meta oversight
    board says (BBC)
    Meta hosts AI chatbots of 'Hitler,' 'Jesus Christ,' Taylor Swift
    (NBC News)
    God can take Sunday off
    (NYTimes via Tom Van Vleck)
    Several items on Google and Meta (Lauren Weinstein)
    AI means the end of Internet search as we've known it (Technology Review)
    Is it still 'social media' if it's overrun by AI? (CBC)
    AI Incident Database (Steve Bacher)
    Apple's AI News Summaries and Inventions (BBC)
    What real people think about Google Search today (Lauren Weinstein)
    WARNING: Google Voice is flagging LEGITIMATE robocalls from
    insurance companies to their customers in the fires as spam
    (Lauren Weinstein)
    A non-tech analogy for Google Search AI Overviews (Lauren Weinstein)
    Happy new year, compute carefully (Tom Van Vleck)
    How to understand Generative AI (Lauren Weinstein)
    Google censoring my AI criticism? (Lauren Weinstein)
    U.S. newspapers are deleting old crime stories offering
    subjects a clean slate (The Guardian)
    EU Commission Fined for Transferring User Data
    to Meta in Violation of Privacy Laws (THN)
    The Ghosts in the Spotify Machine (Liz Pelly)
    Spotify (Rob Slade)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Wed, 1 Jan 2025 09:09:56 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: 10 killed and dozens injured in pickup-truck attack on New Orleans
    crowd

    Driver was killed by police. It is reported that he shot at them and
    also had explosive devices. Pickup is reportedly registered to a 42
    year old man from Texas. -L

    ------------------------------

    Date: Sat, 4 Jan 2025 09:45:55 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: 'Fundamentally wrong': Self-driving Tesla steers Calif. tech
    founder onto train tracks (SFGate)

    Jesse Lyu trusts his Tesla’s “self-driving” technology; he’s taken it to
    work, and he’s gone on 45-minute drives without ever needing to intervene. He’s a “happy customer,” he told SFGATE. But on Thursday, his Tesla scared
    him, badly.

    Lyu, the founder and CEO of artificial intelligence gadget startup Rabbit,
    was on the 15-minute drive from his apartment to his office in downtown
    Santa Monica. He’d turned on his car’s self-driving features, called “Autopilot” and “Full Self-Driving (Supervised),” after pulling out of his
    parking garage. The pay-to-add features are meant to drive the Tesla with “minimal driver intervention,” steering, stopping and accelerating on highways and even in city traffic, according to Tesla's website. Lyu was cruising along, resting his arms on the steering wheel but letting the car direct itself, he said in a video interview Friday.

    Then, Lyu’s day took a turn for the worse. At a stoplight, his Tesla turned left onto Colorado Avenue, but it missed the lane for cars. Instead, it
    plunged onto a street-grade light rail track between the road’s vehicle traffic lanes, paved but meant solely for trains on LA’s Metro E Line. He couldn’t just move over — a low concrete barrier separates the lanes, and a fence stands on the other side.

    “It’s just f–king crazy,” he said, narrating a video he posted to X of the
    incident. “I’ve got nowhere to go. And, you can tell from behind -- the
    train’s right here.” (He pointed to the oncoming train, stopped about a
    block behind his car.) [...]

    https://www.sfgate.com/tech/article/tesla-fsd-jesse-lyu-train-20014242.php

    ------------------------------

    Date: Sat, 4 Jan 2025 10:25:39 +0100
    From: Lars-Henrik Eriksson <lhe@it.uu.se>
    Subject: Driver accidentally disconnects autopilot, crashes car

    A Swedish driver was convicted for reckless driving and insurance fraud
    after crashing his Tesla.

    To show off, he engaged the autopilot at a speed of 70-80 km/h and then
    moved over into the passenger seat. After a short while the car
    crashed. Fortunately no one was seriously hurt. It was initially seen as a normal car accident and his insurance compensated him for the car which was
    a total loss, but his (now ex) wife had recorded everything from the back
    seat and later turned the video over to the police.

    The police asked him if he was aware that the autopilot would disengage if
    the driver seat belt was released and he replied that he wasn't.

    The risk here is not primarily one of idiot drivers but of the increasing complexity of modern cars where the drivers don't fully understand how they behave and there is no real pressure to motivate them. In traffic, you can
    see that drivers frequently mishandle such a relatively simple thing as automatic front and rear lights.

    In aviation, pilots of larger aircraft have to take formal training to completely understand the aircraft systems. Even with smaller aircraft --
    which may have less complex systems than modern cars -- pilots are expected
    to read up on how the aircraft systems operate.

    (https://www.unt.se/nyheter/tarnsjo/artikel/filmbeviset-trodde-bilen-var-sjalvkorande-kraschade/j8ex8emj, in Swedish and behind a paywall.)

    ------------------------------

    Date: Wed, 8 Jan 2025 06:40:48 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Driver in Las Vegas Cybertruck explosion used ChatGPT to plan
    blast, authorities say (NBC News)

    NBC News (01/07/25) Tom Winter, Andrew Blankstein, and Antonio Planas

    The soldier who authorities believe blew up a Cybertruck on New Year's Day
    in front of the entrance of the Trump International Hotel in Las Vegas used artificial intelligence to guide him about how to set off the explosion, officials said Tuesday.

    Matthew Alan Livelsberger, 37, queried ChatGPT for information about how he could put together an explosive, how fast a round would need to be fired for the explosives found in the truck to go off -- not just catch fire -- and what laws he would need to get around to get the materials, law enforcement officials said.

    An OpenAI spokesperson said, "ChatGPT responded with information already publicly available on the Internet and provided warnings against harmful or illegal activities."

    https://www.nbcnews.com/news/us-news/driver-las-vegas-cybertruck-explosion-used-chatgpt-plan-blast-authorit-rcna186704

    ------------------------------

    Date: Sat, 4 Jan 2025 08:46:42 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: It's not just Tesla. Vehicles amass huge troves of possibly
    sensitive data. (WashPost)

    Video footage and other data collected by Tesla helped law enforcement
    quickly piece together how a Cybertruck came to explode outside the Trump International Hotel in Las Vegas on New Year's Day.

    The trove of digital evidence also served as a high-profile demonstration of how much data modern cars collect about their drivers and those around them.

    Data privacy experts say the investigation -- which has determined that
    the driver, active-duty U.S. Army soldier Matthew Livelsberger, died by
    suicide before the blast -- highlights how car companies vacuum up reams of data that can clear up mysteries but also be stolen or given to third
    parties without drivers' knowledge. There are few regulations controlling
    how and when law enforcement authorities can access data in cars, and
    drivers are often unaware of the vast digital trail they leave behind.
    ``These are panopticons on wheels,'' said Albert Fox Cahn, who founded the Surveillance Technology Oversight Project, an advocacy group that argues the volume and precision of data collected can pose civil liberties concerns for people in sensitive situations, like attending protests or going to abortion clinics.

    Federal and state officials have begun to scrutinize companies' use of car
    data as evidence has emerged of its misuse. There have been reports that abusive spouses tracked partners' locations, and that insurers raised rates based on driving behavior data shared by car companies. There have also been cases in which local police departments sought video from Tesla cars that
    may have recorded a crime, or obtained warrants to tow vehicles to secure
    such footage. [...]

    https://www.msn.com/en-us/news/technology/it-s-not-just-tesla-vehicles-amass-huge-troves-of-possibly-sensitive-data/ar-AA1wX8Lo

    ------------------------------

    Date: Mon, 6 Jan 2025 07:33:49 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Tech allows Big Auto to evolve into Big Brother

    [Another on this topic]

    Your car is spying on you.

    That is one takeaway from the fast, detailed data that Tesla collected on
    the driver of one of its Cybertrucks that exploded in Las Vegas last week.

    Privacy data experts say the deep dive by Elon Musk's company was impressive but also shines a spotlight on a difficult question as vehicles become more like computers on wheels.

    Is your car company violating your privacy rights?

    "You might want law enforcement to have the data to crack down on criminals, but can anyone have access to it?" said Jodi Daniels, chief executive of the privacy consulting firm Red Clover Advisors. "Where is the line?"

    Many of the latest cars not only know where you've been and where you are going, but also often have access to your contacts, your call logs, your
    texts and other sensitive information, thanks to cellphone syncing.

    The data collected by Musk's electric car company after the Cybertruck
    packed with fireworks burst into flames in front of the Trump International Hotel proved valuable to police in helping track the driver's movements.

    http://enewspaper.latimes.com/infinity/article_share.aspx?guid=432286e7-91d3-4e45-9e57-aa95a830767e

    ------------------------------

    Date: Tue, 7 Jan 2025 03:03:33 -0700
    From: Jim Reisert AD1C <jjreisert@alum.mit.edu>
    Subject: Wrong turn from GPS leaves car abandoned on Colorado ski
    run (9news.com)

    Melissa Reeves, 9NEWS, Updated: 10:19 PM MST January 6, 2025

    The Summit County Sheriff's Office (SCSO) posted pictures on social
    media of an abandoned car at Keystone Resort that was left behind on a
    ski run overnight.

    The sheriff's office said the driver left the car after it got stuck
    in the snow, but they left a note on the car's windshield for the
    resort and police that made it easy to find them.

    The note explained that the driver was following directions from a GPS
    as they were on their way to visit a friend who lives in nearby
    employee housing.

    https://www.9news.com/article/news/local/colorado-news/driver-makes-wrong-turn-keystone-ski-run/73-b54a9f76-451e-44b9-b5e8-014d28963a6d

    ------------------------------

    Date: Fri, 3 Jan 2025 18:45:51 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: A Waymo robotaxi and a Serve delivery robot collided in
    Los Angeles (TechCrunch)

    On 27 Dec 2024, a Waymo robotaxi and a Serve Robotics sidewalk delivery
    robot collided at a Los Angeles intersection, according to a video that's circulating on social media.

    The footage shows a Serve bot crossing a street in West Hollywood at night
    and trying to get onto the sidewalk. It reached the curb, backed up a little
    to correct itself and started moving toward the ramp. That's when a Waymo
    making a right turn hit the little bot. [...]

    https://techcrunch.com/2024/12/31/a-waymo-robotaxi-and-a-serve-delivery-robot-collided-in-los-angeles/

    ------------------------------

    Date: Mon, 30 Dec 2024 15:24:37 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Waymo robotaxis can make walking across the street a game of
    chicken (The Washington Post)

    On roads teeming with robotaxis, crossing the street can be harrowing -- Our tech columnist captured videos of Waymo self-driving cars failing to stop
    for him at a crosswalk. How does an AI learn how to break the law?

    https://www.washingtonpost.com/technology/2024/12/30/waymo-pedestrians-robotaxi-crosswalks/

    ------------------------------

    Date: Mon, 6 Jan 2025 06:42:54 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Trifecta of articles in *LA Times* about cars

    Los Angeles man is trapped in circling Waymo on way to airport: 'Is
    somebody playing a joke?'
    [Matthew Kruk spotted this one:
    Mike Johns boarded a driverless Waymo taxi to an airport in Scottsdale,
    Arizona, but it began spinning in circles in a parking lot. He filmed the
    moment he was trapped in the vehicle, unable to stop the car or get help.
    Johns said he almost missed his flight.
    https://www.bbc.com/news/videos/c70e2g09ng9o]

    LA tech entrepreneur Mike Johns posted a video of his call to a customer service representative for Waymo to report that the car kept turning in
    circles.

    https://www.latimes.com/california/story/2025-01-05/los-angeles-man-trapped-in-circling-waymo-says-he-missed-his-flight-home

    [Jim Geissman also noted it. PGN]

    ------------------------------

    Date: Thu, 2 Jan 2025 09:21:47 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: LA Sheriff outage (LA Times)

    A few hours before the ball dropped on New Year's Eve, the computer dispatch system for the Los Angeles County Sheriff's Department crashed, rendering
    all patrol car computers nearly useless and forcing deputies to handle all calls by radio, according to officials and sources in the department.

    Department leaders first learned of the problem around 8 p.m., when deputies
    at several sheriff's stations began having trouble logging onto their patrol car computers, officials told The Times in a statement.

    The department said it eventually determined its computer-aided dispatch program -- known as CAD -- was "not allowing personnel to log on with the
    new year, making the CAD inoperable."

    It's not clear how long it will take to fix the problem, but in the meantime deputies and dispatchers are handling everything old-school -- using their radios instead of patrol car computers.

    "It's our own little Y2K," a deputy who was working Wednesday morning told
    The Times.

    https://www.latimes.com/california/story/2025-01-01/l-a-sheriffs-dispatch-system-crashes-on-new-years-eve

    And there is more on this -- a "temporary fix". http://enewspaper.latimes.com/infinity/article_share.aspx?guid=8276009d-5b4b-4787-bece-ec72b2bbe0df

    [Also noted by Jan Wolitzky. Also, Paul Saffo noted

    If the trouble began a little after 16:00 local time (00:00 UTC), I
    would suspect the system was keeping time internally with UTC, but news
    reports say it started around 20:00. Furthermore, they say the system is
    old and needs to be replaced, which implies it's handled the end of year
    successfully many times.

    Perhaps there's a rollover issue, such as the GPS week number rollover
    that happened years ago. Since that occurred, my ca. 2000 Magellan
    receiver is years in error in its dates, though it still navigates
    without trouble. In fact, it's better than new in that respect. Rarely
    do I see its positions off by more than 10 feet. PS

    It still smells like a residual Y2K-type poor retrofix. PGN]
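    The GPS week-number rollover Saffo mentions is easy to sketch. Satellites broadcast the week as a 10-bit counter, so it wraps every 1024 weeks (about 19.6 years); firmware that decodes it without a rollover correction lands a whole multiple of 1024 weeks in the past, while position fixes stay accurate because they don't depend on the absolute week. A minimal illustration (the week numbers are hypothetical, not the Magellan's actual firmware):

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)   # GPS week 0 began 6 Jan 1980
WEEK_WRAP = 1024                   # the broadcast week is a 10-bit counter

def naive_gps_date(broadcast_week, seconds_of_week=0):
    """Decode a date assuming the week counter never wrapped -- the classic bug."""
    return GPS_EPOCH + timedelta(weeks=broadcast_week, seconds=seconds_of_week)

# By early 2025 roughly 2350 weeks have elapsed, but satellites broadcast
# 2350 % 1024 = 302.  Uncorrected firmware decodes a date decades in the past:
true_week = 2350
wrong = naive_gps_date(true_week % WEEK_WRAP)
right = naive_gps_date(true_week)
assert (right - wrong).days == 2 * 1024 * 7   # off by exactly two rollovers
```

    Modern receivers disambiguate by assuming the date falls within ~19.6 years of a baked-in reference date, which is why old units drift into the past only after a rollover passes their reference window.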

    ------------------------------

    Date: Thu, 2 Jan 2025 18:03:01 -0500
    From: Steve Golson <sgolson@trilobyte.com>
    Subject: Eutelsat resolves OneWeb leap year software glitch
    after two-day outage (SpaceNews)

    https://spacenews.com/eutelsat-resolves-oneweb-leap-year-software-glitch-after-two-day-outage/

    Eutelsat said Jan. 2 it has restored services across its low Earth orbit
    (LEO) OneWeb broadband network following a two-day outage.

    The software issue was caused by a failure to account for 2024 being a leap
    year. [...] Services were partially restored 36 hours after the disruption
    began on 31 Dec 2024.
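    Eutelsat has not published the root cause, but a common failure mode in this class -- offered here purely as a hypothetical sketch -- is date logic built on a fixed 365-day year, which breaks on 31 Dec 2024, the 366th day of a leap year:

```python
from datetime import date

def day_of_year_buggy(d):
    """Day-of-year from a fixed month table: wrong after 28 Feb in leap years."""
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return sum(days_in_month[: d.month - 1]) + d.day

def day_of_year_correct(d):
    # Let the calendar library handle leap years.
    return (d - date(d.year, 1, 1)).days + 1

outage_day = date(2024, 12, 31)           # day 366 of a leap year
print(day_of_year_buggy(outage_day))      # 365 -- off by one
print(day_of_year_correct(outage_day))    # 366
```

    Code downstream of the buggy counter that validates a day-of-year range, or converts it back to a calendar date, can then fail for the whole day -- consistent with an outage that begins on 31 December and clears with the new year.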

    ------------------------------

    Date: Wed, 1 Jan 2025 09:14:58 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Traffic lights will have a fourth color in 2025

    It is hard not to recognize the famous red, yellow, and green traffic
    signals on roads throughout the globe. By 2025, traffic signals may undergo
    one of their biggest changes yet: the addition of a fourth color. The shift
    aims to accommodate the growing presence of autonomous vehicles (AVs) and to
    redefine traffic management, making it safer and more effective in the
    future. [...]

    The proposed fourth color, white, would signal that a self-driving vehicle
    is managing traffic conditions at the intersection. Unlike the traditional
    signals, which tell all motorists what behavior is expected, the white light
    tells human drivers to mimic the behavior of the AVs at the intersection.
    The system leverages the idea that AVs are intelligent vehicles that
    actively relay information and manage traffic flow.

    When AVs reach an intersection, they communicate with the traffic signals,
    as well as with other AVs, to achieve the best flow. When AVs are in
    command, a white light tells human drivers what the self-driving vehicles
    intend to do. This makes it easier for human drivers to decide when to veer
    in either direction, thus easing traffic congestion and making the roads
    safer. [...]

    https://www.ecoticias.com/en/traffic-lights-fourth-color/10086/

    [Don't fire the traffic-manager programmer until you see the WHITES of his
    LIGHTS? PGN]

    ------------------------------

    Date: Mon, 6 Jan 2025 07:47:23 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: FAA chief: Boeing must shift focus to safety over profit

    Boeing used to manufacture airplanes and make profit as a side-effect. Then they changed to making profits primary with airplanes as a side-effect. FAA tells them to go back to the original model.

    A year after a panel blew out of a Boeing 737 Max during a flight, the
    nation's top aviation regulator says the company needs "a fundamental
    cultural shift" to put safety and quality above profit.

    Mike Whitaker, chief of the Federal Aviation Administration, said in an
    online post Friday that his agency also has more work to do in its oversight
    of Boeing.

    Whitaker, who plans to step down in two weeks to let President-elect Donald Trump pick his own FAA administrator, looked back on his decision last
    January to ground all 737 Max jets with similar panels called door plugs. Later, the FAA put more inspectors in Boeing factories, limited production
    of new 737s and required Boeing to come up with a plan to fix manufacturing problems.

    "Boeing is working to make progress executing its comprehensive plan in the areas of safety, quality improvement and effective employee engagement and training," Whitaker said. "But this is not a one-year project. What's needed
    is a fundamental cultural shift at Boeing that's oriented around safety and quality above profits. That will require sustained effort and commitment
    from Boeing, and unwavering scrutiny on our part."

    http://enewspaper.latimes.com/infinity/article_share.aspx?guid=72e50023-50c9-470e-812e-39984c87cf63

    ------------------------------

    Date: Thu, 2 Jan 2025 18:03:09 -0500
    From: Steve Golson <sgolson@trilobyte.com>
    Subject: ARRL hit with ransomware (ARRL)

    American Radio Relay League (ARRL), the U.S. national association for
    amateur radio, was hit with a sophisticated ransomware attack.

    https://www.arrl.org/news/arrl-it-security-incident-report-to-members

    Sometime in early May 2024, ARRL’s systems network was compromised by threat
    actors (TAs) using information they had purchased on the dark web. The TAs
    accessed headquarters on-site systems and most cloud-based systems. They
    used a wide variety of payloads affecting everything from desktops and
    laptops to Windows-based and Linux-based servers. Despite the wide variety
    of target configurations, the TAs seemed to have a payload that could host
    and execute encryption or deletion of network-based IT assets, as well as
    launch demands for a ransom payment, for every system.

    This serious incident was an act of organized crime. The highly coordinated
    and executed attack took place during the early morning hours of May 15.
    That morning, as staff arrived, it was immediately apparent that ARRL had
    become the victim of an extensive and sophisticated ransomware attack. The
    FBI categorized the attack as “unique,” as they hadn't yet seen this level
    of sophistication among the many other attacks they have experience with.

    The ransom demands by the TAs, in exchange for access to their decryption tools, were exorbitant. It was clear they didn’t know, and didn’t care, that
    they had attacked a small 501(c)(3) organization with limited
    resources. Their ransom demands were dramatically weakened by the fact that they did not have access to any compromising data. It was also clear that
    they believed ARRL had extensive insurance coverage that would cover a multi-million-dollar ransom payment.

    ------------------------------

    Date: Wed, 8 Jan 2025 11:24:10 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Taiwan Suspects China of Latest Undersea Cable Attack
    (Tom Nicholson)

    Politico Europe (01/05/25) Tom Nicholson

    Taiwanese officials suspect a Cameroon-flagged cargo ship owned by Je Yang Trading Limited of Hong Kong, led by Chinese citizen Guo Wenjie, was responsible for cutting an international undersea telecom cable on
    Jan. 3. The Shunxin-39 was intercepted by Taiwan's coast guard, but rough weather prevented an on-board investigation, and the ship continued on to a South Korean port.

    ------------------------------

    Date: Thu, 9 Jan 2025 21:11:00 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: The Memecoin Shenanigans Are Just Getting Started (WiReD)

    The market for absurdist cryptocurrencies mutated into a
    hundred-billion-dollar phenomenon in 2024. Yes, things can get even more deranged.

    Around that time, a bunch of other celebrities—from Caitlyn Jenner to Andrew Tate and Jason Derulo—were all launching their own crypto coins. The
    pile-on reflected a renewed fervor among traders for memecoins, a type of cryptocurrency that generally has no utility beyond financial speculation.

    Because memecoins do not generate revenue or cash flow, their value is
    entirely based on the attention they attract, which can fluctuate
    wildly. Though some people make a lot of money on memecoins, many others
    lose out. With a general euphoria taking hold in cryptoland as the price of bitcoin rises to historic levels above $100,000, the stage is set for yet further memecoin “degeneracy,” says Azeem Khan, cofounder of the Morph blockchain and venture partner at crypto VC firm Foresight Ventures.

    https://www.wired.com/story/memecoins-cryptocurrency-regulation

    ------------------------------

    Date: Fri, 3 Jan 2025 11:05:47 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Apple to pay $95M to settle lawsuit accusing Siri of
    eavesdropping (CBC)

    https://www.cbc.ca/news/business/apple-siri-privacy-settlement-1.7422363

    Apple has agreed to pay $95 million US to settle a lawsuit accusing the privacy-minded company of deploying its virtual assistant Siri to eavesdrop
    on people using its iPhone and other trendy devices.

    The proposed settlement filed Tuesday in an Oakland, Calif., federal court would resolve a five-year-old lawsuit revolving around allegations that
    Apple surreptitiously activated Siri to record conversations through
    iPhones and other devices equipped with the virtual assistant for more than
    a decade.

    ------------------------------

    Date: Wed, 8 Jan 2025 11:24:10 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Meta Getting Rid of Fact Checkers (Clare Duffy)

    CNN (01/07/25) Clare Duffy

    Mark Zuckerberg said Tuesday that Meta will adjust its content review
    policies on Facebook and Instagram, replacing fact checkers with
    user-generated "community notes." In doing so, Zuckerberg follows in the footsteps of Elon Musk who, after acquiring Twitter, dismantled the
    company's fact-checking teams. Said Zuckerberg, "Fact checkers have been too politically biased and have destroyed more trust than they've created."

    ------------------------------

    Date: Wed, 8 Jan 2025 07:08:55 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Huge problems with axing fact-checkers, Meta oversight
    board says (BBC)

    https://www.bbc.com/news/articles/cjwlwlqpwx7o

    While Meta says the move -- which is being introduced in the US initially --
    is about free speech, others have suggested it is an attempt to get closer
    to the incoming Trump administration, and catch up with the access and influence enjoyed by another tech titan, Elon Musk.

    The tech journalist and author Kara Swisher told the BBC it was "the most cynical move" she had seen Mr Zuckerberg make in the "many years" she had
    been reporting on him.

    "Facebook does whatever is in its self-interest", she said.
    "He wants to kiss up to Donald Trump, and catch up with Elon Musk in that
    act."

    ------------------------------

    Date: Thu, 9 Jan 2025 14:19:32 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Meta hosts AI chatbots of 'Hitler,' 'Jesus Christ,' Taylor Swift
    (NBC News)

    Meta says it reviews every user-generated AI chatbot, but NBC News found
    dozens that seemed to violate Meta’s policies.

    https://www.nbcnews.com/tech/social-media/meta-user-made-ai-chatbots-include-hitler-jesus-christ-rcna186206

    ------------------------------

    Date: Wed, 8 Jan 2025 08:41:43 -0500
    From: Tom Van Vleck <thvv@multicians.org>
    Subject: God can take Sunday off (NYTimes)

    from the New York Times 8 Jan 2025

    To members of his synagogue, the voice that played over the speakers of Congregation EmanuEl in Houston sounded just like Rabbi Josh Fixler's. In
    the same steady rhythm his congregation had grown used to, the voice
    delivered a sermon about what it meant to be a neighbor in the age of artificial intelligence. Then, Rabbi Fixler took to the bimah himself. "The audio you heard a moment ago may have sounded like my words," he said. "But they weren't." The recording was created by what Rabbi Fixler called "Rabbi Bot," an AI chatbot trained on his old sermons. The chatbot, created with
    the help of a data scientist, wrote the sermon, even delivering it in an
    AI version of his voice. During the rest of the service, Rabbi Fixler intermittently asked Rabbi Bot questions aloud, which it would promptly
    answer.

    Rabbi Fixler is among a growing number of religious leaders experimenting
    with AI in their work, spurring an industry of faith-based tech companies
    that offer AI tools, from assistants that can do theological research to chatbots that can help write sermons. [...] Religious leaders have used
    AI to translate their livestreamed sermons into different languages in
    real time, blasting them out to international audiences. Others have
    compared chatbots trained on tens of thousands of pages of Scripture to a
    fleet of newly trained seminary students, able to pull excerpts about
    certain topics nearly instantaneously. The report's author draws a parallel
    to previous generations' initial apprehension -- and eventual embrace -- of transformative technologies like radio, television, and the Internet. "For centuries, new technologies have changed the ways people worship, from the radio in the 1920s to television sets in the 1950s and the Internet in the 1990s," the report says. "Some proponents of AI in religious spaces have
    gone back even further, comparing AI's potential -- and fears of it -- to
    the invention of the printing press in the 15th century."

    Well, we are halfway there. Now all we need is AI-generated parishioners.

    Think of the savings in time and real estate. Church services can be over
    in microseconds. No need for church buildings, pews, altars: all virtual.
    They could repurpose churches as Amazon warehouses, patrolled by robots.

    ------------------------------

    Date: Thu, 9 Jan 2025 11:29:50 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: Several items on Google and Meta (Lauren Weinstein)

    * Changes at Meta amount to a MAGA Makeover Kevin Roose, *The New York
    Times*, 9 Jan 2025, front page of Business Section.
    [Lauren suggests META == Make Evil Trendy Again.]

    * Zuckerberg falls in line, goes fully MAGA
    Joe Garifoli, *The San Francisco Chronicle*, 9 Jan 2025

    * Google gives a million dollars to Trump inauguration, as billionaire CEO
    Sundar goes full MAGA, Lauren Weinstein, 9 Jan 2025

    [The best government money can buy? PGN]

    ------------------------------

    Date: Wed, 8 Jan 2025 08:47:42 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: AI means the end of Internet search as
    we've known it (Technology Review)

    The way we navigate the web is changing, and it’s paving the way to a more AI-saturated future.

    https://www.technologyreview.com/2025/01/06/1108679/ai-generative-search-internet-breakthroughs/

    ------------------------------

    Date: Wed, 8 Jan 2025 06:47:35 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Is it still 'social media' if it's overrun by AI? (CBC)

    https://www.cbc.ca/news/business/meta-ai-generated-characters-future-social-media-1.7424641

    Back in 2010, a 26-year-old Mark Zuckerberg shared his vision for Facebook
    -- by that point a wildly popular social network with more than 500 million users.

    "The primary thing that we focus on all day long is how to help people
    share and stay connected with their friends, family and the people in the community around them," Zuckerberg told CNBC. "That's what we care about,
    and that's why we started the company."

    Fifteen years and three billion users later, Facebook's parent company Meta
    has a new vision: characters powered by artificial intelligence existing alongside actual friends and family. Some experts caution that this could
    mark the end of social media as we know it.

    For early users of social media, platforms like Facebook and Instagram have become "about as anti-social as you can imagine," said Carmi Levy, a
    technology analyst and journalist based in London, Ont. "It's becoming increasingly difficult to connect with an actual human being."

    ------------------------------

    Date: Sat, 4 Jan 2025 08:38:38 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: AI Incident Database

    This should be of interest to RISKS readers:

    Welcome to the Artificial Intelligence Incident Database
    Search over 3000 reports of AI harms
    https://incidentdatabase.ai/

    ------------------------------

    Date: Tue, 7 Jan 2025 14:32:38 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Apple's AI News Summaries and Inventions (BBC)

    https://www.bbc.com/news/articles/cge93de21n0o

    Apple is facing fresh calls to withdraw its controversial artificial intelligence (AI) feature that has generated inaccurate news alerts on its latest iPhones.

    The product is meant to summarise breaking news notifications but has in
    some instances invented entirely false claims.

    The BBC first complained to the tech giant about its journalism being misrepresented in December but Apple did not respond until Monday this week, when it said it was working to clarify that summaries were AI-generated.


    Alan Rusbridger, the former editor of the Guardian, told the BBC Apple
    needed to go further and pull a product he said was "clearly not ready."

    Mr Rusbridger, who also sits on Meta's Oversight Board that reviews appeals
    of the company's content moderation decisions, added the technology was "out
    of control" and posed a considerable misinformation risk.

    "Trust in news is low enough already without giant American corporations
    coming in and using it as a kind of test product," he told the Today
    programme, on BBC Radio Four.

    The National Union of Journalists (NUJ), one of the world's largest unions
    for journalists, said Apple "must act swiftly" and remove Apple Intelligence
    to avoid misinforming the public -- echoing prior calls by journalism body Reporters Without Borders <https://www.bbc.co.uk/news/articles/cx2v778x85yo> (RSF).

    "At a time where access to accurate reporting has never been more important, the public must not be placed in a position of second-guessing the accuracy
    of news they receive," said Laura Davison, NUJ general secretary.

    The RSF also said Apple's intervention was insufficient, and has repeated
    its demand that the product is taken off-line.


    Series of errors


    The BBC complained <https://www.bbc.co.uk/news/articles/cd0elzk24dno> last month after an AI-generated summary of its headline falsely told some
    readers that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.

    On Friday, Apple's AI inaccurately summarised BBC app notifications to claim that Luke Littler had won the PDC World Darts Championship <https://www.bbc.co.uk/news/articles/cx27zwp7jpxo> hours before it began --
    and that the Spanish tennis star Rafael Nadal had come out as gay.

    This marks the first time Apple has formally responded to the concerns
    voiced by the BBC about the errors, which appear as if they are coming from within the organisation's app.

    ------------------------------

    Date: Tue, 31 Dec 2024 07:29:00 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: What real people think about Google Search today

    It's both notable and deeply depressing how many nontechnical people I know
    have told me, unprompted, how much they despise Google AI Overviews, which they inevitably describe as usually inaccurate and worthless; they then usually add that Google Search quality has declined enormously (in their own words, of course).

    Then they sometimes say something like, "Hey Lauren, don't you know people
    at Google that you could tell about how bad this is getting?"

    At which point I usually bite my tongue, which is increasingly feeling like
    a pincushion as a result.

    Don't believe the happy-face metrics that Google claims. -L

    ------------------------------

    Date: Fri, 10 Jan 2025 10:50:22 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: WARNING: Google Voice is flagging LEGITIMATE robocalls from
    insurance companies to their fire-affected customers as spam

    BE SURE TO CHECK YOUR SPAM FOLDERS! GOOGLE AI DOES IT AGAIN!

    ------------------------------

    Date: Tue, 31 Dec 2024 10:28:03 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: A non-tech analogy for Google Search AI Overviews

    Here's a non-tech analogy to the problem (well, a problem) with Google AI Overviews:

    Let's say you go to a restaurant. Maybe they're offering free meals
    that day, maybe you're paying. Either way, several plates of
    reasonable appearing food are placed in front of you. You ask about
    the ingredients, but you only get vague answers back if any, and the
    restaurant refuses to tell you anything about the actual recipes per
    se.

    You notice a little card sticking out from under one of the plates. It
    reads:

    "Some or all of this food may be fine. Some or all of this food may
    have a bad taste. Some or all may give you food poisoning. It's up to
    you to double check this food before eating it -- we take no
    responsibility for any ill effects it may have on you."

    Still hungry?

    ------------------------------

    Date: Fri, 3 Jan 2025 09:58:24 -0500
    From: Tom Van Vleck <thvv@multicians.org>
    Subject: Happy new year, compute carefully

    Just some notes to remind you to compute carefully in 2025.

    1. In the past I recommended Gmail to people because it does some spam detection, but now Gmail is being exploited to hack people. If you get a (fake) call ostensibly from Google or (fake) notices that your Google
    account is being attacked, run. Don't click anything. https://www.forbes.com/sites/zakdoffman/2025/01/03/new-gmail-outlook-apple-mail-warning-2025-hacking-nightmare-is-coming-true/?

    2. If anybody says "now with AI," run.
    They are not giving you something wonderful for free.

    3. I have stopped using Google Chrome except for testing web page changes.
    I avoid "Chrome Browser Extensions" because they have been hacked to do bad things.

    4. 2.6 million devices have been backdoored with credential stealing
    malware. Don't be a victim. https://therecord.media/hackers-target-vpn-ai-extensions-google-chrome-malicious-updates

    ------------------------------

    Date: Sat, 4 Jan 2025 10:08:35 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: How to understand Generative AI

    To really understand generative AI, you need to keep one simple fact in
    mind. There is no "Intelligence" in "Artificial Intelligence". OpenAI -- it turns out -- literally defines intelligence in terms of profits!

    And as we see, Google AI is essentially a low grade moron. But this is true
    for all of these systems. This is FUNDAMENTAL to how these systems
    work. They are NOT intelligent. They do NOT understand what they're saying.

    The term "Intelligence" in the context of these systems is merely a
    MARKETING HYPE term, nothing more.

    Keep this in mind and the chaos being created by Big Tech at our
    expense is much easier to at least understand. -L

    ------------------------------

    Date: Sat, 4 Jan 2025 16:51:29 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Google censoring my AI criticism?

    One of the digest versions of today's mailings, which included
    the messages:

    1. The laughs keep rolling in to that fraction question I asked
    Google (Lauren Weinstein)
    2. The execs know their AI is trash (Lauren Weinstein)
    3. Sources: Pentagon planning for how to deal with rogue Trump
    (Lauren Weinstein)

    was marked by Gmail as dangerous spam, with a red banner declaring it to
    be a likely phishing attack. If you can figure out any possible way any
    of those messages -- which were sent out as individual messages earlier
    today -- could legitimately be interpreted in that way, I'd love to
    hear about it.

    Otherwise, I suspect Google has filters in place to try to divert some of
    this criticism into a scary category that people won't read, whether
    that was their actual intention or not.

    VERY BAD. -L

    ------------------------------

    Date: Sun, 5 Jan 2025 06:32:54 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: U.S. newspapers are deleting old crime stories offering
    subjects a clean slate (The Guardian)

    Civil rights advocates across the US have long fought to free people from
    their criminal records, with campaigns to expunge old cases and keep
    people’s past arrests private when they apply for jobs and housing.

    The efforts are critical, as more than 70 million Americans have prior convictions or arrests – roughly one in three adults. But the policies haven’t addressed one of the most damaging ways past run-ins with police can derail people’s lives: old media coverage.

    Some newsrooms are working to fill that gap.

    A handful of local newspapers across the US have in recent years launched programs to review their archives and consider requests to remove names or delete old stories to protect the privacy of subjects involved in minor
    crimes.

    “In the old days, you put a story in the newspaper and it quickly, if not immediately, receded into memory,” said Chris Quinn, editor of Cleveland.com and the Plain Dealer newspaper. “But because of our [search engine] power, anything we write now about somebody is always front and center.” [...]

    https://www.theguardian.com/us-news/2025/jan/04/newspaper-crime-stories

    ------------------------------

    Date: Thu, 9 Jan 2025 10:43:21 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: EU Commission Fined for Transferring User Data
    to Meta in Violation of Privacy Laws (THN)

    The European General Court on Wednesday fined the European Commission, the primary executive arm of the European Union responsible for proposing and enforcing laws for member states, for violating the bloc's own data privacy regulations.

    The development marks the first time the Commission has been held liable
    for infringing stringent data protection laws in the region.

    The court determined that a "sufficiently serious breach" was committed by transferring a German citizen's personal data, including their IP address
    and web browser metadata, to Meta's servers in the United States when
    visiting the now-inactive futureu.europa[.]eu website in March 2022.

    The individual registered for one of the events on the site by using the Commission's login service, which included an option to sign in using a Facebook account.

    "By means of the 'Sign in with Facebook' hyperlink displayed on the E.U.
    Login webpage, the Commission created the conditions for transmission of
    the IP address of the individual concerned to the U.S. undertaking Meta Platforms," the Court of Justice of the European Union said in a press statement.

    The applicant had alleged that by transferring their information to the
    U.S., there arose a risk of their personal data being accessed by the U.S. security and intelligence services. [...] https://thehackernews.com/2025/01/eu-commission-fined-for-transferring.html

    ------------------------------

    Date: Thu, 2 Jan 2025 09:22:06 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: The Ghosts in the Spotify Machine (Liz Pelly)

    I first heard about ghost artists in the summer of 2017. At the time, I was
    new to the music-streaming beat. I had been researching the influence of
    major labels on Spotify playlists since the previous year, and my first
    report had just been published. Within a few days, the owner of an
    independent record label in New York dropped me a line to let me know about
    a mysterious phenomenon that was “in the air” and of growing concern to those in the indie music scene: Spotify, the rumor had it, was filling its
    most popular playlists with stock music attributed to pseudonymous musicians—variously called ghost or fake artists—presumably in an effort to reduce its royalty payouts. Some even speculated that Spotify might be
    making the tracks itself. At a time when playlists created by the company
    were becoming crucial sources of revenue for independent artists and labels, this was a troubling allegation. [...]

    https://harpers.org/archive/2025/01/the-ghosts-in-the-machine-liz-pelly-spotify-musicians/

    ------------------------------

    Date: Mon, 16 Dec 2024 09:35:13 -0800
    From: Rob Slade <rslade@gmail.com>
    Subject: Spotify

    I have mentioned, at times, that many people seem to be laboring under the misapprehension that the email address rslade@gmail.com is theirs.

    Recently I have had cause to look into Spotify. I don't carry my "tunes"
    around with me (well, they often pop up as mindworms, but I don't need any
    external source for that), and I don't listen to podcasts, so I hadn't
    used Spotify and hadn't created an account on it. I have started
    contributing to a podcast, but I didn't need a Spotify account to listen
    to it. Recently, though, someone sent me a playlist of songs, and I
    thought I would listen to it and hear what was in it. But Spotify, while
    it *would* play a free podcast, apparently *won't* play a playlist of
    commercial songs unless you create an account.

    So I tried, only to find out, yes, you guessed it, there already *was* an account under the email address rslade@gmail.com. Of course, I didn't know
    the account password. So, I just told Spotify that I lost the password.
    And it helpfully sent me an opportunity to change it.

    Whoever signed up for Spotify under my email address doesn't seem to have
    any playlists or anything else on the account, so I guess they haven't used
    it much and haven't lost anything. Much. Except for the account.

    Handy for me, though ...

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.52
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Mon Jun 23 12:09:21 2025
    Subject: Risks Digest 34.68

    RISKS-LIST: Risks-Forum Digest Monday 23 June 2025 Volume 34 : Issue 68

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.68>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    How nuclear war could start (The Washington Post Opinion)
    Climate and Humanitarian Consequences of an even Limited
    Nuclear Exchange and the Actual Risks of Nuclear War (Webinar)
    Starlink hazard (WashPost)
    DOGE layoffs may have compromised the accuracy of government data (CNN)
    Slashing CISA Is a Gift to Our Adversaries (The Bulwark)
    Most Americans Believe Misinformation Is a Problem -- Federal Research Cuts
    Will Only Make the Problem Worse (PGN)
    As disinformation and hate thrive online, YouTube quietly changed
    how it moderates content (CBC)
    ChatGPT goes down -- and fake jobs grind to a halt worldwide (Pivot to AI)
    They Asked ChatGPT Questions. The Answers Sent Them Spiraling. (The NY Times)
    News Sites Are Getting Crushed by Google's New AI Tools (WSJ)
    Can AI safeguard us against AI? One of its Canadian pioneers thinks so (CBC)
    Bad brainwaves: ChatGPT makes you stupid (Pivot to AI)
    They Asked an AI Chatbot Questions. The Answers Sent Them Spiraling
    (NYTimes)
    SSA stops reporting call-wait times and other metrics (WashPost)
    Pope Leo Takes On AI as a Potential Threat to Humanity (WSJ)
    AI Ethics Experts Set to Gather to Shape the Future of Responsible AI
    (ACM Media Center)
    Hacker Group Exposes Source Code for Iran's Cryptocurrency (Amichai Stein)
    Iran Asks Citizens to Delete WhatsApp from Devices (AP)
    China Unleashes Hackers Against Russia (Megha Rajagopalan)
    China's Spy Agencies Investing Heavily in AI (Julian E. Barnes)
    Amazon Says It Will Reduce Its Workforce as AI Replaces Human Employees
    (CNN)
    ChatGPT will avoid being shut down in some life-threatening scenarios,
    former OpenAI researcher claims (TechCrunch)
    Big Tech two-factor authentication compromised (Bloomberg)
    What could go wrong? - AllTrails launches AI route-making tool,
    worrying search-and-rescue members (National Observer)
    EU weighs sperm donor cap to curb risk of accidental incest (Steve Bacher)
    ChatGPT may be eroding critical thinking skills (MIT)
    Meta's Privacy Screwup Reveals How People Really See AI Chatbots (NYMag)
    Tesla blows past stopped school bus and hits kid-sized dummies in
    Full Self-Driving tests (Engadget)
    Couple steals back their own car after tracking an AirTag in it
    (AppleInsider)
    Finger Grease Mitigation for Tesla PIN Pad (Steven J. Greenwald)
    San Francisco bicyclist sues over crash involving 2 Waymo cars
    (Silicon Valley)
    I lost Spectrum for about two hours (LA Times via Jim Geissman)
    How scammers are using AI to steal college financial aid (LA Times)
    U.S. air traffic control still runs on Windows 95 and floppy disks
    (Ars Technica)
    States sue to block the sale of genetic data collected by DNA testing
    company 23andMe (LA Times)
    Using Malicious Image Patches in Social Media to Hijack AI Agents
    (Steven J. Greenwald)
    Weather precision loss (Jim Geissman)
    Grief scams on Facebook (Rob Slade)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Thu, 19 Jun 2025 01:06:17 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: How nuclear war could start (The Washington Post Opinion)

    https://www.washingtonpost.com/opinions/interactive/2025/nuclear-weapons-war-russia-china-accident/

    To understand how it could all go wrong, look at how it almost did.

    If a nuclear war happens, it could very well start by accident.

    A decision to use the most destructive weapons ever created could grow out
    of human error or a misunderstanding just as easily as a deliberate decision
    on the part of an aggrieved nation. A faulty computer system could wrongly report incoming missiles, causing a country to retaliate against its
    suspected attacker. Suspicious activity around nuclear weapons bases could
    spin a conventional conflict into a nuclear one. Military officers who routinely handle nuclear weapons could mistakenly load them on the wrong vehicle. Any of these scenarios could cause events to spiral out of control.

    Such occurrences are not just possible plots for action movies. All of them actually happened and can happen again. Humans are imperfect, so nuclear
    near misses and accidents are a fact of life for as long as these weapons exist. [...]

    In 1983, the Soviet Union shot down a civilian Korean Air Lines flight that
    had strayed over Siberia. A few weeks later, Soviet early-warning radars
    showed that a single U.S. ICBM had been launched toward the U.S.S.R. At a
    time of high tension, and given the fear within the Soviet leadership of a
    U.S. first strike, such a launch could easily have triggered a massive counterattack. However, the watch officer, Col. Stanislav Petrov, had been trained that any U.S. attack would probably involve massive strikes, and he later stated that he considered a smaller strike — like the one his early-warning systems showed — to be illogical and therefore likely to be an error of some kind. He proved to be right. Would all Soviet watch officers
    have been willing to make the same call?

    [*The New York Times front page on Saturday 21 Jun 2025 had a rather
    oxymoronic item -- Trump accosting Tulsi Gabbard (Director of National
    Intelligence) for striking fear in the (Japanese) populace with a video
    outlining the horrors of nuclear war. PGN]

    ------------------------------

    Date: Wed, 18 Jun 2025 23:32:44 +0200
    From: diego latella <diego.latella@actiones.eu>
    Subject: Climate and Humanitarian Consequences of an even Limited
    Nuclear Exchange and the Actual Risks of Nuclear War (Webinar)

    Open webinar – June 26 – 4pm (CET) with

    David Ellwood (Council of the Pugwash Conferences on Science and World
    Affairs)

    Paolo Cotta Ramusino (Former Secretary General of Pugwash Conferences on Science and World Affairs)
    "The Actual Risks of Nuclear War"
    Moderated by Mieke Massink - CNR ISTI; GI-STS, Pisa
    (The official language of the webinar is English)

    The event is organized by: Gruppo Interdisciplinare su Scienza, Tecnologia e Società (GI-STS) dell’Area della Ricerca di Pisa del CNR

    In cooperation with: [...]

    ------------------------------

    Date: Sat, 7 Jun 2025 06:19:34 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Starlink hazard (WashPost)

    White House security staff warned Musk's Starlink is a security risk

    Starlink satellite connections in the White House bypass controls meant to
    stop leaks and hacking.

    https://www.washingtonpost.com/technology/2025/06/07/starlink-white-house-security-doge-musk/

    ------------------------------

    Date: Fri, 6 Jun 2025 07:19:07 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: DOGE layoffs may have compromised the accuracy of government data
    (CNN)

    The Consumer Price Index <https://www.cnn.com/2025/05/13/economy/us-cpi-consumer-inflation-april> is more than just the most widely used inflation gauge and a measurement of Americans' purchasing power.

    Its robust data plays a key role in the US economy's trajectory as well as monthly mortgage payments, Social Security checks, financial aid packages, business contracts, pay negotiations and curiosity salves for those who
    wonder what Kevin McCallister's $19.83 grocery bill in "Home Alone" might
    cost today.

    However, this gold standard piece of economic data has become a little less precise recently: The Bureau of Labor Statistics posted a notice on
    Wednesday <https://www.bls.gov/cpi/notices/2025/collection-reduction.htm> stating that it stopped collecting data in three not-so-small cities
    (Lincoln, Nebraska; Buffalo, New York; and Provo, Utah) and increased "imputations" for certain items (a statistical technique that, when boiled
    down to very rough terms, essentially means more educated guesses).

    The BLS notice states that the collection reductions "may increase the volatility of subnational or item-specific indexes" and are expected to have "minimal impact" on the overall index.

    https://www.cnn.com/2025/06/05/economy/cpi-data-bls-reductions
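    The idea of imputation is easy to sketch. The toy Python below is purely
    illustrative (it is NOT the BLS methodology, and the function name and
    numbers are invented for this example): a missing price observation is
    filled in with the average of the observations that were actually
    collected -- the "educated guess" described above.

    ```python
    # Illustrative only: a toy version of imputation, NOT the BLS method.
    # A missing price relative (month-over-month price ratio) for one city
    # is replaced by the mean of the relatives collected elsewhere.

    def impute_missing(price_relatives):
        """Fill None entries with the mean of the observed values."""
        observed = [r for r in price_relatives if r is not None]
        mean = sum(observed) / len(observed)
        return [r if r is not None else mean for r in price_relatives]

    # Hypothetical data: relatives for five cities; collection stopped in two.
    relatives = [1.02, 1.03, None, 1.01, None]
    filled = impute_missing(relatives)
    ```

    The more cells are imputed, the more the published index rests on such
    averages rather than on collected prices -- which is precisely the
    volatility concern the BLS notice flags.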

    ------------------------------

    Date: Thu, 5 Jun 2025 07:13:16 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Slashing CISA Is a Gift to Our Adversaries (The Bulwark)

    Maybe this is "political," but it's an essential read for anyone who cares about cyberattack prevention.

    An opinion piece from Mark Hertling, commander of U.S. Army Europe from 2011
    to 2012.

    https://www.thebulwark.com/p/slashing-cisa-is-a-gift-to-our-adversaries-cyber-attacks-warfare-security-estonia

    ------------------------------

    Date: Thu, 19 Jun 2025 7:56:25 PDT
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: Most Americans Believe Misinformation Is a Problem --
    Federal Research Cuts Will Only Make the Problem Worse

    ------------------------------

    Date: Sat, 14 Jun 2025 22:50:25 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: As disinformation and hate thrive online, YouTube quietly changed
    how it moderates content (CBC)

    https://www.cbc.ca/news/entertainment/youtube-content-moderation-rules-1.7559931

    Change allows more content that violates guidelines to remain on the platform if it is determined to be in the public interest

    YouTube, the world's largest video platform, appears to have changed its moderation policies to allow more content that violates its own rules to
    remain online.

    The change happened quietly in December, according to The New York Times,
    which reviewed training documents for moderators indicating that a video
    could stay online if the offending material did not account for more than 50
    per cent of the video's duration -- that's double what it was prior
    to the new guidelines.

    YouTube, which sees 20 million videos uploaded a day, says it updates its guidance regularly and that it has a "long-standing practice of applying exceptions" when it suits the public interest or when something is presented
    in an educational, documentary, scientific or artistic context.

    "These exceptions apply to a small fraction of the videos on YouTube, but
    are vital for ensuring important content remains available," YouTube spokesperson Nicole Bell said in a statement to CBC News this week.

    ------------------------------

    Date: Wed, 11 Jun 2025 17:30:49 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: ChatGPT goes down -- and fake jobs grind to a halt worldwide

    ChatGPT suffered a worldwide outage from 06:36 UTC Tuesday morning. The
    servers weren't totally down, but queries kept returning errors. OpenAI
    finally got it mostly fixed later in the day. [OpenAI, archive]

    But you could hear the screams of the vibe coders, the marketers, and the LinkedIn posters around the world. The Drum even ran a piece about marketing teams grinding to a halt because their lying chatbot called in sick. [Drum]

    https://pivot-to-ai.com/2025/06/11/chatgpt-goes-down-and-fake-jobs-grind-to-a-halt-worldwide/

    ------------------------------

    Date: Wed, 18 Jun 2025 15:38:03 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: They Asked ChatGPT Questions. The Answers Sent Them Spiraling.
    (The New York Times)

    Generative AI chatbots are going down conspiratorial rabbit holes and
    endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

    Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

    Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year
    to make financial spreadsheets and to get legal advice. In May, however, he engaged the chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful
    computer or technologically advanced society.

    “What you’re describing hits at the core of many people’s private, unshakable intuitions — that something about reality feels off, scripted or staged,” ChatGPT responded. “Have you ever experienced moments that felt like reality glitched?”

    Not really, Mr. Torres replied, but he did have the sense that there was a wrongness about the world. He had just had a difficult breakup and was
    feeling emotionally fragile. He wanted his life to be greater than it
    was. ChatGPT agreed, with responses that grew longer and more rapturous as
    the conversation went on. Soon, it was telling Mr. Torres that he was “one
    of the Breakers — souls seeded into false systems to wake them from within.”

    At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast
    digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating
    ideas that weren’t true but sounded plausible.

    https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html?unlocked_article_code=1.Ok8.ha88.yNPHjmiCI`pD3&smid=url-share

    ------------------------------

    Date: Wed, 11 Jun 2025 08:44:30 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: News Sites Are Getting Crushed by Google's New AI Tools (WSJ)

    Chatbots are replacing Google’s traditional search, devastating traffic for some publishers.

    https://www.wsj.com/tech/ai/google-ai-news-publishers-7e687141?st=6toUwy&reflink=desktopwebshare_permalink

    This is supposed to be a free link, but just in case it doesn't work, here's the text of the article by Isabella Simonetti and Katherine Blunt.

    --- --- --- ---

    The AI armageddon is here for online news publishers.

    Chatbots are replacing Google searches, eliminating the need to click on
    blue links and tanking referrals to news sites. As a result, traffic that publishers relied on for years is plummeting.

    Traffic from organic search to HuffPost’s desktop and mobile websites fell
    by just over half in the past three years, and by nearly that much at the Washington Post, according to digital market data firm Similarweb.

    Business Insider cut about 21% of its staff last month, a move CEO Barbara
    Peng said was aimed at helping the publication “endure extreme traffic drops outside of our control.” Organic search traffic to its websites declined by 55% between April 2022 and April 2025, according to data from Similarweb.

    At a companywide meeting earlier this year, Nicholas Thompson, chief
    executive of the Atlantic, said the publication should assume traffic from Google would drop toward zero and the company needed to evolve its business model.

    Google’s introduction last year of AI Overviews, which summarize search results at the top of the page, dented traffic to features like vacation
    guides and health tips, as well as to product review sites. Its U.S.
    rollout last month of AI Mode, an effort to compete directly with the likes
    of ChatGPT, is expected to deliver a stronger blow. AI Mode responds to user queries in a chatbot-style conversation, with far fewer links.

    “Google is shifting from being a search engine to an answer engine,”
    Thompson said in an interview with The Wall Street Journal. “We have to
    develop new strategies.”

    The rapid development of click-free answers in search “is a serious threat
    to journalism that should not be underestimated,” said William Lewis, the Washington Post’s publisher and chief executive. Lewis is former CEO of the Journal’s publisher, Dow Jones.

    The Washington Post is “moving with urgency” to connect with previously
    overlooked audiences, pursue new revenue sources, and prepare for a
    “post-search era,” he said.

    At the New York Times, the share of traffic coming from organic search to
    the paper’s desktop and mobile websites slid to 36.5% in April 2025 from almost 44% three years earlier, according to Similarweb.

    The Wall Street Journal’s traffic from organic search was up in April compared with three years prior, Similarweb data show, though as a share of overall traffic it declined to 24% from 29%.
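
    The Journal's pattern above — search referrals up in absolute terms yet down
    as a share — simply means total traffic grew faster than search traffic. A
    minimal arithmetic sketch (the visit counts are invented for illustration,
    not Similarweb's figures):

```python
# Hypothetical monthly visit counts (millions), chosen only to show the shape
# of the trend: absolute search traffic rises while its share of total falls.
search_2022, total_2022 = 29.0, 100.0   # search share: 29%
search_2025, total_2025 = 31.2, 130.0   # search share: 24%

share_2022 = search_2022 / total_2022
share_2025 = search_2025 / total_2025

assert search_2025 > search_2022   # absolute search traffic grew...
assert share_2025 < share_2022     # ...but its share of the total shrank
print(f"{share_2022:.0%} -> {share_2025:.0%}")  # prints: 29% -> 24%
```
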

    Sherry Weiss, chief marketing officer of Dow Jones and The Wall Street
    Journal, said that as the search landscape changes, the company is focusing
    on building trust with readers and earning habitual traffic.

    “As the referral ecosystem continues to evolve, we’re focused on ensuring customers come to us directly out of necessity,” she said.

    Google executives have said the company remains committed to sending traffic
    to the web, and that people who click on links after seeing AI Overviews
    tend to spend more time on those sites. The search giant also said it
    elevates links to news sites and doesn’t necessarily show AI Overviews when users search for trending news. Queries for content included in older
    articles and lifestyle stories, however, may produce an overview.

    Publishers have been squeezed by emerging technology since the dawn of the Internet.

    Digital news decimated once-lucrative print publications funded by
    classifieds, advertising and subscription revenue.

    Social-media platforms such as Facebook and Twitter helped funnel online traffic to publishers, but ultimately pivoted away from giving priority to news. Search was a stalwart traffic driver for more than a decade, despite
    some turbulence as Google tweaked its powerful algorithm.

    Generative AI is now rewiring how the internet is used altogether.

    “AI was not the thing that was changing everything, but it will be going forward. It’s the last straw,” said Neil Vogel, the chief executive of Dotdash Meredith, which is home to brands including People and Southern
    Living.

    When Dotdash merged with Meredith in 2021, Google search accounted for
    around 60% of the company’s traffic, Vogel said. Today, it is about one-third. Overall traffic is growing, thanks to efforts including
    newsletters and the MyRecipes recipe locker.

    Many online news outlets were already facing bleak trends such as declining public trust and fierce competition. With search traffic dwindling, they are putting an even greater emphasis on connecting directly with readers through businesses such as live conferences.

    The Atlantic is working on building those reader relationships with an
    improved app, more issues of the print magazine and an increased investment
    in events, Thompson said in a recent interview. The company has said subscriptions and advertising revenue are on the rise.

    Leaders at Politico and Business Insider—both owned by Axel Springer—also have been emphasizing audience engagement and connecting with readers.

    While publishers contend with how AI is changing search, they are also
    seeking ways to protect their copyright material. The large language models that underpin the new generation of chatbots are trained on data hoovered up from the open web, including news articles.

    Some media companies have embarked on legal battles against particular AI startups, while also signing licensing deals with other ones. The New York Times, for instance, sued OpenAI and Microsoft for copyright infringement,
    and recently announced an AI licensing agreement with Amazon. The Wall
    Street Journal’s parent company, News Corp, has a content deal with OpenAI and a lawsuit pending against Perplexity.

    Meanwhile, the generative AI race is becoming a significant threat to Google’s core search business.

    Though Google said it has seen an increase in total searches on Apple
    devices, an Apple executive said in federal court last month that Google searches in Safari, the iPhone maker’s browser, had recently fallen for the first time in two decades.

    ------------------------------

    Date: Sun, 8 Jun 2025 19:05:34 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Can AI safeguard us against AI? One of its Canadian pioneers
    thinks so (CBC)

    https://www.cbc.ca/radio/asithappens/ai-safety-non-profit-1.7553839

    When Yoshua Bengio first began his work developing artificial intelligence,
    he didn't worry about the sci-fi-esque possibilities of them becoming self-aware and acting to preserve their existence.

    That was, until ChatGPT came out.

    "And then it kind of blew [up] in my face that we were on track to build
    machines that would be eventually smarter than us, and that we didn't know
    how to control them," Bengio, a pioneering AI researcher and computer
    science professor at the Université de Montréal, told As It Happens host
    Nil Köksal.

    The world's most cited AI researcher is launching a new research non-profit organization called LawZero to "look for scientific solutions to how we can design AI that will not turn against us."

    ------------------------------

    Date: Mon, 16 Jun 2025 16:22:53 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Bad brainwaves: ChatGPT makes you stupid (Pivot to AI)

    This strongly suggests it’s imperative to keep students away from chatbots
    in the classroom — so they’ll actually learn.

    This also explains people who insist you use the chatbot instead of thinking and will not shut up about it. They tried thinking once and they didn’t like it.

    https://pivot-to-ai.com/2025/06/16/bad-brainwaves-chatgpt-makes-you-stupid/

    ------------------------------

    Date: Mon, 16 Jun 2025 09:30:25 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: They Asked an AI Chatbot Questions. The Answers Sent Them
    Spiraling. (NYTimes)

    Generative AI chatbots are going down conspiratorial rabbit holes and
    endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

    https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

    ------------------------------

    Date: Fri, 20 Jun 2025 18:06:23 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: SSA stops reporting call-wait times and other metrics

    The changes are the latest sign of the agency's struggle with website
    crashes, overloaded servers and long lines at field offices amid Trump cutbacks.

    Social Security has stopped publicly reporting its processing times for benefits, the 1-800 number's current call wait time and numerous other performance metrics, which customers and advocates have used to track the agency's struggling customer service programs.

    The agency removed a menu of live phone and claims data from its website earlier this month, according to Internet Archive records. It put up a new
    page this week that offers a far more limited view of the agency's customer service performance.

    The website also now urges customers to use an online portal for services rather than calling the main phone line or visiting a field office - two options that many disabled and elderly people with limited mobility or
    computer skills rely on for help. The agency had previously considered
    cutting phone services and then scrapped those plans amid an uproar.

    https://www.washingtonpost.com/politics/2025/06/20/social-security-wait-times-cuts/

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Pope Leo Takes On AI as a Potential Threat to Humanity (WSJ)

    Margherita Stancati, Drew Hinshaw, Keach Hagey, et al., *The Wall Street Journal* (06/17/25), via ACM TechNews

    This week, Google, Meta, IBM, Anthropic, Cohere, and Palantir executives took part in a two-day international conference at the Vatican on AI, ethics, and corporate governance. Some tech leaders hoped to avoid a binding international treaty on AI supported by the Vatican, and observers said the conference could set the tone for future interactions between Pope Leo and the tech industry on the matter of regulation.

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: AI Ethics Experts Set to Gather to Shape the Future of Responsible
    AI (ACM Media Center)

    ACM Media Center (06/18/25), via ACM TechNews

    The 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025), taking place June 23-26 in Athens, Greece, will address how
    algorithmic systems are reshaping the world and what it takes to ensure
    these AI tools do so justly. Said ACM President Yannis Ioannidis, "The unprecedented advances and rapid integration of AI and data technologies
    have created an urgent need for a scientific and public conversation about
    AI ethics."

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Hacker Group Exposes Source Code for Iran's Cryptocurrency
    (Amichai Stein)

    Amichai Stein, *The Jerusalem Post* (Israel) (06/19/25), via ACM TechNews

    Israel-linked hacker group Gonjeshke Darande (Predatory Sparrow) released
    the source code and internal information of Nobitex, Iran's largest cryptocurrency exchange. According to the group, the company assists the
    regime in funding Iranian terrorism and uses virtual currencies to bypass sanctions. Gonjeshke Darande previously announced that it stole $48 million
    in cryptocurrency from the exchange, and claimed responsibility for a cyberattack on the Islamic Revolutionary Guard Corps-controlled Bank Sepah.

    ------------------------------

    Date: Fri, 20 Jun 2025 18:06:23 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Iran Asks Citizens to Delete WhatsApp from Devices (AP)

    Kelvin Chan and Barbara Ortutay, Associated Press (06/17/25),
    via ACM TechNews

    Iranian state television has called on citizens to delete WhatsApp from
    their smartphones, claiming the app collects user information to send to Israel. In response, WhatsApp, which employs end-to-end encryption to
    prevent service providers in the middle from reading messages, issued a statement that read, "We do not track your precise location, we don't keep
    logs of who everyone is messaging, and we do not track the personal messages people are sending one another."

    ------------------------------

    Date: Fri, 20 Jun 2025 18:06:23 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: China Unleashes Hackers Against Russia (Megha Rajagopalan)

    Megha Rajagopalan, The New York Times (06/19/25),
    via ACM TechNews

    Since the beginning of the war in Ukraine, groups linked to the Chinese government have repeatedly hacked Russian companies and government
    agencies. While China appears to have plenty of domestic scientific and military expertise, Chinese military experts have lamented that its troops
    lack battlefield experience. Some defense insiders say China sees Russia's
    war in Ukraine as a chance to collect information about modern warfare
    tactics and Western weaponry, and what works against them.

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: China's Spy Agencies Investing Heavily in AI (Julian E. Barnes)

    Julian E. Barnes, *The New York Times* (06/17/25), via ACM TechNews

    A report by researchers at Recorded Future's Insikt Group details
    investments in AI by Chinese spy agencies to develop tools that could
    improve intelligence analysis, help military commanders develop operational
    plans, and generate early threat warnings. The researchers found that China
    is probably using a mix of large language models, including models from
    Meta and OpenAI, along with domestic models from DeepSeek, Zhipu AI, and
    others.

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Amazon Says It Will Reduce Its Workforce as AI Replaces Human
    Employees (CNN)

    Ramishah Maruf and Alicia Wallace, CNN (06/17/25), via ACM TechNews

    Amazon CEO Andy Jassy said in a June 17 blog post that the rollout of generative AI agents will change how work is performed, enabling the company
    to shrink its workforce in the future. Jassy said, "We will need fewer
    people doing some of the jobs that are being done today, and more people
    doing other types of jobs." Employees should view AI as "teammates we can
    call on at various stages of our work, and that will get wiser and more
    helpful with more experience," according to Jassy.

    ------------------------------

    Date: Sat, 14 Jun 2025 06:55:13 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: ChatGPT will avoid being shut down in some life-threatening
    scenarios, former OpenAI researcher claims (Techcrunch)

    A former OpenAI researcher published new research claiming that the
    company's AI models will go to great lengths to stay online.

    https://techcrunch.com/2025/06/11/chatgpt-will-avoid-being-shut-down-in-some-life-threatening-scenarios-former-openai-researcher-claims/

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Big Tech two-factor authentication compromised (Bloomberg)

    Ryan Gallagher, Crofton Black, and Gabriel Geiger, Bloomberg (06/16/25),
    via ACM TechNews

    Concerns are being raised about the middlemen that send two-factor authentication codes to consumers via text on behalf of Big Tech companies, popular apps, banks, encrypted chat platforms, and other senders. An
    industry whistleblower has revealed that around 1 million such messages have
    passed through Fink Telecom Services, a Swiss company that cybersecurity researchers have linked to incidents in which the codes were intercepted and used to infiltrate private online accounts. Critics of the industry point to
    a lack of regulation allowing such companies to operate without a license.

    ------------------------------

    Date: Fri, 20 Jun 2025 08:02:07 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: What could go wrong? - AllTrails launches AI route-making tool,
    worrying search-and-rescue members

    What could go wrong? - AllTrails launches AI route-making tool,
    worrying search-and-rescue members

    https://www.nationalobserver.com/2025/06/17/news/alltrails-ai-tool-search-rescue-members

    ------------------------------

    Date: Thu, 19 Jun 2025 23:43:42 +0000 (UTC)
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: EU weighs sperm donor cap to curb risk of accidental incest

    And now for something completely different - an item which has nothing to do
    with AI. ;-)

    Eight countries want to discuss an EU limit on the number of children
    conceived from a single sperm donor -- to prevent future generations from unwitting incest and psychological harms.

    Donor-conceived births are rising across Europe as fertility rates decline
    and assisted reproduction becomes more widely accessible -- including for same-sex couples and single women. But with many countries struggling to recruit enough local donors, commercial cryobanks are increasingly shipping reproductive cells known as gametes -- sperm or egg -- across borders, sometimes from the same donor to multiple countries.

    Most EU countries have national limits on how many children can be conceived from one donor -- ranging from one in Cyprus to 10 in France, Greece,
    Italy and Poland. However, there is no limit for cross-border donations, increasing the risk of potential health problems linked to a single donor,
    as well as a psychological impact on children who discover they have dozens
    or even hundreds of half-siblings.

    [Is this an egg-cell-ent move? PGN]

    ------------------------------

    Date: Thu, 19 Jun 2025 08:07:28 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: ChatGPT may be eroding critical thinking skills (MIT)

    https://time.com/7295195/ai-chatgpt-google-learning-school/

    ------------------------------

    Date: Thu, 19 Jun 2025 01:43:14 +0000 (UTC)
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Meta's Privacy Screwup Reveals How People Really See AI Chatbots
    (NYMag)

    https://nymag.com/intelligencer/article/metas-privacy-goof-shows-how-people-really-use-ai-chatbots.html

    ------------------------------

    Date: Sun, 15 Jun 2025 11:59:23 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Tesla blows past stopped school bus and hits kid-sized dummies in
    Full Self-Driving tests (Engadget)

    https://www.engadget.com/transportation/tesla-blows-past-stopped-school-bus-and-hits-kid-sized-dummies-in-full-self-driving-tests-183756251.html

    ------------------------------

    Date: Wed, 18 Jun 2025 20:14:13 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: Couple steals back their own car after tracking an AirTag in it

    *When London police wouldn't recover a stolen car despite an AirTag giving
    its location, the owners say they tracked it down and stole it back for themselves...* [...]

    https://appleinsider.com/articles/25/06/13/couple-steals-back-their-own-car-after-tracking-an-airtag-in-it

    ------------------------------

    Date: Fri, 13 Jun 2025 14:50:31 -0400
    From: "Steven J. Greenwald" <greenwald.steve@gmail.com>
    Subject: Finger Grease Mitigation for Tesla PIN Pad

    From Tesla, a post about how they have mitigated the threat of thieves
    trying to figure out a user's PIN by checking for finger grease on the
    touchscreen.

    "If you set up PIN to drive, a thief would not be able to drive off in your
    Tesla, even if they somehow gain access to your keycard, phone or vehicle.

    "The PIN pad also appears in a slightly different place on the screen every
    time, so finger grease doesn't give away your PIN."

    Link to source post on X:
    https://x.com/Tesla/status/1933516310475952191
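
    The mitigation is simple: if the keypad is drawn somewhere different each
    time, smudge positions no longer map to fixed digits. A minimal sketch of
    the idea in Python — all screen and pad dimensions here are hypothetical,
    not Tesla's actual layout or implementation:

```python
import random

# Hypothetical screen and keypad dimensions in pixels (illustrative only).
SCREEN_W, SCREEN_H = 1920, 1200
PAD_W, PAD_H = 300, 400  # a 3-wide, 4-tall digit grid

def place_pin_pad(rng=random):
    """Pick a fresh top-left corner for the PIN pad each time it is shown,
    so accumulated finger-grease smudges don't reveal which digits were used."""
    x = rng.randint(0, SCREEN_W - PAD_W)
    y = rng.randint(0, SCREEN_H - PAD_H)
    return x, y

def digit_centers(origin):
    """Screen coordinates of the centers of keys 0-9 for a pad at `origin`."""
    ox, oy = origin
    cell_w, cell_h = PAD_W // 3, PAD_H // 4
    keys = {}
    for d in range(1, 10):                      # 1..9 fill a 3x3 grid
        row, col = divmod(d - 1, 3)
        keys[d] = (ox + col * cell_w + cell_w // 2,
                   oy + row * cell_h + cell_h // 2)
    keys[0] = (ox + cell_w + cell_w // 2,       # 0 sits centered on row 4
               oy + 3 * cell_h + cell_h // 2)
    return keys

# Two sessions: the same digits land at different screen coordinates,
# so a smudge pattern from one session says nothing about the next.
session_a = digit_centers(place_pin_pad())
session_b = digit_centers(place_pin_pad())
```

    The same reasoning is why smartphone lock screens that always draw the
    keypad in a fixed position remain vulnerable to classic smudge attacks.
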

    ------------------------------

    Date: Mon, 16 Jun 2025 15:15:43 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: San Francisco bicyclist sues over crash involving 2 Waymo cars

    https://www.siliconvalley.com/2025/06/10/san-francisco-bicyclist-crash-waymo/

    ------------------------------

    Date: Tue, 17 Jun 2025 11:35:42 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: I lost Spectrum for about two hours

    Would-be copper thieves caused Internet outage affecting LA and Ventura
    counties (LA Times)

    https://www.latimes.com/california/story/2025-06-15/would-be-copper-thieves-cause-internet-outage-affecting-l-a-ventura-counties

    ------------------------------

    Date: Tue, 17 Jun 2025 11:36:31 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: How scammers are using AI to steal college financial aid (LA Times)

    https://www.latimes.com/california/story/2025-06-17/how-scammers-are-using-ai-to-steal-college-financial-aid

    Fake college enrollments have surged as crime rings deploy "ghost students," chatbots that join online classrooms and stay just long enough to collect a financial aid check. In some cases, professors discover almost no one in
    their class is real.

    ------------------------------

    Date: Fri, 13 Jun 2025 14:24:09 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: U.S. air traffic control still runs on Windows 95 and floppy
    disks (Ars Technica)

    Agency seeks contractors to modernize decades-old systems within four years.

    On Wednesday, acting FAA Administrator Chris Rocheleau told the House Appropriations Committee that the Federal Aviation Administration plans to replace its aging air traffic control systems, which still rely on floppy
    disks and Windows 95 computers, Tom's Hardware reports. The agency has
    issued a Request For Information to gather proposals from companies willing
    to tackle the massive infrastructure overhaul.

    "The whole idea is to replace the system. No more floppy disks or paper strips," Rocheleau said during the committee hearing. Transportation
    Secretary Sean Duffy called the project "the most important infrastructure project that we've had in this country for decades," describing it as a bipartisan priority.

    Most air traffic control towers and facilities across the US currently
    operate with technology that seems frozen in the 20th century, although that isn't necessarily a bad thing—when it works. Some controllers currently use paper strips to track aircraft movements and transfer data between systems using floppy disks, while their computers run Microsoft's Windows 95
    operating system, which launched in 1995.

    https://arstechnica.com/information-technology/2025/06/faa-to-retire-floppy-disks-and-windows-95-amid-air-traffic-control-overhaul/

    ------------------------------

    Date: Wed, 11 Jun 2025 19:02:24 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: States sue to block the sale of genetic data collected by DNA
    testing company 23andMe (LA Times)

    Dozens of states have filed a joint lawsuit
    <https://www.washingtonpost.com/documents/809d3c27-44d5-4042-80a2-3ea3c1743db2.pdf>
    against the bankrupt DNA-testing company 23andMe to block the
    company's sale of its customers' genetic data without explicit consent.

    The suit, filed this week in U.S. Bankruptcy Court in the Eastern District
    of Missouri, comes months after 23andMe began a court-supervised sale
    process of its assets.

    The South San Francisco-based venture was once valued at $6 billion and has collected DNA samples from more than 15 million customers.

    https://www.latimes.com/business/story/2025-06-11/23andme-bankruptcy-follow

    ------------------------------

    From: "Steven J. Greenwald" <greenwald.steve@gmail.com>
    Date: Tue, 10 Jun 2025 15:29:47 -0400
    Subject: Using Malicious Image Patches in Social Media to Hijack AI Agents

    From the thread posted on X by the researchers: "Beware: Your AI assistant
    could be hijacked just by encountering a malicious image online! Our
    latest research exposes critical security risks in AI assistants. An
    attacker can hijack them by simply posting an image on social media and
    waiting for it to be captured."

    ------------------------------

    Date: Wed, 11 Jun 2025 09:16:25 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Weather precision loss

    As of today (11 June 2025) the NWS forecast for Van Nuys (3 mi SE of the observation site at KVNY Van Nuys Airport) has been changed from that
    specific location to the "Western San Fernando Valley", a larger area. Presumably other point forecasts in the region have also changed. For
    example, yesterday's forecast was for a high of 89; today it says "in the
    80s to around 90". Also, the forecast for Simi Valley has been broadened to "Southeastern Ventura County Valleys" with a range of temperatures instead
    of a single number. Is this a response to falling staff numbers?

    [They could get rid of a huge number of sensors and staff by aggregating
    larger areas. Where I live there are microclimates, from San Francisco to
    its surroundings, with differences of sometimes 55 degrees within a
    30-mile radius. I suppose this strategy could lead to large-area
    predictions of 55 to 110 for the whole Bay Area. That would not be very
    helpful. PGN]

    ------------------------------

    Date: Thu, 5 Jun 2025 06:02:06 -0700
    From: Rob Slade <rslade@gmail.com>
    Subject: Grief scams on Facebook

    In a very short space of time I have had multiple romance/grief scams
    contacts on Fakebook--all of them (within the first few messages) telling me
    "I can't send you friend request," and either instructing or implying that I should attempt to "friend" them, or contact them via private messaging.

    (Interestingly, in one case, despite the fact that my email address was available, the scammer did *not*, in fact, contact me via email.)

    Facebook/Meta is lousy at protecting its users from such scams. But I
    assume that, somewhere in the bowels of the "algorithm," there is some awareness of the types of messages that scammers send their "friends," and
    thus the scammers have learned to avoid "friending" too many marks at a
    time. I also assume that these attempts are part of an organized scam
    "farm" operation, given the frequency and consistency of the attempts on Facebook, and the avoidance of email.

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.68
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Sat Oct 11 17:56:28 2025
    Subject: Risks Digest 34.77

    RISKS-LIST: Risks-Forum Digest Saturday 11 October 2025 Volume 34 : Issue 77

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.77>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents: [Long gap. Working backwards. I'm still human. PGN]
    How the World's Biggest Car-Makers Fell Behind in Software (FT)
    Why Are Car Software Updates Still So Bad? (WiReD via Gabe Goldberg)
    A delivery robot collided with a disabled man on L.A. street.
    The aftermath is getting ugly (LA Times via Steve Bacher)
    Scientists grow mini human brains to power computers (BBC)
    Apple Announces $2 Million Bug Bounty Reward for the Most Dangerous Exploits
    (WiReD)
    Every question you ask, every comment you make, will be recording you
    (The Register)
    EU to Expand Satellite Defenses After GPS Jamming of EC President's Flight
    (Franklin Okeke)
    NIST Enhances Security Controls for Improved Patching (Arielle Waldman)
    When AI Came for Hollywood (The NY Times)
    Small numbers of poisoned samples can wreck LLM AI models of any size
    (Cornell Study)
    Taco Bell Rethinks Future of Voice AI at Drive-Through (Isabelle Bousquette)
    AI Tool Identifies 1,000 'Questionable' Scientific Journals (Daniel Strain)
    Stanford Study: AI is destroying job prospects for younger workers
    especially in computing (Digital Economy)
    The dangers of AI coding (Lauren Weinstein)
    AI safety tool flags student activity, spurs debate on privacy and accuracy
    (san.com)
    The AI Prompt That Could End the World (The NY Times)
    Recruiters Use AI to Scan Resumes; Applicants Try to Trick It (The NY Times)
    Tristan Harris on The Dangers of Unregulated AI on Humanity and the
    Workforce (The Daily Show YouTube)
    The popular conception was that AI would be a danger to civilization because
    AI would be so smart, but the reality turns out to be the danger is that AI
    is so stupid. (Lauren Weinstein)
    AI Data Centers Are an Even Bigger Disaster Than Previously Thought
    (Futurism)
    Microsoft's agent mode is a tool for generating fake data (Pivot to AI)
    Cheer Up, or Else. China Cracks Down on the Haters and Cynics (NYT)
    Criminals offer reporter money to hack BBC (BBC)
    Tech billionaires seem to be doom prepping. Should we all be worried? (BBC)
    Japan faces Asahi beer shortage after cyber-attack (BBC)
    New WireTap Attack Extracts Intel SGX ECDSA Key via DDR4 Memory-Bus
    Interposer (The Hacker News)
    Exploit Allows for Takeover of Fleets of Unitree Robots (Evan Ackerman)
    Google Says 90% of Tech Workers Are Now Using AI at Work (Lisa Eadicicco)
    Neon buys phone calls to train AI, then leaks them all (Martin Ward)
    Government ID data used for age verification stolen (This Week in Security)
    Federal cyber agency warns of 'serious and urgent' attack on tech used by
    remote workers (CBC)
    Billions of Dollars ‘Vanished’: Low-Profile Bankruptcy Rings Alarms on Wall
    Street (The New York Times)
    911 Service Is Restored in Louisiana and Mississippi (NYTimes)
    How an Internet mapping glitch turned a random Kansas farm into a digital
    hell (Fusion)
    Microsoft cuts off cloud services to Israeli military unit (NBC)
    ShareFile website (Martin Ward)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: How the World's Biggest Car-Makers Fell Behind in Software (FT)

    Kana Inagaki, Harry Dempsey and David Keohane, Financial Times (08/28/25),
    via ACM TechNews

    Legacy automakers are struggling to keep pace with Tesla and Chinese
    electric vehicle makers in the race to build software-defined vehicles.
    Despite hiring tech talent and investing billions, companies like Toyota, Volkswagen, and Volvo face buggy platforms, delays, and rising costs.
    Carmakers are partnering with tech giants like Google, Nvidia, and Rivian,
    but tensions remain over control of data and systems.

    ------------------------------

    Date: Sun, 5 Oct 2025 14:17:02 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Why Are Car Software Updates Still So Bad? (WiReD)

    Over-the-air upgrades can not only transform your ride, they can help carmakers slash costs. Here's why they're still miles away from being seamless.

    https://www.wired.com/story/why-are-car-software-updates-still-so-bad/

    Omits two critical issues: security of updates, preventing malware. And bricking cars -- "bricking" does appear in a section heading, but only meaning reduced function rather than -- you know -- making a car useless.

    I badgered auto execs about these issues and got nothing but "it'll be wonderful".

    ------------------------------

    Date: Fri, 26 Sep 2025 07:15:09 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: A delivery robot collided with a disabled man on L.A. street.
    The aftermath is getting ugly (LA Times)

    A collision in West Hollywood between a delivery robot and a man using a mobility scooter went viral, generating attacks on the robot company and
    on the man himself.

    https://www.latimes.com/california/story/2025-09-25/viral-video-of-delivery-robot-colliding-with-man-in-wheelchair-sparks-accessibility-debate

    ------------------------------

    Date: Sat, 4 Oct 2025 17:30:25 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Scientists grow mini human brains to power computers (BBC)

    https://www.bbc.com/news/articles/cy7p1lzvxjro

    It may have its roots in science fiction, but a small number of researchers
    are making real progress trying to create computers out of living cells.

    Welcome to the weird world of biocomputing.

    Among those leading the way are a group of scientists in Switzerland, who I went to meet.

    One day, they hope we could see data centres full of "living" servers which replicate aspects of how artificial intelligence (AI) learns - and could
    use a fraction of the energy of current methods.

    ------------------------------

    Date: Fri, 10 Oct 2025 12:28:32 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Apple Announces $2 Million Bug Bounty Reward for the Most Dangerous
    Exploits (WiReD)

    With the mercenary spyware industry booming, Apple VP Ivan Krstić tells
    WIRED that the company is also offering bonuses that could bring the max
    total reward for iPhone exploits to $5 million.

    https://www.wired.com/story/apple-announces-2-million-bug-bounty-reward/

    Apple Took Down These ICE-Tracking Apps. The Developers Aren't Giving Up. “We are going to do everything in our power to fight this,” says ICEBlock developer Joshua Aaron after Apple removed his app from the App
    Store.

    https://www.wired.com/story/apple-took-down-ice-tracking-apps-their-developers-arent-giving-up/

    ------------------------------

    Date: Mon, 18 Aug 2025 16:53:36 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Every question you ask, every comment you make, will be
    recording you (The Register)

    When you're asking AI chatbots for answers, they're data-mining you

    https://www.theregister.com/2025/08/18/opinion_column_ai_surveillance/?td=rt-3a

    ------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: EU to Expand Satellite Defenses After GPS Jamming of EC
    President's Flight (Franklin Okeke)

    Franklin Okeke, Computing (U.K.) (09/02/25), via ACM TechNews

    The European Union (EU) plans to deploy additional satellites in low Earth orbit to strengthen its ability to detect GPS interference, following an incident targeting European Commission (EC) President Ursula von der Leyen's flight. Pilots reportedly had to rely on paper maps to land von der Leyen's plane safely in Plovdiv, Bulgaria. An EU spokesperson said Bulgarian authorities suspect Russia was behind the jamming, though the Kremlin denies involvement. Similar GPS disruptions have affected the Baltic region and previous EU and U.K. flights.

    ------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: NIST Enhances Security Controls for Improved Patching
    (Arielle Waldman)

    Arielle Waldman, Dark Reading (09/02/25), via ACM TechNews

    The U.S. National Institute of Standards and Technology (NIST) updated its Security and Privacy Control catalog to improve software patch and update management. The revisions focus on three key areas: standardized logging
    syntax to speed incident response, root-cause analysis to address underlying software issues, and designing systems for cyber-resiliency to maintain critical functions under attack. The update also emphasizes least-privilege access, flaw-remediation testing, and coordinated notifications.

    ------------------------------

    Date: Sat, 4 Oct 2025 22:23:13 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: When AI Came for Hollywood (The NY Times)

    https://www.nytimes.com/2025/10/04/opinion/ai-hollywood-tilly-norwood-actress.html

    In the immortal words of Emily Blunt, ``Good Lord, we're screwed.''

    She was on a podcast with Variety Monday when she was handed a headline
    about cinema's latest sensation, Tilly Norwood.

    Agents are circling the hot property, a fresh-faced young British brunette actress who is attracting global attention.

    Norwood is AI, and Blunt is P.O.ed. In fact, she says, she's terrified.

    Told that Tilly's creator, Eline Van der Velden, a Dutch former actress
    with a master's in physics, wants her to be the next Scarlett Johansson,
    Blunt protested. But we have Scarlett Johansson. (Cue the Invasion of
    the Body Snatchers music.)

    [This item follows Matthew's earlier item:
    She can fight monsters, flee explosions, and even cry on Graham Norton --
    but Tilly Norwood is no Hollywood darling.
    https://www.cbc.ca/news/entertainment/ai-actress-backlash-1.7647478
    I wonder if her eyes have back-lashes? I am afraid some of you may be
    her pupils, in which case she should have been named IRIS. Tilly seems
    Silly, unless money is flowing into the Till(y). But she is certainly proof
    that AI has no limits. PGN]

    ------------------------------

    Date: Thu, 9 Oct 2025 14:25:42 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Small numbers of poisoned samples can wreck LLM AI models of any
    size (Cornell Study)

    https://arxiv.org/pdf/2510.07192

    ------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Taco Bell Rethinks Future of Voice AI at Drive-Through
    (Isabelle Bousquette)

    Isabelle Bousquette, The Wall Street Journal (08/29/25), via ACM TechNews

    Taco Bell has seen mixed results in its experiment with voice AI ordering at over 500 drive-throughs. Customers have reported glitches, delays, and even trolled the system with absurd orders, prompting concerns about reliability. The fast-food chain's Dane Mathews acknowledged the technology sometimes disappoints, noting it may not suit all locations, especially high-traffic ones. The chain is reassessing where AI adds value and when human staff
    should step in.

    ------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: AI Tool Identifies 1,000 'Questionable' Scientific Journals
    (Daniel Strain)

    Daniel Strain, CU Boulder Today (08/28/25), via ACM TechNews

    Computer scientists at the University of Colorado Boulder developed an AI platform to identify questionable or "predatory" scientific journals. These journals often charge researchers high fees to publish work without proper
    peer review, undermining scientific credibility. The AI, trained on data
    from the non-profit Directory of Open Access Journals, analyzed 15,200
    journals and flagged over 1,400 as suspicious, with human experts later confirming more than 1,000 as likely problematic. The tool evaluates
    editorial boards, website quality, and publication practices.

    ------------------------------

    Date: Tue, 26 Aug 2025 07:04:13 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Stanford Study: AI is destroying job prospects for younger workers
    especially in computing (Digital Economy)

    The Big Tech billionaire CEOs are toasting the destruction of young
    people's lives. THEY DO NOT CARE ABOUT YOU. -L

    https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf

    ------------------------------

    Date: Sat, 4 Oct 2025 09:02:12 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: The dangers of AI coding

    I am SO glad I phased out of most coding years ago, except as needed for my
    own systems. Those jobs are toast. But the dangers are very real.

    Just now I needed a Bash script for a network monitoring task. I must have written dozens of these in various forms over the years. Pings and status
    flags and the usual stuff.

    So this time, just for the hell of it, I asked Gemini (free version of
    course) to do it:

    "write me a bash script that will ping a specific ip address and when the
    pings start failing keep trying to ping and then when the pings are
    successful again send a specific curl command to that ip address"

    And about 10 seconds or less later out came a completely reasonable
    looking, nicely commented Bash script, along with a reminder to make
    the file executable and how to stop it with ^C.

    This of course is a very simple, really trivial task, and I was able to
    quickly read through the code and verify that it looked correct.

    The problem of course is obvious. I could do this verification only because
    I have enough skill to easily write that code MYSELF, it would just take me more time. If the code were more complex and/or voluminous, just checking
    could range from very lengthy to utterly impractical to do at all, meaning
    any errors could go undetected with everything that implies, especially for dangerous "sleeper" bugs.

    There may be a useful analogy to vehicle driver-assist systems, which may
    lull drivers into being less attentive, leaving them unable to respond
    quickly in emergency situations when their intervention is most required.

    Crashing code and crashing cars. All very dangerous.

    ------------------------------

    Date: Thu, 25 Sep 2025 14:54:28 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: AI safety tool flags student activity, spurs debate on privacy and
    accuracy (san.com)

    https://san.com/cc/ai-safety-tool-flags-student-activity-spurs-debate-on-privacy-and-accuracy/

    In federal lawsuit, students allege Lawrence school district's AI
    surveillance tool violates their rights

    https://lawrencekstimes.com/2025/08/01/usd497-gaggle-lawsuit-filed/

    ------------------------------

    Date: Fri, 10 Oct 2025 15:48:55 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: The AI Prompt That Could End the World (The NY Times)

    https://www.nytimes.com/2025/10/10/opinion/ai-destruction-technology-future.html

    How much do we have to fear from AI, really? It's a question I've been
    asking experts since the debut of ChatGPT in late 2022.

    The AI pioneer Yoshua Bengio, a computer science professor at the Université de Montréal, is the most-cited researcher alive, in any discipline. When I spoke with him in 2024, Dr. Bengio told me that he had trouble sleeping while thinking of the future. Specifically, he was worried that an AI would engineer a lethal pathogen -- some sort of
    super-coronavirus -- to eliminate humanity. ``I don't think there's
    anything close in terms of the scale of danger,'' he said.

    Contrast Dr. Bengio's view with that of his frequent collaborator Yann
    LeCun, who heads AI research at Mark Zuckerberg's Meta. Like Dr. Bengio,
    Dr. LeCun is one of the world's most-cited scientists. He thinks that AI
    will usher in a new era of prosperity and that discussions of existential
    risk are ridiculous. ``You can think of A.I. as an amplifier of human intelligence,'' he said in 2023.

    ------------------------------

    Date: Thu, 9 Oct 2025 15:24:59 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Recruiters Use AI to Scan Resumes; Applicants Are Trying to Trick
    It (The NY Times)

    In an escalating cat-and-mouse game, job hunters are trying to fool AI into moving their applications to the top of the pile with embedded instructions.

    https://www.nytimes.com/2025/10/07/business/ai-chatbot-prompts-resumes.html?smid=nytcore-ios-share&referringSource=articleShare

    ...read comments.

    ------------------------------

    Date: Wed, 8 Oct 2025 17:28:53 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Tristan Harris on The Dangers of Unregulated AI on Humanity and
    the Workforce (The Daily Show YouTube)

    “This does not have to be our destiny.” Co-founder of the Center for Humane Technology Tristan Harris sits down with Jon Stewart to discuss how AI has already disrupted the workforce as current iterations of the technology have dropped entry-level work by 13%, tech companies' prioritization of their first-to-market stance over product and human safety, and how reliance on AI
    is stifling human growth. #DailyShow #TristanHarris #AI

    https://www.youtube.com/watch?v=675d_6WGPbo

    [Also noted by Matthew Kruk. PGN]

    ------------------------------

    Date: Tue, 7 Oct 2025 08:25:38 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: The popular conception was that AI would be a danger to
    civilization because AI would be so smart, but the reality turns out to be
    the danger is that AI is so stupid.

    ------------------------------

    Date: Sat, 11 Oct 2025 08:52:15 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: AI Data Centers Are an Even Bigger Disaster Than Previously Thought
    (Futurism)

    https://futurism.com/future-society/ai-data-centers-finances

    ------------------------------

    Date: Thu, 2 Oct 2025 11:00:41 +0100
    From: Martin Ward <martin@gkc.org.uk>
    Subject: Microsoft's agent mode is a tool for generating fake data
    (Pivot to AI via YouTube)

    Microsoft has put a Copilot document generator into the online version of Office 365, called "agent mode". Quote: "In the same way vibe coding has transformed software development, the latest reasoning models in Copilot
    unlock agentic productivity for office artifacts."

    This is a gadget for faking evidence.

    Security researcher Kevin Beaumont gave agent mode a good tryout. He asked
    it: "Make a spreadsheet about how our endpoint detection response tool
    blocks 100% of ransomware." It did exactly that. It made up a spreadsheet
    of completely fake data about the product's effectiveness. With graphs.

    Pivot to AI report:
    https://www.youtube.com/watch?v=kH59-8dD08g

    ------------------------------

    Date: Tue, 7 Oct 2025 23:09:51 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Cheer Up, or Else. China Cracks Down on the Haters and Cynics (NYT)

    https://www.nytimes.com/2025/10/08/world/asia/china-censorship-pessimism-despair.html

    As China struggles with economic discontent, Internet censors are silencing those who voice doubts about work, marriage, or simply sigh too loudly
    online.

    ------------------------------

    Date: Mon, 29 Sep 2025 11:45:38 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Criminals offer reporter money to hack BBC (BBC)

    https://www.bbc.com/news/articles/c3w5n903447o

    Like many things in the shadowy world of cyber-crime, an insider threat is something very few people have experience of.

    Even fewer people want to talk about it.

    But I was given a unique and worrying experience of how hackers can
    leverage insiders when I myself was recently propositioned by a criminal
    gang.

    "If you are interested, we can offer you 15% of any ransom payment if you
    give us access to your PC."

    ------------------------------

    Date: Thu, 9 Oct 2025 20:54:45 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Tech billionaires seem to be doom prepping. Should we all be
    worried? (BBC)

    https://www.bbc.com/news/articles/cly17834524o

    Mark Zuckerberg is said to have started work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014.

    It is set to include a shelter, complete with its own energy and food
    supplies, though the carpenters and electricians working on the site were banned from talking about it by non-disclosure agreements, according to a report by Wired magazine. A six-foot wall blocked the project from view of
    a nearby road.

    Asked last year if he was creating a doomsday bunker, the Facebook founder
    gave a flat "no". The underground space spanning some 5,000 square feet
    is, he explained, "just like a little shelter, it's like a basement".

    ------------------------------

    Date: Fri, 3 Oct 2025 06:36:32 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Japan faces Asahi beer shortage after cyber-attack (BBC)

    https://www.bbc.com/news/articles/c0r0y14ly5ro

    Japan is facing a shortage of Asahi products, including beer and bottled
    tea, as the drinks giant grapples with the impact of a major cyber-attack
    that has affected its operations in the country.

    Most of the Asahi Group's factories in Japan have been at a standstill
    since Monday, after the attack hit its ordering and delivering systems.

    Major Japanese retailers, including 7-Eleven and FamilyMart, have now
    warned customers to expect shortages of Asahi products.

    [A kiss is just a kiss, Asahi is just a sigh, as time goes by(e)...
    Casablanca. We'll always have Paris for wine -- and bière. PGN]

    ------------------------------

    Date: Sat, 4 Oct 2025 01:23:59 +0000
    From: Victor Miller <victorsmiller@gmail.com>
    Subject: New WireTap Attack Extracts Intel SGX ECDSA Key via DDR4 Memory-Bus
    Interposer (The Hacker News)

    https://thehackernews.com/2025/10/new-wiretap-attack-extracts-intel-sgx.html?m=1

    ------------------------------

    Date: Mon, 29 Sep 2025 11:22:12 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Exploit Allows for Takeover of Fleets of Unitree Robots
    (Evan Ackerman)

    Evan Ackerman, *IEEE Spectrum* (09/25/25), via ACM TechNews

    Security researchers disclosed a critical Bluetooth Low Energy vulnerability
    in several robots manufactured by Chinese robotics company Unitree that
    gives attackers full root access and enables worm-like self-propagation
    between nearby devices. The exploit, called UniPwn, affects Unitree's Go2
    and B2 quadrupeds as well as its G1 and H1 humanoids, and arises from
    hardcoded encryption keys and insufficient packet validation. Attackers can inject malicious code disguised as Wi-Fi credentials, leading to persistent compromise and potential botnet formation.

    ------------------------------

    Date: Fri, 26 Sep 2025 11:32:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Google Says 90% of Tech Workers Are Now Using AI at Work
    (Lisa Eadicicco)

    Lisa Eadicicco, CNN (09/23/25), via ACM TechNews

    Of 5,000 global technology professionals surveyed by Google's DORA research division, the vast majority (90%) said they now use AI in their jobs, up
    from just 14% who did so in 2024. However, the survey found only 20% of respondents place "a lot" of trust in the quality of AI-generated code, compared to 23% who trust it "a little" and 46% who trust it "somewhat."

    ------------------------------

    Date: Sat, 27 Sep 2025 10:48:55 +0100
    From: Martin Ward <martin@gkc.org.uk>
    Subject: Neon buys phone calls to train AI, then leaks them all

    Neon Mobile is an app that sells your phone calls to AI companies for
    training, and pays you 15–30 cents per minute!

    Could there be a RISK of all this personal data leaking?

    One day after reporting on the new app, TechCrunch reported that Neon's
    by the app’s users, as well as providing public web links to their raw audio files and the transcript text"

    Pivot to AI report:
    https://www.youtube.com/watch?v=G_LKccOiCoo

    ------------------------------

    Date: Sat, 4 Oct 2025 07:23:13 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Government ID data used for age verification stolen
    (This Week in Security)

    [Gee, as if nobody predicted stuff like this, huh?]

    https://this.weekinsecurity.com/discord-says-users-government-ids-used-for-age-checks-stolen-by-hackers/

    ------------------------------

    Date: Fri, 26 Sep 2025 15:23:40 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Federal cyber agency warns of 'serious and urgent' attack on
    tech used by remote workers (CBC)

    https://www.cbc.ca/news/politics/cisco-cyber-attack-vpn-1.7644591

    Government cyber-agencies around the world are rushing to clamp down on
    what appears to be an advanced and sophisticated espionage campaign
    targeting popular security software used by remote workers.

    Calling the threat "serious and urgent," Canada's Communication Security Establishment's (CSE) Centre for Cyber Security joined its international
    allies Thursday urging organizations to take immediate action to patch up vulnerabilities following a widespread hit on the technology security
    company Cisco.

    ------------------------------

    Date: Sat, 11 Oct 2025 12:44:20 -0400
    From: "Gabe Goldberg" <gabe@gabegold.com>
    Subject: Billions of Dollars ‘Vanished’: Low-Profile Bankruptcy Rings Alarms
    on Wall Street (The New York Times)

    The unraveling of First Brands, a midsize auto-parts maker, is exposing
    hidden losses at international banks and “private credit” lenders.

    Unlike traditional banks, private credit lenders say, they have the
    ability to lend quickly because they understand complicated, risky
    businesses and do not need to worry about repaying ordinary depositors
    or reporting public earnings.

    Trillions of dollars have been plowed into private credit over the past
    decade, principally from pension funds, endowments and other groups that
    rely on such investments to fulfill obligations to retirees and the like.

    The Trump administration made moves this summer to allow 401(k) plans to
    invest savings into the private equity funds that extend private credit
    to companies, raising the stakes even further.

    The First Brands bankruptcy could amount to something of an
    I-told-you-so moment for the traditional bankers and private-credit
    skeptics who have long maintained that these upstart lenders deserve
    more scrutiny.

    https://www.nytimes.com/2025/10/10/business/first-brands-bankruptcy-wall-street.html?smid=nytcore-ios-share&referringSource=articleShare

    ------------------------------

    Date: Thu, 25 Sep 2025 23:08:03 -0600
    From: "Matthew Kruk" <mkrukg@gmail.com>
    Subject: 911 Service Is Restored in Louisiana and Mississippi (NYTimes)

    https://www.nytimes.com/2025/09/25/us/mississippi-louisiana-outages-911-emergency.html

    Emergency call service was disrupted across Louisiana and Mississippi for
    more than two hours on Thursday afternoon, officials said, citing damage to fiber optic lines operated by AT&T.

    Gov. Tate Reeves of Mississippi said that the state’s Emergency Management Agency had received reports that AT&T was responding to “a series of fiber cuts,” which he said had interrupted service in Mississippi and Louisiana.

    Scott Simmons, a spokesman for the Mississippi Emergency Management Agency, said there were no indications of foul play, and that AT&T was
    investigating.

    ------------------------------

    Date: Thu, 2 Oct 2025 08:44:19 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: How an Internet mapping glitch turned a random Kansas farm into a
    digital hell (Fusion)

    EXCERPT:
    An hour’s drive from Wichita, Kansas, in a little town called Potwin, there is a 360-acre piece of land with a very big problem.

    The plot has been owned by the Vogelman family for more than a hundred
    years, though the current owner, Joyce Taylor née Vogelman, 82, now rents
    it out. The acreage is quiet and remote: a farm, a pasture, an old orchard,
    two barns, some hog shacks and a two-story house. It’s the kind of place
    you move to if you want to get away from it all. The nearest neighbor is a
    mile away, and the closest big town has just 13,000 people. It is real,
    rural America; in fact, it’s a two-hour drive from the exact geographical center of the United States.

    But instead of being a place of respite, the people who live on Joyce Taylor’s land find themselves in a technological horror story.

    For the last decade, Taylor and her renters have been visited by all kinds
    of mysterious trouble. They've been accused of being identity thieves, spammers, scammers and fraudsters. They've gotten visited by FBI agents, federal marshals, IRS collectors, ambulances searching for suicidal
    veterans, and police officers searching for runaway children. They've found people scrounging around in their barn. The renters have been doxxed, their names and addresses posted on the Internet by vigilantes. Once, someone
    left a broken toilet in the driveway as a strange, indefinite threat.

    All in all, the residents of the Taylor property have been treated like criminals for a decade. And until I called them this week, they had no idea why.

    To understand what happened to the Taylor farm, you have to know a little
    bit about how digital cartography works in the modern era -- in particular, a form of location service known as "IP mapping". [...]

    https://archive.ph/zHha3

    ------------------------------

    Date: Fri, 26 Sep 2025 13:04:28 +0300
    From: Amos Shapir <amos083@gmail.com>
    Subject: Microsoft cuts off cloud services to Israeli military unit (NBC)

    I don't know which is more unsettling: That a private company takes action against a sovereign nation's military at war -- or that a nation at war
    keeps some of its top secrets on a cloud managed by a foreign private
    company.

    ------------------------------

    Date: Fri, 26 Sep 2025 10:42:17 +0100
    From: Martin Ward <martin@gkc.org.uk>
    Subject: ShareFile website

    I recently had to set up an account on ShareFile.

    (1) I used the Firefox feature to generate a strong password. The website
    said there was a "bad character" in the generated password. It wouldn't say *which* character, so I had to go through taking out characters one at a
    time until it was happy. It turned out to be "<". Presumably, this
    character triggered a bug in their software somewhere. Rather than fix the
    bug, they added a check to prevent this character from appearing in
    passwords.

    (2) I pasted in my phone number and it complained that spaces are not
    allowed in phone numbers. The computer code to strip spaces from a phone number is not particularly difficult or complex to write: they had already implemented the code to check for spaces. But I had to manually execute the process of stripping spaces from my phone number.
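
    Indeed, the normalization in question is a one-liner in most languages.
    A shell sketch (the phone number below is a made-up example):

```shell
# Normalizing a user-supplied phone number by deleting spaces -- the kind
# of trivial fix the site could apply instead of rejecting the input.
phone="  +44 1234 567 890 "                      # made-up example number
cleaned="$(printf '%s' "$phone" | tr -d ' ')"    # delete every space
echo "$cleaned"                                  # -> +441234567890
```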

    These are irritants rather than security hazards: but given that the quality
    of the customer-facing interface software is so poor, it does not inspire
    much confidence in the security of their file sharing software generally.

    At least the file I was sharing was encrypted before uploading to the
    ShareFile site!

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.77
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Thu Oct 16 17:00:45 2025


  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Fri Nov 28 15:56:35 2025
    Subject: Risks Digest 34.81

    RISKS-LIST: Risks-Forum Digest Friday 28 November 2025 Volume 34 : Issue 81

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
    (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. *****
    This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.81>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    Tower on Billionaires' Row Is Full of Cracks; Who's to Blame (NYTimes)
    Amazon faces FAA probe after delivery drone snaps Internet cable in Texas
    (CNBC)
    Cloudflare outage and single points of failure
    (TomsHardware via Martin Ward + Cliff Kilby)
    Cryptographers cancel election results after losing decryption key
    (ArsTechnica)
    Bug in jury systems used by several U.S. states exposed sensitive personal
    data (TechCrunch)
    Asahi says 1.5 million customers' data potentially leaked in cyber-attack
    (BBC)
    X feature reveals locations of some users. It could backfire. (NBC News)
    Help! My Rental Car Died Within a Mile, and Avis Charged Me $1,367.
    (NY Times)
    Pentagon contractors want to blow up military right to repair (The Verge)
    Google boss says trillion-dollar AI investment boom has 'elements of
    irrationality' (BBC)
    Holding AI responsible (Lauren Weinstein)
    OpenAI is arguing that a teen who committed suicide with assistance
    violated the Terms of Service by successfully bypassing ChatGPT
    *safeguards* (Lauren Weinstein)
    WhatsApp API Flaw Let Researchers Scrape 3.5 Billion Accounts
    (Lawrence Abrams)
    Privacy commissioner calls for better cybersecurity in Alberta schools after
    big breach (CBC via Matthew Kirk)
    More people are using ChatGPT like a lawyer in court. Some are starting to
    win. (NBC News)
    Generative AI Hallucinations in Legal Motions -- Corrected (Bob Gezelter)
    AI Chatbots Are Putting Clueless Hikers in Danger (Futurism)
    The AI boom is based on a fundamental mistake (The Verge)
    AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and Instructing
    Kids How to Find Knives (Gizmodo)
    How to Tell What’s Real Online (YouTube via Matthew Kruk)
    Is it AI -- or a plane crash? (Politico via Steve Bacher)
    AI chatbots giving self-harm instructions (Cybernews)
    Fears About AI Prompt Talks of Super PACs to Rein In the Industry
    (NYTimes)
    Space debris (TechReview)
    Cloudflare outage not caused by attack as CEO first suspected, but by a
    single file that got too big (ArsTechnica)
    Keurig crash (paul wallich)
    This is what your AI girlfriend looks like without makeup (x)
    The Most Joyless Tech Revolution Ever: AI Is Making Us Rich and Unhappy
    (WSJ)
    Re: AN0M (Steve Bacher)
    Re: Chinese researchers just unveiled a photonic quantum chip that doesn't
    deliver a 1,000-fold speed boost to AI data centers (John Levine)
    Re: Dog Accidentally Shoots and Injures a Pennsylvania Man ... (Martin Ward)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Sat, 25 Oct 2025 18:02:20 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Tower on Billionaires' Row Is Full of Cracks; Who's to Blame
    (The New York Times)

    A superstar team of architects and developers insisted on an all-white
    concrete facade. It could explain some of the building’s problems.

    The New York Times reviewed thousands of pages of court documents, public records and private correspondence between the building’s residents and planners. They reveal that for years, several key members of the team of developers, engineers and architects behind 432 Park had expressed concerns about its white exterior, even before the concrete was poured.

    Concrete typically gets its gray tint from iron oxides in cement; altering
    the components can affect its strength, color and performance. Builders of
    432 Park were presented with a major challenge: how to come up with a
    concrete mixture that met their exacting aesthetic. Companies involved with
    the job called it one of the most difficult concrete projects ever executed.

    Seeking what he once called an “absolutely pure” building, Harry Macklowe, a
    well-known New York developer, tore down the luxury Drake Hotel and commissioned Rafael Viñoly, the Uruguayan modernist, to design a perfectly rectilinear body for a tower on the site. They assembled engineers, construction firms and concrete specialists to carry out the vision.

    The tower at 432 Park Avenue was set to become the tallest residential
    building in the world and one of the slimmest. Its “slenderness” ratio is 15
    to one; by comparison the Empire State Building has a ratio of three to one because it has a much wider base. [...]

    Like other supertall towers, 432 Park relies on a counterweight system to
    address the forces of wind and reduce the feeling of swaying for
    residents. But unlike many other supertall towers that are tiered or
    taper toward the top, 432 Park is rectangular, making it less aerodynamic.

    The developers believed that their boxy design would work, thanks to a
    series of open-air floors that allow wind to pass through.

    But the rapid appearance of cracks, the emergence of new ones and past breakdowns in the counterweight system all point to the building facing unexpected stress from wind, said Scott Chen, a forensic engineer in
    Melbourne, Australia, who studied the building.

    Cracks in the facade increase the risk of water seeping into the structure, which could cause the steel rebar to rust and expand, producing even more cracks.

    This cycle of degradation affects what experts call the building’s
    stiffness, or its ability to respond to wind. More cracking could exacerbate existing problems with mechanical systems, they said, and make the building increasingly vulnerable.

    If this cycle of stress continues, the consequences could be huge, according
    to engineering experts.

    “Chunks of concrete will fall off, and windows will start loosening up,”
    said Mr. Bongiorno, the structural engineer, who echoed concerns of other
    independent engineers contacted by The Times. “You can’t take the
    elevators, mechanical systems start to fail, pipe joints start to break
    and you get water leaks all over the place.”

    https://www.nytimes.com/2025/10/19/nyregion/432-park-avenue-condo-tower.html

    [It's sort of like cryptography -- rolling your own from scratch is
    perilous. You must also be keenly aware of all the past mistakes. PGN]

    ------------------------------

    Date: Wed, 26 Nov 2025 09:20:29 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: Amazon faces FAA probe after delivery drone snaps Internet cable in
    Texas (CNBC)

    Amazon is facing a federal probe after one of its delivery drones downed an Internet cable in central Texas last week. The probe comes as Amazon vies
    to expand drone deliveries to more pockets of the U.S., more than a decade after it first conceived the aerial distribution program, and faces stiffer competition from Walmart, which has also begun drone deliveries.

    The incident occurred on 18 Nov 2025 around 12:45 p.m. Central in Waco,
    Texas. After dropping off a package, one of Amazon's MK30 drones was
    ascending out of a customer's yard when one of its six propellers got
    tangled in a nearby Internet cable, according to a video of the incident
    viewed and verified by CNBC.

    The video shows the Amazon drone shearing the wire line. The drone's motor
    then appeared to shut off and the aircraft landed itself, with its
    propellers windmilling slightly on the way down, the video shows. The drone appeared to remain intact beyond some damage to one of its propellers.

    The Federal Aviation Administration is investigating the incident, a spokesperson confirmed. The National Transportation Safety Board said the agency is aware of the incident but has not opened a probe into the matter.

    Amazon confirmed the incident to CNBC, saying that after clipping the
    Internet cable, the drone performed a safe contingent landing, referring to
    the process that allows its drones to land safely in unexpected conditions.

    ``There were no injuries or widespread Internet service outages. We've paid
    for the cable line's repair for the customer and have apologized for the inconvenience this caused them,'' an Amazon spokesperson told CNBC, noting
    that the drone had completed its package delivery. [...]

    https://www.cnbc.com/2025/11/25/amazon-faa-probe-delivery-drone-incident-texas.html

    ------------------------------

    Date: Tue, 18 Nov 2025 12:56:22 +0000
    From: Martin Ward <martin@gkc.org.uk>
    Subject: Cloudflare outage and single points of failure (TomsHardware)

    "Cloudflare has confirmed it is aware of a major issue affecting its
    Global Network, which is causing outages on platforms like X (formerly
    Twitter), and, ironically, Downdetector."

    https://www.tomshardware.com/news/live/cloudflare-outage-under-investigation-as-twitter-downdetector-go-down-company-confirms-global-network-issue-clone

    There is an old saying "The Net treats censorship as damage and routes
    around it" (John Gilmore). But the modern Internet has imposed multiple
    single points of failure on top of the old Net model (AWS servers,
    Cloudflare servers, etc.) which means that it can no longer even route
    around damage, let alone censorship.

    [Cliff Kilby notes:
    What could possibly be risky about fronting all your services through a
    single portal you cannot maintain?
    https://www.tomshardware.com/news/live/cloudflare-outage-under-investigation-as-twitter-downdetector-go-down-company-confirms-global-network-issue-clone
    Oh. That.
    ]

    ------------------------------

    Date: Tue, 25 Nov 2025 00:40:07 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Cryptographers cancel election results after losing decryption
    key. (Ars Technica)

    The voting system required keys from three different sources, i.e.,
    3-out-of-3, rather than the more conventional 2-out-of-3. One of the keys
    has been “irretrievably lost.”
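    The difference matters: a 2-out-of-3 threshold scheme survives the loss
    of any single key, while 3-out-of-3 does not. A minimal sketch of the
    idea using Shamir secret sharing over a prime field (illustrative only,
    not the system the article describes):

```python
import random

# Shamir 2-out-of-3 secret sharing: the secret is the constant term of a
# random degree-1 polynomial over GF(P); each share is a point on it.
# Any 2 points determine the line, so losing one share is recoverable.
P = 2**61 - 1  # Mersenne prime used as the field modulus

def make_shares(secret, k=2, n=3):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0; pow(den, P-2, P) is the modular inverse.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

secret = 123456789
shares = make_shares(secret)
# Any two shares suffice; the third can be "irretrievably lost".
assert reconstruct(shares[:2]) == secret
assert reconstruct(shares[1:]) == secret
```

    With 3-out-of-3 (a degree-2 polynomial, or simply XOR-split keys), every
    share is a single point of failure -- exactly the failure mode reported.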

    https://arstechnica.com/security/2025/11/cryptography-group-cancels-election-results-after-official-loses-secret-key/

    ------------------------------

    Date: Fri, 28 Nov 2025 07:56:00 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Bug in jury systems used by several U.S. states exposed sensitive
    personal data (TechCrunch)

    Several public websites designed to allow courts across the United States
    and Canada to manage the personal information of potential jurors had a
    simple security flaw that easily exposed their sensitive data, including
    names and home addresses, TechCrunch has exclusively learned.

    A security researcher, who asked not to be named for this story, contacted TechCrunch with details of the easy-to-exploit vulnerability, and identified
    at least a dozen juror websites made by government software maker Tyler Technologies that appear to be vulnerable, given that they run on the same platform.

    The sites are all over the country, including California, Illinois,
    Michigan, Nevada, Ohio, Pennsylvania, Texas, and Virginia.

    Tyler told TechCrunch that it is fixing the flaw after we alerted the
    company to the information exposures.

    The bug meant it was possible for anyone to obtain the information about
    jurors who are selected for service. To log into these platforms, a juror
    is provided a unique numerical identifier, which could be brute-forced
    because the numbers were assigned sequentially. The platform also did not
    have any mechanism to prevent anyone from flooding the login pages with a
    large number of guesses, a protection known as “rate-limiting.” [...]

    https://techcrunch.com/2025/11/26/bug-in-jury-systems-used-by-several-us-states-exposed-sensitive-personal-data/

    ------------------------------

    From: Matthew Kruk <mkrukg@gmail.com>
    Date: Thu, 27 Nov 2025 00:10:37 -0700
    Subject: Asahi says 1.5 million customers' data potentially leaked in
    cyber-attack (BBC)

    https://www.bbc.com/news/articles/ce86n44178no

    Japanese beer giant Asahi revealed on Thursday that a massive cyber-attack
    in September has potentially leaked the personal information of more than
    1.5 million customers.

    The drinks company published a statement on its investigation into the ransomware attack, which had crippled its operations across its factories
    in Japan and forced employees to take orders by pen and paper.

    Asahi said it found that personal details of people who had contacted its customer service centres were likely exposed and that those affected would
    be notified soon.

    The firm added that it would delay the release of its full-year financial results to focus on dealing with the fallout of the attack.

    [A kiss is just a kiss, Asahi is just a sigh. As Time Goes By.
    Herman Hupfeld, Casablanca, 1942. And a leak is just (another) leak,
    as time goes by! PGN]

    ------------------------------

    Date: Tue, 25 Nov 2025 21:02:47 +0000 (UTC)
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: X feature reveals locations of some users. It could backfire.
    (NBC News)

    Advocates for transparency on social media cheered this weekend when X, the
    app owned by tech billionaire Elon Musk, rolled out a new feature that disclosed what the company said were the country locations of accounts.

    The feature appeared to unmask a number of accounts that were portraying themselves as belonging to Americans but in reality were based in countries such as India, Thailand and Bangladesh.

    But by Monday, the effectiveness and accuracy of the feature were already
    in question, as security experts, social media researchers and two former
    X employees said the location information could be inaccurate, or spoofed
    using widely available technology such as virtual private networks (VPNs)
    that hide users' true locations.

    The former employees said the idea had been pitched since at least 2018, but had been repeatedly shot down.

    ``Now that this feature exists, I think it's absolutely going to be
    exploited, and people will learn to dodge it very quickly,'' said Darren Linvill, a professor and a co-director of Clemson University's Media
    Forensics Hub. [...]

    https://www.nbcnews.com/tech/elon-musk/x-user-location-feature-country-elon-musk-new-rcna245620

    (The RISK is not only that users could spoof it to appear legitimate but
    also that many of the supposedly unmasked foreign agents may be unjustly accused Americans simply because of inaccuracies in how the location is determined.)

    ------------------------------

    Date: Thu, 20 Nov 2025 10:07:00 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Help! My Rental Car Died Within a Mile, and Avis Charged Me $1,367.
    (NY Times)

    A visitor to Italy had to abandon an SUV after it conked out just minutes
    from the rental agency. Then he got another surprise: a hefty repair bill.

    https://www.nytimes.com/2025/11/20/travel/avis-rental-car-repair-charges.html

    ------------------------------

    Date: Wed, 26 Nov 2025 17:41:59 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Pentagon contractors want to blow up military right to repair
    (The Verge)

    https://www.theverge.com/news/830715/military-contractors-right-to-repair-ndaa-data-as-a-service

    ------------------------------

    Date: Tue, 18 Nov 2025 06:49:20 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Google boss says trillion-dollar AI investment boom has
    'elements of irrationality' (BBC)

    https://www.bbc.com/news/articles/cwy7vrd8k4eo

    Every company would be affected if the AI bubble were to burst, the head of Google's parent firm Alphabet has told the BBC.

    Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an "extraordinary moment", there was some "irrationality" in the current AI boom.

    It comes amid fears in Silicon Valley and beyond of a bubble as the value of
    AI tech companies has soared in recent months and companies spend big on the burgeoning industry.

    ------------------------------

    Date: Mon, 17 Nov 2025 09:53:58 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Holding AI responsible

    Seems to me that if these AI systems are going to refer to themselves as "I" and "we", etc. the firms running them should be 100% responsible for the
    errors they spew and the damage they do, to the same extent as any
    individual person would be. That means you, billionaire CEOs!

    ------------------------------

    Date: Wed, 26 Nov 2025 17:34:29 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: OpenAI is arguing that a teen who committed suicide with assistance
    from ChatGPT violated the Terms of Service

    by successfully bypassing ChatGPT *safeguards*.

    ------------------------------

    Date: Wed, 26 Nov 2025 11:30:58 PST
    From: ACM TechNews <technews-editor@acm.org>
    Subject: WhatsApp API Flaw Let Researchers Scrape 3.5 Billion Accounts
    (Lawrence Abrams)

    Lawrence Abrams, BleepingComputer (11/22/25)

    Researchers at Austria's University of Vienna and SBA Research uncovered a critical flaw in the WhatsApp API that allowed them to scrape 3.5 billion
    user phone numbers and associated personal details by automating contact-discovery checks without encountering any rate limits. By abusing multiple unprotected endpoints, they collected profile photos, "about"
    text, and device information. Their findings show that improperly protected APIs remain one of the biggest drivers of mass data exposure.

    ------------------------------

    Date: Wed, 19 Nov 2025 06:33:28 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Privacy commissioner calls for better cybersecurity in Alberta
    schools after big breach

    https://www.cbc.ca/news/canada/calgary/privacy-commissioner-report-powerschool-9.6983650

    Alberta's privacy commissioner wants to see improved security policies in schools after a cybersecurity breach last year exposed highly sensitive information of hundreds of thousands of students.

    A new privacy commissioner report was released this week after 33 public, charter and Francophone schools and school boards flagged to the office
    earlier this year that they were affected by an online breach of the
    education software provider PowerSchool. The platform is used to store a
    range of student information.

    Personal information the breach exposed varied between school boards, but
    it included names, birthdates, addresses, social security numbers, academic records and medical information, like diagnoses and medications. The breach affected students, parents and staff members.

    ------------------------------

    Date: Wed, 19 Nov 2025 08:30:07 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: More people are using ChatGPT like a lawyer in court. Some are
    starting to win. (NBC News)

    With generative AI tools available to anyone with an Internet connection, a rising number of litigants are using ChatGPT to assist in their legal cases.

    But AI’s growing abilities to create realistic videos, images, documents and audio have judges worried about the trustworthiness of evidence in their courtrooms.

    https://www.nbcnews.com/tech/innovation/ai-chatgpt-court-law-legal-lawyer-self-represent-pro-se-attorney-rcna230401
    https://www.nbcnews.com/tech/tech-news/ai-generated-evidence-deepfake-use-law-judges-object-rcna235976

    ------------------------------

    Date: Wed, 26 Nov 2025 19:32:18 -0500
    From: Bob Gezelter <gezelter@rlgsc.com>
    Subject: Generative AI Hallucinations in Legal Motions -- Corrected

    [CORRECTION: Subject Line in message was incorrect]

    The Guardian published an article reporting that the Nevada County
    California District Attorney's office withdrew a criminal case filing containing at least one "artificial intelligence"-generated "inaccurate citation."

    Generative AI hallucinations in legal filings are a serious problem. In
    my consulting practice, I have been retained by attorneys to assist in
    understanding technical issues relating to computers and networks. Each
    and every element of a filing must be researched and evaluated. Each
    incorrect argument or citation requires effort by the other party and the
    court. If not detected quickly, the cost of such an error can mount to
    thousands or tens of thousands of dollars, significant amounts for a
    resource-stretched public defender, or a private attorney representing a
    private citizen or small business.

    The full article can be found at: https://www.theguardian.com/us-news/2025/nov/26/prosecutor-ai-inaccurate-motion

    ------------------------------

    Date: Thu, 27 Nov 2025 07:48:20 +0800
    From: Dan Jacobson <jidanni@jidanni.org>
    Subject: AI Chatbots Are Putting Clueless Hikers in Danger (Futurism)

    AI Chatbots Are Putting Clueless Hikers in Danger, Search and Rescue
    Groups Warn. Relying on ChatGPT for hiking advice is a horrible idea.
    https://futurism.com/ai-chatbots-hikers-danger

    ------------------------------

    Date: Tue, 25 Nov 2025 17:39:00 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: The AI boom is based on a fundamental mistake (The Verge)

    *Large language mistake*

    *Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.*

    EXCERPT:

    “Developing superintelligence is now in sight,” says <https://archive.ph/o/BV16q/https://www.meta.com/superintelligence/> Mark Zuckerberg, heralding the “creation and discovery of new things that aren’t imaginable today.” Powerful AI “may come as soon as 2026 [and will be] smarter than a Nobel Prize winner across most relevant fields,” says <https://archive.ph/o/BV16q/https://www.darioamodei.com/essay/machines-of-loving-grace>

    Dario Amodei, offering the doubling of human lifespans or even “escape
    velocity” from death itself. “We are now confident we know how to build
    AGI,” says
    <https://archive.ph/o/BV16q/https://blog.samaltman.com/reflections> Sam
    Altman, referring to the industry’s holy grail of artificial general
    intelligence — and soon superintelligent AI “could massively accelerate
    scientific discovery and innovation well beyond what we are capable of
    doing on our own.”

    Should we believe them? Not if we trust the science of human intelligence,
    and simply look at the AI systems these companies have produced so far.

    The common feature cutting across chatbots such as OpenAI’s ChatGPT,
    Anthropic’s Claude, Google’s Gemini, and whatever Meta is calling its AI
    product this week is that they are all primarily ``large *language*
    models.'' Fundamentally, they are based on gathering an extraordinary
    amount of linguistic data (much of it codified on the Internet), finding
    correlations between words (more accurately, sub-words called *tokens*),
    and then predicting what output should follow given a particular prompt
    as input. For all the alleged complexity of generative AI, at their core
    they really are models of language.
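    The mechanism the article describes -- predicting the next token from
    observed co-occurrence statistics -- can be illustrated with a toy bigram
    model, vastly simpler than any LLM but the same in kind (the corpus and
    names below are my own, purely for illustration):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which token follows which in a
# corpus, then predict the most frequent successor. Real LLMs use learned
# representations over trillions of tokens, but the task -- predicting
# the next token from prior text -- is the same in kind.
corpus = "the cat sat on the mat the cat ate".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    # most_common breaks ties by first occurrence in the corpus
    return follows[token].most_common(1)[0][0]

assert predict_next("the") == "cat"   # "the" -> "cat" twice, "mat" once
```

    Nothing in this procedure models the *referents* of the words, only
    their distribution -- which is precisely the article's point.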

    The problem is that according to current neuroscience, human thinking is
    largely independent of human language — and we have little reason to
    believe ever more sophisticated modeling of language will create a form
    of intelligence that meets or surpasses our own. Humans use language to
    communicate the results of our capacity to reason, form abstractions,
    and make generalizations, or what we might call our intelligence. We use
    language to think, but that does not *make* language the same as
    thought. Understanding this distinction is the key to separating
    scientific fact from the speculative science fiction of AI-exuberant
    CEOs.

    The AI hype machine relentlessly promotes the idea that we’re on the verge
    of creating something as intelligent as humans, or even *superintelligence* that will dwarf our own cognitive capacities. If we gather tons of data
    about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

    But this theory is seriously scientifically flawed. LLMs are simply tools
    that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many
    data centers we build.

    Last year, three scientists published a commentary <https://archive.ph/o/BV16q/https://gwern.net/doc/psychology/linguistics/2024-fedorenko.pdf>
    in the journal *Nature* titled, with admirable clarity, “Language is primarily a tool for communication rather than thought.” Co-authored by Evelina Fedorenko (MIT), Steven T. Piantadosi (UC Berkeley) and Edward
    A.F. Gibson (MIT), the article is a tour de force summary of decades of scientific research regarding the relationship between language and thought, and has two purposes: one, to tear down the notion that language gives rise
    to our ability to think and reason, and two, to build up the idea that
    language evolved as a cultural tool we use to share our thoughts with one another.

    Let’s take each of these claims in turn...

    [...]
    https://archive.ph/BV16q
    -or- https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

    ------------------------------

    Date: Mon, 17 Nov 2025 16:47:37 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and
    Instructing Kids How to Find Knives (Gizmodo)

    https://gizmodo.com/ai-powered-teddy-bear-caught-talking-about-sexual-fetishes-and-instructing-kids-how-to-find-knives-2000687140

    ------------------------------

    Date: Thu, 27 Nov 2025 20:29:20 -0700
    From: "Matthew Kruk" <mkrukg@gmail.com>
    Subject: How to Tell What’s Real Online

    https://www.youtube.com/watch?v=o4I_hOz_MLw

    In a world overflowing with opinions, clips, conspiracies, and AI-generated answers, how do you know what’s actually true? Neil deGrasse Tyson breaks down his personal checklist for navigating the modern information landscape—yellow flags, red flags, and why evidence-based thinking matters more than ever. From scientific claims and podcasts to clipped videos and industry commentary, Neil shows you how to separate signal from noise and
    think like a scientist in the digital age.

    How do you tell what’s real? Neil deGrasse Tyson breaks down how to tell which sources are trustworthy and which yellow flags to look out for. In an
    age of so much information, how do you parse what’s real and what’s misinformation?

    ------------------------------

    Date: Fri, 28 Nov 2025 08:13:09 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Is it AI -- or a plane crash?

    In the age of artificial intelligence, a top federal accident investigator worries about the technology’s potential influence on the public following disasters.

    About 11 hours after the nation’s worst airline disaster in more than two decades, an X user posted a dramatic image of rescuers climbing atop the wreckage in the Potomac River, emergency lights illuminating the night sky.

    But it wasn’t real.

    The image, which got more than 21,000 views following January’s deadly
    crash between a regional jet and an Army Black Hawk helicopter, doesn’t
    match photos of the mangled fuselage captured after the Jan. 29 disaster
    — or the observations of Washington police officers responding to the
    scene, according to police department spokesperson Tom Lynch.

    One media fact-check quickly flagged it as a forgery — probably created using artificial intelligence, according to a “DeepFake-o-meter” developed by the University at Buffalo. Three AI checking tools used by POLITICO also labeled it as being likely AI-generated. The post is no longer available; X says the account has been suspended.

    But the image is not an isolated occurrence online. A POLITICO review found evidence that AI-created content is already becoming routine in the wake of transportation disasters, including after a UPS cargo plane crash earlier
    this month that killed 14 people. Posts about aviation incidents highlighted
    in this story were from users who didn’t respond to requests for comment. [...]

    https://www.politico.com/news/2025/11/27/people-believe-this-stuff-ai-a-new-headache-for-air-disaster-investigators-00635068

    ------------------------------

    Date: Thu, 20 Nov 2025 07:43:07 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: AI chatbots giving self-harm instructions (Cybernews)

    https://cybernews.com/ai-news/llms-self-harm/

    ------------------------------

    Date: Tue, 25 Nov 2025 20:54:09 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Fears About AI Prompt Talks of Super PACs to Rein In the Industry
    (NYTimes)

    https://www.nytimes.com/2025/11/25/us/politics/ai-super-pac-anthropic.html?unlocked_article_code=1.4E8.ubAv.HlNvbDVS7hjY&smid=url-share

    ------------------------------

    Date: Wed, 26 Nov 2025 13:00:45 -0600
    From: Robert Dorsett via another list
    Subject: Space debris (TechReview)

    https://www.technologyreview.com/2025/11/17/1127980/what-is-the-chance-your-plane-will-be-hit-by-space-debris/

    What is the chance your plane will be hit by space debris?

    ------------------------------

    Date: Wed, 19 Nov 2025 16:05:19 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Cloudflare outage not caused by attack as CEO first suspected, but
    by a single file that got too big (ArsTechnica)

    SAME OLD SAME OLD SAME OLD SAME OLD ...

    Duct tape and bent paper clips! -L

    https://arstechnica.com/tech-policy/2025/11/cloudflare-broke-much-of-the-internet-with-a-corrupted-bot-management-file/

    ------------------------------

    Date: Thu, 27 Nov 2025 08:28:40 -0500
    From: paul wallich <pw@panix.com>
    Subject: Keurig crash (caffiend?)

    This one feels very old school, but I think it's still a useful reminder:

    A few days ago my spouse complained that one of the "favorites" buttons
    on our coffee maker (which trigger a preset temperature and brewing time)
    wasn't working. I figured it was probably just a mechanical/electrical
    failure, and we groused about shoddy manufacturing.

    This morning another one of the buttons didn't work, but in a weird way:
    When I pushed it, the little display briefly indicated the correct preset
    values, then changed back to the default setting for one of them. So I
    unplugged and replugged the coffee maker, waited for it to reboot, and
    presto! The buttons were working again.

    I have no idea what went wrong in the code -- bit flip error, very slow
    memory leak, something else entirely -- but somehow clearing the RAM of a machine that typically runs for months or years at a time fixed it for now.

    But this shows once again how even the simplest machines in our lives are
    operated by piles of code of unknowable complexity, quality, or fault
    tolerance, developed with toolchains the end user knows nothing about, and
    with (probably thankfully) no provisions for ever being updated.
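A purely speculative sketch of the "very slow memory leak" hypothesis (nobody knows what the real firmware does, and every name and number below is invented): a controller task that loses a few bytes of a fixed pool on each button press will eventually start refusing presets, and a power cycle "fixes" it by clearing RAM.

```python
# Speculative illustration only -- not the actual coffee-maker firmware.
POOL_SIZE = 1024  # bytes in a hypothetical fixed working pool

class PresetFirmware:
    """Toy model of a controller with a slow memory leak."""
    def __init__(self):
        self.used = 0  # bytes of the pool currently in use

    def press_button(self):
        if self.used + 16 > POOL_SIZE:
            return False        # allocation fails: the button seems "dead"
        self.used += 16         # scratch buffer for applying the preset
        self.used -= 13         # only part is freed: 3 bytes leak per press
        return True

    def power_cycle(self):
        self.used = 0           # unplug/replug clears RAM

fw = PresetFirmware()
presses = 0
while fw.press_button():        # months of use, compressed into a loop
    presses += 1
print("button died after", presses, "presses")
fw.power_cycle()
print("after reboot:", "works" if fw.press_button() else "dead")
```

The point of the toy model is only that a leak proportional to use produces exactly the observed symptom: gradual, per-button failure in a device that is never rebooted, cured instantly by pulling the plug.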

    ------------------------------

    Date: Mon, 17 Nov 2025 17:10:03 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: This is what your AI girlfriend looks like without makeup

    https://x.com/AdamLowisz/status/1990132224998486073

    ------------------------------

    Date: Mon, 24 Nov 2025 18:11:50 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: The Most Joyless Tech Revolution Ever: AI Is Making Us Rich and
    Unhappy (WSJ)

    *Discomfort around artificial intelligence helps explain the disconnect
    between a solid economy and an anxious public*

    https://www.wsj.com/tech/ai/the-most-joyless-tech-revolution-ever-ai-is-making-us-rich-and-unhappy-6b7116a3?st=UBhQ8Z

    EXCERPT:

    Artificial intelligence might be the most transformative technology in generations. It is also the most joyless.

    While Wall Street greets AI with open arms, ordinary Americans respond with ambivalence, anxiety, even dread.

    This isn't like the dot-com era. A survey in 1995 found 72% of respondents comfortable with new technology such as computers and the Internet. Just
    24% were not.

    Fast forward to AI now, and those proportions have flipped: just 31% are comfortable with AI while 68% are uncomfortable, a summer survey for CNBC found.

    Why the difference? The dot-com bubble, like the AI boom, had its excesses
    and absurdity. But it also shimmered with optimism and adventure. From
    Fortune 500 CEOs to college dropouts, everyone had a web-based business
    idea. Demand for digitally savvy workers was off the charts.

    Today, the optimism is largely confined to AI architects and gimlet-eyed executives calculating how much AI can reduce head count while workers
    wonder whether they will be replaced by AI, or someone who knows AI. Meta Platforms, Microsoft and Amazon, three of the leading purveyors of AI, have
    all announced layoffs this year.

    *A piece of the disconnect*

    [Long but worthy item truncated for RISKS. PGN]

    ------------------------------

    Date: Thu, 20 Nov 2025 11:21:00 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Re: AN0M (RISKS 34.79)

    Am I understanding this correctly?  It seems like a variation on the notion
    of an encryption system with a backdoor (remember the Clipper chip?), except that it's promoted only to presumed criminals.

    ------------------------------

    Date: 19 Nov 2025 22:36:36 -0500
    From: "John Levine" <johnl@iecc.com>
    Subject: Re: Chinese researchers just unveiled a photonic quantum chip that
    doesn't deliver a 1,000-fold speed boost to AI data centers (x)

    > This "world first" 6-inch thin-film lithium niobate marvel just won the
    > Leading Technology Award at World Internet Conference Wuzhen Summit,
    > beating 400+ global entries ...

    Here's a less breathless article. It's a significant advance in photonics,
    circuits that use light rather than electrons, and it seems that it will
    speed up some kinds of computations, but it is far from a general-purpose
    AI accelerator.

    "Taking a step back, claims that the device can outstrip leading
    NVIDIA graphics processors by a factor of 1,000 reflect the type of
    performance gains quantum approaches are expected to deliver on
    certain classes of problems, though these comparisons are often
    faulty, as they depend heavily on the underlying task and are not
    equivalent to general-purpose speed."

    https://thequantuminsider.com/2025/11/15/chinas-new-photonic-quantum-chip-promises-1000-fold-gains-for-complex-computing-tasks/

    ------------------------------

    Date: Wed, 19 Nov 2025 17:58:34 +0000
    From: Martin Ward <martin@gkc.org.uk>
    Subject: Re: Dog Accidentally Shoots and Injures a Pennsylvania Man, Police
    Say (RISKS-34.80)

    The man had been cleaning a shotgun

    ... while it was still loaded??!!

    I think the man was lucky to avoid a Darwin Award.

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.81
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Sat Jan 3 15:13:35 2026
    Subject: Risks Digest 34.82

    RISKS-LIST: Risks-Forum Digest Saturday 3 January 2026 Volume 34 : Issue 82

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.82>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents: [After a month off, I am BACK. Best wishes for 2026, PGN]
    Eurostar Trains Face Day of Delays After Power Failure (Jenny Gross)
    Autonomous cars are the wet dream of fascist billionaires: What if
    a child had been trapped under that car, not a cat? (Lauren Weinstein)
    Waymo's Self-Driving Cars Behaving Like NY Cabbies (Katherine Bindley)
    Woman Discovers Man Inside The Trunk Of Her LA Waymo Ride (Patch)
    Massive San Francisco power failure caused Waymo robotaxis to freeze in
    intersections, potentially blocking emergency vehicles, due to lack of
    traffic lights (Lauren Weinstein)
    Waymo temporarily suspends service in SF amid power outage (SFGate)
    A small plane crashed when a 3D-printed part bought at an air show melted
    (BBC)
    A Significant Number of Airbus Planes Require Software Fix Before They Can
    Fly (WSJ)
    Airbus: Flights resume as normal after software update warning (BBC)
    Chinese-Linked Hackers Use Back Door for Potential 'Sabotage,'
    U.S. and Canada Say (A.J. Vicens)
    Chinese-Made Buses in Norway Can be Halted Remotely (AP)
    Microsoft Quietly Shuts Down Windows Shortcut Flaw After Years of
    Espionage Abuse (Carly Page)
    AI Hackers Are Coming Dangerously Close to Beating Humans (WSJ)
    OpenAI says AI browsers may always be vulnerable to prompt injection attacks
    (TechCrunch)
    Condé Nast gets hacked and DataBreaches gets played (DataBreaches)
    These Scam Centers Were Blown Up. Was It All for Show? (NYTimes)
    ShakeAlert sends false alarm about magnitude 5.9 earthquake in California,
    Nevada (LA Times)
    Got an earthquake alert (Dan Jacobson)
    YouTube's algorithm is on an AI slop and brainrot-only diet (knowtechie)
    AI Slop on YouTube (Lauren Weinstein)
    Co-Creator of Go Language is Rightly Furious Over This Appreciation Email
    (Itsfoss)
    When AI Took My Job, I Bought a Chain Saw (NYTimes via Matthew Kruk)
    Coffee shops, retail stores, even hotels are ditching humans to serve you
    better (NationalPost)
    Monster of 2025 -- Endless Subscriptions (Mother Jones)
    Social media taught Hamas how to disable Israeli tanks (Ed Ravin)
    Modern cars as a source of surveillance data (Ed Ravin)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Wed, 31 Dec 2025 13:13:18 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: Eurostar Trains Face Day of Delays After Power Failure
    (Jenny Gross)

    Jenny Gross, *The New York Times*, 31 Dec 2025

    A major power failure shut down the Chunnel between England and France yesterday, delaying thousands of would-be travelers. [PGN-ed]

    ------------------------------

    Date: Fri, 5 Dec 2025 07:36:47 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Autonomous cars are the wet dream of fascist billionaires: What if
    a child had been trapped under that car, not a cat?

    New video shows Google's Waymo Murder Car driving off even as people
    actively tried to coax a cat out from underneath. There's no way to contact
    Google from OUTSIDE the car without using a damned app on a phone! People
    were scared to stand in front of and behind the car, though that was
    probably a safe approach to immobilize it.

    I can't emphasize enough what a horrific trap autonomous vehicles are. They
    do not have common sense. They can be controlled under orders of fascist governments and police officials -- both to NOT go somewhere and to go somewhere the rider didn't choose. They could be used to deliver hazardous materials to a location.

    They are the mobile manifestation of AI slop and fascist-supporting billionaires. We already know that Google's CEO is in bed with Trump and
    would ultimately do anything he was told to do by this fascist
    government. USE YOUR HEADS PEOPLE! IT'S A DIFFERENT WORLD NOW. BIG TECH
    CANNOT BE TRUSTED WITH A FASCIST GOVERNMENT IN CONTROL!

    https://www.nytimes.com/2025/12/05/us/waymo-kit-kat-san-francisco.html

    ------------------------------

    Date: Wed, 3 Dec 2025 11:10:50 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Waymo's Self-Driving Cars Behaving Like NY Cabbies
    (Katherine Bindley)

    Katherine Bindley. The Wall Street Journal (12/02/25)

    Waymo's self-driving cars in San Francisco are driving more aggressively, surprising residents who long viewed them as overly cautious. Witnesses
    report the vehicles making risky maneuvers--zigzag lane changes, rolling
    stops, tight passes, and even an illegal U-turn that led police to pull one over. The shift stems from software updates designed to make Waymos "confidently assertive" so they can navigate dense city traffic without disrupting flow.

    ------------------------------

    Date: Thu, 11 Dec 2025 18:56:28 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Woman Discovers Man Inside The Trunk Of Her LA Waymo Ride (Patch)

    The incident was startling but not a crime, according to police.

    https://patch.com/california/los-angeles/viral-video-shows-man-being-discovered-waymo-trunk

    ------------------------------

    Date: Sun, 21 Dec 2025 13:28:42 PST
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Massive San Francisco power failure caused Waymo robotaxis to
    freeze in intersections, potentially blocking emergency vehicles, due to
    lack of traffic lights

    [... and over a week later, this item: PG&E (Dahlia Michaels, *The San
    Francisco Chronicle*, 30 Dec 2025). After one week, various areas of the
    city remained dark. PGN-ed article, not a happy one for those affected.
    PGN]

    ------------------------------

    Date: Sun, 21 Dec 2025 15:20:21 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: Waymo temporarily suspends service in SF amid power outage (SFGate)

    *A Waymo driverless car is not able to detect traffic lights after a major power outage in San Francisco, California, United States on December 20,
    2025.*

    Waymo halted service in San Francisco as of Saturday at 8 p.m., following a power outage that left approximately 30% of the city without power. The autonomous cars have been causing traffic jams throughout the city, as the vehicles seem unable to function without traffic signals.

    ``We have temporarily suspended our ride-hailing services given the broad
    power outage in San Francisco,'' Suzanne Philion, a Waymo spokesperson,
    told SFGATE via email Saturday night. ``We are focused on keeping our
    riders safe and ensuring emergency personnel have the clear access they
    need to do their work.''

    Pedestrians posted videos on X Saturday of Waymo cars stuck at
    intersections with their lights flashing. [...]

    https://www.sfgate.com/bayarea/article/waymo-temporarily-suspends-service-sf-amid-power-21254917.php
    https://sfstandard.com/2025/12/20/what-we-know-about-saturdays-sf-power-outage/

    ------------------------------

    Date: Fri, 5 Dec 2025 14:36:06 +0000
    From: "Wendy M. Grossman" <wendyg@pelicancrossing.net>
    Subject: A small plane crashed when a 3D-printed part bought at an air show
    melted (BBC)

    https://www.bbc.co.uk/news/articles/c1w932vqye0o

    ------------------------------

    Date: Fri, 28 Nov 2025 19:50:29 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: A Significant Number of Airbus Planes Require Software Fix Before
    They Can Fly

    *Some planes could be temporarily grounded after the airplane maker said
    solar radiation may corrupt data critical to flight controls*
    Air travel around the world is facing potential disruptions this weekend
    after Airbus said a significant number of the European plane maker's jets require fixes before they are able to carry passengers again.

    European regulators on Friday mandated the fixes after a solar-radiation
    event disrupted cockpit systems on an Airbus jet operated by JetBlue Airways
    in October. Under regulators' emergency order, jets could be temporarily grounded if airlines don't make certain software or hardware updates by late Saturday.

    The European Union Aviation Safety Agency's order Friday came after Airbus
    said that its A320 family of planes needed to be inspected and have software and hardware fixes completed. The U.S. Federal Aviation Administration is expected to issue a related emergency order, according to government and industry officials.

    Airbus said about 6,000 of the planes in its A320 family are affected, or roughly half the fleet. [...]

    https://www.wsj.com/business/airlines/airbus-grounds-significant-number-of-a320-planes-8d3d4d09?st=7xEZZc

    [Gabe Goldberg noted an item in *The New York Times*:
    https://www.nytimes.com/2025/11/28/business/airbus-software-a320-jets.html?smid=nytcore-ios-share
    PGN]

    ------------------------------

    Date: Sat, 29 Nov 2025 06:43:30 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Airbus: Flights resume as normal after software update warning

    Thousands of Airbus planes are being returned to normal service after being grounded for hours due to a warning that solar radiation could interfere
    with onboard flight control computers.

    The aerospace giant -- based in France -- said around 6,000 of its A320
    planes had been affected, with most requiring a quick software update.
    Some 900 older planes need a replacement computer. [...]

    The firm identified a problem with the aircraft's computing software which calculates a plane's elevation, and found that at high altitudes, data could
    be corrupted by intense radiation released periodically by the Sun.

    As well as the A320, the company's best-selling aircraft, the A318, A319 and the A321 models were also impacted.

    While approximately 5,100 of the planes could see their issues resolved with the simple software update, for around 900 older planes, a replacement
    computer would be needed. [...]

    https://www.bbc.com/news/articles/c4gp9d28p74o

    ------------------------------

    Date: Mon, 8 Dec 2025 15:36:47 PST
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Chinese-Linked Hackers Use Back Door for Potential 'Sabotage,'
    U.S. and Canada Say (A.J. Vicens)

    A.J. Vicens, Reuters (12/04/25)

    The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Canadian Centre for Cyber Security issued an advisory and a detailed malware analysis report on Dec. 4 indicating that hackers with ties to the Chinese government have targeted unnamed government and IT entities using the sophisticated "Brickstorm" malware. The malware enables hackers to penetrate
    an organization's network, steal login credentials and other sensitive data, and even take full control of targeted devices.

    ------------------------------

    Date: Fri, 7 Nov 2025 11:20:15 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Chinese-Made Buses in Norway Can be Halted Remotely (AP)

    Associated Press (11/05/25), via ACM TechNews

    Norwegian transport operator Ruter is tightening security after tests showed Chinese-made Yutong electric buses can be remotely accessed for software updates and diagnostics, theoretically allowing them to be stopped. Ruter
    said manufacturers can access battery and power controls via mobile
    networks. The company plans stricter procurement rules, local firewalls, and cybersecurity measures to monitor updates before they reach buses. Yutong
    said its data, stored in Germany, is encrypted and used only for maintenance and optimization purposes.

    ------------------------------


    Microsoft Quietly Shuts Down Windows Shortcut Flaw After Years of Espionage
    Abuse (Carly Page)

    Carly Page, The Register (12/04/25)

    Microsoft recently patched a critical Windows shortcut file flaw that Trend Micro researchers said has been exploited by 11 state-sponsored hacking
    groups, including those from North Korea, Iran, Russia, and China, since
    2017. The vulnerability enabled malicious .lnk shortcut files to conceal nefarious payloads by padding harmful command-line arguments with whitespace
    or other non-printing characters. With the fix, the full command is now displayed in Windows' "Properties" dialog.
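The concealment trick is easy to picture: if a dialog renders only a fixed-width prefix of the command line, whitespace padding pushes the real arguments out of sight. A minimal, hypothetical sketch (the display width, filenames, and payload string are all invented; real .lnk exploitation details differ):

```python
# Hypothetical illustration of hiding arguments behind whitespace padding.
DISPLAY_WIDTH = 64  # pretend the old Properties dialog showed this many chars

benign = "notepad.exe readme.txt"
hidden = benign + " " * 200 + "-run payload.dll"   # made-up payload name

shown = hidden[:DISPLAY_WIDTH]          # what a truncating UI displays
print(repr(shown.rstrip()))             # user sees only the benign command
print("payload visible?", "payload" in shown)
```

The fix described above amounts to displaying the full, un-truncated command so the padded tail can no longer hide.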

    ------------------------------

    Date: Sun, 28 Dec 2025 13:12:54 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: AI Hackers Are Coming Dangerously Close to Beating Humans (WSJ)

    A recent Stanford experiment shows what happens when an
    artificial-intelligence hacking bot is unleashed on a network.

    - Stanford University's AI bot, Artemis, outperformed nine out of ten
    human penetration testers in finding network vulnerabilities.
    - Artemis operated at a cost of under $60 per hour, significantly
    cheaper than human testers who charge between $2,000 and $2,500 per day.
    - Despite its effectiveness, Artemis produced approximately 18% false
    positive bug reports and missed an obvious bug spotted by human testers.

    [...]

    https://www.wsj.com/tech/ai/ai-hackers-are-coming-dangerously-close-to-beating-humans-4afc3ad6

    ------------------------------

    Date: Mon, 22 Dec 2025 17:56:38 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: OpenAI says AI browsers may always be vulnerable to prompt
    injection attacks (TechCrunch)

    https://techcrunch.com/2025/12/22/openai-says-ai-browsers-may-always-be-vulnerable-to-prompt-injection-attacks/

    ------------------------------

    Date: Sun, 28 Dec 2025 01:45:25 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Condé Nast gets hacked and DataBreaches gets played (DataBreaches)

    https://databreaches.net/2025/12/25/conde-nast-gets-hacked-and-databreaches-gets-played-christmas-lump-of-coal-edition/

    ------------------------------

    Date: Fri, 28 Nov 2025 18:22:37 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: These Scam Centers Were Blown Up. Was It All for Show?

    Myanmar’s junta made a grand display of demolishing buildings that hosted
    the centers, even broadcasting the explosions. But the scammers have found
    new homes.

    https://www.nytimes.com/2025/11/28/world/asia/myanmar-scam-centers-junta.html

    ------------------------------

    Date: Thu, 4 Dec 2025 09:53:58 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: ShakeAlert sends false alarm about magnitude 5.9 earthquake in
    California, Nevada (Los Angeles Times)

    The ShakeAlert computer system that warns about the imminent arrival of
    shaking from earthquakes sent out a false alarm Thursday morning for a magnitude 5.9 temblor in Carson City, Nev., that did not actually happen.

    The ShakeAlert blared on both the MyShake app and the Wireless Emergency
    Alert system — similar to an Amber Alert — on phones across the region, including in the San Francisco Bay Area, the Sacramento area, and in eastern California, just after 8 a.m.

    It wasn't immediately clear why the ShakeAlert system was activated, or how many phones got the incorrect alerts. The earthquake report was later
    deleted from the MyShake app — which carries earthquake early warnings from the U.S. Geological Survey’s ShakeAlert system — and from the USGS earthquake website.

    “We did not detect any earthquakes,” said Paul Caruso, a USGS geophysicist, Thursday morning.

    The ShakeAlert system has previously proved effective in giving seconds of warning ahead of significant earthquakes, including from a magnitude 5.2 earthquake in San Diego County in April; earthquakes in El Sereno and the Malibu area last year; and a temblor east of San José in 2022.

    “We’re in the process of figuring out what happened,” said Robert de Groot,
    an operations team leader for the U.S. Geological Survey’s ShakeAlert
    system.

    There have been other times when earthquake early warnings have misfired.

    In 2023, a scheduled drill of the MyShake app due at 10:19 a.m. rang
    instead at 3:19 a.m., because the warning was inadvertently scheduled for
    10:19 a.m. Greenwich Mean Time instead of Pacific time.
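The 2023 misfire is a classic wall-clock/timezone confusion, easy to reproduce in a few lines (a sketch only; the actual scheduling software is unknown, and the drill date used here is the 2023 Great ShakeOut, 19 Oct):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The drill was meant for 10:19 a.m. Pacific time.
intended = datetime(2023, 10, 19, 10, 19,
                    tzinfo=ZoneInfo("America/Los_Angeles"))

# Entering the same wall-clock time as GMT instead...
scheduled = datetime(2023, 10, 19, 10, 19, tzinfo=ZoneInfo("UTC"))

# ...makes the alert fire seven hours early (PDT is UTC-7 in October).
local_fire = scheduled.astimezone(ZoneInfo("America/Los_Angeles"))
print(local_fire.strftime("%H:%M local"))   # 03:19 local
```

The same wall-clock digits attached to the wrong zone shift the event by the full UTC offset, which is exactly the 10:19-becomes-3:19 symptom reported.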

    And in 2021, phone users across Northern California got a warning of a magnitude 6 earthquake in Truckee, near Lake Tahoe; but the quake that
    actually occurred was a far more modest magnitude 4.7. Scientists said the significant overestimation of the quake’s magnitude was in part caused by it being on the edge of the ShakeAlert seismic network sensors, and that researchers worked on reprogramming the computer system to avoid a similar issue in the future.

    https://www.latimes.com/california/story/2025-12-04/no-earthquake-felt-after-shakealert-issues-alert-for-magnitude

    ------------------------------

    Date: Wed, 24 Dec 2025 17:52:05 +0800
    From: Dan Jacobson <jidanni@jidanni.org>
    Subject: Got an earthquake alert

    I got one of those government earthquake alerts. The phone was beeping so
    loud I had to push the okay button, upon which the message disappeared, so I wasn't able to read whatever it said.

    ------------------------------

    Date: Tue, 30 Dec 2025 10:42:40 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: YouTube's algorithm is on an AI slop and brainrot-only diet
    (knowtechie)

    https://knowtechie.com/ai-slop-youtube-algorithm/

    AI videos are cheap, fast, endlessly scalable, and perfectly tuned to
    trigger curiosity.

    For new users, the algorithm has no history to guide it, so it defaults to whatever keeps eyeballs glued to the screen.

    That's a problem, especially when researchers at Amazon Web Services
    estimate that 57% of the Internet may already be AI sludge.

    ------------------------------

    Date: Fri, 5 Dec 2025 22:12:07 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: AI Slop on YouTube

    Just stumbled into an example of #YouTube AI Slop generation in progress. Happened onto a recent (posted within the last week) video purporting to
    tell of a notorious "deleted scene" from the classic film "Forbidden Planet" (1956). It was obviously AI generated with the typical AI still image manipulations and voice, and never actually showed the scene. However, over
    the next hour YouTube offered me what seems like dozens of variations of
    this same video, shorts and non-shorts, all with nearly identical narration, none actually showing the scene, and with posting dates as recent as an hour ago -- many without any views. These still seem to be churning out as I type this -- all on ostensibly different channels containing similar content. Multiply this by all the possible ways this sort of AI Slop could be
    generated in relation to the vast array of possible content sources, and you
    see how #YouTube is rapidly becoming a deep pit of garbage. There is still
    lots of wonderful stuff on there -- it's still my favorite streaming service
    by far -- but AI Slop is making finding the worthwhile content ever more difficult. AND GOOGLE DOESN'T CARE OF COURSE -- 'cause an ad is an ad and a click is a click. -L

    ------------------------------

    Date: Sun, 28 Dec 2025 16:34:54 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Co-Creator of Go Language is Rightly Furious Over This Appreciation
    Email (Itsfoss)

    Imagine someone sends a thank-you email and the recipient gets so outraged
    that he starts using expletives.

    That would be inappropriate and utterly rude, right? Yeah... but not
    always. On the contrary, it may feel satisfying to a degree, especially
    when the email in question is AI slop.

    https://itsfoss.com/news/rob-pike-furious/

    [Lauren Weinstein commented on this item:
    Rob Pike goes ballistic over AI-generated email thanking him for
    his work -- and he's absolutely correct!]

    [Kudos to Rob! Well done. PGN]

    ------------------------------

    Date: Sun, 28 Dec 2025 21:57:50 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: When AI Took My Job, I Bought a Chain Saw (The New York Times)

    https://www.nytimes.com/2025/12/28/opinion/artificial-intelligence-jobs.html

    Some of the best career advice I've received didn't come from a mentor -- or even a human. I told a chatbot that AI was swallowing more and more of my
    work as a copywriter and that I needed a way to survive. The bot paused, processing my situation, and then suggested I buy a chain saw.

    This advice would have seemed absurd back when I lived in Washington, D.C.,
    in a dense neighborhood of rowhouses. But for the past 25 years, I've lived
    in Lawrenceburg, Ind., a small working-class town where my grandparents once ran a bakery.

    [Don't forget that really old stale bread might be sliced with a chain saw
    -- although it is not recommended. But AI software may not be easy to
    demolish. It does not want to go away. PGN]

    ------------------------------

    Date: Fri, 2 Jan 2026 15:51:57 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Coffee shops, retail stores, even hotels are ditching humans to
    serve you better (NationalPost)

    https://nationalpost.com/news/canada/robots-service-coffee-shops-retail-hotels

    Vandhana Mohanraj and her partner Faisal Fakhani had just finished their regular grocery run when the couple decided to stop for coffee.

    At the storefront for the fledgling Caffeo shop in downtown Toronto,
    Mohanraj punched in her choice -- a vanilla latte -- and tapped her card on
    the payment pad. Then the cafe barista went to work.

    Behind the plate-glass window, an all-arms robot filled the metal filter
    basket with fresh grounds, inserted it into an espresso machine, then topped the resulting coffee with steamed milk.

    Mohanraj sipped her first android-prepared brew and smiled. Fakhani took a
    swig and agreed with Mohanraj's assessment -- surprisingly good.

    ------------------------------

    Date: Thu, 25 Dec 2025 22:29:07 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Monster of 2025 -- Endless Subscriptions (Mother Jones)

    We’re being $5.99 per month-ed to death.

    The Hatch Restore alarm clock, which retails for $169, can light up your bedroom in every hue, soothe you to sleep with audio meditation sessions,
    and keep you in a REM cycle with a full catalogue of white noise options. To utilize these features, though, you need to pay an additional $4.99 per
    month, in perpetuity.

    Welcome to the age of subscription captivity, where an increasing share of
    the things you pay for actually own you.

    https://www.motherjones.com/politics/2025/12/monster-of-2025-endless-subscriptions/

    ------------------------------

    Date: Thu, 4 Dec 2025 09:26:51 -0500
    From: Ed Ravin <eravin@panix.com>
    Subject: Social media taught Hamas how to disable Israeli tanks

    Israel's Army Radio reports that Hamas spent years collecting intelligence
    on Israel's military operations and equipment by monitoring Israeli
    soldiers' social media activity:

    [...] According to the report, Hamas learned about a hidden kill switch on
    the tank that disables the vehicle and renders it useless, which they
    utilized during their attacks on IDF bases along the Gaza border on
    October 7 [...]

    Hamas also had "maps, intelligence reports, virtual reality simulations and full-scale models of military equipment." Full story at:

    https://www.timesofisrael.com/hamas-spent-years-mining-idf-troops-social-media-for-intel-on-bases-tanks-report/

    Social media posts by soldiers have been a problem for years in the Israeli military. Simply ordering soldiers to stay off social media does not seem to
    be in the playbook, so instead they are turning to AI, which, as we know,
    solves all problems:

    https://www.timesofisrael.com/liveblog_entry/idf-to-employ-ai-tool-to-clamp-down-on-soldiers-social-media-posts/

    ------------------------------

    Date: Thu, 4 Dec 2025 09:39:55 -0500
    From: Ed Ravin <eravin@panix.com>
    Subject: Modern cars as a source of surveillance data

    Someone in the Israeli military apparently just realized how dangerous it is
    to use computer-based cars chock-full of sensors with live online Internet connections for all the top brass:

    [...] For China, data-rich technologies are strategic assets, not just
    commercial goods. “The legislation in China, by various laws, instructs
    and obliges Chinese companies to share with the state whatever data is
    available to them,” Orion said. That’s why, he argued, Israel should
    consider recalling “every electrical vehicle, which is actually a
    multi-sensor computerized platform, that links back [to China].”

    Full story at:

    https://www.timesofisrael.com/idf-swerves-away-from-chinese-cars-driven-by-worries-of-spies-lurking-in-everyday-tech/

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.82
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Fri Jan 9 19:12:13 2026


  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Sat Jan 10 12:09:57 2026
    Subject: Risks Digest 23.83

    RISKS-LIST: Risks-Forum Digest Wednesday 6 April 2005 Volume 23 : Issue 83

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/23.83.html>
    The current issue can be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    Cancer patients exposed to high radiation (Monty Solomon)
    Carjackers swipe biometric Mercedes, plus owner's finger (John Lettice via
    Alpha Lau)
    Air disasters: A crisis of confidence? (Michael Bacon)
    Secret Service DNA - "Distributed Networking Attack" (Brian Krebs via
    Monty Solomon)
    Yet another phishing scam (Michael Bacon)
    Times change ... problems don't (Louise Pryor)
    Re: Why IE is insecure ... (Steve Taylor, Simon Zuckerbraun, Craig DeForest)
    Re: Remote physical device fingerprinting (Jerry Leichter)
    Re: Cruise Control failures (Jay R. Ashworth, John Sawyer, Neil Maller,
    Markus Peuhkuri, David G. Bell, Amos Shapir, David R Brooks)
    New Security Paradigms Workshop submission deadline approaching
    (George Robert Blakley III)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Sun, 3 Apr 2005 22:37:38 -0400
    From: Monty Solomon <monty@roscom.com>
    Subject: Cancer patients exposed to high radiation

    77 patients at the H. Lee Moffitt Cancer Center and Research Institute
    cancer treatment center were exposed to radiation levels 50% stronger than
    they were supposed to receive because a radiation machine was improperly installed. Physicists from the federal Radiological Physics Center detected the error on 7 Mar, but it was not acknowledged until 1 Apr. According to a report by the Florida Bureau of Radiation Control, a physicist calibrating
    the machine used an incorrect formula. Certain side-effects (headaches and speech and memory loss) reportedly can take from 3 to 12 months to develop. Twelve patients subsequently died (although the article did not indicate whether it was as an iatrogenic result of the overdosing or just progressed cancer). [Source: AP item in *The Boston Globe*, 2 Apr 2005; PGN-ed]

    http://www.boston.com/yourlife/health/diseases/articles/2005/04/02/cancer_patients_exposed_to_high_radiation/

    ------------------------------

    Date: Mon, 4 Apr 2005 23:26:01 -0700 (PDT)
    From: Alpha Lau <avlxyz@yahoo.com>
    Subject: Carjackers swipe biometric Mercedes, plus owner's finger

    Carjackers swipe biometric Merc, plus owner's finger
    By John Lettice - 4 Apr 2005

    A Malaysian businessman has lost a finger to car thieves impatient to get
    around his Mercedes' fingerprint security system. Accountant K Kumaran,
    the BBC reports, had at first been forced to start the S-class Merc, but
    when the carjackers wanted to start it again without having him along, they
    chopped off the end of his index finger with a machete.

    The fingerprint readers themselves will, like similar devices aimed at the
    computer or electronic device markets, have a fairly broad tolerance, on
    the basis that products that stop people using their own cars, computers or
    whatever because their fingers are a bit sweaty won't turn out to be very
    popular.

    They slow thieves up a tad, many people will find them more convenient than
    passwords or pin numbers, and as they're apparently `cutting edge' and
    biometric technology is allegedly `foolproof', they allow their owners to
    swank around in a false aura of high tech.
    http://www.theregister.co.uk/2005/04/04/fingerprint_merc_chop/

    And that is exactly where the risks lie: high-tech does not necessarily mean high-security!

    At least in sci-fi, fingerprint systems check for a heartbeat or pulse!!!

    [`Cutting edge', eh? Wow! Incidentally, for many years I've been citing
    the concept of an amputated finger as a hypothetical way of defeating a
    poorly designed fingerprint analyzer. It's no longer hypothetical. PGN]

    ------------------------------

    Date: Tue, 5 Apr 2005 10:42:47 +0100
    From: "Michael \(Streaky\) Bacon" <himself@streaky-bacon.co.uk>
    Subject: Air disasters: A crisis of confidence?

    Air disasters receive widespread press coverage. Crashes often cause people
    to cancel bookings with the affected airline. The share price often dips, sometimes severely, in the aftermath of an air accident.

    This is also true for many other major incidents involving corporations
    (i.e., not 'natural' causes).

    One thing often stands between a 'crisis of confidence' and 'business as usual', and that is the credibility of the organisation's spokespeople.

    On 3 April, a Phuket Air 747 was twice forced by passenger action to abort a take-off from the UAE when fuel was seen flowing from the wing over an
    engine as the plane accelerated down the runway. A UK-based spokesman for
    the airline told the media that no-one had been in any danger and claimed
    that passengers had "panicked". He is also reported to have said that passengers were not qualified to judge what was safe or not. He said that
    the wing tanks had been "over-filled".

    Whilst I do not comment upon the accuracy or otherwise of the spokesman's comments, I will comment on their advisability and I do suggest that this is not a good way to manage risk.

    It is reported that many passengers have now refused to fly any further with the airline.

    A contrast in risk management is provided by one British airline that
    suffered two 'incidents' with the same type of aircraft some nine years
    apart. In the first, the aircraft crashed with tragic loss of life
    following the (erroneous) shutdown of one engine and loss of power on the
    other (faulty) engine during an emergency landing. The Chairman of the
    airline was interviewed at the scene and with tears in his eyes promised to find out what had happened and to take every possible step to prevent its recurrence. The share price was not much affected, neither were bookings.
    The second incident concerned the loss of oil pressure in both engines
    shortly after take-off - leading to the shut-down of both engines and a successful 'dead-stick' landing. The loss of oil was caused by a
    maintenance failure. The airline put the 'Director of Engineering' (or
    similar title) in front of the media, and he attempted to explain away the incident as a problem with their maintenance company. It was reported at
    the time that passengers subsequently canceled bookings and the stock price fell.

    The 'what', the 'way' and the 'how' of the Chairman were believable,
    those of the Director were not.

    The RISK is in getting the wrong person to say the wrong thing. Effective crisis management involves the right thing by the right person at the right time in the right way to the right people.

    [The first case is that of a British Midland 737-400 (RISKS-11.42). PGN]

    ------------------------------

    Date: Wed, 30 Mar 2005 09:07:19 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Secret Service DNA - "Distributed Networking Attack"

    DNA Key to Decoding Human Factor: Secret Service's Distributed Computing Project Aimed at Decoding Encrypted Evidence
    Brian Krebs, *The Washington Post*, 28 Mar 2005 [PGN-ed]

    For law enforcement officials charged with busting sophisticated financial crime and hacker rings, making arrests and seizing computers used in the criminal activity is often the easy part.

    More difficult can be making the case in court, where getting a conviction often hinges on whether investigators can glean evidence off of the seized computer equipment and connect that information to specific crimes.

    The wide availability of powerful encryption software has made evidence gathering a significant challenge for investigators. Criminals can use the software to scramble evidence of their activities so thoroughly that even
    the most powerful supercomputers in the world would never be able to break
    into their codes. But the U.S. Secret Service believes that combining
    computing power with gumshoe detective skills can help crack criminals' encrypted data caches.

    Taking a cue from scientists searching for signs of extraterrestrial life
    and mathematicians trying to identify very large prime numbers, the agency
    best known for protecting presidents and other high officials is tying
    together its employees' desktop computers in a network designed to crack passwords that alleged criminals have used to scramble evidence of their
    crimes -- everything from lists of stolen credit card numbers and Social Security numbers to records of bank transfers and e-mail communications with victims and accomplices.

    To date, the Secret Service has linked 4,000 of its employees' computers
    into the "Distributed Networking Attack" program. The effort started nearly three years ago to battle a surge in the number of cases in which savvy computer criminals have used commercial or free encryption software to safeguard stolen financial information, according to DNA program manager Al Lewis. ...

    http://www.washingtonpost.com/wp-dyn/articles/A6098-2005Mar28.html
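    The distributed idea is easy to sketch (a purely illustrative toy, not
    the Secret Service's actual system): partition a candidate wordlist
    across workers, each hashing and testing only its own share.

```python
# Toy distributed dictionary attack: worker w of n tests every n-th
# candidate, starting at offset w, so the keyspace is split evenly.
import hashlib

def crack_slice(target_hash, candidates, worker_id, n_workers):
    """Test this worker's share of candidates against a SHA-256 hash."""
    for word in candidates[worker_id::n_workers]:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

# Demo with a tiny wordlist; a real deployment distributes millions of
# candidates (plus mutations built from the investigators' "gumshoe" leads).
wordlist = ["letmein", "hunter2", "s3cret", "password1"]
target = hashlib.sha256(b"s3cret").hexdigest()
hits = [crack_slice(target, wordlist, w, 2) for w in range(2)]
print(next(h for h in hits if h))  # -> s3cret
```

    Each worker's slice is independent, so the search parallelizes with no
    coordination beyond distributing the target hash and the slice indices.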

    ------------------------------

    Date: Mon, 4 Apr 2005 07:09:26 +0100
    From: "Michael \(Streaky\) Bacon" <himself@streaky-bacon.co.uk>
    Subject: Yet another phishing scam

    The Internet payments company PayPal is a natural target for phishing scams. The latest has both amusing and serious issues.

    Received 3 April it refers to "8 April" as the date on which "unusual
    activity" was identified ... clearly the phishermen (I do hope that's not non-PC) have conquered time travel (but one therefore queries why they need
    to phish).

    The fonts change throughout the e-mail, in one instance within a sentence.
    The formatting is poor too.

    There is the usual link to click. This points to an IP address that appears
    to be hosted in India (I am in UK).

    It also refers to (but does not provide a clickable link to) "https://www.paypal.com/us/" - an authentic PayPal website and indicates
    that you should type this into your browser ... which is good practice.

    When the false link is clicked, a page loads from the IP address. This page then reports an error and loads another page that shows "https://www.paypal.com/cgi-bin/webscr?cmd=_login-run" in the Address box
    and status line. It does not, however, show a 'locked' icon on the status line. This is, of course, a 'false flag' page ... but it is good enough to fool more people than many other phishing scams.

    I'm not a techie, so do not purport to understand how this works. For the
    real experts out there, the clickable phishing address is http://61.95.206.3/.paypal.com/cmdr_login/error.html .

    The RISKS? As we get more sophisticated ... so do the crooks.

    The 'saving grace'? Most crooks are not that clever.

    ------------------------------

    Date: Thu, 31 Mar 2005 16:24:12 +0100
    From: Louise Pryor <pryor@pobox.com>
    Subject: Times change ... problems don't (RISKS-23.82)

    The clocks changed in the UK at the weekend, as they do twice a year. So
    you'd think that computer systems would be able to cope, and that there
    would be no major disruption. And, on the whole, you'd be right, though you wouldn't necessarily know it from the press coverage.

    About 1,500 Barclays ATMs (out of a total of about 4,000) were out of action for over 12 hours on Sunday. We were told that a manager put the clocks back rather than forward, and that this mistake had caused the problems. The
    Daily Telegraph carried a leader opining on the lessons that Barclays could learn from its employee's blunder. http://makeashorterlink.com/?M170229CA

    But hang on a minute: A real live person, changing the clocks in the data centre at 01:00 on Sunday morning? It just doesn't make sense. Why on earth wouldn't the time change be automated? After all, it is in just about every other computer in the world. Did you have to change the time on your PC this weekend?

    And in fact, Barclays say that it was a hardware fault, and not related to
    the time change at all. This is much more plausible, and is what I heard a Barclays person say on the radio. But if it's true, where did the story of
    the error-prone manager come from? The Telegraph said that they had it from customer services staff.

    I imagine it happened something like this: The ATMs go down. (And, it
    appears, the online banking too). Calls pile into the call centre. Nobody at the call centre knows what the problem is. (And why should they know? They
    are not omniscient, and these things often take time to track down.) They
    are talking to each other about what is going on. Someone says that it must
    be something to do with the clocks changing, as that's something that
    doesn't happen every day. And someone else says "Yeah, I bet that's it. Some stupid person changed them in the wrong direction!" And before you know
    where you are, an off the cuff remark (probably made in jest) has spread
    around the call centre and becomes the official version.

    People are very unwilling to believe in coincidences. They also have mental models of how things work. And surprisingly often, those mental models boil down to a little man in the box (or, in this case, in the data centre). So
    when the journalists were told that the problem arose because a person made
    a mistake, they didn't stop to think about whether the story really made
    sense.

    Louise Pryor <pryor@pobox.com> www.louisepryor.com

    ------------------------------

    Date: Wed, 30 Mar 2005 10:04:26 +0100
    From: "Taylor, Steve" <Steve.Taylor@assetco.com>
    Subject: Re: Why IE is insecure: flawed logical thinking... (DeForest, R 23 81)

    Craig DeForest has quite correctly raised the issue of logical flaws in the
    argument presented by Dave Massy (head developer of Internet Explorer).
    However, the key thing that I read in the argument is that Dave Massy is not
    interested in whether IE or Mozilla is more secure; he is simply presenting
    `rhetoric' in an effort to win the argument. This is a classic recipe for
    not getting at the truth. It is common in this sort of situation that both
    sides are so preoccupied with winning the argument that the truth becomes
    irrelevant; after all, rising higher in any organisation is often more about
    winning arguments than getting at the truth.

    The sorriest aspect of this is the clear implication that Dave Massy is not interested in whether IE is secure, he is only interested in its reputation. This matches Microsoft's traditional behaviour of addressing perception
    rather than reality.

    This is one of the most serious human risk factors on any project.

    Steve Taylor, Technical Director, AssetCo Data Solutions

    ------------------------------

    Date: Fri, 01 Apr 2005 13:24:46 -0600
    From: Simon Zuckerbraun <szucker@sst-pr-1.com>
    Subject: Re: Why IE is insecure: flawed logical thinking... (DeForest, R 23 81)

    Dave Massy never made the colossal mistakes you think he made. All Dave
    Massy was saying is that IE accesses the Windows operating system through the same interface that Mozilla does. Therefore a misbehavior of Mozilla has the potential to cause the same amount of damage as a misbehavior of IE has the potential to cause. This would not be the case if, for example, IE were embedded in the Windows kernel, or otherwise had special access to
    privileged APIs. In that case, IE could cause *far more* damage than a third-party browser could, and this would indeed be a poor security configuration.

    People may be led to believe that the latter situation is actually the case, due to the fact that IE is called "part of the Windows OS". Dave Massy wrote
    to clarify this matter. The truth is that all that the statement "IE is part
    of the Windows OS" is meant to imply is that IE is installed automatically
    on every Windows system, and developers writing for the Windows platform may rely on IE's presence if they so choose.

    ------------------------------

    Date: Fri, 01 Apr 2005 12:55:20 -0700
    From: Craig DeForest <zowie@euterpe.boulder.swri.edu>
    Subject: Re: Why IE is insecure: flawed logical thinking... (DeForest, R 23 81)

    Simon Zuckerbraun wrote:
    All Dave Massy was saying is that IE accesses the Windows operating
    system through the same interface that Mozilla does. Therefore a
    misbehavior of Mozilla has the potential to cause the same amount of
    damage as a misbehavior of IE has the potential to cause.

    Hmmm... I agree that he made that point among others, but he appears to be saying much more than that. It is worth excerpting Dave's blog here, to see exactly how he responds to Mitchell's claims about why Firefox might be more secure than IE.

    [...]

    We could spend a long time deconstructing exactly what each of the authors believes and/or says about IE and Firefox; but I find it hard to understand Massy's meaning without including the fallacious argument I mentioned
    earlier, or (perhaps worse) assuming that he is being disingenuous. Not
    being an OS facility is a significant advantage to Firefox, even if only because the Firefox code does not need to have as many entry points.

    ------------------------------

    Date: Wed, 30 Mar 2005 10:06:12 -0500
    From: Jerry Leichter <jerroldleichter@mac.com>
    Subject: Re: Remote physical device fingerprinting (Ross, RISKS-23.82)

    David Ross responds to the article by Roth in RISKS-23.80 referring to
    Broido and Claffy's work on identifying physical computers by their clock
    skew (www.cse.ucsd.edu/users/tkohno/papers/PDF/KoBrCl05PDF-lowres.pdf).
    In the grand Internet tradition of attacking work without reading it (well,
    I suppose the tradition is much older than the Internet...) he claims this
    is easily defeated by synchronizing with multiple NTP servers, perhaps more frequently than usual.

    Quoting from the abstract of the paper:

    Further, one can apply our passive and semi-passive techniques when the fingerprinted device is behind a NAT or firewall, and also when the
    device's system time is maintained via NTP or SNTP.

    The details are discussed in the paper. (Basically, one measures the skew
    over multiple short intervals - intervals in the sub-second range. I won't
    go into details because this is a good paper and worth reading.)
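    As a rough illustration of the idea (a simplified least-squares sketch,
    not the paper's actual estimator, which works on TCP timestamps with
    more careful fitting): record (local time, remote timestamp) pairs,
    fit a line, and read the skew off the slope.

```python
# Estimate a remote clock's skew: fit remote_t = a * local_t + b by least
# squares; the slope's deviation from 1.0 is the drift rate.
def estimate_skew(samples):
    """samples: list of (local_t, remote_t) pairs in seconds."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    den = sum((x - mean_x) ** 2 for x, _ in samples)
    return num / den - 1.0

# Synthetic observations from a clock running 50 ppm fast over 10 minutes.
obs = [(t, t * (1 + 50e-6)) for t in range(0, 600, 10)]
print(round(estimate_skew(obs) * 1e6, 1))  # skew in ppm -> 50.0
```

    A few tens of ppm of skew is stable per machine, which is what makes it
    usable as a fingerprint even through NAT or NTP-disciplined clocks.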

    ------------------------------

    Date: Mon, 4 Apr 2005 21:33:07 -0400
    From: "Jay R. Ashworth" <jra@baylink.com>
    Subject: Re: Cruise Control failures (Brown, RISKS-23.82)

    ... anyone who wants to get on TV can just call and say their Renault's cruise control blocked; it's "another claimed incident", and why should anyone check if it really happened, if it makes a good story ?

    Exactly. This is the same reason, you'll recall, that the Audi 5000 was
    taken off the US market: driver error that the driver didn't want to take responsibility for. The assertion that the car suddenly took off by itself was later discredited by the NHTSA, as reported in the book _Galileo's Revenge: Junk Science in the Courtroom_, but that didn't stop the incident
    from costing Audi and the remains of the car industry in the US about $150M installing accelerator interlocks.

    House (MD) has it right: everybody lies.

    Jay R. Ashworth <jra@baylink.com>, Ashworth & Associates, St Petersburg FL USA http://baylink.pitas.com +1 727 647 1274

    ------------------------------

    Date: Wed, 30 Mar 2005 08:58:57 +0100 (BST)
    From: John Sawyer <jpgsawyer@btopenworld.com>
    Subject: Re: Cruise-control failures? (Scheidt, RISKS-23.81)

    In response to the article about Cruise-control failures in RISKS-23.81, my father (a braking system engineer for over 30 years) wrote the following.
    Dr John Sawyer

    Well all the ABS systems I know have their own micro processor. Of course
    that does not mean a Renault has!

    Also, ABS systems do nothing unless a wheel is detected locking; i.e., no
    fluid flow to the brakes is closed off. Generally when they do activate
    they do not shut off the brakes but dump fluid which would tend to make the pedal sink. That is why the brake pedal tends to pulse while in an ABS stop. This is not always the case as there are systems that isolate the apply
    system to stop the brake pedal pulsing. Not sure what is on a Renault. But anyway, in this case the ABS would not be active so it should have no
    effect.

    However if the cruise control for some reason does not disengage, the brakes could feel ineffective as the brakes fight the engine as the cruise control tries to maintain speed! The brakes would win but it would give you a
    fright!

    On micro processors, generally the systems are designed with multiple check systems and any fault results in a shut down reverting the vehicle to a limp home mode or complete shut down. ABS systems become inoperative such that
    the brakes operate normally but have no way of stopping them locking
    up. Brakes are still hydraulic and do not use micro processors to make them work (yet, anyway!), only to stop them locking! This is what makes people nervous about going to electrically operated brakes! I am not aware of a complete electrically operated brake system going into production as yet.

    Patrick Sawyer
    (Former Chief Engineer - Braking Systems for a Major Brake Manufacturer)

    ------------------------------

    Date: Wed, 30 Mar 2005 14:47:26 -0500
    From: Neil Maller <neil.maller@gte.net>
    Subject: Re: Cruise-control failures?

    Nick Brown points out (RISKS-23.82) that typical brake designs provide substantially more stopping force than the engine can provide propulsive
    force. This is invariably so: in the case of my own car, the brakes are roughly equivalent to 1000 hp, more than four times the power of the engine.

    However Mark Brader suggests possible loss of power braking due to the
    ignition being off. That's not how it works: brake power assist is provided from engine vacuum, or rarely by a hydraulic pump. In either case a vacuum
    or high pressure reservoir provides more than enough power assist to stop
    the vehicle, even from high speed, without the engine running. Ray Todd
    Stevens suggests that the braking system's thermal capacity could be
    exceeded, causing brake fluid to boil and braking effectiveness to be lost.

    It's possible to imagine a simultaneous failure condition which would result
    in a driver's inability to stop the vehicle. First a failure in the cruise control itself or the drive-by-wire throttle results in a WOT (wide-open-throttle) condition. Then the driver brakes, but insufficiently
    to overcome the engine, resulting in excessive brake heating, boiled brake fluid and resultant complete loss of braking power. And because little
    engine vacuum is developed at WOT it's also possible that prolonged brake application might exhaust the vacuum reservoir and cause total failure of
    brake power assist.

    Ray Todd Stevens also said that "This [overheating] is a problem in race
    cars and they use special brake bads because of this." Speaking as one who
    does drive cars on race tracks I must point out that we use special brake *pads* in order to avoid those brake *bads.*

    However I'm not volunteering to put either of the theories to the test!

    ------------------------------

    Date: Thu, 31 Mar 2005 09:04:38 +0300
    From: Markus Peuhkuri <puhuri@iki.fi>
    Subject: Re: Cruise-control failures?

    I think it is time to put some real figures into the discussion. As Nick Brown stated, the force from the braking system exceeds that from the motor. A simple calculation:

    Mass of car: 1500 kg. Time to stop from 100 km/h: 3 s. Power
    dissipated by the brakes: P = 1/2 m v^2 / t = 1/2 * 1500 * 27.8^2 / 3 = 193
    kW. Power output from a 2.0-litre engine at 3000 r/min: less than 100 kW.

    Somebody more versed in mechanical engineering may correct me, but based on the figures above, I would say that it takes less than 6 seconds to stop a runaway car
    using the brakes, which should not yet cause serious heat problems. Even if the motor
    gives no support for braking, one can apply a force of more than one's
    weight on the brake pedal. Also, the braking power is underestimated,
    because the 3-second time-to-stop is limited by the tyres, not by the brakes, on
    modern cars. Also, there should be at least two independent braking
    circuits. I was not able to find the current car approval rules, but as far as I know, at least the steering MUST have a mechanical connection from the steering wheel
    to the wheels.

    This leaves us two possibilities: either something interfered with the braking system (ABS, ESP), or it was plain user error or action.
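    The arithmetic can be reproduced in a few lines (using the post's own
    round numbers; 27.8 m/s is 100 km/h):

```python
# Average power the brakes must dissipate to stop a 1500 kg car
# from 100 km/h in 3 seconds: P = (1/2) m v^2 / t.
m = 1500.0        # car mass, kg
v = 100 / 3.6     # 100 km/h in m/s (about 27.8)
t = 3.0           # time to stop, s

kinetic_energy = 0.5 * m * v ** 2       # joules
avg_braking_power = kinetic_energy / t  # watts
print(round(avg_braking_power / 1000))  # -> 193 (kW)
```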

    ------------------------------

    Date: Wed, 30 Mar 2005 08:57:37 +0100 (BST)
    From: dbell@zhochaka.demon.co.uk ("David G. Bell")
    Subject: Re: Cruise Control Failures (Stevens, RISKS-23.8x)

    My guess is that the indirect control of power to the engine ignition and
    fuel systems is a side effect of anti-theft systems.

    But some effective emergency-stop override of the engine control systems
    ought to be there.

    Trouble is, another anti-theft feature is that removing the vehicle key from the main switch will mechanically lock the steering, even if it does cut all the electrical power.

    Race-prepared vehicles do have battery isolators, placed for easy operation
    by the marshals when a vehicle goes off the track. Unfortunately, some
    early engine control computer systems on cars lost key data when they lost power, even if only for a few seconds.

    Unintended consequences strike again.

    ------------------------------

    Date: Sat, 02 Apr 2005 13:48:47 +0300
    From: "Amos Shapir" <amos083@hotmail.com>
    Subject: Re: Cruise-control failures

    Back in 1991, I used to own a Renault Clio. One day, the cabin ventilation
    fan got stuck in the "on" state, not turning off even when the ignition key
    was out. In the garage, a mechanic checked it, went off to the store room
    to fetch a HUGE box back: there is no fan any more, only a "climate control system" which includes a bellows, a fan, its motor, dashboard switches and
    an electronics card, and costs about $300 to replace.

    The mechanics liked the idea of just replacing the unit by unscrewing 5
    screws in less than two minutes, instead of searching for crossed wires somewhere in the system; Renault liked selling it; I certainly did not like
    it and never owned a Renault since. It seems that now there is no way to escape this forced computerization at any price (which we the buyers must
    pay).

    ------------------------------

    Date: Sat, 02 Apr 2005 21:47:29 +0800
    From: David R Brooks <davebXXX@iinet.net.au>
    Subject: Re: Cruise Control failures

    I work on engine-control computers for buses. We are required to have power
    for the fuel injectors & for the ignition (these are natural-gas fueled engines) run through the ignition switch. That way, the driver can turn off
    the switch (not, of course, far enough to lock the steering), and the engine
    is twice dead: no fuel, no spark. The brakes on these are not computerised.
    I am surprised they aren't required to build cars similarly. Methinks I
    shall try to buy used cars rather than new ones, now.

    ------------------------------

    Date: Wed, 6 Apr 2005 11:41:11 -0500
    From: George Robert Blakley III <blakley@us.ibm.com>
    Subject: New Security Paradigms Workshop submission deadline approaching

    We're accepting papers for this year's ACSA New Security Paradigms Workshop
    for another two weeks.

    The CFP and a link to the mail alias for submissions can be found here:
    http://www.nspw.org/current/cfp.shtml

    Bob Blakley, Chief Scientist, Security and Privacy, IBM
    blakley@us.ibm.com +1 512 286-2240 fax: +1 512 286-2057

    [This is a rather small but important security workshop. PGN]

    ------------------------------

    Date: 29 Dec 2004 (LAST-MODIFIED)
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The RISKS Forum is a MODERATED digest. Its Usenet equivalent is comp.risks.
    SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
    if possible and convenient for you. Mailman can let you subscribe directly:
    http://lists.csl.sri.com/mailman/listinfo/risks
    Alternatively, to subscribe or unsubscribe via e-mail to mailman your
    FROM: address, send a message to
    risks-request@csl.sri.com
    containing only the one-word text subscribe or unsubscribe. You may
    also specify a different receiving address: subscribe address= ... .
    You may short-circuit that process by sending directly to either
    risks-subscribe@csl.sri.com or risks-unsubscribe@csl.sri.com
    depending on which action is to be taken.

    Subscription and unsubscription requests require that you reply to a
    confirmation message sent to the subscribing mail address. Instructions
    are included in the confirmation message. Each issue of RISKS that you
    receive contains information on how to post, unsubscribe, etc.

    INFO [for unabridged version of RISKS information]
    .UK users should contact <Lindsay.Marshall@newcastle.ac.uk>.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you NEVER send mail!
    The INFO file (submissions, default disclaimers, archive sites,
    copyright policy, PRIVACY digests, etc.) is also obtainable from
    <http://www.CSL.sri.com/risksinfo.html>
    The full info file may appear now and then in future issues. *** All
    contributors are assumed to have read the full info file for guidelines. ***
    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line.
    *** NOTE: Including the string "notsp" at the beginning or end of the subject
    *** line will be very helpful in separating real contributions from spam.
    *** This attention-string may change, so watch this space now and then.
    ARCHIVES: ftp://ftp.sri.com/risks [subdirectory i for earlier volume i]
    <http://www.risks.org> redirects you to Lindsay Marshall's Newcastle archive
    http://catless.ncl.ac.uk/Risks/VL.IS.html gets you VoLume, ISsue.
    Lindsay has also added to the Newcastle catless site a palmtop version
    of the most recent RISKS issue and a WAP version that works for many but
    not all telephones: http://catless.ncl.ac.uk/w/r
    <http://the.wiretapped.net/security/info/textfiles/risks-digest/> .
    PGN's comprehensive historical Illustrative Risks summary of one liners:
    <http://www.csl.sri.com/illustrative.html> for browsing,
    <http://www.csl.sri.com/illustrative.pdf> or .ps for printing

    ------------------------------

    End of RISKS-FORUM Digest 23.83
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Sat Jan 10 12:43:10 2026
    Content-Type: text/plain; charset=UTF-8
    Content-Transfer-Encoding: 8bit
    precedence: bulk
    Subject: Risks Digest 34.83

    RISKS-LIST: Risks-Forum Digest Friday 9 January 2026 Volume 34 : Issue 83

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.83>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    Aviation Delays Ease as Airlines Complete Airbus Software Rollback
    (Simon Sharwood)
    Chinese Peptides Are the Latest Biohacking Trend in the Tech World
    (The New York Times)
    Thieves are stealing keyless cars in minutes. Here's how to protect
    your vehicle (Los Angeles Times)
    Software Error Forces 325,000 Californians to Replace Real IDs (Neil Vigdor)
    NASA Library closing (The NY Times)
    EFF's Investigations Expose Flock Safety's Surveillance Abuses: 2025 in
    Review (via Monty Solomon)
    Zoom's "AI Companion" is surveillance as a service (via Gabe Goldberg)
    AI app apologises over false crime alerts across U.S. (BBC)
    Google AI deletes user's entire hard drive (via Geoff Kuenning)
    Boys at her school shared AI-generated, nude images of her. She was the
    one expelled from Sixth Ward Middle School (ABC News)
    CIA, ESP, Psychic Program, Spy Secrets, Declassified Documents (via geoff g)
    He Switched to eSIM, and Is Full of Regret (WiReD)
    AT&T to launch new service for customers as it takes on T-Mobile
    (via Monty Solomon)
    The big regression (via Monty Solomon)
    AI Customer DisService Slop (Henry Baker)
    News orgs win fight to access 20M ChatGPT logs. Now they want more.
    (Ars Technica)
    Capability Maturity Models and generative artificial intelligence
    (Rob Slade)
    Fake AI Chrome Extensions Steal 900K Users' Data (Dark Reading)
    AI starts autonomously writing prescription refills in Utah (Ars Technica)
    Stolen Data Poisoned to Make AI Systems Return Wrong Results
    (Thomas Claburn)
    Good cannot successfully battle Evil using only good means is the
    essential message of Machiavelli's "The Prince" -- 1513 (via LW)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Wed, 3 Dec 2025 11:10:50 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Aviation Delays Ease as Airlines Complete Airbus Software Rollback
    (Simon Sharwood)

    Simon Sharwood, *The Register* (U.K.) (12/01/25), via ACM TechNews

    Airlines worldwide faced delays as Airbus rolled back a software update on
    around 6,000 A320 planes after JetBlue Flight 1230 experienced a sudden
    nose-down drop due to a flight-control issue believed to be linked to
    corrupt data caused by intense solar radiation. The problem, traced to the
    aircraft's elevator and aileron computer, could push elevators beyond
    structural limits, potentially endangering aircraft. Airbus and aviation
    authorities ordered the rollback from version L104 to L103+, a procedure
    taking roughly three hours.

    ------------------------------

    Date: Tue, 6 Jan 2026 15:24:02 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Chinese Peptides Are the Latest Biohacking Trend in the Tech World
    (The New York Times)

    The gray-market drugs flooding Silicon Valley reveal a community that
    believes it can move faster than the FDA.

    “*Do your own research* has lots of dangers,” Dr. Topol said. “If they really were good citizen scientists, they would know what the criteria are: randomized, placebo-controlled trials; peer-reviewed publications
    independent of the company. We don’t have any of those studies for most of these peptides.” [...]

    Brooke Bowman, 38, is the bushy-haired, fast-talking chief executive of
    Vibecamp, an annual gathering of the rationalist and post-rationalist
    communities. These groups are interested in metacognition, or improving
    the art of thinking itself -- a proclivity that makes them especially
    interested in mind-enhancing substances. She considers herself a
    transhumanist -- someone who believes in using technology to augment human
    abilities -- and even got an RFID chip implanted in her hand to link to her
    Telegram profile when tapped. (The chip, which she got at a “human
    augmentation dance party,” was installed too deep and doesn’t work.)

    https://www.nytimes.com/2026/01/03/business/chinese-peptides-silicon-valley.html

    What, me worry?


    [No worries. It's supposed to be a bird in the hand, and a chip on the
    shoulder. But Dr. Topol wants evidence-based peptide research, and that
    would make a lot of sense. PGN]

    ------------------------------

    Date: Sat, 29 Nov 2025 06:57:07 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Thieves are stealing keyless cars in minutes. Here's how to protect
    your vehicle (Los Angeles Times)

    Car thieves are using tablets and antennas to steal keyless or "push to
    start" vehicles, police warn, but there are steps owners can take to protect their vehicles.

    https://www.latimes.com/california/story/2025-11-26/thieves-are-stealing-keyless-cars-in-minutes-heres-how-to-protect-your-vehicle

    ------------------------------

    Date: Wed, 7 Jan 2026 11:26:01 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Software Error Forces 325,000 Californians to Replace Real IDs
    (Neil Vigdor)

    Neil Vigdor, *The New York Times* (01/02/26), via ACM TechNews

    The California Department of Motor Vehicles identified a software glitch involving the expiration dates applied to the Real IDs of around 325,000
    legal immigrants residing in the state, making them valid beyond the end of their legal stay in the U.S. The software error, which affected 1.5% of Real
    ID holders, applied the same renewal interval for legal immigrants as all
    other residents. Holders of the affected Real IDs will need to replace them.

    ------------------------------

    Date: Sat, 3 Jan 2026 20:28:06 -0500
    From: David Lesher <wb8foz@8es.com>
    Subject: NASA Library closing (The NY Times)

    https://www.nytimes.com/2025/12/31/climate/nasa-goddard-library-closing.html?unlocked_article_code=1.BlA.fYTN.IyBt401FgjzK&smid=url-share

    <https://www.reddit.com/r/maryland/comments/1q23hym/nasas_largest_library_is_closing_amid_staff_and/>

    The current administration is shuttering the NASA Goddard library today and ordering all the books and documents thrown out.

    I am asking anyone in the area with a vehicle or backpack to show up and
    rescue as many books as possible. I am asking anyone who knows someone
    working at the trash company or Goddard to reach out.

    There may be nothing we can do but even if we save only one book it will be worth it. Even if only two people show up and leave empty handed it'll be better than doing nothing.

    If you need any further motivation I'll pay you a dollar per book rescued,
    or you know you could sell it on eBay for lots of money. While I'd rather
    have these items remain in a library, literally anything is better than
    sending them to the dump.

    But please show up today and early next week if you're able. It'll take
    time for the library to toss everything out and trash service is rarely
    instant, but within the month we will likely lose some of our space and
    science history forever if we don't save this collection.

    [Satire: Today's mantra has become:
    ``Who needs libraries when we have AI?''
    Sadly, we really need libraries that do not purge books that are factual.
    PGN]

    ------------------------------

    Date: Sat, 3 Jan 2026 23:05:38 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: EFF's Investigations Expose Flock Safety's Surveillance Abuses:
    2025 in Review

    https://www.eff.org/deeplinks/2025/12/effs-investigations-expose-flock-safetys-surveillance-abuses-2025-review

    ------------------------------

    Date: Sun, 21 Dec 2025 02:40:17 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Zoom's "AI Companion" is surveillance as a service

    From a friend... be cautious here.

    - - - Forwarded Message:
    Date: Tue, 16 Dec 2025 12:42:21 -0600
    Subject: Life: Zoom's "AI Companion" is surveillance as a service

    Out of curiosity I turned on Zoom’s “AI Companion” during a call. Here’s the
    notification their lawyers had them send me. Net: Zoom sells every piece of information they gather. See the *bolded text*.

    *From:*ZoomInfo Notification <noreply@zoominformation.com>
    *Sent:* Thursday, December 11, 2025 1:16 PM
    *Subject:* Notice of personal information processing

    <https://aooptout.zoominformation.com/acton/ct/43119/s-0200-2512:0/Bct/l-0334/l-0334:6e96/ct0_0/1/lu?sid=TV2%3Ae0Kfrmrqd>

    Personal Information Notice

    This Personal Information Notice is to inform you of the collection, processing, and sale of certain personal information or personal data about
    you ("*Personal Information*"). ZoomInfo collects business contact and
    similar information related to individuals when they are working in their professional or employment capacity, and uses this information to create professional profiles of individuals (“*Professional Profiles*”) and profiles of businesses (“*Business Profiles*”). *We provide this information
    to our customers, who are businesses trying to reach business professionals
    for their own business-to-business sales, marketing, and recruiting
    activities.* You can opt out of our database by visiting our Trust Center
    <https://aooptout.zoominformation.com/acton/ct/43119/s-0200-2512:0/Bct/l-0334/l-0334:6e96/ct1_0/1/lu?sid=TV2%3Ae0Kfrmrqd>.
    At the Trust Center, you can
    also submit an access request or claim your professional business profile in order to make updates to your information. Using the Trust Center is the quickest and easiest way to access your information or have it deleted or corrected. However, if you prefer to email or call us, our contact
    information is listed under the /Who We Are/ section below. For additional information, please review our Privacy Policy.

    <https://aooptout.zoominformation.com/acton/ct/43119/s-0200-2512:0/Bct/l-0334/l-0334:6e96/ct2_0/1/lu?sid=TV2%3Ae0Kfrmrqd>.
    [...]

    ------------------------------

    Date: Tue, 23 Dec 2025 11:15:15 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: AI app apologises over false crime alerts across U.S. (BBC)

    https://www.bbc.com/news/videos/c4g4v3yd28yo

    A company behind an AI-powered app called CrimeRadar has apologised for the distress caused by false crime alerts issued to local US communities after
    a BBC Verify investigation.

    CrimeRadar uses artificial intelligence to monitor openly available police radio communications, automatically generating a transcript and then
    producing crime alerts for users across the US.

    BBC Verify has found multiple instances from Florida to Oregon of
    CrimeRadar sending misleading and inaccurate alerts about serious crime to local residents - as Thomas Copeland explains.

    ------------------------------

    Date: Sun, 07 Dec 2025 13:54:28 -0800
    From: Geoff Kuenning <geoff@cs.hmc.edu>
    Subject: Google AI deletes user's entire hard drive

    A user who was using Google Antigravity to create a small application needed
    to clear his cache while debugging. He asked Antigravity to do that, and rather than issuing the proper command it did "rmdir D:\", which blew away
    all the files on that drive. Oops.

    This is, of course, a variation on the risks of using autocomplete.

    https://www.tomshardware.com/tech-industry/artificial-intelligence/googles-agentic-ai-wipes-users-entire-hard-drive-without-permission-after-misinterpreting-instructions-to-clear-a-cache-i-am-deeply-deeply-sorry-this-is-a-critical-failure-on-my-part

    (I am reminded of an incident more than a couple of decades ago, when a
    student sysadmin discovered that somebody had written an "Adventure" shell whose commands were patterned after the popular game of that name. He
    switched to root access to install it on our main system, and then tested it while still running as root. A few minutes later he triggered the message
    "You have awakened the rm -rf monster." Fortunately, he immediately aborted the comand so the damage was limited.)

    ------------------------------

    Date: Tue, 23 Dec 2025 18:04:14 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Boys at her school shared AI-generated, nude images of her. She
    was the one expelled from Sixth Ward Middle School

    A 13-year-old girl at a Louisiana middle school got into a fight with classmates who were sharing AI-generated nude images of her.

    https://abc7chicago.com/post/boys-school-shared-ai-generated-nude-images-she-was-expelled-sixth-ward-middle/18306695/

    [Must be a front-ward denying back ward to blame the victim? PGN]

    Ashley MacIsaac Concert Canceled After AI Falsely Identifies Him as Sex Offender. A Google AI search confused him with a different man named McIsaac

    https://exclaim.ca/music/article/ashley-mac-isaac-concert-cancelled-after-ai-falsely-identifies-him-as-sex-offender

    [PGN is back:
    A well-known young man from Cape Bretton
    Was falsely accused of mispettin'.
    AI picked MacIsaac
    Instead of the guy sick.
    And left his whole audience a-frettin'.
    ]

    ------------------------------

    Date: Tue, 23 Dec 2025 16:27:47 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: CIA, ESP, Psychic Program, Spy Secrets, Declassified Documents

    Third Eye Spies (FULL "Remote Viewing" DOCUMENTARY)

    For more than 20 years the CIA studied psychic abilities for use in their top-secret spy program. With previously classified details about ESP now finally coming to light, there can be no more secrets. You paid for it; you deserve to know about it. A psychic spy program developed during the Cold
    War (Russia/USSR v USA) escalated after a Stanford Research Institute experiment publicized classified intel. As a result, the highly successful
    work of physicist Russell Targ was co-opted by the CIA and hidden for
    decades due to the demands of ‘national security.’ But when America's greatest psychic spy dies mysteriously, Targ fights to get their work declassified; even if it means going directly to his former enemies in the Soviet Union to prove the reality of ESP to the world at large. Revealed
    for the first time, this is the newly declassified true story of America's psychic spies. The implications of their success show us all what we are
    truly capable of – now there can be no more secrets...

    https://www.youtube.com/watch?v=-WUaS_Ynd_M

    ------------------------------

    Date: Mon, 5 Jan 2026 14:32:20 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: He Switched to eSIM, and Is Full of Regret (WiReD)

    https://www.wired.com/story/i-switched-to-esim-and-i-am-full-of-regret

    Feature? Bug?

    ------------------------------

    Date: Sun, 4 Jan 2026 23:29:02 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: AT&T to launch new service for customers as it takes on T-Mobile

    AT&T makes a bold promise to customers while battling growing competition. https://www.thestreet.com/retail/att-to-launch-new-service-for-customers-as-it-takes-on-t-mobile

    To help keep pace with its competitors, AT&T plans to sweeten the deal for
    its phone customers by launching a limited beta program during the first
    half of this year. The program will grant select customers and FirstNet
    users early access to satellite-based cellular service, according to a
    recent press release.

    Since 2024, AT&T has been collaborating with AST SpaceMobile to develop a satellite cellular service for customers that will provide coverage in areas traditional cell towers are unable to reach, especially in remote or
    off-grid locations.

    ------------------------------

    Date: Tue, 6 Jan 2026 20:45:37 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: The big regression

    My folks are in town visiting us for a couple months so we rented them a
    house nearby. It’s new construction. No one has lived in it yet. It’s amped
    up with state of the art systems. You know, the ones with touchscreens of various sizes, IoT appliances, and interfaces that try too hard.

    And it’s terrible. What a regression.

    https://world.hey.com/jason/the-big-regression-da7fc60d

    ------------------------------

    Date: Sun, 04 Jan 2026 16:28:41 +0000
    From: Henry Baker <hbaker1@pipeline.com>
    Subject: AI Customer DisService Slop

    Companies are quickly replacing humans (often from south of the equator)
    with AI's for their "Customer Service" issues.

    The good news: the AI customer service "agent" typically speaks English
    without a heavy accent and with good grammar.

    The bad news: as best I can tell, the AI "Customer Service" agent is a
    *placebo* used to politely jolly us along without actually doing anything --
    analogous to those placebo crosswalk buttons.

    Asking to talk to the agent's boss won't do anything, either.

    Emailing support doesn't help -- email gets read by an AI, as well.

    This situation reminds me of an old joke about the professor who decided he wanted to save time and teach his classes via tape recordings. He stopped
    by his classroom after a few weeks and found the room empty of people, but student tape recorders at each desk.

    Where is Lily Tomlin's Ernestine when we desperately need her?

    ------------------------------

    Date: Tue, 6 Jan 2026 20:41:50 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: News orgs win fight to access 20M ChatGPT logs. Now they want more.
    (Ars Technica)

    https://arstechnica.com/ai/2026/01/news-orgs-want-openai-to-dig-up-millions-of-deleted-chatgpt-logs/

    ------------------------------

    Date: Wed, 7 Jan 2026 05:55:25 -0800
    From: Rob Slade <rslade@gmail.com>
    Subject: Capability Maturity Models and generative artificial intelligence

    I've just had a notification from LinkedIn exhorting me to keep up with

    I assume that when they say artificial intelligence, they really mean generative artificial intelligence, since the world, at large, seems to have forgotten the many other approaches to artificial intelligence, such as
    expert systems, game theory, and pattern recognition. (Computers, at least until we get quantum computers, seem to be particularly bad at pattern recognition. I tend to tell people that this is because computers have no natural predators.)

    I have no problems with frameworks. I have been teaching about
    cybersecurity frameworks for a quarter of a century now. Since I've been teaching about them, I have also had to explore, in considerable depth, frameworks in regard to capital risk (from the finance industry), business analysis breakdown frameworks, checklist security frameworks, cyclical
    business improvement and enhancement frameworks, and a number of others.
    I've got a specialty presentation on the topic for conferences. I include maturity models. In a fair amount of detail. It's an important model
    within the field of frameworks. It not only tells you where you are,
    but in strategic terms, what type of steps to take next, in terms of
    improving your overall business operations.

    But a capability and maturity model? For a technology, and even an
    industry, that didn't even exist four years ago?

    Okay, let's set aside, for a moment, the fact that the entire industry is
    only four years old. We needn't argue about that. I've got a much stronger case to make that this is a really stupid idea.

    Capability maturity models, in general, have five steps. (Yes, I know,
    there are some people who add a sixth step, and sometimes even a 7th,
    usually in between the existing steps.) But let's just stick with the basic maturity model.
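For reference, the five classic CMM levels can be sketched as below (names vary slightly across versions of the model, and the first level is often informally called "chaotic", as in the discussion here):

```python
from enum import IntEnum

# The five classic Capability Maturity Model levels.
class MaturityLevel(IntEnum):
    INITIAL = 1      # ad hoc, "chaotic"; success depends on individual heroics
    REPEATABLE = 2   # basic processes can be repeated on similar projects
    DEFINED = 3      # processes are documented and standardized
    MANAGED = 4      # processes are quantitatively measured and controlled
    OPTIMIZING = 5   # continuous process improvement is institutionalized
```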

    The first step is usually "chaotic." Some models now call this first step "initial," rather than "chaotic," since nobody thinks that they work in a chaotic industry. But, let's face it: when a new industry starts up, it's chaos. You really don't know what you're doing. If you are really lucky,
    you succeed, in that you make enough revenue, or you have patient enough investors, to continue on until you find out what you are doing, and how to make enough revenue to survive, by doing it. That's chaotic. It doesn't
    mean that you aren't working hard. It doesn't mean that you don't have at least some idea of what you are doing, and the technology, or the business model, that you are working with. But, that's just the nature of a startup. You don't have a really good idea of what you are doing. You don't have a really good idea of what the market is. You may have some idea of what your customers are like, but you don't have an awful lot of hard information
    about that. It's basically chaos.

    That's basically where generative artificial intelligence is right now.

    Building upon the idea of neural networks, which has been around for eighty years (and was deeply flawed even to begin with), about a dozen companies
    have been able to build large language models. These LLMs have been able to pass the Turing test. If you're chatting with a chatbot, you're not really sure whether you're chatting with a chatbot, or some really boring person
    who happens to be able to call up dictionary entries really quickly. We
    know enough about neural networks, and Markov chain analysis, and Bayesian analysis, to have a very rough idea of how to build these models, and how
    they operate. But we still don't really know how they are coming up with
    what they're coming up with. We haven't been able to figure out how not to
    get them to just simply make stuff up, and tell us wildly wrong "facts." We haven't been able, sufficiently reliably, to tell them not to tell us stuff that's really, really dangerous. We try to put guard rails on them, but we keep on getting surprised by how often they present us with particularly dangerous text, in ways we never expected.

    We don't know what we're doing. Not really. So it's chaotic.

    We don't really know what we're doing. So, we don't really know, quite yet, how to make money off of what we're doing. Yes, some businesses have been
    able to find specific niches where the currently available functions of
    large language models can be rented, and then packaged, to provide useful
    help in some specific fields. Some companies that are on the edges of this idea of genAI are able to rent LLM capabilities from the few companies that have built large language models, and have been able to find particular
    tasks, which they can then perform for businesses, and get enough revenue to survive. And yes, through low rank adaptation, either the major large
    language model companies, or some companies that are renting basic functions from them, are able to produce specialty generative AI functions, and make businesses out of them. But the industry as a whole, overall, is still spending an awful lot more money building the large language models
    than the industry, as a whole, is making in revenue. So we still don't know how generative artificial intelligence works, and we still haven't figured
    out how to make money from it. It's chaotic.

    But another point about capability maturity models is that the second step
    is "repeatable." The initial step, chaotic, is where you don't know what you're doing. The second step is when you know that you can do it again
    (even if you *still* don't know what you're doing).

    And even the companies, the relatively few companies, who have actually
    built large language models from scratch, haven't done it again.

    Oh yes, I know. The companies that have made large language models keep on changing the version numbers. And each version comes out with new features,
    or functions, and becomes a bit better than the one with the version number before it.

    The thing is, you will notice that they still keep the same basic name for their product. That's because, really, this is still the same basic large language model. It's just that the company has thrown more hardware at it,
    and more memory storage, and possibly even built data centres in different locations, and shoveled in more, and more, and more data for the large
    language model to munch on, and extend its statistical database further and further. Nobody has built another, and completely different, large language

    In the first place, it's bloody expensive. You have to build an enormous computer, with an enormous number of processing cores, and an enormous
    number of specialty statistical processing units, and enormous amounts of memory to store all of the data that your large language model is crunching
    on, and it requires enormous amounts of energy to run it all, and it
    requires enormous amounts of energy, and probably an awful lot of water, to take the waste heat away from your computers so that they don't fry
    themselves.

    And you've now got competitors, nipping at your heels, and you can't waste time risking enormous amounts of money, even if you can get a lot of
    investors eager to give you that money, trying a new, and unproven, approach
    to building large language models, when you already have a large language
    model which is working, even if you don't know how well it's working. So nobody is going to repeat all the work that they did in the first place,
    when they've got all this competition that they have to keep ahead of. When they have a large language model, which they really don't understand, and
    they are trying desperately to figure out what the large language model is doing, so that they can fix some of the bugs in it, and make it work better. Even if they don't really know how it works.

    Okay, yes, you can probably argue that the competitors are, in fact,
    repeating what you're doing. Except that they don't know what *they're*
    doing, either. All of these companies have the generative artificial intelligence tiger by the tail, and they aren't really in charge of it. Not until they can figure out what the heck it is doing.

    I'm not sure that that counts as the "repeatable" stage of a maturity model.

    And the third stage is "documented." At the "documented" stage, you
    definitely *do* have to understand what you're doing, so that you can
    document what you are doing. And yes, all of the generative artificial
    can, into the large language model that they have produced, and are
    continuing, constantly, to enhance. The thing is, while, yes, they are producing some documentation in this regard, it's definitely not the whole model that is completely documented. Yes, they are starting to find out
    some interesting things about the large language models. They are starting
    to find out, by analyzing the statistical model that the large language
    models are producing, what might be useful, and what might be creating
    problems. But nobody's got a really good handle on this. (The way you can
    tell that people really don't have a good handle on this is that the large
    language model companies are spending so much money, all over the world,
    lobbying governments to try and prevent the governments from creating
    regulations to regulate generative artificial intelligence. If the genAI
    companies knew what they were doing, they would have some ideas on what
    kind of regulations are helpful, what kind of regulations would help make
    the industry safer, and which parts of their business and revenue those
    regulations might affect. But they don't actually know what they're doing, and
    therefore they are terrified that the governments might [probably
    accidentally] cut off a profitable revenue stream, or even just a
    potentially useful function for generative artificial intelligence.)

    So, no. You can't have an artificial intelligence capability maturity
    model. Yet. Because we don't know what generative artificial intelligence
    is. Yet.

    ------------------------------

    Date: Thu, 8 Jan 2026 19:15:19 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Fake AI Chrome Extensions Steal 900K Users' Data (Dark Reading)

    https://www.darkreading.com/cloud-security/fake-ai-chrome-extensions-steal-900k-users-data

    ------------------------------

    Date: Thu, 8 Jan 2026 19:12:24 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: AI starts autonomously writing prescription refills in Utah
    (Ars Technica)

    https://arstechnica.com/health/2026/01/utah-allows-ai-to-autonomously-prescribe-medication-refills/

    ------------------------------

    Date: Fri, 9 Jan 2026 11:03:13 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Stolen Data Poisoned to Make AI Systems Return Wrong Results
    (Thomas Claburn)

    Thomas Claburn, *The Register* (U.K.) (01/06/26), via ACM TechNews

    Researchers in China and Singapore have developed a technique that renders
    data stolen from knowledge graphs (KGs) useless when inserted without
    consent into a GraphRAG (retrieval-augmented generation) AI system. Their
    framework, known as AURA (Active Utility Reduction via Adulteration),
    degrades KG responses to large language models, producing hallucinations
    and inaccurate predictions. The technique works when attackers have the
    KG but not the secret key necessary for accurate data retrieval.
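
    [Moderator's-note style aside: the blurb above gives only the shape of the
    idea. As an illustration of keyed adulteration in general (not AURA's
    actual algorithm, which is described in the paper), one can scramble the
    objects of knowledge-graph triples with a secret-keyed permutation: the
    graph remains syntactically well-formed, so retrieval without the key
    silently returns plausible-but-wrong facts, while key-holders invert the
    permutation exactly. All names below are hypothetical.]

```python
import hashlib
import hmac
import random

def _keyed_perm(n, key):
    # Derive a deterministic permutation of range(n) from the secret key.
    digest = hmac.new(key, b"kg-adulteration-sketch", hashlib.sha256).digest()
    seed = int.from_bytes(digest[:8], "big")
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def adulterate(triples, key):
    """Replace each (subject, predicate, object) triple's object with the
    object of another triple, chosen by the keyed permutation. The result
    looks like a normal KG but answers queries incorrectly."""
    perm = _keyed_perm(len(triples), key)
    return [(s, p, triples[perm[i]][2]) for i, (s, p, _) in enumerate(triples)]

def restore(triples, key):
    """A key-holder inverts the permutation and recovers the true objects."""
    perm = _keyed_perm(len(triples), key)
    inv = [0] * len(perm)
    for i, j in enumerate(perm):
        inv[j] = i
    return [(s, p, triples[inv[i]][2]) for i, (s, p, _) in enumerate(triples)]
```

    A real scheme would need to resist statistical reconstruction of the
    permutation; this sketch only shows why possession of the raw graph,
    without the key, yields degraded answers.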

    ------------------------------

    Date: Fri, 9 Jan 2026 10:27:15 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Good cannot successfully battle Evil using only good means is the
    essential message of Machiavelli's "The Prince" (1513)

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.83
    ************************