• Risks Digest 34.85

    From risko@risko@csl.sri.com (RISKS List Owner) to risko on Wed Jan 28 23:07:13 2026
    From Newsgroup: comp.risks

    RISKS-LIST: Risks-Forum Digest Wednesday 28 January 2026 Volume 34 : Issue 85

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.85>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    Litany of FAA Faults Found in Potomac Crash (The New York Times)
    NYC creating a "secret" underground utilities map (Gothamist)
    Vulnerabilities Surge, Messy Reporting Blurs Picture (Robert Lemos)
    Trump's acting cyber-chief uploaded sensitive files into a public version
    of ChatGPT (Politico)
    Russian Hackers Believed Behind December Cyberattacks on Polish Energy
    Targets (A.J. Vicens)
    Microsoft Gave FBI Keys to Unlock Encrypted Data (Thomas Brewster)
    Why AI Can't Make Thoughtful Decisions (Blair Effron)
    China Trains AI-Controlled Weapons with Learning from Animals (Josh Chin)
    Misleading Text in the Physical World Can Hijack AI-Enabled Robots
    (Emily Cerf)
    Risks of AI in Schools Outweigh Benefits (Cory Turner)
    AI hallucination reveals in part how badly WMP botched their risk assessment
    (Peter Campbell)
    AI error sent some ICE recruits into field offices without proper training
    (NBC News)
    Waymo Probed by NTSB over Illegal School Bus Behavior (Sean O'Kane)
    Congress Passes Bill to Fund U.S. Science Agencies (Evan Bush)
    Daily Beast and other outlets are reporting the supposed leak of thousands
    of personal details for ICE and other immigration related agents
    (Boris Patro -- Raw Story)
    Never-before-seen Linux malware is "far more advanced than typical"
    (ArsTechnica)
    I think I found what caused yesterday's Verizon outage. (Reddit)
    Many Bluetooth devices with Google Fast Pair vulnerable to WhisperPair
    eavesdropping hack (Ars Technica)
    Starlink tries to stay online in Iran as regime jams signals during protests
    (Ars Technica)
    Microsoft Copilot misinformation is the source of a career-limiting
    international incident (The Guardian)
    Script of my national radio report yesterday on the damage being done to
    communities by AI data centers, and the effect of AI strangling the supply
    of DRAM (Lauren Weinstein)
    How to slow down the AI Horror (Lauren Weinstein)
    Verizon outage: With service restored, here's everything that's happened so
    far (Tech Radar)
    Google's fear of anything like an Ombudsman (Lauren Weinstein)
    Exclusive: Volvo tells us why having Gemini in your next car is a good thing
    (Ars Technica)
    A single click mounted a covert, multistage attack against Copilot
    (Ars Technica)
    Tech giants face landmark trial over social media addiction claims (BBC)
    He let ChatGPT analyze a decade of his Apple Watch data. Then he called his
    doctor. (WashPost)
    The AI Shopping Wars Are Here (NY Mag via Steve Bacher)
    AI-Powered Disinformation Swarms Are Coming for Democracy (WiReD)
    Google appeals landmark antitrust verdict over search monopoly (BBC)
    Re: How AI Undermines Education (Martin Ward)
    Re: Thieves are stealing keyless cars in minutes. Here's how to protect your
    vehicle (John Levine)
    Abridged info on RISKS (comp.risks)

    ------------------------------

    Date: Wed, 28 Jan 2026 11:55:32 PST
    From: Peter G Neumann <neumann@csl.sri.com>
    Subject: Litany of FAA Faults Found in Potomac Crash (The New York Times)

    Karoun Demirjian and Kate Kelly, *The New York Times*, 28 Jan 2026
    Board Says Pilots Were Set Up For Disaster

    As we had observed in RISKS at the time of the fatal midair collision at Washington D.C.'s National Airport a year ago, the blame was widely
    distributed as a result of undesirable downsizing of the ATC staff,
    operational problems, and lots more. The National Transportation Safety
    Board determined that the FAA had designed and approved dangerous
    crosscutting flight rules that allowed an Army helicopter to fly into the landing path of a passenger jet. The FAA failed to respond to repeated warnings before the disaster. There were also insufficient warnings from
    the single Air-Traffic Controller to the helicopter pilot and the passenger jet, and the separate helicopter traffic controller had just been relieved.

    A second article by Kate Kelly is also included in today's paper,
    considering the reality that the concerns expressed a year ago are still
    valid. It is above the fold on Page A20 over the continuation of the front-page article in The Times' National edition. The existence together
    of both articles clearly demonstrates the pitiful lack of hindsight and foresight -- and what needs to be done to rectify the risks.

    [PGN-edited items]

    [If you know German, the Reagan Airport needs an electronic Reaganschirm
    (Regenschirm = Umbrella) to provide everything that should have been done
    and should now be done to prevent future disasters.]

    ------------------------------

    Date: Sun, 18 Jan 2026 17:35:17 -0500
    From: Ed Ravin <eravin@panix.com>
    Subject: NYC creating a "secret" underground utilities map (Gothamist)

    New York City streets are famous for having surprises underground,
    frustrating construction projects and delaying emergency responses
    when a water or steam pipe breaks.

    Now the city is embarking upon an ambitious 3-D mapping project,
    but they're afraid the result will be so useful to terrorists
    that they're keeping the whole thing secret and on a "need to
    know" basis, including from city agencies:

    https://gothamist.com/news/new-york-city-is-making-a-top-secret-map-of-everything-under-the-street

    https://www.nyc.gov/mayors-office/news/2025/11/mayor-adams-announces--10-million-platform-to-map-new-york-city-

    But there's a catch -- the city doesn't own all the data:

    In the past, private companies with subsurface assets have been resistant
    to freely handing over data, preferring to send a human representative to
    share the data rather than having it all collected in one place. [The
    Mayor's office] said the city cannot compel utilities to share their data.

    And here's how they want to handle the security of the map:

    The map won't be accessible to the public. In fact, only certain levels of
    city government will have access -- and only on a need-to-know basis for
    limited windows of time, according to Steinberg.

    The design and architecture of the city's secret map is still in its
    beginning stages, but the concept is to create a "cut request system"
    instead of a static map that could fall into the wrong hands, making
    subterranean infrastructure vulnerable to attacks.

    In a "cut request system," a municipal worker with clearance would access
    data by making a request for a specific location. This would ping
    utilities and other entities to release the information and then the map
    would integrate the data and visualize it in 3D. The data would be purged
    when no longer needed or the window of time for clearance expires.

    This sounds like quite a challenge for the data-security designers. And a "just in time" request system to the utilities could be unreliable when responding to a hurricane or utility fire that might have taken down the
    comm lines that the "ping" was going to travel over.
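
    The "cut request" flow as described could be sketched roughly as
    follows. This is a toy illustration only -- the class, the utility
    callbacks, and the clearance check are all hypothetical, not anything
    from the city's actual design:

```python
import time

class CutRequestMap:
    """Toy sketch of a 'cut request' system: subsurface data is fetched
    on demand per location, held only for a clearance window, and purged
    afterward -- there is no standing central map to steal."""

    def __init__(self, utilities, window_seconds):
        self.utilities = utilities   # utility name -> callable(location) -> data
        self.window = window_seconds # how long fetched data may be retained
        self.cache = {}              # location -> (expiry timestamp, merged data)

    def request(self, worker_cleared, location):
        if not worker_cleared:
            raise PermissionError("need-to-know clearance required")
        # "Ping" each utility for just this location at request time.
        merged = {name: fetch(location) for name, fetch in self.utilities.items()}
        self.cache[location] = (time.time() + self.window, merged)
        return merged

    def purge_expired(self):
        # Drop any data whose clearance window has lapsed.
        now = time.time()
        for loc in [l for l, (exp, _) in self.cache.items() if exp <= now]:
            del self.cache[loc]
```

    Note that the availability risk flagged above lives in the dictionary
    comprehension: if a utility's endpoint is unreachable during an
    emergency, the on-demand fetch fails exactly when the map is needed.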

    ------------------------------

    Date: Wed, 21 Jan 2026 11:23:32 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Vulnerabilities Surge, Messy Reporting Blurs Picture (Robert Lemos)

    Robert Lemos, Dark Reading (01/15/26), via ACM TechNews

    Reported software vulnerabilities hit a ninth straight annual record in
    2025, with about 48,000 Common Vulnerabilities and Exposures (CVEs)
    assigned, according to National Vulnerability Database data. The surge
    reflects changes in reporting more than rising cyber risk. For the first
    time, relatively new CVE-numbering authorities have replaced MITRE as the
    leading assigners of identifiers, with three organizations each
    representing a different trend driving vulnerability numbers. WordPress
    security firm Patchstack made the most submissions, reporting more than
    7,000 for 2025.

    [The increasing record number of annual CVEs is horrendous, and merely
    amplifies the persistent bad news reported here in the past decades. PGN]

    ------------------------------

    Date: Wed, 28 Jan 2026 07:09:50
    From: A regular subscriber
    Subject: Trump's acting cyber-chief uploaded sensitive files into a public
    version of ChatGPT (Politico)

    https://www.politico.com/news/2026/01/27/cisa-madhu-gottumukkala-chatgpt-00749361

    [The train-ing has left the station, and is now off the tracks. PGN]

    ------------------------------

    Date: Wed, 28 Jan 2026 11:07:18 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Russian Hackers Believed Behind December Cyberattacks on Polish
    Energy Targets (A.J. Vicens)

    A.J. Vicens, Reuters (01/23/26) via ACM TechNews

    Russian military intelligence hacking unit Sandworm likely was responsible
    for failed cyberattacks on Poland's power system in December, according to researchers at Slovakian software firm ESET. The determination was made
    based on Sandworm's previous operations and the identification of code that overlaps with cyberattacks previously carried out by the group. The
    researchers said hackers attempted to deploy the DynoWiper malware to
    destroy files on targeted computer systems; Polish authorities said the
    attack was thwarted before it could cause power outages.

    ------------------------------

    Date: Wed, 28 Jan 2026 11:07:18 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Microsoft Gave FBI Keys to Unlock Encrypted Data (Thomas Brewster)

    Thomas Brewster, Forbes (01/23/26), via ACM TechNews

    Microsoft has confirmed that it complied with a 2025 search warrant from the U.S. Federal Bureau of Investigation (FBI) requesting BitLocker recovery
    keys to unlock data stored on three laptops, as part of an investigation
    into the theft of funds from Guam's COVID-19 unemployment assistance
    program. BitLocker encrypts Windows PCs, and while users can store keys locally, Microsoft's cloud storage allows law enforcement access. This marks the first known instance of Microsoft handing over encryption keys.

    ------------------------------

    Date: Wed, 28 Jan 2026 14:42:37 PST
    From: Peter G Neumann <neumann@csl.sri.com>
    Subject: Why AI Can't Make Thoughtful Decisions (Blair Effron)

    Blair Effron, *The New York Times*, Opinion

    The judgment we use to make trade-offs is uniquely human.

    ... The medieval law seminar of long ago prepared me for this world more
    than my courses in finance and economics. Just as centuries ago those
    judges rendered decisions amid reasonable disagreement, where evidence was mixed and incomplete, professionals today must possess the skills to make things better where machines cannot.

    ------------------------------

    Date: Wed, 28 Jan 2026 11:07:18 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: China Trains AI-Controlled Weapons with Learning from Animals
    (Josh Chin)

    Josh Chin, The Wall Street Journal (01/24/26), via ACM TechNews

    A review of patent filings, government procurement tenders, and research
    papers shows China's People's Liberation Army is working on AI-powered
    swarms of drones, robot dogs, and other autonomous systems, some based on
    animal traits. Researchers at Beihang University, for example, are looking
    to nature as they build new combat drones; they developed a defense drone
    that can identify and destroy the most vulnerable enemy aircraft based on
    how hawks choose their prey.

    ------------------------------

    Date: Wed, 28 Jan 2026 11:07:18 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Misleading Text in the Physical World Can Hijack AI-Enabled Robots
    (Emily Cerf)

    Emily Cerf, UC Santa Cruz News (01/21/26), via ACM TechNews

    University of California, Santa Cruz researchers have developed command hijacking against embodied AI (CHAI) attacks to show how large
    visual-language models can be manipulated to control AI decision-making systems. CHAI employs generative AI to optimize text on street signs and
    other objects to increase the probability an embodied AI will follow the
    text instructions, then manipulates the appearance of the text to account
    for location, color, and size. In tests, CHAI achieved attack success rates
    of 95.5% on drones performing aerial object tracking, 81.8% on self-driving vehicles, and 68.1% on drone emergency landings.

    ------------------------------

    Date: Fri, 16 Jan 2026 11:04:04 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Risks of AI in Schools Outweigh Benefits (Cory Turner)

    Cory Turner, NPR (01/14/26) via ACM TechNews

    A new Brookings Institution report concluded the risks of using generative
    AI in K-12 education currently outweigh the benefits. While AI can aid
    reading, writing, lesson planning, and accessibility for students with disabilities, it can also stunt cognitive, social, and emotional development
    by encouraging overreliance and reducing critical thinking. The report urges the use of AI to supplement, not replace, teachers, and calls for holistic
    AI literacy, equitable access, child-centered design, and government regulation.

    ------------------------------

    Date: Thu, 15 Jan 2026 18:24:20 +0100
    From: Free <campbell.peter@free.fr>
    Subject: AI hallucination reveals in part how badly WMP botched their risk
    assessment (Peter Campbell)

    I wrote a Substack post about how a hallucination by Microsoft Copilot was
    injected into the police risk assessment that informed a local authority
    decision to ban visiting fans from attending a football match in Birmingham.

    This misinformation was not decisive in the decision on its own. But it revealed deep flaws in the risk assessment process which was so badly
    governed that the WM police Chief Constable ended up giving wrong
    information to a parliamentary select committee. He might lose his job less over the AI hallucination than over being responsible for a badly designed process.

    https://peter875364.substack.com/p/ai-hallucinations-will-reveal-whether...

    AI hallucinations will reveal whether decision-making systems were ever
    properly governed

    ------------------------------

    Date: Thu, 15 Jan 2026 08:30:57 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: AI error sent some ICE recruits into field offices without proper
    training (NBC News)

    As Immigration and Customs Enforcement was racing to add 10,000 new officers
    to its force, an artificial intelligence error in how their applications
    were processed sent many new recruits into field offices without proper training, according to two law enforcement officials familiar with the
    error.

    The AI tool used by ICE was tasked with looking for potential applicants
    with law enforcement experience to be placed into the agency's "LEO program"
    -- short for law enforcement officer -- for new recruits who are already law enforcement officers. It requires four weeks of online training.

    Applicants without law enforcement backgrounds are required to take an
    eight-week in-person course at ICE's academy at the Federal Law Enforcement
    Training Center in Georgia, which includes courses in immigration law and
    handling a gun, as well as physical fitness tests.

    "They were using AI to scan r|-sum|-s and found out a bunch of the people who were LEOs werenrCOt LEOs," one of the officials said. The officials said the AI tool sent people with the word "officer" on their r|-sum|-s to the shorter four-week online training -- for example, a "compliance officer" or people
    who said they aspired to be ICE officers.

    The majority of the new applicants were flagged as law enforcement officers, the officials said, but many had no experience in any local police or
    federal law enforcement force. [...]

    https://www.nbcnews.com/politics/immigration/ice-error-meant-recruits-sent-field-offices-proper-training-sources-sa-rcna254054
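
    We don't know what ICE's tool actually did internally, but the behavior
    the officials describe is consistent with naive keyword matching, as in
    this deliberately simplistic sketch (the function name and sample
    resumes are invented for illustration):

```python
# Toy illustration of the reported failure mode: a screen that flags any
# resume containing the word "officer" will happily match "compliance
# officer" or an aspiring ICE officer, not just actual law enforcement.
def naive_leo_screen(resume_text):
    return "officer" in resume_text.lower()

resumes = [
    "Ten years as a police officer in Atlanta",     # genuine LEO
    "Chief compliance officer at a regional bank",  # false positive
    "I aspire to become an ICE officer",            # false positive
    "Warehouse supervisor, forklift certified",     # correctly not flagged
]
flagged = [r for r in resumes if naive_leo_screen(r)]
```

    Here three of the four resumes are flagged as law enforcement
    experience, although only one actually is -- mirroring the report that
    a majority of recruits were routed to the shorter training track.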

    ------------------------------

    Date: Wed, 28 Jan 2026 11:07:18 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Waymo Probed by NTSB over Illegal School Bus Behavior (Sean O'Kane)

    Sean O'Kane, TechCrunch (01/23/26), via ACM TechNews

    The U.S. National Transportation Safety Board (NTSB) confirmed it is investigating incidents in which Waymo's robotaxis illegally passed stopped school buses. The probe will focus on more than 20 such instances that
    occurred in Austin, TX. This follows a similar investigation announced by
    the U.S. National Highway Traffic Safety Administration's Office of Defects Investigation in October. The incidents occurred despite software updates, prompting the Austin school district to request Waymo cease operations
    during pickup and drop-off times.

    ------------------------------

    Date: Fri, 16 Jan 2026 11:04:04 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Congress Passes Bill to Fund U.S. Science Agencies (Evan Bush)

    Evan Bush, NBC News (01/15/26), via ACM News

    Congress passed a bipartisan budget bill funding U.S. science agencies
    through Sept. 30, rejecting steep cuts proposed by the White House. The
    measure, approved by the Senate Thursday after overwhelming House passage,
    provides billions more than requested for agencies including the National
    Science Foundation (NSF), rebuffing the Trump administration's proposal to
    slash its budget by 57%.

    ------------------------------

    Date: Tue, 13 Jan 2026 13:18:09 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Daily Beast and other outlets are reporting the supposed leak of
    thousands of personal details for ICE and other immigration related agents
    (Boris Patro -- Raw Story)

    https://www.rawstory.com/ice-agents-data-leak/

    ------------------------------

    Date: Tue, 13 Jan 2026 22:21:52 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Never-before-seen Linux malware is "far more advanced than typical"
    (ArsTechnica)

    https://arstechnica.com/security/2026/01/never-before-seen-linux-malware-is-far-more-advanced-than-typical/

    ------------------------------

    Date: Thu, 15 Jan 2026 19:30:38 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: I think I found what caused yesterday's Verizon outage. (Reddit)

    Verizon was trying to fix a week-long MMS bug and the patch crashed everything. https://www.reddit.com/r/verizon/comments/1qdsc0g/i_think_i_found_what_caused_yesterdays_outage/

    ------------------------------

    Date: Thu, 15 Jan 2026 20:47:59 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Many Bluetooth devices with Google Fast Pair vulnerable to
    WhisperPair eavesdropping hack (Ars Technica)

    https://arstechnica.com/gadgets/2026/01/researchers-reveal-whisperpair-attack-to-eavesdrop-on-google-fast-pair-headphones/

    ------------------------------

    Date: Tue, 13 Jan 2026 22:23:10 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Starlink tries to stay online in Iran as regime jams signals during
    protests (Ars Technica)

    https://arstechnica.com/tech-policy/2026/01/starlink-tries-to-stay-online-in-iran-as-regime-jams-signals-during-protests/

    ------------------------------

    Date: Wed, 14 Jan 2026 14:55:30 +0000
    From: Tom Gardner <tggzzz@gmail.com>
    Subject: Microsoft Copilot misinformation is the source of a career-limiting
    international incident (The Guardian)

    The LLM appears to have hallucinated in at least two ways. Firstly it hallucinated a football match, and secondly it swapped the aggressor and
    victim in a different match. The authorities acted on the hallucinations,
    and later the police chief misled Members of Parliament. The Home Secretary
    has expressed "no confidence" in the police chief, but has no formal power
    to sack or require a chief constable to resign.

    "The chief of West Midlands police has apologised to MPs for giving them incorrect evidence about the decision to ban Maccabi Tel Aviv football
    fans, saying it had been produced by artificial intelligence (AI). Craig Guildford told the home affairs select committee on Monday that the
    inclusion of a fictitious match between Maccabi Tel Aviv and West Ham in
    police intelligence arose as a result of a use of Microsoft Copilot. https://www.theguardian.com/uk-news/2026/jan/14/watchdog-to-criticise-west-midlands-police-over-maccabi-tel-aviv-ban

    "As part of its inquiry, His Majesty's Inspectorate of Constabulary spoke to Dutch police, who said several key claims that West Midlands police relied
    on clashed with its experience of policing Maccabi fans during the match in Amsterdam in November 2024, which was marred by violence. Dutch police
    disputed a claim that Maccabi fans had at one point thrown people into a
    river. In fact, it was one Maccabi fan who ended up in the water."

    https://www.theguardian.com/uk-news/2026/jan/14/west-midlands-police-chief-apologises-ai-error-maccabi-tel-aviv-ban

    ------------------------------

    Date: Tue, 13 Jan 2026 07:53:35 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Script of my national radio report yesterday on the damage being
    done to communities by AI data centers, and the effect of AI strangling the
    supply of DRAM

    This is the script of my national radio report yesterday discussing how AI
    data centers are damaging communities, and now are damaging everyone who
    buys computers by massively driving up the cost of a critical component. As always, there may have been minor wording variations from this script as I presented this report live on air.

    - - -

    Well, we've talked before about the flood of massive new data centers
    being built around the country, that many communities are pushing back
    against now -- data centers mostly to support AI services that nobody
    asked for and most people don't want. And we continue to see new
    pushes by those Big Tech Billionaire CEOs and their politician
    supporters to force AI into everybody's lives in every possible way no
    matter how much damage they do.

    It was just announced that Apple is going to use Google's Gemini AI
    models for Siri. Google itself is rolling out a pile of Gemini AI
    features in Gmail to "scour" -- that's the word I saw them use -- your
    email in ways that personally I'd recommend you refuse and disable
    when they're offered to you. Don't click that horrific little four
    pointed star when it shows up that says "Try Gemini" when you hover
    over it!

    Oh and by the way, Google AI Overviews, which are so prone to wrong
    answers and misinformation, were found to be spewing incorrect and
    potentially dangerous answers to important medical question searches
    and Google had to back those off very recently after an outcry. Oh
    yeah, and OpenAI wants you to feed your personal medical records into
    a health-oriented version of ChatGPT. What could go wrong, huh?

    My recommendation for now is Just Say No to all this AI Slop. These
    giant AI data centers can ruin areas wherever they're built -- often
    originally pristine rural areas. They suck up massive amounts of
    electricity and can push electric rates up for everyone in the region.
    They can slurp up massive amounts of water for cooling and make
    existing water prices and water shortages worse. They can cause noise
    and air pollution from generators, and word now is that many want
    their own little nuclear reactors for power with the obvious issues of
    dealing with the waste and potential terrorism.

    But now Big Tech has found a way to cost us more money even if you
    don't live ANYWHERE near a data center! The crush of so many new
    gigantic data centers is causing the supply of a critical computer
    component, used in one form or another by all computers and computing
    systems, to virtually collapse, pushing up prices enormously. This
    component is DRAM, Dynamic Random Access Memory. This isn't like
    SD cards or other forms of memory that maintain their data without
    power. The amount of DRAM on a system is one of the basic
    specifications, today often 4 gigabytes or 8 or 16 or more. This is
    the actual high speed working memory of the computer where the
    operating system and programs and apps actually execute. In early days
    of computers working memory was tiny little magnetic iron doughnuts
    strung in a complex grid of x, y and sense wires. But for many decades
    now working memory has been solid state components, either soldered
    directly into computers (including devices like smartphones of
    course), or on sticks that can be soldered in or made removable via
    sockets.

    And there have been many generations of DRAM over the years as this
    kind of memory got faster and denser in terms of capacity. So it's
    easy to see how shortages and price increases for DRAM, triggered
    by Big Tech AI's insatiable desire to take over everything, are now
    having such a wide impact on the entire electronics industry.

    AI wants our electricity, our water, our personal information, and so
    much more to churn out deepfakes, wrong answers, and other
    misinformation, all to benefit that handful of Big Tech Billionaires.

    Of course it's up to every individual to decide if they're voluntarily
    going to participate in this mess when a choice is offered. But more
    and more that choice is being eliminated, and the reality continues to
    be that without serious legislation to rein in AI abuses and force
    Big Tech to take responsibility for the damage that AI causes, the
    situation is only going to rapidly get worse. It's already bad, but unfortunately when it comes to damage from AI, we haven't really seen
    anything yet.

    ------------------------------

    Date: Wed, 14 Jan 2026 12:56:42 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: How to slow down the AI Horror

    The only way to rein in AI is to make the Big Tech firms 100% responsible
    -- in a civil and criminal sense -- for damages done by AI responses. No exceptions. Make sure that any of their attempts to shield themselves under Section 230 fail. There are already politicians attempting to undermine
    Section 230 more generally. I have long been a defender of 230, but if decimating 230 is what's necessary to slow down this AI Horror show, then so
    be it. There will be a lot of collateral damage on the Internet in that
    case, but it could well be worth it to society. And lives will be saved. -L

    ------------------------------

    Date: Thu, 15 Jan 2026 08:34:51 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Verizon outage: With service restored, here's everything that's
    happened so far (Tech Radar)

    https://www.techradar.com/news/live/verizon-outage-january-2026

    ------------------------------

    Date: Wed, 14 Jan 2026 12:57:33 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Google's fear of anything like an Ombudsman

    You can understand why Google is terrified of having any sort of
    Ombudsman to interface with the public. This has been their position
    for decades, and is now only more intense given their #AI push. Having
    anyone at a high level in the hierarchy who actually cared about the
    social implications of this tech and wasn't 100% "Ra! Ra! Ra!" fills
    Google with dread. -L

    ------------------------------

    Date: Thu, 15 Jan 2026 20:48:41 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: Exclusive: Volvo tells us why having Gemini in your next car is a
    good thing (Ars Technica)

    https://arstechnica.com/cars/2026/01/exclusive-volvo-tells-us-why-having-gemini-in-your-next-car-is-a-good-thing/

    ------------------------------

    Date: Thu, 15 Jan 2026 21:33:45 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: A single click mounted a covert, multistage attack against Copilot

    Exploit exfiltrating data from chat histories worked even after users closed chat windows.

    https://arstechnica.com/security/2026/01/a-single-click-mounted-a-covert-multistage-attack-against-copilot/

    ------------------------------

    Date: Tue, 27 Jan 2026 04:08:33 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Tech giants face landmark trial over social media addiction claims
    (BBC)

    https://www.bbc.com/news/articles/c24g8v6qr1mo

    A landmark social media addiction trial in which top tech executives are expected to testify begins on Tuesday in California.

    The plaintiff, a 19-year-old woman identified by the initials KGM, alleges
    the design of the platforms' algorithms left her addicted to social media
    and negatively affected her mental health.

    The defendants include Meta -- which owns Instagram and Facebook --
    TikTok's owner ByteDance and YouTube parent Google. Snapchat settled with
    the plaintiff last week.

    The closely-watched case at Los Angeles Superior Court is the first in a
    wave of such lawsuits, which could challenge a legal theory used by tech
    firms to shield themselves from culpability in the US.

    ------------------------------

    Date: Tue, 27 Jan 2026 01:29:36 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: He let ChatGPT analyze a decade of his Apple Watch data. Then he
    called his doctor. (WashPost)

    Free article:

    Author: I gave the new ChatGPT Health access to 29 million steps and 6
    million heartbeat measurements. It drew questionable conclusions that
    changed each time I asked.

    https://wapo.st/49GEASP

    ------------------------------

    Date: Wed, 21 Jan 2026 12:54:41 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: The AI Shopping Wars Are Here

    Amazon, OpenAI and Google Face Off in AI Shopping Wars
    Big tech has big plans for bots that will do all your buying for you.

    Late last year, some independent online merchants started noticing something strange. Despite never working with Amazon -- and in some cases pointedly avoiding it -- they were getting orders that seemed to originate from
    Amazon. Typically, getting your product listed on Amazon is a complicated process, involving either a wholesale relationship or becoming a seller on
    the platform, and these brands, according to various reports, had done
    nothing of the sort.

    Amazon, it turned out, had been scraping their listings to include in its AI-powered shopping feature, Rufus. In addition to seeing products from
    Amazon, customers using Rufus were being shown items from outside stores, sometimes with a button labeled rCLBuy for me,rCY which would trigger an Amazon-powered bot to browse the outside merchant's website, check the
    item's price and availability, place the order, and handle the payment
    process. [...]

    https://nymag.com/intelligencer/article/amazon-openai-and-google-face-off-in-ai-shopping-wars.html

    ------------------------------

    Date: Sat, 24 Jan 2026 06:58:29 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: AI-Powered Disinformation Swarms Are Coming for Democracy

    Advances in artificial intelligence are creating a perfect storm for
    those seeking to spread disinformation at unprecedented speed and scale.
    And it's virtually impossible to detect.

    https://www.wired.com/story/ai-powered-disinformation-swarms-are-coming-for-democracy/

    ------------------------------

    Date: Fri, 16 Jan 2026 22:05:50 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Google appeals landmark antitrust verdict over search monopoly
    (BBC)

    https://www.bbc.com/news/articles/clyn0ek5rdpo

    Google has appealed a U.S. district judge's landmark antitrust ruling
    that found the company illegally held a monopoly in online search.

    "As we have long said, the Court's August 2024 ruling ignored the reality
    that people use Google because they want to, not because they're forced
    to," Google's vice president for regulatory affairs Lee-Anne Mulholland
    said.

    In its announcement on Friday, Google said the ruling by Judge Amit Mehta
    didn't account for the pace of innovation and intense competition the
    company faces.

    The company is requesting a pause on implementing a series of fixes -
    viewed by some observers as too lenient - aimed at limiting its monopoly
    power.

    ------------------------------

    Date: Mon, 19 Jan 2026 13:18:16 +0000
    From: Martin Ward <martin@gkc.org.uk>
    Subject: Re: How AI Undermines Education (RISKS-34.84)

    Schools have stopped teaching which way to go and should simply teach
    students how to get to wherever they happen to be going as quickly as
    possible?

    Do you see a problem with this? If you are going in the wrong direction,
    then moving quickly is actually *worse* than moving slowly.

    Surely it is much more important to know the right way to go and to
    start moving in that direction than it is to be able to move very
    quickly in possibly the wrong direction. After all: there are many more
    *wrong* directions than *right* directions.

    The Nazis were very efficient at what they did.

    Modern big business is very "efficient", but optimised for the wrong thing: profits for billionaires rather than human flourishing.

    ------------------------------

    Date: 13 Jan 2026 22:17:44 -0500
    From: "John Levine" <johnl@iecc.com>
    Subject: Re: Thieves are stealing keyless cars in minutes. Here's how to
    protect your vehicle (Los Angeles Times via Gold, RISKS-34.83)

    Sigh. Don't automakers have anybody who understands cybersecurity?

    This should be easy, but it requires 6 steps:

    1. driver pushes the button
    2. fob sends a signal to the car
    3. car sends an encrypted challenge to the fob (public-key encryption)
    4. fob sends back the decrypted challenge
    5. car verifies the response
    6. car unlocks or starts, as the case may be.
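    The handshake above can be sketched in a few lines. This is a toy model
    only: it substitutes an HMAC over a shared secret for the public-key
    step, purely so the sketch runs with the Python standard library, and
    all class and variable names here are hypothetical, not any automaker's
    actual protocol.

    ```python
    import hashlib
    import hmac
    import secrets

    # Shared secret provisioned into both car and fob at the factory.
    # (A real design would use a key pair and public-key crypto instead.)
    SHARED_KEY = secrets.token_bytes(32)

    class Fob:
        def __init__(self, key: bytes):
            self.key = key

        def answer(self, challenge: bytes) -> bytes:
            # Step 4: prove possession of the key by MACing the challenge.
            return hmac.new(self.key, challenge, hashlib.sha256).digest()

    class Car:
        def __init__(self, key: bytes):
            self.key = key
            self.nonce = b""

        def challenge(self) -> bytes:
            # Step 3: a fresh random nonce, so a recorded old response
            # (a replay) will never verify again.
            self.nonce = secrets.token_bytes(16)
            return self.nonce

        def verify(self, response: bytes) -> bool:
            # Step 5: recompute the expected answer and compare in
            # constant time; unlock only on a match.
            expected = hmac.new(self.key, self.nonce, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

    fob, car = Fob(SHARED_KEY), Car(SHARED_KEY)
    resp = fob.answer(car.challenge())
    print("unlocked" if car.verify(resp) else "rejected")
    ```

    Note what this sketch does and does not buy you: it defeats replayed
    recordings, but a correct response proves only that *some* path to the
    real fob exists, not that the fob is nearby -- which is why relay
    attacks on keyless cars work even against sound challenge-response
    schemes.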

    Well, that's one use case. In my car, you have to keep the fob in the
    car while it's running, and if you don't, it warns for a while and then
    turns itself off. The fob has a battery that you have to replace every
    few years, and I was relieved to find that even with a dead fob battery,
    if you touch the fob to the car's start button, the car starts. (Opening
    the door has a different backup scheme: a physical key in the fob that
    opens a hidden lock in the driver's-side door handle.)

    Also, I dunno about you but I often have stuff in my hands when I get
    into the car so I would not enjoy having to fish out the fob and push
    a button on it. There's a little button on the door handle that I can
    tap which unlocks and opens the door if it can sense the fob.

    I'm not denying that car security could be better, but if it looks
    like they've missed an obvious simple solution, the obvious solution
    is probably wrong.

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.85
    ************************
