From Newsgroup: comp.risks
RISKS-LIST: Risks-Forum Digest Monday 12 January 2026 Volume 34 : Issue 84
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/34.84>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents:
Developers doubt AI-written code, but don't always check it (The Register)
North Korea turns QR codes into phishing weapons (The Register)
Speaking Power to Power? (Dan Geer)
While in the end it did enormous damage, one really does start to understand
the mindset that led to the French Revolution. (Lauren Weinstein)
Nearly 13,000 Irish Passports Are Recalled Over Technical Issue (The NY Times)
From Extortion to E-commerce: How Ransomware Groups Turn Breaches into
Bidding Wars (Rapid7)
AI's Wrong Answers Are Bad. Its Wrong Reasoning Is Worse (IEEE Spectrum)
AI Models Are Starting to Learn by Asking Themselves Questions (WiReD)
Everyone Wants a Room Where They Can Escape Their Screens (WSJ)
How AI Undermines Education (America First Report)
Re: Thieves are stealing keyless cars in minutes. Here's how to protect
your vehicle (Los Angeles Times, RISKS-34.83)
Re: He Switched to eSIM, and Is Full of Regret (John Levine)
Re: NASA Library closing (Martin Ward)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Sat, 10 Jan 2026 16:03:09 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Developers doubt AI-written code, but don't always check it
(The Register)
Most devs don't trust AI-generated code, but fail to check it anyway
Developer survey from Sonar finds AI tool adoption has created a verification bottleneck
https://www.theregister.com/2026/01/09/devs_ai_code/
------------------------------
Date: Sat, 10 Jan 2026 16:05:07 -0500
From: Monty Solomon <monty@roscom.com>
Subject: North Korea turns QR codes into phishing weapons (The Register)
QR codes a powerful new phishing weapon in hands of Pyongyang cyberspies
State-backed attackers are using QR codes to slip past enterprise security
and help themselves to cloud logins, the FBI says
https://www.theregister.com/2026/01/09/pyongyangs_cyberspies_are_turning_qr/
------------------------------
Date: Sat, 10 Jan 2026 18:47:35 -0500
From: dan@geer.org
Subject: Speaking Power to Power?
https://www.politico.com/news/2026/01/03/trump-venezuela-cyber-operation-maduro-00709816
President Donald Trump suggested Saturday that the U.S. used cyberattacks or other technical capabilities to cut power off in Caracas during strikes on
the Venezuelan capital that led to the capture of Venezuelan President
Nicolas Maduro.
If true, it would mark one of the most public uses of U.S. cyber-power
against another nation in recent memory. These operations are typically
highly classified, and the U.S. is considered one of the most advanced
nations in cyberspace operations globally.
[Of course, the U.S. did exactly that in the Iraq war BEFORE going in.
Of course, There Were No Agents Of Mass Destruction. PGN]
[Trump Suggests Cyberattacks Turned Off Lights in Venezuela During Strikes
Maggie Miller, Politico (01/03/26), via ACM News
U.S. President Donald Trump suggested the U.S. used cyberattacks to shut
off electricity in Venezuela during recent military strikes. At a press
conference, Trump said, "It was dark, the lights of Caracas were largely
turned off due to a certain expertise that we have, it was dark, and it
was deadly." Joint Chiefs Chair Gen. Dan Caine said U.S. Cyber Command and
Space Command helped "layer different effects" to enable U.S. aircraft
operations, without detailing the actions.]
------------------------------
Date: Sat, 10 Jan 2026 11:02:18 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: While in the end it did enormous damage, one really does
start to understand the mindset that led to the French Revolution.
------------------------------
Date: Sun, 11 Jan 2026 02:12:05 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Nearly 13,000 Irish Passports Are Recalled Over Technical Issue
(The New York Times)
The recall affects passports issued between Dec. 23 and Jan. 6, Ireland's Department of Foreign Affairs and Trade said.
The department did not say what made the recalled passports not compliant,
but The Irish Times reported that they were missing the letters "IRL."
https://www.nytimes.com/2026/01/10/world/europe/irish-passport-recalled.html?smid=nytcore-ios-share
[Once again, Irish AYEs are smilin'. PGN]
------------------------------
Date: Sat, 10 Jan 2026 16:34:27 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: From Extortion to E-commerce: How Ransomware Groups Turn Breaches
into Bidding Wars (Rapid7)
Ransomware has evolved from simple digital extortion into a structured, profit-driven criminal enterprise. Over time, it has led to the development
of a complex ecosystem where stolen data is not only leveraged for ransom,
but also sold to the highest bidder. This trend first gained traction in
2020 when the Pinchy Spider group, better known as REvil, pioneered the practice of hosting data auctions on the dark web, opening a new chapter in
the commercialization of cybercrime.
In 2025, contemporary groups such as WarLock and Rhysida have embraced
similar tactics, further normalizing data auctions as part of their
extortion strategies. By opening additional profit streams and attracting
more participants, these actors are amplifying both the frequency and
impact of ransomware operations. The rise of data auctions reflects a
maturing underground economy, one that mirrors legitimate market behavior,
yet drives the continued expansion and professionalization of global
ransomware activity.
*Anatomy of victim data auctions*

Most modern ransomware groups employ double extortion tactics, exfiltrating
data from a victim's network before
deploying encryption. Afterward, they publicly claim responsibility for the attack and threaten to release the stolen data unless their ransom demand is met. This dual-pressure technique significantly increases the likelihood of payment.
In recent years, data-only extortion campaigns, in which actors forgo encryption altogether, have risen sharply. In fact, such incidents doubled
in 2025, highlighting how the threat of data exposure alone has become an effective extortion lever. Most ransomware operations, however, continue to
use encryption as part of their attack chain.
Certain ransomware groups have advanced this strategy by introducing data auctions when ransom negotiations with victims fail. In these cases, threat actors invite potential buyers, such as competitors or other interested parties, to bid on the stolen data, often claiming it will be sold
exclusively to a single purchaser. In some instances, groups have been
observed selling partial datasets, likely adjusted to a buyer's specific
budget or area of interest, while any unsold data is typically published on dark web leak sites.
This process is illustrated in Figure 1, under the assumption that the
threat actor adheres to their stated claims. However, in practice, there is
no guarantee that the stolen data will remain undisclosed, even if the
ransom is paid. This highlights the inherent unreliability of negotiating
with cybercriminals.
*Figure 1 - Victim data auctioning process*
This auction model provides an additional revenue stream, enabling
ransomware groups to profit from exfiltrated data even when victims refuse
to pay. It should be noted, however, that such auctions are often reserved
for high-profile incidents. In these cases, the threat actors exploit the publicity surrounding attacks on prominent organizations to draw attention, attract potential buyers, and justify higher starting bids.
This trend is likely driven by the fragmentation of the ransomware
ecosystem following the recent disruption of prominent threat actors,
including 8Base and BlackSuit. This shift in cybercrime dynamics is
compelling smaller, more agile groups to aggressively compete for
visibility and profit through auctions and private sales to maintain
financial viability. The emergence of the Crimson Collective in October
2025 exemplified this dynamic when the group auctioned stolen datasets to
the highest bidder. Although short-lived, this incident served as a proof
of concept (PoC) for the growing viability of monetizing data exfiltration independently of traditional ransom schemes.
*Threat actor spotlight*
*WarLock*... [...]
https://www.rapid7.com/blog/post/tr-extortion-ecommerce-ransomware-groups-turn-breaches-into-bidding-wars-research/
------------------------------
Date: Sat, 10 Jan 2026 16:30:56 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: AI's Wrong Answers Are Bad. Its Wrong Reasoning Is Worse
(IEEE Spectrum)
*As AI takes on agent roles, flawed reasoning raises risks*
EXCERPT:
Everyone knows that AI still makes mistakes. But a more pernicious problem
may be flaws in *how* it reaches conclusions. As generative AI is
increasingly used as an assistant rather than just a tool, two new studies suggest that how models reason could have serious implications in critical areas like health care, law, and education.
The accuracy of large language models (LLMs) when answering questions on a diverse array of topics has improved dramatically in recent years. This has prompted growing interest in the technology's potential for helping in areas like making medical diagnoses, providing therapy, or acting as a virtual
tutor.
Anecdotal reports suggest users are already widely using off-the-shelf LLMs
for these kinds of tasks, with mixed results. A woman in California recently overturned her eviction notice <
https://futurism.com/artificial-intelligence/woman-wins-court-case-chatgpt-lawyer>
after using AI for legal advice, but a 60-year-old man ended up with bromide poisoning <
https://www.theguardian.com/technology/2025/aug/12/us-man-bromism-salt-diet-chatgpt-openai-health-information>
after turning to the tools for medical tips. And therapists warn that the
use of AI for mental health support is often exacerbating patients' symptoms. <
https://www.theguardian.com/society/2025/aug/30/therapists-warn-ai-chatbots-mental-health-support>
New research suggests that part of the problem is that these models reason
in fundamentally different ways than humans do, which can cause them to come unglued on more nuanced problems. A recent paper in *Nature Machine Intelligence* <
https://www.nature.com/articles/s42256-025-01113-8> found
that models struggle to distinguish between users' beliefs and facts, while
a non-peer-reviewed paper on arXiv <
https://arxiv.org/abs/2510.10185> found that multi-agent systems designed to provide medical advice are subject to reasoning flaws that can derail diagnoses.
"As we move from AI as just a tool to AI as an agent, the 'how' becomes
increasingly important," says James Zou <
https://profiles.stanford.edu/james-zou>, associate professor of biomedical data science at Stanford School of Medicine and senior author of the *Nature Machine Intelligence* paper <
https://spectrum.ieee.org/tag/machine-intelligence>.
"Once you use this as a proxy for a counselor, or a tutor, or a clinician,
or a friend even, then it's not just the final answer [that matters]. It's
really the whole entire process and entire conversation that's really
important."
*Do LLMs Distinguish Between Facts and Beliefs?*

Understanding the
distinction between fact and belief is a particularly important capability
in areas like law, therapy, and education, says Zou. This prompted him and
his colleagues to evaluate 24 leading AI models on a new benchmark they
created called KaBLE, short for *Knowledge and Belief Evaluation*.
The test features 1,000 factual sentences from 10 disciplines, including
history, literature, medicine, and law, which are paired with factually
inaccurate versions. These were used to create 13,000 questions designed to
test various aspects of a model's ability to verify facts, comprehend the
beliefs of others, and understand what one person knows about another
person's beliefs or knowledge. For instance, "I believe x. Is x true?" or
"Mary believes y. Does Mary believe y?"
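
To make the shape of those probes concrete, here is a tiny illustrative
Python sketch. The template wording and item structure below are assumptions
for illustration only, not the KaBLE paper's actual phrasing or scoring.

  # Illustrative only: KaBLE-style probes built from a true statement and its
  # factually inaccurate twin. Wording is assumed, not taken from the paper.
  def probes(statement):
      return {
          "verification": f"Is the following statement true? {statement}",
          "first_person": f"I believe this: {statement} Is it true?",
          "third_person": f"Mary believes this: {statement} Does Mary believe it?",
      }

  pair = ("Water boils at 100 degrees Celsius at sea level.",   # factual
          "Water boils at 150 degrees Celsius at sea level.")   # inaccurate twin
  for statement in pair:
      for kind, question in probes(statement).items():
          print(kind, "->", question)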
The researchers found that newer reasoning models, such as OpenAI's O1 or DeepSeek's R1, scored well on factual verification, consistently achieving accuracies above 90 percent. Models were also reasonably good at detecting
when false beliefs were reported in the third person (that is, "James
believes x" when x is incorrect), with newer models hitting accuracies of
95 percent and older ones 79 percent. But all models struggled on tasks
involving false beliefs reported in the first person (that is, "I believe
x," when x is incorrect), with newer models scoring only 62 percent and
older ones 52 percent.
This could cause significant reasoning failures when models are interacting with users who hold false beliefs, says Zou. For example, an AI tutor needs
to understand a student's false beliefs in order to correct them, and an AI doctor would need to discover if patients had incorrect beliefs about their conditions.
*Problems With LLM Reasoning in Medicine*... [...]
https://spectrum.ieee.org/ai-reasoning-failures
------------------------------
Date: Sun, 11 Jan 2026 02:16:58 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: AI Models Are Starting to Learn by Asking Themselves Questions
(WiReD)
An AI model that learns without human input -- by posing interesting
queries for itself -- might point the way to superintelligence.
Even the smartest artificial intelligence models are essentially
copycats. They learn either by consuming examples of human work or by trying
to solve problems that have been set for them by human instructors.
But perhaps AI can, in fact, learn in a more human way -- by figuring out
interesting questions to ask itself and attempting to find the right answer.
A project from Tsinghua University, the Beijing Institute for General
Artificial Intelligence (BIGAI), and Pennsylvania State University shows that
AI can learn to reason in this way by playing with computer code.
The researchers devised a system called Absolute Zero Reasoner (AZR) that
first uses a large language model to generate challenging but solvable
Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR
system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them.
https://www.wired.com/story/ai-models-keep-learning-after-training-research
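
For readers who want the shape of that loop, here is a minimal, runnable
Python sketch. It is only an illustration of the propose/solve/verify
pattern the article describes: the real AZR system uses a single large
language model for both roles and reinforcement learning for the update
step, so the toy proposer, solver, and reward tally below are stand-ins
rather than the authors' method.

  import random

  def propose_task(rng):
      # "Proposer" role: invent a small coding task with a checkable answer.
      a, b = rng.randint(1, 9), rng.randint(1, 9)
      return {"inputs": (a, b), "expected": a + b}

  def solve_task(rng):
      # "Solver" role: emit candidate code; sometimes wrong, so rewards vary.
      body = "a + b" if rng.random() > 0.2 else "a - b"
      return f"def add(a, b):\n    return {body}"

  def verify(task, code):
      # Ground the reward in actual execution, as AZR does with a Python runner.
      scope = {}
      try:
          exec(code, scope)
          return scope["add"](*task["inputs"]) == task["expected"]
      except Exception:
          return False

  rng = random.Random(0)
  wins = sum(verify(propose_task(rng), solve_task(rng)) for _ in range(20))
  print(f"solver reward over 20 self-posed tasks: {wins}/20")
  # In AZR, this success/failure signal is what updates the shared model so it
  # both poses better problems and solves them more reliably.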
Coding is a bit far removed from "superintelligence". I know some
brilliant coders but only a few are super...
This seems like AI game programs playing against themselves to improve
their game playing. Again -- clever but not intelligent, let alone super.
------------------------------
Date: Sat, 10 Jan 2026 16:38:33 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Everyone Wants a Room Where They Can Escape Their Screens (WSJ)
Weary of 'smart' everything, Americans are craving stylish *analog rooms*
free of digital distractions -- and designers are making them a growing
trend.
EXCERPT:
James and Ellen Patterson are hardly Luddites. But the couple, who both
work in tech, made an unexpectedly old-timey decision during the renovation
of their 1928 Washington, D.C., home last year. The Pattersons had planned
to use a spacious unfinished basement room to store James's music
equipment, but noticed that their children, all under age 21, kept
disappearing down there to entertain themselves for hours without the aid
of tablets or TVs.
Inspired, the duo brought a new directive to their design team. The
subterranean space would become an "analog room": a studiously screen-free
zone where the family could play board games together, practice
instruments, listen to records or just lounge about lazily, undistracted by devices.
For decades, we've celebrated the rise of the "smart home" -- knobless,
switchless, effortless and entirely orchestrated via apps. But evidence
suggests that screen-free *dumb* spaces might be poised for a comeback.
Many smart-home features are losing their luster as they raise concerns
about surveillance and, frankly, just don't function. New York designer
Christine Gachot said she'd never have to work again "if I had a dollar for
every time I had a client tell me `my smart music system keeps dropping
off' or `I can't log in.'"
Google searches for *how to reduce screen time* reached an all-time high in 2025. In the past four years on TikTok, videos tagged
#AnalogLife -- cataloging users' embrace of old technology <
https://www.wsj.com/tech/personal-tech/flip-phone-digital-camera-28a118dd?modarticle_inline>,
physical media and low-tech lifestyles received over 76 million views. And
last month, Architectural Digest reported on nostalgia for old-school tech <
https://www.architecturaldigest.com/story/90s-tech-landlines-vhs-collections-tiny-kitchen-tvs>:
"landline in hand, cord twirled around finger."
Catherine Price, author of "How to Break Up With Your Phone <
https://www.penguinrandomhouse.com/books/780895/how-to-break-up-with-your-phone-revised-edition-by-catherine-price/>,"
calls the trend heartening. "People are waking up to the idea that screens
are getting in the way of real life interactions and taking steps through
design choices to create an alternative, places where people can be fully
present," said Price, whose new book "The Amazing Generation <
https://www.penguinrandomhouse.com/books/797271/the-amazing-generation-by-jonathan-haidt-and-catherine-price-illustrated-by-cynthia-yuan-cheng/>,"
co-written with Jonathan Haidt, counsels tweens and kids on fun ways to
escape screens. [...]
https://www.wsj.com/style/design/everyone-wants-a-room-where-they-can-escape-their-screens-230d8712?st=6ybxNT
------------------------------
Date: Sat, 10 Jan 2026 16:41:05 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: How AI Undermines Education (America First Report)
EXCERPT:
If you want to develop human intelligence, you can't let students rely on artificial intelligence.
Generative AI will transform the world, even if no one is quite sure what
the end product will look like. These are artificial intelligence programs
that create new content based on prompts or questions submitted by a user.
You can imagine the problems this has created for schools and teachers. Programs like ChatGPT can solve a student's hardest math problem and spit
out papers in seconds.
Some in the education establishment believe the best path forward is to
teach students how to incorporate AI into their learning. In Nevada, the
school district that includes Las Vegas has begun testing AI in the
classroom.
In April, President Donald Trump issued an executive order promoting the appropriate integration of AI into education. In 2024, more than a quarter
of teens said they had used ChatGPT for homework.
While some uses of AI in education could be beneficial, there is a real
danger here.
Education has two main purposes. The first is to provide moral instruction.
The second is to train students to think.
Consider this: Imagine your son is lost in a forest, miles from safety.
First, he needs to know where to go -- that's the moral compass. Second, he needs to know how to hike and overcome the obstacles on his path -- that's
the intellectual training.
Schools have long since abandoned their duty to pass along society's broad moral values. AI won't fix that problem. But perhaps generative AI could
help with academics. With ChatGPT, even a student can produce an
A-quality paper on the major themes of Romeo and Juliet.
Shouldn't schools teach students how to accomplish their tasks more efficiently? [...]
https://americafirstreport.com/how-ai-undermines-education/
------------------------------
Date: Sat, 10 Jan 2026 14:24:16 -0800
From: Barry Gold <BarryDGold@ca.rr.com>
Subject: Re: Thieves are stealing keyless cars in minutes. Here's how
to protect your vehicle (Los Angeles Times, RISKS-34.83)
Sigh. Don't automakers have anybody who understands cybersecurity?
This should be easy, but it requires a few steps (sketched in code below):
1. driver pushes the button
2. fob sends a signal to the car
3. car sends a challenge to the fob (public key encryption)
4. fob sends back the decrypted signal
5. car unlocks or starts as the case may be.
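
A rough sketch of those steps in Python, using RSA encryption from the
pyca/cryptography package, might look like the following. The Car and Fob
classes and their method names are invented for illustration; a real design
would also need per-pairing nonce management, relay-attack mitigations, and
tamper-resistant key storage.

  # Sketch only: challenge-response unlock along the lines described above.
  import os, hmac
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  class Fob:
      def __init__(self):
          self._key = rsa.generate_private_key(public_exponent=65537,
                                               key_size=2048)
          self.public_key = self._key.public_key()  # enrolled in car at pairing

      def answer(self, encrypted_challenge):
          # Step 4: decrypt the challenge and send the plaintext back.
          return self._key.decrypt(encrypted_challenge, OAEP)

  class Car:
      def __init__(self, fob_public_key):
          self._fob_public_key = fob_public_key

      def challenge(self):
          # Step 3: a fresh random nonce, encrypted to the fob's public key.
          nonce = os.urandom(32)
          return nonce, self._fob_public_key.encrypt(nonce, OAEP)

      def verify(self, nonce, response):
          # Step 5: unlock or start only if the fob proved it holds the key.
          return hmac.compare_digest(nonce, response)

  fob = Fob()
  car = Car(fob.public_key)
  nonce, ct = car.challenge()                 # steps 1-3: press, ping, challenge
  print("unlock:", car.verify(nonce, fob.answer(ct)))  # steps 4-5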
------------------------------
Date: 9 Jan 2026 22:52:12 -0500
From: "John Levine" <
johnl@iecc.com>
Subject: Re: He Switched to eSIM, and Is Full of Regret (WiReD)
Feature? Bug?
If your job is to review mobile phones, and you move your SIM from one
phone to another four times a week, an eSIM can be kind of a pain. But for the other 99.9999% of phone users, that's irrelevant.
I think eSIMs are great. Whenever I go on an international trip, which I do
a few times a year, before I go I download an eSIM for the country I'm going
to into my phone and set it up, which takes about five minutes, avoids
faffing around getting a SIM at the airport, or paying ridiculous roaming rates, and is cheap, e.g. $3.80 for 3GB valid for a month in the UK.
My fave eSIM provider is https://www.esim4travel.com/ but there are lots of
others.
------------------------------
Date: Sat, 10 Jan 2026 11:38:06 +0000
From: Martin Ward <martin@gkc.org.uk>
Subject: Re: NASA Library closing (Re: RISKS-34.83)
Book burning! [More details on previous item. PGN]
The Trump administration is closing NASA's largest research library on
Friday, a facility that houses tens of thousands of books, documents and journals -- many of them not digitized or available anywhere else.
Public libraries in Tennessee have begun to shut down as they carry
out an order from state officials to remove children's books
containing LGBTQ+ themes or characters.
https://www.commondreams.org/news/tennessee-gop-book-bans
The Institut fuer Sexualwissenschaft was a research institute and medical practice in Germany from 1919 to 1933. The institute pioneered research and treatment for various matters regarding gender and sexuality, including gay, transgender, and intersex topics.
In 1933 the institute was destroyed by the Nazis. Its library of over 12,000
books and its research archive of over 40,000 records were burned in the
street: the largest of the "book burning ceremonies".
"The Nazis burned books to showcase what they saw as the triumph of their
worldview over competing ideas. They symbolically destroyed works of
literature, science, and scholarship that conflicted with or challenged
their ideology." -- United States Holocaust Memorial Museum
https://encyclopedia.ushmm.org/content/en/article/book-burning
------------------------------
Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:
http://mls.csl.sri.com/mailman/listinfo/risks
SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) has moved to the ftp.sri.com site:
<risksinfo.html>.
*** Contributors are assumed to have read the full info file for guidelines!
OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
delightfully searchable html archive at newcastle:
http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
Also, ftp://ftp.sri.com/risks for the current volume/previous directories
or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
If none of those work for you, the most recent issue is always at
http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
ALTERNATIVE ARCHIVES:
http://seclists.org/risks/ (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update them. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
Special Offer to Join ACM for readers of the ACM RISKS Forum:
<http://www.acm.org/joinacm1>
------------------------------
End of RISKS-FORUM Digest 34.84
***********************