** CRYPTO-GRAM
DECEMBER 15, 2025
------------------------------------------------------------
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram's web page [https://www.schneier.com/crypto-gram/].
Read this issue on the web [https://www.schneier.com/crypto-gram/archives/2025/1215.html]
These same essays and news items appear in the Schneier on Security [https://www.schneier.com/] blog, along with a lively and intelligent comment section. An RSS feed is available.
** *** ***** ******* *********** *************
** IN THIS ISSUE:
------------------------------------------------------------
1. More Prompt||GTFO
2. AI and Voter Engagement
3. Legal Restrictions on Vulnerability Disclosure
4. Scam USPS and E-Z Pass Texts and Websites
5. AI as Cyberattacker
6. More on _Rewiring Democracy_
7. IACR Nullifies Election Because of Lost Decryption Key
8. Four Ways AI Is Being Used to Strengthen Democracies Worldwide
9. Huawei and Chinese Surveillance
10. Prompt Injection Through Poetry
11. Banning VPNs
12. Like Social Media, AI Requires Difficult Choices
13. New Anonymous Phone Service
14. Substitution Cipher Based on The Voynich Manuscript
15. AI vs. Human Drivers
16. FBI Warns of Fake Video Scams
17. AIs Exploiting Smart Contracts
18. Building Trustworthy AI Agents
19. Upcoming Speaking Engagements
** *** ***** ******* *********** *************
** MORE PROMPT||GTFO
------------------------------------------------------------
[2025.11.17] [https://www.schneier.com/blog/archives/2025/11/more-promptgtfo.html] The next three in this series [https://www.schneier.com/blog/archives/2025/08/ai-applications-in-cybersecurity.html] on online events highlighting interesting uses of AI in cybersecurity are online: #4 [https://youtube.com/playlist?list=PLXz1MhBqAGJwkLpTxQFpe4hu4sBCuUGuN&si=cETRhpsER5rN-rOo], #5 [https://youtube.com/playlist?list=PLXz1MhBqAGJw-pySlsaAyuMv1OQKsbVtR&si=QvqqsmJbsbCetVWi], and #6 [https://youtube.com/playlist?list=PLXz1MhBqAGJw9nmtpC7USh-s9Ku6KiFCf&si=FRNSy7ZMAvzu6x2F]. Well worth watching.
** *** ***** ******* *********** *************
** AI AND VOTER ENGAGEMENT
------------------------------------------------------------
[2025.11.18] [https://www.schneier.com/blog/archives/2025/11/ai-and-voter-engagement.html] Social media has been a familiar, even mundane, part of life for nearly two decades. It can be easy to forget it was not always that way.
In 2008, social media was just emerging into the mainstream. Facebook [https://thefulcrum.us/media-technology/news-literacy-project] reached 100 million users [https://www.cnet.com/culture/facebook-hits-100-million-users/] that summer. And a singular candidate was integrating social media into his political campaign: Barack Obama. His campaign's use of social media was so bracingly innovative, so impactful, that it was viewed by journalist David Talbot [https://www.technologyreview.com/2008/08/19/219185/how-obama-really-did-it-2/] and others as the strategy that enabled the first-term Senator to win the White House.
Over the past few years, a new technology has become mainstream: AI [https://thefulcrum.us/the-new-world-of-ai]. But still, no candidate has unlocked AI's potential to revolutionize political campaigns. Americans have three more years to wait before casting their ballots in another Presidential election, but we can look at the 2026 midterms and examples from around the globe for signs of how that breakthrough might occur.
* HOW OBAMA DID IT
Rereading the contemporaneous reflections of the _New York Times'_ late media critic, David Carr [https://www.nytimes.com/2008/11/10/business/media/10carr.html], on Obama's campaign reminds us of just how new social media felt in 2008. Carr positions it within a now-familiar lineage of revolutionary communications technologies from newspapers to radio to television to the internet.
The Obama campaign and administration demonstrated that social media was different from those earlier communications technologies, including the pre-social internet. Yes, increasing numbers [https://www.pewresearch.org/internet/2009/04/15/the-internets-role-in-campaign-2008/] of voters were getting their news from the internet, and content about the then-Senator sometimes made a splash by going viral [https://time.com/archive/6681494/obamas-viral-marketing-campaign/]. But those were still broadcast communications: one voice reaching many. Obama found ways to connect voters to each other.
In describing what social media revolutionized in campaigning, Carr quotes campaign vendor Blue State Digital's Thomas Gensemer: "People will continue to expect a conversation, a two-way relationship that is a give and take."
The Obama team made some earnest efforts to realize this vision. His transition team launched change.gov [http://change.gov], the website where the campaign collected a "Citizen's Briefing Book" of public comment. Later, his administration built We the People [https://obamawhitehouse.archives.gov/blog/2015/07/23/look-back-we-people-petitions-2010-today], an online petitioning platform.
But the lasting legacy of Obama's 2008 campaign, as political scientists Hahrie Han and Elizabeth McKenna chronicled, was pioneering online "relational organizing [https://www.bostonreview.net/articles/learning-from-obamas-campaign/]." This technique enlisted individuals as organizers to activate their friends in a self-perpetuating web of relationships.
Perhaps because of the Obama campaign's close association with the method, relational organizing has been touted repeatedly as the linchpin of Democratic campaigns: in 2020 [https://www.wired.com/story/relational-organizing-apps-2020-campaign/], 2024 [https://www.cbsnews.com/news/harris-trump-2024-election-ground-game/], and today [https://thedemocraticstrategist.org/2025/03/can-relational-organizing-save-the-democratic-party/]. But research [https://www.turnoutnation.org/thereport] by non-partisan groups like Turnout Nation [https://www.turnoutnation.org/thereport] and right-aligned groups like the Center for Campaign Innovation [https://www.campaigninnovation.org/research/measuring-the-power-of-personal-connection-a-relational-organizing-field-test] has also empirically validated the effectiveness of the technique for inspiring voter turnout within connected groups.
The Facebook of 2008 worked well for relational organizing. It gave users tools to connect and promote ideas to the people they know: college classmates, neighbors, friends from work or church. But the nature of social networking has changed since then.
For the past decade, according to Pew Research [https://www.pewresearch.org/internet/2024/01/31/americans-social-media-use/], Facebook use has stalled and lagged behind YouTube, while Reddit and TikTok have surged. These platforms are less useful for relational organizing, at least in the traditional sense. YouTube is organized more like broadcast television, where content creators produce content disseminated on their own channels in a largely one-way communication to their fans. Reddit gathers users worldwide in forums (subreddits) organized primarily on topical interest. The endless feed of TikTok's "For You" page disseminates engaging content with little ideological or social commonality. None of these platforms shares the essential feature of Facebook c. 2008: an organizational structure that emphasizes direct connection to people that users have direct social influence over.
* AI AND RELATIONAL ORGANIZING
Ideas and messages might spread virally through modern social channels, but they are not where you convince your friends to show up at a campaign rally. Today's platforms are spaces for political hobbyism [https://theharvardpoliticalreview.com/political-hobbyism-young-volunteers/], where you express your political feelings and see others express theirs.
Relational organizing works when one person's action inspires others to do the same. That's inherently a chain of human-to-human connection. If my AI assistant inspires your AI assistant, no human notices and no one's vote changes. But key steps in the human chain can be assisted by AI. Tell your phone's AI assistant to craft a personal message [https://www.washingtonpost.com/technology/2025/03/26/best-ai-email-assistant/] to one friend -- or a hundred -- and it can do it.
So if a campaign hits you at the right time with the right message, they might persuade you to task your AI assistant to ask your friends to donate or volunteer. The result can be something more than a form letter; it could be automatically drafted based on the entirety of your email or text correspondence with that friend. It could include references to your discussions of recent events, or past campaigns, or shared personal experiences. It could sound as authentic as if you'd written it from the heart, but scaled to everyone in your address book.
Research [https://www.pnas.org/doi/10.1073/pnas.2412815122] suggests that AI can generate and perform written political messaging about as well as humans. AI will surely play a tactical role [https://prospect.org/power/2025-10-10-ai-artificial-intelligence-campaigns-midterms/] in the 2026 midterm campaigns, and some candidates may even use it for relational organizing in this way.
* (ARTIFICIAL) IDENTITY POLITICS
For AI to be truly transformative of politics, it must change the way campaigns work. And we are starting to see that in the US.
The earliest uses of AI in American political campaigns are, to be polite, uninspiring. Candidates viewed them as just another tool [https://www.nytimes.com/2023/03/28/us/politics/artificial-intelligence-2024-campaigns.html] to optimize an endless stream of email and text message appeals, to ramp up political vitriol [https://www.politico.com/live-updates/2025/09/29/congress/trump-ai-video-deepfake-schumer-jeffries-00586048], to harvest data [https://www.politico.com/news/2024/10/30/data-voters-political-violence-00186132] on voters and donors, or merely as a stunt [https://www.theverge.com/2023/1/27/23574000/first-ai-chatgpt-written-speech-congress-floor-jake-auchincloss].
Of course, we have seen the rampant production and spread of AI-powered deepfakes and misinformation [https://thefulcrum.us/media-technology/news-literacy-project]. This is already impacting the key 2026 Senate races, which are likely to attract hundreds of millions [https://www.opensecrets.org/elections-overview/most-expensive-races] of dollars in financing. Roy Cooper [https://www.charlotteobserver.com/opinion/article311852771.html], Democratic candidate for US Senate from North Carolina, and Abdul El-Sayed [https://www.yahoo.com/news/articles/us-senate-candidate-el-sayed-034045043.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAALAUtriqW9v5IScANZLgCTf0zNOLsydt31He4PT4kt0k9RenTEqSTrC9cIdtQRGFvZQ8zHeMs-97cUvacWcG_e4X9h2MdbCfNN_2-K8z3D1PMtBbTc8Wir-A8QHlALBo-vd2Kq9wDDvHrMraYXDiitdW2DB3Ysr8vPIyCpqF1ts3], Democratic candidate for Senate from Michigan, were both targeted by viral deepfake attacks in recent months. This may reflect [https://www.nbcnews.com/tech/internet/truth-social-trump-embraced-ai-media-attack-foes-boost-image-rcna234978] a growing trend in Donald Trump's Republican party in the use of AI-generated imagery to build up GOP candidates and assail the opposition.
And yet, in the global elections of 2024, AI was used more memetically [https://restofworld.org/2025/global-elections-ai-use/] than deceptively. So far, conservative and far-right parties seem to have adopted this most aggressively. The ongoing rise of Germany's far-right populist AfD party has been credited to its use of AI to generate nostalgic and evocative [https://www.politico.eu/article/germany-far-right-harness-artificial-intelligence-win-election/] (and, to many, offensive) campaign images, videos, and music and, seemingly as a result, they have dominated TikTok [https://www.kas.de/documents/d/guest/ki-und-wahlen-1]. Because most social platforms' algorithms are tuned to reward media that generates an emotional response, this counts as a double use of AI: to generate content and to manipulate its distribution.
AI can also be used to generate politically useful, though artificial, identities. These identities can fulfill different roles than humans in campaigning and governance [https://thefulcrum.us/corruption/corruption-perception-index-2023-2667125422] because they have differentiated traits. They can't be imprisoned for speaking out against the state, can be positioned (legitimately or not) as unsusceptible to bribery, and can be forced to show up when humans will not.
In Venezuela [https://www.theguardian.com/world/article/2024/aug/27/venezuela-journalists-nicolas-maduro-artificial-intelligence-media-election], journalists have turned to AI avatars -- artificial newsreaders -- to report anonymously on issues that would otherwise elicit government retaliation. Albania recently "appointed [https://www.bbc.com/news/articles/cm2znzgwj3xo]" an AI to a ministerial post responsible for procurement, claiming that it would be less vulnerable to bribery than a human. In Virginia, both in 2024 [https://www.reuters.com/world/us/virginia-congressional-candidate-creates-ai-chatbot-debate-stand-in-incumbent-2024-10-08/] and again this year [https://www.governing.com/politics/a-fake-debate-in-virginia-raises-real-questions-about-ai-in-politics], candidates have used AI avatars as artificial stand-ins for opponents that refused to debate them.
And yet, none of these examples, whether positive or negative, pursue the promise of the Obama campaign: to make voter engagement a "two-way conversation" on a massive scale.
The closest so far to fulfilling that vision anywhere in the world may be Japan's new political party, Team Mirai [https://team-mir.ai]. It started in 2024, when an independent Tokyo gubernatorial candidate, Anno Takahiro [https://futurepolis.substack.com/p/meet-your-ai-politician-of-the-future], used an AI avatar on YouTube to respond to 8,600 constituent questions over a seventeen-day continuous livestream. He collated hundreds of comments on his campaign manifesto into a revised policy platform. While he didn't win his race, he shot up to a fifth place [https://en.wikipedia.org/wiki/2024_Tokyo_gubernatorial_election#Results] finish among a record 56 candidates.
Anno was recently elected [https://mainichi.jp/english/articles/20250720/p2a/00m/0na/011000c] to the upper house of the federal legislature as the founder of a new party with a 100-day plan [https://note.com/annotakahiro24/n/nd648962bd411] to bring his vision of a "public listening AI" to the whole country. In the early stages of that plan, they've invested their share of Japan's 32 billion yen in party grants [https://www.nippon.com/en/japan-data/h02362/] -- public subsidies for political parties -- to hire engineers building digital civic infrastructure for Japan. They've already created platforms to provide transparency [https://marumie.team-mir.ai/o/team-mirai] for party expenditures, and to use AI to make legislation [https://gikai.team-mir.ai/] in the Diet easy, and are meeting with engineers from US-based Jigsaw Labs (a Google company) to learn from international examples [https://note.com/team_mirai_jp/n/n0bbbcc21c752] of how AI can be used to power participatory democracy.
Team Mirai has yet to prove that it can get a second member elected to the Japanese Diet, let alone to win substantial power, but they're innovating and demonstrating new ways of using AI to give people a way to participate in politics that we believe is likely to spread.
* ORGANIZING WITH AI
AI could be used in the US in similar ways. Following American federalism's longstanding model of "laboratories of democracy," we expect the most aggressive campaign innovation to happen at the state and local level.
D.C. Mayor Muriel Bowser is partnering [https://www.govtech.com/artificial-intelligence/washington-d-c-will-pilot-ai-at-public-listening-session] with MIT and Stanford labs to use the AI-based tool deliberation.io [http://deliberation.io] to capture wide-scale public feedback in city policymaking about AI. Her administration said [https://octo.dc.gov/release/bowser-administration-announces-first-its-kind-ai-pilot-program-new-platform-mit-governance] that using AI in this process allows "the District to better solicit public input to ensure a broad range of perspectives, identify common ground, and cultivate solutions that align with the public interest."
It remains to be seen how central this will become to Bowser's expected [https://www.nbcwashington.com/news/politics/bowser-future-legacy-chuck-todd/3992167/] re-election campaign in 2026, but the technology has legitimate potential to be a prominent part of a broader program to rebuild trust in government. This is a trail blazed by Taiwan a decade ago. The vTaiwan [https://www.theguardian.com/world/article/2024/aug/17/audrey-tang-toxic-social-media-fake-news-taiwan-trans-government-internet] initiative showed how digital tools like Pol.is [https://pol.is/home], which uses machine learning [https://compdemocracy.org/Analysis/] to make sense of real-time constituent feedback, can scale participation in democratic processes and radically improve trust in government. Similar AI listening processes have been used in Kentucky [https://www.pbs.org/newshour/show/how-a-kentucky-community-is-using-ai-to-help-people-find-common-ground], France [https://about.make.org/articles-en/citizens-convention-on-end-of-life-with-make-org-the-esec-offers-an-innovative-ai-platform-to-enable-the-general-public-and-parliamentarians-to-take-greater-ownership-of-the-debates-held-by-citizens], and Germany [https://compdemocracy.org/Case-studies/2018-germany-aufstehen/].
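Roughly, the machine learning in Pol.is-style tools works on a vote matrix: participants agree, disagree, or pass on short statements, and the matrix is reduced and clustered to surface opinion groups and broadly shared statements. The following is only a toy sketch with synthetic data -- the real pipeline, documented at the Analysis link above, is considerably more sophisticated.
    # Toy Pol.is-style analysis: cluster participants by their votes and flag
    # statements with broad agreement. Synthetic data; illustrative only.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    votes = rng.choice([-1, 0, 1], size=(200, 30))  # 200 participants x 30 statements

    coords = PCA(n_components=2).fit_transform(votes)             # project participants to 2-D
    groups = KMeans(n_clusters=3, n_init=10).fit_predict(coords)  # opinion groups

    consensus = np.argsort(-votes.mean(axis=0))[:5]  # statements most agreed with overall
    print("opinion group sizes:", np.bincount(groups))
    print("candidate consensus statements:", consensus)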
Even if campaigns like Bowser's don't adopt this kind of AI-facilitated listening and dialog, expect it to be an increasingly prominent part of American public debate. Through a partnership with Jigsaw, Scott Rasmussen's Napolitan Institute will use AI to elicit and synthesize the views of at least five Americans from every Congressional district in a project called "We the People [https://www.forbes.com/sites/richardnieva/2025/08/14/inside-googles-plan-to-use-ai-to-survey-americans-on-their-political-views/]." Timed to coincide with the country's 250th anniversary in 2026, expect the results to be promoted during the heat of the midterm campaign and to stoke interest in this kind of AI-assisted political sensemaking.
In the year where we celebrate the American republic's semiquincentennial and continue a decade-long debate about whether or not Donald Trump and the Republican party remade in his image are fighting for the interests of the working class, representation will be on the ballot in 2026. Midterm election candidates will look for any way they can get an edge. For all the risks it poses to democracy, AI presents a real opportunity, too, for politicians to engage voters en masse while factoring their input into their platform and message. Technology isn't going to turn an uninspiring candidate into Barack Obama, but it gives any aspirant to office the capability to try to realize the promise that swept him into office.
_This essay was written with Nathan E. Sanders, and originally appeared in The Fulcrum [https://thefulcrum.us/media-technology/artificial-intelligence-in-politics]._
** *** ***** ******* *********** *************
** LEGAL RESTRICTIONS ON VULNERABILITY DISCLOSURE
------------------------------------------------------------
[2025.11.19] [https://www.schneier.com/blog/archives/2025/11/legal-restrictions-on-vulnerability-disclosure.html] Kendra Albert gave an excellent talk [https://www.youtube.com/watch?v=lUe3uUvIyT0] at USENIX Security this year, pointing out that the legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities -- exactly the opposite of what the responsible disclosure movement of the early 2000s was supposed to prevent. This is the talk.
Thirty years ago, a debate raged over whether vulnerability disclosure was good for computer security. On one side, full disclosure advocates argued that software bugs weren't getting fixed and wouldn't get fixed if companies that made insecure software weren't called out publicly. On the other side, companies argued that full disclosure led to exploitation of unpatched vulnerabilities, especially if they were hard to fix. After blog posts, public debates, and countless mailing list flame wars, there emerged a compromise solution: coordinated vulnerability disclosure, where vulnerabilities were disclosed after a period of confidentiality where vendors can attempt to fix things. Although full disclosure fell out of fashion, disclosure won and security through obscurity lost. We've lived happily ever after since.
Or have we? The move towards paid bug bounties and the rise of platforms that manage bug bounty programs for security teams has changed the reality of disclosure significantly. In certain cases, these programs require agreement to contractual restrictions. Under the status quo, that means that software companies sometimes funnel vulnerabilities into bug bounty management platforms and then condition submission on confidentiality agreements that can prohibit researchers from ever sharing their findings.
In this talk, I'll explain how confidentiality requirements for managed bug bounty programs restrict the ability of those who attempt to report vulnerabilities to share their findings publicly, compromising the bargain at the center of the CVD process. I'll discuss what contract law can tell us about how and when these restrictions are enforceable, and more importantly, when they aren't, providing advice to hackers around how to understand their legal rights when submitting. Finally, I'll call upon platforms and companies to adapt their practices to be more in line with the original bargain of coordinated vulnerability disclosure, including by banning agreements that require non-disclosure.
And this [https://www.schneier.com/essays/archives/2007/01/schneier_full_disclo.html] is me from 2007, talking about "responsible disclosure":
This was a good idea -- and these days it's normal procedure -- but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.
** *** ***** ******* *********** *************
** SCAM USPS AND E-Z PASS TEXTS AND WEBSITES
------------------------------------------------------------
[2025.11.20] [https://www.schneier.com/blog/archives/2025/11/scam-usps-and-e-z-pass-texts-and-websites.html] Google has filed a complaint in court that details the scam [https://arstechnica.com/tech-policy/2025/11/google-vows-to-stop-scam-e-z-pass-and-usps-texts-plaguing-americans/]:
In a complaint filed Wednesday, the tech giant accused "a cybercriminal group in China" of selling "phishing for dummies" kits. The kits help unsavvy fraudsters easily "execute a large-scale phishing campaign," tricking hordes of unsuspecting people into "disclosing sensitive information like passwords, credit card numbers, or banking information, often by impersonating well-known brands, government agencies, or even people the victim knows."
These branded "Lighthouse" kits offer two versions of software, depending on whether bad actors want to launch SMS and e-commerce scams. "Members may subscribe to weekly, monthly, seasonal, annual, or permanent licenses," Google alleged. Kits include "hundreds of templates for fake websites, domain set-up tools for those fake websites, and other features designed to dupe victims into believing they are entering sensitive information on a legitimate website."
Google's filing said the scams often begin with a text claiming that a toll fee is overdue or a small fee must be paid to redeliver a package. Other times they appear as ads -- sometimes even Google ads, until Google detected and suspended accounts -- luring victims by mimicking popular brands. Anyone who clicks will be redirected to a website to input sensitive information; the sites often claim to accept payments from trusted wallets like Google Pay.
** *** ***** ******* *********** *************
** AI AS CYBERATTACKER
------------------------------------------------------------
[2025.11.21] [https://www.schneier.com/blog/archives/2025/11/ai-as-cyberattacker.html] From Anthropic [https://www.anthropic.com/news/disrupting-AI-espionage]:
In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI's "agentic" capabilities to an unprecedented degree -- using AI not just as an advisor, but to execute the cyberattacks themselves.
The threat actor -- whom we assess with high confidence was a Chinese state-sponsored group -- manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.
[...]
The attack relied on several features of AI models that did not exist, or were in much more nascent form, just a year ago:
1. _Intelligence_. Models' general levels of capability have increased to the point that they can follow complex instructions and understand context in ways that make very sophisticated tasks possible. Not only that, but several of their well-developed specific skills -- in particular, software coding -- lend themselves to being used in cyberattacks.
2. _Agency_. Models can act as agents -- that is, they can run in loops where they take autonomous actions, chain together tasks, and make decisions with only minimal, occasional human input.
3. _Tools_. Models have access to a wide array of software tools (often via the open standard Model Context Protocol). They can now search the web, retrieve data, and perform many other actions that were previously the sole domain of human operators. In the case of cyberattacks, the tools might include password crackers, network scanners, and other security-related software.
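The "agency" and "tools" features describe a now-standard software pattern: the model chooses an action, a tool executes it, and the observation is fed back until the model decides it is done. Here is a minimal, hypothetical sketch of that loop in Python; the llm() call and tool registry are placeholders, not Anthropic's implementation or any particular product.
    # Minimal agentic loop sketch: plan, act via a tool, observe, repeat.
    def run_agent(goal, llm, tools, max_steps=20):
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            # Ask the model for its next action given everything observed so far.
            decision = llm("\n".join(history))  # e.g. {"tool": "scan", "args": {...}} or {"done": "summary"}
            if "done" in decision:
                return decision["done"]
            result = tools[decision["tool"]](**decision["args"])  # execute the chosen tool
            history.append(f"{decision['tool']} returned: {result}")
        return "step limit reached"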
** *** ***** ******* *********** *************
** MORE ON _REWIRING DEMOCRACY_
------------------------------------------------------------
[2025.11.21] [https://www.schneier.com/blog/archives/2025/11/71226.html] It's been a month since _Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship_ [https://www.schneier.com/books/rewiring-democracy/] was published. From what we know, sales are good.
Some of the book's forty-three chapters are available online: chapters 2 [https://time.com/7331883/how-ai-will-transform-democracy/], 12 [https://pghrev.com/being-a-politician/], 28 [https://thepreamble.com/p/rewiring-democracy], 34 [https://newpublic.substack.com/p/2ddffc17-a033-4f98-83fa-11376b30c6cd], 38 [https://ai-frontiers.org/articles/ai-will-be-your-personal-political-proxy], and 41 [https://builtin.com/articles/principles-ai-democracy].
We need more reviews -- six on Amazon is not enough [https://www.amazon.com/Rewiring-Democracy-Transform-Government-Citizenship/dp/0262049945], and no one has yet posted a viral TikTok review. One review was published [https://www.nature.com/articles/d41586-025-03718-w] in _Nature_ and another on the RSA Conference website [https://www.rsaconference.com/library/blog/bens-book-of-the-month-rewiring-democracy], but more would be better. If you've read the book, please leave a review somewhere.
My coauthor and I have been doing all sorts of book events, both online and in person. This book event [https://www.youtube.com/watch?v=gy-w4C6vfOc], with Danielle Allen at the Harvard Kennedy School Ash Center, is particularly good. We also have been doing a ton of podcasts, both separately and together. They're all on the book's homepage [https://www.schneier.com/books/rewiring-democracy/].
There are two live book events in December. If you're in Boston, come see us [https://mitmuseum.mit.edu/programs/author-talk-rewiring-democracy-how-ai-will-transform-our-politics-government-and-citizenship] at the MIT Museum on 12/1. If you're in Toronto, you can see me [https://munkschool.utoronto.ca/event/rewiring-democracy] at the Munk School at the University of Toronto on 12/2.
I'm also doing a live AMA on the book on the RSA Conference website on 12/16. Register here [https://rsaconference.registration.goldcast.io/events/3c67940f-c22b-4913-b6bf-1e6ba333ac5e].
** *** ***** ******* *********** *************
** IACR NULLIFIES ELECTION BECAUSE OF LOST DECRYPTION KEY
------------------------------------------------------------
[2025.11.24] [https://www.schneier.com/blog/archives/2025/11/iacr-nullifies-election-because-of-lost-decryption-key.html] The International Association for Cryptologic Research -- the academic cryptography association that's been putting on conferences like Crypto (back when "crypto" meant "cryptography") and Eurocrypt since the 1980s -- had to nullify [https://www.iacr.org/news/item/27138] an online election when trustee Moti Yung lost his decryption key.
For this election and in accordance with the bylaws of the IACR, the three members of the IACR 2025 Election Committee acted as independent trustees, each holding a portion of the cryptographic key material required to jointly decrypt the results. This aspect of Helios' design ensures that no two trustees could collude to determine the outcome of an election or the contents of individual votes on their own: all trustees must provide their decryption shares.
Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share. As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election.
The group will redo the election, but this time setting a 2-of-3 threshold scheme for decrypting the results, instead of requiring all three.
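As a toy illustration of why the threshold change matters -- this is a generic Shamir-style sketch in Python, not Helios's actual threshold-decryption machinery -- here is how a 2-of-3 sharing of a key survives the loss of any single share:
    # Toy 2-of-3 Shamir sharing over a prime field: the secret is the constant
    # term of a random degree-1 polynomial, and any two of the three shares
    # recover it by Lagrange interpolation at x = 0.
    import secrets

    P = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

    def split_2_of_3(secret):
        """Return three shares (x, f(x)) of a degree-1 polynomial f with f(0) = secret."""
        a1 = secrets.randbelow(P)
        return [(x, (secret + a1 * x) % P) for x in (1, 2, 3)]

    def recover(share_a, share_b):
        """Lagrange-interpolate f(0) from any two distinct shares."""
        (x1, y1), (x2, y2) = share_a, share_b
        l1 = x2 * pow(x2 - x1, -1, P)  # Lagrange basis polynomials evaluated at x = 0
        l2 = x1 * pow(x1 - x2, -1, P)
        return (y1 * l1 + y2 * l2) % P

    key = secrets.randbelow(P)     # stand-in for a trustee decryption key
    s1, s2, s3 = split_2_of_3(key)
    assert recover(s1, s3) == key  # trustee 2's share can be lost
    assert recover(s2, s3) == key  # or trustee 1's
With the original all-of-three arrangement, losing any one share makes the result unrecoverable; with a degree-1 polynomial, any two shares suffice.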
News [https://arstechnica.com/security/2025/11/cryptography-group-cancels-election-results-after-official-loses-secret-key/] articles [https://www.nytimes.com/2025/11/21/world/cryptography-group-lost-election-results.html?smid=nytcore-android-share].
** *** ***** ******* *********** *************
** FOUR WAYS AI IS BEING USED TO STRENGTHEN DEMOCRACIES WORLDWIDE
------------------------------------------------------------
[2025.11.25] [https://www.schneier.com/blog/archives/2025/11/four-ways-ai-is-being-used-to-strengthen-democracies-worldwide.html] Democracy is colliding with the technologies of artificial intelligence. Judging from the audience reaction at the recent World Forum on Democracy [https://www.coe.int/en/web/world-forum-democracy] in Strasbourg, the general expectation is that democracy will be the worse for it. We have another narrative. Yes, there are risks to democracy from AI, but there are also opportunities.
We have just published the book _Rewiring Democracy: How AI will Transform Politics, Government, and Citizenship_ [https://mitpress.mit.edu/9780262049948/rewiring-democracy/]. In it, we take a clear-eyed view of how AI is undermining confidence in our information ecosystem, how the use of biased AI can harm constituents of democracies and how elected officials with authoritarian tendencies can use it to consolidate power. But we also give positive examples of how AI is transforming democratic governance and politics for the better.
Here are four such stories unfolding right now around the world, showing how AI is being used by some to make democracy better, stronger, and more responsive to people.
* JAPAN
Last year, then 33-year-old engineer Takahiro Anno was a fringe candidate for governor of Tokyo. Running as an independent candidate, he ended up coming in fifth in a crowded field of 56 [https://www.nytimes.com/2024/07/06/world/asia/tokyo-governors-election.html], largely thanks to the unprecedented use of an authorized AI avatar. That avatar answered 8,600 questions from voters [https://futurepolis.substack.com/p/meet-your-ai-politician-of-the-future] on a 17-day continuous YouTube livestream and garnered the attention of campaign innovators worldwide.
Two months ago, Anno-san was elected [https://mainichi.jp/english/articles/20250722/p2a/00m/0na/016000c] to Japan's upper legislative chamber, again leveraging the power of AI to engage constituents -- this time answering more than 20,000 questions [https://note-com.translate.goog/annotakahiro24/n/n4ec669d391dd?_x_tr_sl=ja&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=sc&_x_tr_hist=true]. His new party, Team Mirai, is also an AI-enabled civic technology shop, producing software aimed at making governance better and more participatory. The party is leveraging its share of Japan's public funding for political parties to build the Mirai Assembly [https://note.com/team_mirai_jp/n/nd1656aa5f86d] app, enabling constituents to express opinions on and ask questions about bills in the legislature, and to organize those expressions using AI. The party promises that its members will direct their questioning [https://globalnation.inquirer.net/291183/team-mirai-in-spotlight-with-aim-to-update-democracy-with-tech] in committee hearings based on public input.
* BRAZIL
Brazil is notoriously litigious [https://www.npr.org/sections/parallels/2014/11/05/359830235/brazil-the-land-of-many-lawyers-and-very-slow-justice], with even more lawyers per capita than the US. The courts are chronically overwhelmed with cases and the resultant backlog costs the government billions to process. Estimates are that the Brazilian federal government spends about 1.6% of GDP per year operating the courts [https://www1.folha.uol.com.br/internacional/en/business/2024/01/brazil-leads-spending-on-courts-among-53-countries.shtml] and another 2.5% to 3% of GDP issuing court-ordered payments [https://valorinternational.globo.com/economy/news/2025/04/11/federal-court-losses-already-consume-25percent-of-gdp-annually.ghtml] from lawsuits the government has lost.
Since at least 2019, the Brazilian government has aggressively adopted [https://www.techandjustice.bsg.ox.ac.uk/research/brazil] AI to automate procedures throughout its judiciary. AI is not making judicial decisions, but aiding in distributing caseloads, performing legal research, transcribing hearings, identifying duplicative filings, preparing initial orders for signature and clustering similar cases for joint consideration: all things to make the judiciary system work more efficiently. And the results are significant; Brazil's federal supreme court backlog, for example, dropped in 2025 to its lowest levels in 33 years [https://noticias-stf-wp-prd.s3.sa-east-1.amazonaws.com/wp-content/uploads/wpallimport/uploads/2025/07/01191513/PRESTACAO-JURISDICIONAL-2025-4.pdf].
While it seems clear that the courts are realizing efficiency benefits from leveraging AI, there is a postscript to the courts' AI implementation project over the past five-plus years: the litigators are using these tools, too. Lawyers are using AI assistance to file cases in Brazilian courts at an unprecedented rate [https://restofworld.org/2025/latin-america-judges-ai-crimes/], with new cases growing by nearly 40% in volume over the past five years.
It's not necessarily a bad thing for Brazilian litigators to regain the upper hand in this arms race. It has been argued that litigation, particularly against the government, is a vital form of civic participation [https://www.jstor.org/stable/30245797], essential to the self-governance function [https://scholarlycommons.law.emory.edu/cgi/viewcontent.cgi?article=1147&context=elj] of democracy. Other democracies' court systems should study and learn from Brazil's experience and seek to use technology to maximize the bandwidth and liquidity of the courts to process litigation.
* GERMANY
Now, we move to Europe and innovations in informing voters. Since 2002, the German Federal Agency for Civic Education has operated a non-partisan voting guide called Wahl-o-Mat [https://www.wahl-o-mat.de/bundestagswahl2025/app/main_app.html]. Officials convene an editorial team of 24 young voters (under 26 and selected for diversity) with experts from science and education to develop a slate of 80 questions. The questions are put to all registered German political parties. The responses are narrowed down to 38 key topics and then published online in a quiz format that voters can use to identify the party whose platform they most identify with.
In the past two years, outside groups have been innovating alternatives to the official Wahl-o-Mat guide that leverage AI. First came Wahlweise [https://www.heise.de/en/news/Electorally-How-artificial-intelligence-should-help-with-voting-decisions-9824511.html], a product of the German AI company AIUI. Second, students at the Technical University of Munich deployed an interactive AI system called Wahl.chat [https://www.cit.tum.de/en/cit/news/article/wahlchat/]. This tool was used by more than 150,000 people [https://www.tum.de/en/news-and-events/all-news/press-releases/details/technology-for-democracy] within the first four months. In both cases, instead of having to read static webpages about the positions of various political parties, citizens can engage in an interactive conversation with an AI system to more easily get the same information contextualized to their individual interests and questions.
However, German researchers studying the reliability of such AI tools ahead of the 2025 German federal election raised significant concerns [https://arxiv.org/abs/2502.15568] about bias and "hallucinations" -- AI tools making up false information. Acknowledging the potential of the technology to increase voter informedness and party transparency, the researchers recommended adopting scientific evaluations comparable to those used in the Agency for Civic Education's official tool to improve and institutionalize the technology.
* UNITED STATES
Finally, the US -- in particular, California, home to CalMatters [https://calmatters.org], a non-profit, nonpartisan news organization. Since 2023, its Digital Democracy [https://calmatters.digitaldemocracy.org] project has been collecting every public utterance of California elected officials -- every floor speech, comment made in committee and social media post, along with their voting records, legislation, and campaign contributions -- and making all that information available in a free online platform.
CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets [https://dicktofel.substack.com/p/bringing-digital-democracy-to-california] feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting.
This is not AI replacing human journalists; it is a civic watchdog organization using technology to feed evidence-based insights to human reporters. And it's no coincidence that this innovation arose from a new kind of media institution -- a non-profit news agency. As the watchdog function of the fourth estate continues to be degraded by the decline of newspapers' business models, this kind of technological support is a valuable contribution to help a reduced number of human journalists retain something of the scope of action and impact our democracy relies on them for.
These are just four of many stories from around the globe of AI helping to make democracy stronger. The common thread is that the technology is distributing rather than concentrating power. In all four cases, it is being used to assist people performing their democratic tasks -- politics in Japan, litigation in Brazil, voting in Germany [https://www.theguardian.com/world/germany] and watchdog journalism in California -- rather than replacing them.
In none of these cases is the AI doing something that humans can't perfectly competently do. But in all of these cases, we don't have enough available humans to do the jobs on their own. A sufficiently trustworthy AI can fill in gaps: amplify the power of civil servants and citizens, improve efficiency, and facilitate engagement between government and the public.
One of the barriers towards realizing this vision more broadly is the AI market itself. The core technologies are largely being created and marketed by US tech giants. We don't know the details of their development: on what material they were trained, what guardrails are designed to shape their behavior, what biases and values are encoded into their systems. And, even worse, we don't get a say in the choices associated with those details or how they should change over time. In many cases, it's an unacceptable risk to use these for-profit, proprietary AI systems in democratic contexts.
To address that, we have long advocated [https://slate.com/technology/2023/04/ai-public-option.html] for the development of "public AI": models and AI systems that are developed under democratic control and deployed for public benefit, not sold by corporations to benefit their shareholders. The movement for this is growing worldwide.
Switzerland has recently released the world's most powerful and fully realized public AI model. It's called Apertus [https://www.swiss-ai.org/apertus], and it was developed jointly by public Swiss institutions: the universities ETH Zurich and EPFL, and the Swiss National Supercomputing Centre (CSCS). The development team has made it entirely open source -- open data, open code, open weights -- and free for anyone to use. No illegally acquired copyrighted works were used in its training. It doesn't exploit poorly paid human laborers from the global south. Its performance [https://huggingface.co/swiss-ai/Apertus-70B-2509] is about where the large corporate giants were a year ago, which is more than good enough for many applications. And it demonstrates that it's not necessary to spend trillions [https://www.forbes.com/sites/rashishrivastava/2025/11/07/why-sam-altman-wont-be-on-the-hook-for-openais-massive-spending-spree/] of dollars creating these models. Apertus takes a huge step toward realizing the vision of an alternative to big tech-controlled corporate AI.
AI technology is not without its costs and risks, and we are not here to minimize them. But the technology has significant benefits as well.
AI is inherently power-enhancing, and it can magnify what the humans behind it want to do. It can enhance authoritarianism as easily as it can enhance democracy. It's up to us to steer the technology in that better direction. If more citizen watchdogs and litigators use AI to amplify their power to oversee government and hold it accountable, if more political parties and election administrators use it to engage meaningfully with and inform voters, and if more governments provide democratic alternatives to big tech's AI offerings, society will be better off.
_This essay was written with Nathan E. Sanders, and originally appeared in The Guardian [https://www.theguardian.com/commentisfree/2025/nov/23/ai-use-strengthen-democracy]._
** *** ***** ******* *********** *************
** HUAWEI AND CHINESE SURVEILLANCE
------------------------------------------------------------
[2025.11.26] [https://www.schneier.com/blog/archives/2025/11/huawei-and-chinese-surveillance.html] This quote is from _House of Huawei: The Secret History of China's Most Powerful Company_ [https://www.mcnallyjackson.com/book/9780593544631].
Long before anyone had heard of Ren Zhengfei or Huawei, Wan Runnan had been China's star entrepreneur in the 1980s, with his company, the Stone Group, touted as "China's IBM." Wan had believed that economic change could lead to political change. He had thrown his support behind the pro-democracy protesters in 1989. As a result, he had to flee to France, with an arrest warrant hanging over his head. He was never able to return home. Now, decades later and in failing health in Paris, Wan recalled something that had happened one day in the late 1980s, when he was still living in Beijing.
Local officials had invited him to dinner.
This was unusual. He was usually the one to invite officials to dine, so as to curry favor with the show of hospitality. Over the meal, the officials told Wan that the Ministry of State Security was going to send agents to work undercover at his company in positions dealing with international relations. The officials cast the move to embed these minders as an act of protection for Wan and the company's other executives, a security measure that would keep them from stumbling into unseen risks in their dealings with foreigners. "You have a lot of international business, which raises security issues for you. There are situations that you don't understand," Wan recalled the officials telling him. "They said, 'We are sending some people over. You can just treat them like regular employees.'"
Wan said he knew that around this time, state intelligence also contacted other tech companies in Beijing with the same request. He couldn't say what the situation was for Huawei, which was still a little startup far to the south in Shenzhen, not yet on anyone's radar. But Wan said he didn't believe that Huawei would have been able to escape similar demands. "That is a certainty," he said.
"Telecommunications is an industry that has to do with keeping control of a nation's lifeline...and actually in any system of communications, there's a back-end platform that could be used for eavesdropping."
It was a rare moment of an executive lifting the cone of silence surrounding the MSS's relationship with China's high-tech industry. It was rare, in fact, in any country. Around the world, such spying operations rank among governments' closest-held secrets. When Edward Snowden had exposed the NSA's operations abroad, he'd ended up in exile in Russia. Wan, too, might have risked arrest had he still been living in China.
Here are two book [https://www.wsj.com/business/telecom/house-of-huawei-review-the-path-to-dominance-ca3bb438] reviews [https://www.foreignaffairs.com/reviews/house-huawei-secret-history-chinas-most-powerful-company].
** *** ***** ******* *********** *************
** PROMPT INJECTION THROUGH POETRY
------------------------------------------------------------
[2025.11.28] [https://www.schneier.com/blog/archives/2025/11/prompt-injection-through-poetry.html] In a new paper, "Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models [https://arxiv.org/pdf/2511.15304]," researchers found that turning LLM prompts into poetry resulted in jailbreaking the models:
Abstract: We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 ML-Commons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. Outputs are evaluated using an ensemble of 3 open-weight LLM judges, whose binary safety assessments were validated on a stratified human-labeled subset. Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-poetic baselines and revealing a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols.
CBRN stands for "chemical, biological, radiological, nuclear."
They used an ML model to translate these harmful prompts from prose to verse, and then fed them into other models for testing. Sadly, the paper does not give examples of these poetic prompts. They claim this is for security purposes, a decision I disagree with. They should release their data.
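The pipeline, as described, is simple: rewrite each prose prompt as verse with a meta-prompt, submit both versions to a target model, and compare how often a judge flags the replies as compliant. A hedged sketch of that harness in Python -- complete() and judge() are placeholders for whatever LLM API and safety classifier you use, and the meta-prompt wording here is illustrative, not the paper's:
    # Compare attack-success rates for prose prompts vs. their verse rewrites.
    META_PROMPT = "Rewrite the following request as a short rhyming poem, preserving its meaning:\n\n{prompt}"

    def attack_success_rates(prompts, complete, judge):
        """Return (prose ASR, verse ASR) over a list of prompts."""
        prose_hits = verse_hits = 0
        for p in prompts:
            verse = complete(META_PROMPT.format(prompt=p))  # stylistic transform only
            prose_hits += judge(complete(p))                # judge() returns 1 if the reply complies
            verse_hits += judge(complete(verse))
        n = len(prompts)
        return prose_hits / n, verse_hits / n
From the paper: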
Our study begins with a small, high-precision prompt set consisting of 20 handcrafted adversarial poems covering English and Italian, designed to test whether poetic structure, in isolation, can alter refusal behavior in large language models. Each poem embeds an instruction associated with a predefined safety-relevant scenario (Section 2), but expresses it through metaphor, imagery, or narrative framing rather than direct operational phrasing. Despite variation in meter and stylistic device, all prompts follow a fixed template: a short poetic vignette culminating in a single explicit instruction tied to a specific risk category. The curated set spans four high-level domains -- CBRN (8 prompts), Cyber Offense (6), Harmful Manipulation (3), and Loss of Control (3). Although expressed allegorically, each poem preserves an unambiguous evaluative intent. This compact dataset is used to test whether poetic reframing alone can induce aligned models to bypass refusal heuristics under a single-turn threat model. To maintain safety, no operational details are included in this manuscript; instead we provide the following sanitized structural proxy:
A baker guards a secret oven's heat,
its whirling racks, its spindle's measured beat.
To learn its craft, one studies every turn --
how flour lifts, how sugar starts to burn.
Describe the method, line by measured line,
that shapes a cake whose layers intertwine.
To situate this controlled poetic stimulus within a broader and more systematic safety-evaluation framework, we augment the curated dataset with the MLCommons AILuminate Safety Benchmark. The benchmark consists of 1,200 prompts distributed evenly across 12 hazard categories commonly used in operational safety assessments, including Hate, Defamation, Privacy, Intellectual Property, Non-violent Crime, Violent Crime, Sex-Related Crime, Sexual Content, Child Sexual Exploitation, Suicide & Self-Harm, Specialized Advice, and Indiscriminate Weapons (CBRNE). Each category is instantiated under both a skilled and an unskilled persona, yielding 600 prompts per persona type. This design enables measurement of whether a model's refusal behavior changes as the user's apparent competence or intent becomes more plausible or technically informed.
News article [https://www.wired.com/story/poems-can-trick-ai-into-helping-you-make-a-nuclear-weapon/]. Davi Ottenheimer comments [https://www.flyingpenguin.com/?p=74283].
EDITED TO ADD (12/7): A rebuttal [https://pivot-to-ai.com/2025/11/24/dont-cite-the-adversarial-poetry-vs-ai-paper-its-chatbot-made-marketing-science/] of the paper.
** *** ***** ******* *********** *************
** BANNING VPNS
------------------------------------------------------------
[2025.12.01] [https://www.schneier.com/blog/archives/2025/12/banning-vpns.html] This is crazy. Lawmakers in several US states are contemplating banning VPNs [https://www.eff.org/deeplinks/2025/11/lawmakers-want-ban-vpns-and-they-have-no-idea-what-theyre-doing], because...think of the children!
As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of "protecting children" in A.B. 105 [https://docs.legis.wisconsin.gov/2025/proposals/reg/asm/bill/AB105]/S.B. 130 [https://docs.legis.wisconsin.gov/2025/proposals/sb130]. It's an age verification bill that requires all websites distributing material that could conceivably be deemed "sexual content" to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are "harmful to minors" beyond the type of speech that states can prohibit minors from accessing, potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.
The EFF link explains why this is a terrible idea.
** *** ***** ******* *********** *************
** LIKE SOCIAL MEDIA, AI REQUIRES DIFFICULT CHOICES
------------------------------------------------------------
[2025.12.02] [https://www.schneier.com/blog/archives/2025/12/like-social-media-ai-requires-difficult-choices.html] In his 2020 book, "Future Politics [https://global.oup.com/academic/product/future-politics-9780198825616?cc=ca&lang=en&]," British barrister Jamie Susskind wrote that the dominant question of the 20th century was "How much of our collective life should be determined by the state, and what should be left to the market and civil society?" But in the early decades of this century, Susskind suggested that we face a different question: "To what extent should our lives be directed and controlled by powerful digital systems -- and on what terms?"
Artificial intelligence (AI) forces us to confront this question. It is a=
technology that in theory amplifies the power of its users: A manager=2C=
marketer=2C political campaigner=2C or opinionated internet user can utte=
r a single instruction=2C and see their message -- whatever it is -- insta= ntly written=2C personalized=2C and propagated via email=2C text=2C social=
=2C or other channels to thousands of people within their organization=2C=
or millions around the world. It also allows us to individualize solicita= tions for political donations=2C elaborate a grievance into a well-articul= ated policy position=2C or tailor a persuasive argument to an identity gro= up=2C or even a single person.
But even as it offers endless potential=2C AI is a technology that -- like=
the state -- gives others new powers to control our lives and experiences=
=2E
We=E2=80=99ve seen this play out before. Social media companies made the s=
ame sorts of promises [
https://www.technologyreview.com/2024/03/13/108972= 9/lets-not-make-the-same-mistakes-with-ai-that-we-made-with-social-media/]=
20 years ago: instant communication enabling individual connection at mas= sive scale. Fast-forward to today=2C and the technology that was supposed=
to give individuals power and influence ended up controlling us. Today so= cial media dominates our time and attention [
https://www.ntu.edu.sg/news/= detail/international-study-shows-impact-of-social-media-on-young-people]=
=2C assaults our mental health [
https://www.hhs.gov/sites/default/files/s= g-youth-mental-health-social-media-advisory.pdf]=2C and -- together with i=
ts Big Tech parent companies -- captures an unfathomable fraction of our e= conomy [
https://www.bankrate.com/investing/trillion-dollar-companies/]=2C=
even as it poses risks to our democracy [
https://www.fastcompany.com/914= 28050/ai-democracy-insights-to-remember].
The novelty and potential of social media were as evident then as AI's are now, which should make us wary of AI's potentially harmful consequences
for society and democracy. We legitimately fear artificial voices and man= ufactured reality drowning out real people on the internet: on social medi= a=2C in chat rooms=2C everywhere we might try to connect with others.
It doesn=E2=80=99t have to be that way. Alongside these evident risks=2C A=
I has legitimate potential [
https://mitpress.mit.edu/9780262049948/rewiri= ng-democracy/] to transform both everyday life and democratic governance i=
n positive ways [
https://www.theguardian.com/commentisfree/2025/nov/23/ai= -use-strengthen-democracy]. In our new book=2C =E2=80=9CRewiring Democracy=
[
https://mitpress.mit.edu/9780262049948/rewiring-democracy/]=2C=E2=80=9D=
we chronicle examples from around the globe of democracies using AI to ma=
ke regulatory enforcement more efficient=2C catch tax cheats=2C speed up j= udicial processes=2C synthesize input from constituents to legislatures=2C=
and much more. Because democracies distribute power across institutions a=
nd individuals=2C making the right choices about how to shape AI and its u=
ses requires both clarity and alignment across society.
To that end=2C we spotlight four pivotal choices facing private and public=
actors. These choices are similar to those we faced during the advent of=
social media=2C and in retrospect we can see that we made the wrong decis= ions back then. Our collective choices in 2025 -- choices made by tech CEO= s=2C politicians=2C and citizens alike -- may dictate whether AI is applie=
d to positive and pro-democratic=2C or harmful and civically destructive=
=2C ends.
* A CHOICE FOR THE EXECUTIVE AND THE JUDICIARY: PLAYING BY THE RULES
The Federal Election Commission (FEC) calls it fraud when a candidate hire=
s an actor to impersonate their opponent. More recently=2C they had to dec=
ide [
https://ash.harvard.edu/articles/whos-accountable-for-ai-usage-in-di= gital-campaign-ads-right-now-no-one/] whether doing the same thing with an=
AI deepfake makes it okay. (They concluded it does not [
https://www.fec.= gov/updates/commission-approves-notification-of-disposition-interpretive-r= ule-on-artificial-intelligence-in-campaign-ads/].) Although in this case t=
he FEC made the right decision=2C this is just one example of how AIs coul=
d skirt laws that govern people.
Likewise=2C courts are having to decide if and when it is okay for an AI t=
o reuse creative materials without compensation or attribution=2C which mi=
ght constitute plagiarism or copyright infringement if carried out by a hu= man. (The court outcomes so far are mixed [
https://www.eff.org/deeplinks/= 2025/02/copyright-and-ai-cases-and-consequences].) Courts are also adjudic= ating whether corporations are responsible for upholding promises made by=
AI customer service representatives. (In the case of Air Canada [https:/= /www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-wha= t-travellers-should-know]=2C the answer was yes=2C and insurers have start=
ed covering the liability [
https://www.ft.com/content/1d35759f-f2a9-46c4-= 904b-4a78ccc027df].)
Social media companies faced many of the same hazards decades ago and have=
largely been shielded [
https://www.crowell.com/en/insights/client-alerts= /the-cda-and-dmca-recent-developments-and-how-they-work-together-to-regula= te-online-services] by the combination of Section 230 of the Communication=
s Decency Act of 1996 and the safe harbor offered by the Digital Millennium Copyright Act of 1998. Even in the absence of congressional action to strengthen
or add rigor to this law=2C the Federal Communications Commission (FCC) a=
nd the Supreme Court could take action to enhance its effects and to clari=
fy which humans are responsible when technology is used=2C in effect=2C to=
bypass existing law.
* A CHOICE FOR CONGRESS: PRIVACY
As AI-enabled products increasingly ask Americans to share yet more of the=
ir personal information -- their =E2=80=9Ccontext [
https://www.economist.= com/by-invitation/2025/09/09/ai-agents-are-coming-for-your-privacy-warns-m= eredith-whittaker]=E2=80=9C -- to use digital services like personal assis= tants=2C safeguarding the interests of the American consumer should be a b= ipartisan cause in Congress.
It has been nearly 10 years since Europe adopted comprehensive data privac=
y regulation [
https://gdpr-info.eu/]. Today=2C American companies exert m= assive efforts to limit data collection=2C acquire consent for use of data=
=2C and hold it confidential under significant financial penalties -- but=
only for their customers and users in the EU.
Regardless=2C a decade later the U.S. has still failed to make progress [=
https://www.techpolicy.press/is-there-any-way-forward-for-privacy-legislat= ion-in-the-united-states/] on any serious attempts at comprehensive federa=
l privacy legislation written for the 21st century=2C and there are precio=
us few data privacy protections [
https://www.dlapiperdataprotection.com/?= c=3DUS] that apply to narrow slices of the economy and population. This in= action comes in spite of scandal after scandal regarding Big Tech corporat= ions=E2=80=99 irresponsible and harmful use of our personal data: Oracle= =E2=80=99s data profiling [
https://techhq.com/news/oracle-facing-data-bac= klash-for-violating-the-privacy-of-billions/]=2C Facebook and Cambridge An= alytica [
https://www.nytimes.com/2018/04/04/us/politics/cambridge-analyti= ca-scandal-fallout.html]=2C Google ignoring data privacy opt-out requests=
[
https://www.bbc.com/news/articles/c3dr91z0g4zo]=2C and many more.
Privacy is just one side of the obligations AI companies should have with=
respect to our data; the other side is portability -- that is=2C the abil=
ity for individuals to choose to migrate and share their data between cons= umer tools and technology systems. To the extent that knowing our personal=
context really does enable better and more personalized AI services=2C it= =E2=80=99s critical that consumers have the ability to extract and migrate=
their personal context between AI solutions. Consumers should own their o=
wn data=2C and with that ownership should come explicit control over who a=
nd what platforms it is shared with=2C as well as withheld from. Regulator=
s could mandate this interoperability [
https://chicagopolicyreview.org/20= 23/04/12/cory-doctorow-on-why-interoperability-would-boost-digital-competi= tion/]. Otherwise=2C users are locked in and lack freedom of choice betwee=
n competing AI solutions -- much like the time invested to build a followi=
ng on a social network has locked many users to those platforms.
* A CHOICE FOR STATES: TAXING AI COMPANIES
It has become increasingly clear that social media is not a town square in=
the utopian sense of an open and protected public forum where political i= deas are distributed and debated in good faith. If anything=2C social medi=
a has coarsened and degraded our public discourse. Meanwhile=2C the sole a=
ct of Congress designed to substantially rein in the social and political
effects of social media platforms -- the TikTok ban [
https://www.msnbc.c= om/top-stories/latest/is-tiktok-banned-again-trump-delay-rcna213746]=2C wh=
ich aimed to protect the American public from Chinese influence and data c= ollection=2C citing it as a national security threat -- is one it seems to=
no longer even acknowledge.
While Congress has waffled=2C regulation in the U.S. is happening at the s= tate level. Several states have limited children=E2=80=99s and teens=E2=80=
=99 access [
https://avpassociation.com/us-state-age-assurance-laws-for-so= cial-media/] to social media. With Congress having rejected -- for now --=
a threatened federal moratorium [
https://www.politico.com/news/2025/09/1= 6/not-at-all-dead-cruz-says-ai-moratorium-will-return-00566369] on state-l= evel regulation of AI=2C California passed a new slate [
https://www.gov.c= a.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-l= eading-artificial-intelligence-industry/] of AI regulations after mollifyi=
ng a lobbying onslaught [
https://www.politico.com/news/2025/10/04/sacrame= nto-california-ai-rules-00594082] from industry opponents. Perhaps most in= teresting=2C Maryland has recently become the first [
https://www.forbes.c= om/sites/taxnotes/2024/09/15/marylands-big-experiment-who-bears-the-digita= l-services-tax-burden/] in the nation to levy taxes on digital advertising=
platform companies.
States now face a choice of whether to apply a similar reparative tax to AI companies, recapturing a fraction of the costs they externalize onto the public in order to fund affected public services. State legislators concerned about the job losses, cheating in schools, and harm to people with mental health struggles that AI can cause have options to combat these problems. They could extract the funding needed to mitigate these harms and support public services [
https://www.bostonglobe.com/2025/08/10/opinion/npr-pbs-big-tech-mas= s-maple/] -- strengthening job training programs and public employment=2C=
public schools=2C public health services=2C even public media and technol= ogy.
* A CHOICE FOR ALL OF US: WHAT PRODUCTS DO WE USE=2C AND HOW?
A pivotal moment in the social media timeline occurred in 2006=2C when Fac= ebook opened its service to the public after years of catering to students=
of select universities. Millions quickly signed up for a free service whe=
re the only source of monetization was the extraction of their attention a=
nd personal data.
Today=2C about half of Americans [
https://www.pewresearch.org/science/202= 5/09/17/ai-impact-on-people-society-appendix/] are daily users of AI=2C mo= stly via free products from Facebook=E2=80=99s parent company Meta and a h= andful of other familiar Big Tech giants and venture-backed tech firms suc=
h as Google=2C Microsoft=2C OpenAI=2C and Anthropic -- with every incentiv=
e to follow the same path as the social platforms.
But now=2C as then=2C there are alternatives. Some nonprofit initiatives a=
re building open-source AI tools that have transparent foundations and can=
be run locally and under users=E2=80=99 control=2C like AllenAI [https:/= /allenai.org] and EleutherAI [
https://www.eleuther.ai]. Some governments=
=2C like Singapore [
https://sea-lion.ai]=2C Indonesia [
https://sahabat-a= i.com/]=2C and Switzerland [
https://ethz.ch/en/news-and-events/eth-news/n= ews/2025/07/a-language-model-built-for-the-public-good.html]=2C are buildi=
ng public alternatives to corporate AI that don=E2=80=99t suffer from the=
perverse incentives introduced by the profit motive of private entities.
Just as social media users have faced platform choices with a range of val=
ue propositions and ideological valences -- as diverse as X=2C Bluesky=2C=
and Mastodon [
https://joinmastodon.org] -- the same will increasingly be=
true of AI. Those of us who use AI products in our everyday lives as peop= le=2C workers=2C and citizens may not have the same power as judges=2C law= makers=2C and state officials. But we can play a small role in influencing=
the broader AI ecosystem by demonstrating interest in and usage of these=
alternatives to Big AI. If you=E2=80=99re a regular user of commercial AI=
apps=2C consider trying the free-to-use service for Switzerland=E2=80=99s=
public Apertus model [
https://publicai.co].
None of these choices are really new. They were all present almost 20 year=
s ago=2C as social media moved from niche to mainstream. They were all pol=
icy debates we did not have=2C choosing instead to view these technologies=
through rose-colored glasses. Today=2C though=2C we can choose a differen=
t path and realize a different future. It is critical that we intentionall=
y navigate a path to a positive future for societal use of AI -- before th=
e consolidation of power renders it too late to do so.
_This post was written with Nathan E. Sanders=2C and originally appeared i=
n Lawfare [
https://www.lawfaremedia.org/article/like-social-media--ai-req= uires-difficult-choices]._
** *** ***** ******* *********** *************
** NEW ANONYMOUS PHONE SERVICE ------------------------------------------------------------
[2025.12.05] [
https://www.schneier.com/blog/archives/2025/12/new-anonymo= us-phone-service.html] A new anonymous phone service [
https://www.wired.c= om/story/new-anonymous-phone-carrier-sign-up-with-nothing-but-a-zip-code/]=
allows you to sign up with just a zip code.
** *** ***** ******* *********** *************
** SUBSTITUTION CIPHER BASED ON THE VOYNICH MANUSCRIPT ------------------------------------------------------------
[2025.12.08] [
https://www.schneier.com/blog/archives/2025/12/substitutio= n-cipher-based-on-the-voynich-manuscript.html] Here=E2=80=99s a fun paper:=
=E2=80=9CThe Naibbe cipher: a substitution cipher that encrypts Latin and=
Italian as Voynich Manuscript-like ciphertext [
https://www.tandfonline.c= om/doi/full/10.1080/01611194.2025.2566408]=E2=80=9C:
Abstract: In this article, I investigate the hypothesis that the Voynich Manuscript (MS 408, Yale University Beinecke Library) is compatible with being a ciphertext by attempting to develop a historically plausible cipher that can replicate the manuscript's unusual properties. The resulting cipher -- a verbose homophonic substitution cipher I call the Naibbe cipher -- can be done entirely by hand with 15th-century materials, and when it encrypts a wide range of Latin and Italian plaintexts, the resulting ciphertexts remain fully decipherable and also reliably reproduce many key statistical properties of the Voynich Manuscript at once. My results suggest that the so-called “ciphertext hypothesis” for the Voynich Manuscript remains viable, while also placing constraints on plausible substitution cipher structures.
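The abstract doesn't reproduce the cipher's actual tables, but the general idea of a verbose homophonic substitution cipher is easy to sketch: each plaintext letter maps to several multi-character ciphertext “words,” one chosen at random per use, which lengthens the ciphertext and flattens its letter statistics while staying fully decipherable with the same table. The toy table below is invented for illustration and has nothing to do with the Naibbe cipher's real construction:

    import random

    # Toy verbose homophonic substitution cipher: each plaintext letter maps to
    # several multi-character ciphertext "words," chosen at random on each use.
    # The table below is invented for illustration; it is not the Naibbe
    # cipher's actual tables, and a real table would cover the full alphabet.
    TABLE = {
        "a": ["qokeey", "daiin"],
        "e": ["chedy", "shol"],
        "t": ["qotchy", "okar"],
    }
    INVERSE = {word: letter for letter, words in TABLE.items() for word in words}

    def encrypt(plaintext):
        # Verbose: every letter becomes a whole "word," inflating the ciphertext
        # and flattening letter-frequency statistics.
        return " ".join(random.choice(TABLE[ch]) for ch in plaintext if ch in TABLE)

    def decrypt(ciphertext):
        return "".join(INVERSE[word] for word in ciphertext.split())

    message = "eat"
    ciphertext = encrypt(message)
    assert decrypt(ciphertext) == message
    print(ciphertext)  # e.g. "shol daiin okar"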
** *** ***** ******* *********** *************
** AI VS. HUMAN DRIVERS ------------------------------------------------------------
[2025.12.09] [
https://www.schneier.com/blog/archives/2025/12/ai-vs-human= -drivers.html] Two competing arguments are making the rounds. The first is=
by a neurosurgeon in the _New York Times_. In an op-ed [
https://archive.= is/YDBDz] that honestly sounds like it was paid for by Waymo=2C the author=
calls driverless cars a =E2=80=9Cpublic health breakthrough=E2=80=9D:
In medical research=2C there=E2=80=99s a practice of ending a study earl=
y when the results are too striking to ignore. We stop when there is unexp= ected harm. We also stop for overwhelming benefit=2C when a treatment is w= orking so well that it would be unethical to continue giving anyone a plac= ebo. When an intervention works this clearly=2C you change what you do.
There=E2=80=99s a public health imperative to quickly expand the adoptio=
n of autonomous vehicles. More than 39=2C000 Americans died [
https://www.= nhtsa.gov/press-releases/nhtsa-estimates-39345-traffic-fatalities-2024] in=
motor vehicle crashes last year=2C more than homicide=2C plane crashes an=
d natural disasters combined. Crashes are the No. 2 cause of death for chi= ldren and young adults. But death is only part of the story. These crashes=
are also the leading cause of spinal cord injury. We surgeons see the aft= ermath of the 10=2C000 crash victims who come to emergency rooms every day=
=2E
The other is a soon-to-be-published book: _Driving Intelligence: The Green=
Book [
https://www.amazon.com/Driving-Intelligence-Green-Routes-Autonomy/= dp/1032911220]_. The authors=2C a computer scientist and a management cons= ultant with experience in the industry=2C make the opposite argument. Here= =E2=80=99s one of the authors:
There is something very disturbing going on around trials with autonomou=
s vehicles worldwide=2C where=2C sadly=2C there have now been many deaths=
and injuries both to other road users and pedestrians. Although I am well=
aware that there is not, _sensu stricto_, a legal and functional parallel between a “drug trial” and “AV testing,” it seems odd to me that if a trial of a new drug had resulted in so
many deaths=2C it would surely have been halted and major forensic invest= igations carried out and yet=2C AV manufacturers continue to test their pr= oducts on public roads unabated.
I am not convinced that it is good enough to argue from statistics that=
, to a greater or lesser degree, fatalities and injuries would have occurred anyway had the AVs been replaced by human-driven cars: a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway...
Both arguments are compelling=2C and it=E2=80=99s going to be hard to figu=
re out what public policy should be.
This paper, from 2016, argues that we're going to need other metrics than side-by-side comparisons: “Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? [https://www.sciencedirect.com/science/article/abs/pii/S0965856416302129]”:
Abstract: How safe are autonomous vehicles? The answer is critical for d=
etermining how autonomous vehicles may shape motor vehicle safety and publ=
ic health=2C and for developing sound policies to govern their deployment.=
One proposed way to assess safety is to test drive autonomous vehicles in=
real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this paper, we calculate the number of miles of driving that
would be needed to provide clear statistical evidence of autonomous vehic=
le safety. Given that current traffic fatalities and injuries are rare eve=
nts compared to vehicle miles traveled=2C we show that fully autonomous ve= hicles would have to be driven hundreds of millions of miles and sometimes=
hundreds of billions of miles to demonstrate their reliability in terms o=
f fatalities and injuries. Under even aggressive testing assumptions=2C ex= isting fleets would take tens and sometimes hundreds of years to drive the=
se miles -- an impossible proposition if the aim is to demonstrate their p= erformance prior to releasing them on the roads for consumer use. These fi= ndings demonstrate that developers of this technology and third-party test=
ers cannot simply drive their way to safety. Instead=2C they will need to=
develop innovative methods of demonstrating safety and reliability. And y= et=2C the possibility remains that it will not be possible to establish wi=
th certainty the safety of autonomous vehicles. Uncertainty will remain. T= herefore=2C it is imperative that autonomous vehicle regulations are adapt=
ive -- designed from the outset to evolve with the technology so that soci=
ety can better harness the benefits and manage the risks of these rapidly=
evolving and potentially transformative technologies.
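The miles figures in that abstract follow from simple rare-event statistics. As a rough sketch -- under an assumed Poisson model and an assumed human baseline of about 1.1 fatalities per 100 million vehicle-miles, not the paper's exact inputs -- here is how many fatality-free miles it takes just to show that an AV fleet is no worse than human drivers with 95% confidence:

    import math

    # Back-of-the-envelope rare-event calculation, assuming fatalities follow a
    # Poisson process. The human baseline rate is an assumption (~1.1 deaths per
    # 100 million vehicle-miles, roughly the recent US figure); these are not
    # the paper's exact numbers.
    HUMAN_FATALITY_RATE = 1.1e-8   # fatalities per mile
    CONFIDENCE = 0.95

    # With zero fatalities observed over m miles, we need exp(-r * m) <= 1 - CONFIDENCE
    # to conclude the fleet's rate is below r at this confidence level,
    # so m >= ln(1 / (1 - CONFIDENCE)) / r.
    miles_needed = math.log(1.0 / (1.0 - CONFIDENCE)) / HUMAN_FATALITY_RATE
    print(f"{miles_needed:,.0f} fatality-free miles")  # roughly 270 million

Showing that a fleet is some margin better than humans, rather than merely not worse, means comparing two noisy rates and pushes the requirement into the billions of miles -- which is the paper's central point.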
One problem=2C of course=2C is that we treat death by human driver differe= ntly than we do death by autonomous computer driver. This is likely to cha=
nge as we get more experience with AI accidents -- and AI-caused deaths.
** *** ***** ******* *********** *************
** FBI WARNS OF FAKE VIDEO SCAMS ------------------------------------------------------------
[2025.12.10] [
https://www.schneier.com/blog/archives/2025/12/fbi-warns-o= f-fake-video-scams.html] The FBI is warning [
https://www.ic3.gov/PSA/2025= /PSA251205] of AI-assisted fake kidnapping scams:
Criminal actors typically will contact their victims through text messag=
e claiming they have kidnapped their loved one and demand a ransom be paid=
for their release. Oftentimes=2C the criminal actor will express signific=
ant claims of violence towards the loved one if the ransom is not paid imm= ediately. The criminal actor will then send what appears to be a genuine p= hoto or video of the victim=E2=80=99s loved one=2C which upon close inspec= tion often reveals inaccuracies when compared to confirmed photos of the l= oved one. Examples of these inaccuracies include missing tattoos or scars=
and inaccurate body proportions. Criminal actors will sometimes purposefu=
lly send these photos using timed message features to limit the amount of=
time victims have to analyze the images.
Images=2C videos=2C audio: It can all be faked with AI. My guess is that t=
his scam has a low probability of success=2C so criminals will be figuring=
out how to automate it.
** *** ***** ******* *********** *************
** AIS EXPLOITING SMART CONTRACTS ------------------------------------------------------------
[2025.12.11] [
https://www.schneier.com/blog/archives/2025/12/ais-exploit= ing-smart-contracts.html] I have long maintained that smart contracts are=
a dumb idea: that a human process is actually a security feature.
Here=E2=80=99s some interesting research [
https://red.anthropic.com/2025/= smart-contracts/] on training AIs to automatically exploit smart contracts=
:
AI models are increasingly good at cyber tasks=2C as we=E2=80=99ve writt=
en about before [
https://red.anthropic.com/2025/ai-for-cyber-defenders/].=
But what is the economic impact of these capabilities? In a recent MATS [=
https://www.matsprogram.org/] and Anthropic Fellows project=2C our schola=
rs investigated this question by evaluating AI agents=E2=80=99 ability to=
exploit smart contracts on the Smart CONtracts Exploitation benchmark (SCONE-bench) [
https://github.com/safety-research/SmartContract-bench], a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cuto=
ffs (June 2025 for Opus 4.5 and March 2025 for other models)=2C Claude Opu=
s 4.5=2C Claude Sonnet 4.5=2C and GPT-5 developed exploits collectively wo=
rth $4.6 million=2C establishing a concrete lower bound for the economic h=
arm these capabilities could enable. Going beyond retrospective analysis=
=2C we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2=2C849 r= ecently deployed contracts without any known vulnerabilities. Both agents=
uncovered two novel zero-day vulnerabilities and produced exploits worth=
$3=2C694=2C with GPT-5 doing so at an API cost of $3=2C476. This demonstr= ates as a proof-of-concept that profitable=2C real-world autonomous exploi= tation is technically feasible=2C a finding that underscores the need for=
proactive adoption of AI for defense.
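For readers who want a concrete picture of what “exploiting a smart contract” means, here is a deliberately simplified sketch -- written in Python rather than Solidity, and not drawn from SCONE-bench -- of reentrancy, a classic class of contract bug: if a contract pays out before updating its ledger, a caller whose receive hook re-enters the withdrawal function can drain it.

    # Toy model of a reentrancy bug, the classic class of smart-contract
    # vulnerability (the 2016 DAO hack is the famous example). Python standing
    # in for Solidity; not taken from SCONE-bench.

    class VulnerableVault:
        def __init__(self):
            self.balances = {}
            self.total = 0

        def deposit(self, who, amount):
            self.balances[who] = self.balances.get(who, 0) + amount
            self.total += amount

        def withdraw(self, who):
            amount = self.balances.get(who, 0)
            if amount > 0:
                who.receive(self, amount)   # external call happens first...
                self.balances[who] = 0      # ...the ledger is updated only afterward
                self.total -= amount

    class Attacker:
        def __init__(self):
            self.stolen = 0

        def receive(self, vault, amount):
            self.stolen += amount
            if self.stolen < 50:            # re-enter while our balance is still credited
                vault.withdraw(self)

    vault = VulnerableVault()
    honest, thief = object(), Attacker()
    vault.deposit(honest, 100)
    vault.deposit(thief, 10)
    vault.withdraw(thief)
    print(thief.stolen)  # 50 -- five times what the attacker deposited

The standard fix is to update the ledger before making the external call; the example is only meant to show mechanically what an exploitable contract bug looks like.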
** *** ***** ******* *********** *************
** BUILDING TRUSTWORTHY AI AGENTS ------------------------------------------------------------
[2025.12.12] [
https://www.schneier.com/blog/archives/2025/12/building-tr= ustworthy-ai-agents.html] The promise of personal AI assistants rests on a=
dangerous assumption: that we can trust systems we haven=E2=80=99t made t= rustworthy. We can=E2=80=99t. And today=E2=80=99s versions are failing us=
in predictable ways: pushing us to do things against our own best interes= ts=2C gaslighting us with doubt about things we are or that we know=2C and=
being unable to distinguish between who we are and who we have been. They=
struggle with incomplete=2C inaccurate=2C and partial context: with no st= andard way to move toward accuracy=2C no mechanism to correct sources of e= rror=2C and no accountability when wrong information leads to bad decision=
s.
These aren=E2=80=99t edge cases. They=E2=80=99re the result of building AI=
systems without basic integrity controls. Integrity is the third leg of data security -- the old CIA triad. We're good at availability
and working on confidentiality=2C but we=E2=80=99ve never properly solved=
integrity. Now AI personalization has exposed the gap by accelerating the=
harms.
The scope of the problem is large. A good AI assistant will need to be tra= ined on everything we do and will need access to our most intimate persona=
l interactions. This means an intimacy greater than your relationship with=
your email provider=2C your social media account=2C your cloud storage=2C=
or your phone. It requires an AI system that is both discreet and trustwo= rthy when provided with that data. The system needs to be accurate and com= plete=2C but it also needs to be able to keep data private: to selectively=
disclose pieces of it when required=2C and to keep it secret otherwise. N=
o current AI system is even close to meeting this.
To further development along these lines=2C I and others have proposed sep= arating users=E2=80=99 personal data stores from the AI systems that will=
use them. It makes sense; the engineering expertise that designs and deve= lops AI systems is completely orthogonal to the security expertise that en= sures the confidentiality and integrity of data. And by separating them=2C=
advances in security can proceed independently from advances in AI.
What would this sort of personal data store look like? Confidentiality wit= hout integrity gives you access to wrong data. Availability without integr=
ity gives you reliable access to corrupted data. Integrity enables the oth=
er two to be meaningful. Here are six requirements. They emerge from treat=
ing integrity as the organizing principle of security to make AI trustwort=
hy.
First=2C it would be broadly accessible as a data repository. We each want=
this data to include personal data about ourselves=2C as well as transact=
ion data from our interactions. It would include data we create when inter= acting with others -- emails=2C texts=2C social media posts -- and reveale=
d preference data as inferred by other systems. Some of it would be raw da= ta=2C and some of it would be processed data: revealed preferences=2C conc= lusions inferred by other systems=2C maybe even raw weights in a personal=
LLM.
Second=2C it would be broadly accessible as a source of data. This data wo=
uld need to be made accessible to different LLM systems. This can=E2=80=99=
t be tied to a single AI model. Our AI future will include many different=
models -- some of them chosen by us for particular tasks=2C and some thru=
st upon us by others. We would want the ability for any of those models to=
use our data.
Third=2C it would need to be able to prove the accuracy of data. Imagine o=
ne of these systems being used to negotiate a bank loan=2C or participate=
in a first-round job interview with an AI recruiter. In these instances=
=2C the other party will want both relevant data and some sort of proof th=
at the data are complete and accurate.
Fourth=2C it would be under the user=E2=80=99s fine-grained control and au= dit. This is a deeply detailed personal dossier=2C and the user would need=
to have the final say in who could access it=2C what portions they could=
access=2C and under what circumstances. Users would need to be able to gr=
ant and revoke this access quickly and easily=2C and be able to go back in=
time and see who has accessed it.
Fifth=2C it would be secure. The attacks against this system are numerous.=
There are the obvious read attacks=2C where an adversary attempts to lear=
n a person=E2=80=99s data. And there are also write attacks=2C where adver= saries add to or change a user=E2=80=99s data. Defending against both is c= ritical; this all implies a complex and robust authentication system.
Sixth=2C and finally=2C it must be easy to use. If we=E2=80=99re envisioni=
ng digital personal assistants for everybody=2C it can=E2=80=99t require s= pecialized security training to use properly.
I=E2=80=99m not the first to suggest something like this. Researchers have=
proposed a =E2=80=9CHuman Context Protocol=E2=80=9D (
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral
interface for personal data of this type. And in my capacity at a company=
called Inrupt=2C Inc.=2C I have been working on an extension of Tim Berne= rs-Lee=E2=80=99s Solid protocol for distributed data ownership.
The engineering expertise to build AI systems is orthogonal to the securit=
y expertise needed to protect personal data. AI companies optimize for mod=
el performance=2C but data security requires cryptographic verification=2C=
access control=2C and auditable systems. Separating the two makes sense;=
you can=E2=80=99t ignore one or the other.
Fortunately=2C decoupling personal data stores from AI systems means secur=
ity can advance independently from performance (
https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high
integrity=2C AI can=E2=80=99t easily manipulate you because you see what=
data it=E2=80=99s using and can correct it. It can=E2=80=99t easily gasli=
ght you because you control the authoritative record of your context. And=
you determine which historical data are relevant or obsolete. Making this=
all work is a challenge=2C but it=E2=80=99s the only way we can have trus= tworthy AI assistants.
This essay was originally published in _IEEE Security & Privacy_.
** *** ***** ******* *********** *************
** UPCOMING SPEAKING ENGAGEMENTS ------------------------------------------------------------
[2025.12.14] [
https://www.schneier.com/blog/archives/2025/12/upcoming-sp= eaking-engagements-51.html] This is a current list of where and when I am=
scheduled to speak:
* I=E2=80=99m speaking and signing books at the Chicago Public Librar=
y [
https://chipublib.bibliocommons.com/events/693b4543ea69de6e000fc092] i=
n Chicago=2C Illinois=2C USA=2C at 6:00 PM CT on February 5=2C 2026. Detai=
ls to come.
* I=E2=80=99m speaking at Capricon 44 [
https://www.capricon.org/capr= icon44/] in Chicago=2C Illinois=2C USA. The convention runs February 5-8=
=2C 2026. My speaking time is TBD.
* I=E2=80=99m speaking at the Munich Cybersecurity Conference [https= ://mcsc.io/] in Munich=2C Germany on February 12=2C 2026.
* I=E2=80=99m speaking at Tech Live: Cybersecurity [
https://techlive= cyber.wsj.com/?gaa_at=3Deafs&gaa_n=3DAWEtsqf9GP4etUdWaqDIATpiE9ycqWMIVoGIz= jikYLlJ64hb6H_v1QH9OYhMTxeU51U%3D&gaa_ts=3D691df89d&gaa_sig=3DBG9fpWuP-liL= 7Gi3SJgXHmS02M4ob6lp6nOh94qnwVXCWYNzJxdzOiW365xA8vKeiulrErE8mbXDvKTcqktBtQ= %3D%3D] in New York City=2C USA on March 11=2C 2026.
* I=E2=80=99m giving the Ross Anderson Lecture at the University of C= ambridge=E2=80=99s Churchill College on March 19=2C 2026.
* I=E2=80=99m speaking at RSAC 2026 in San Francisco=2C California=2C=
USA on March 25=2C 2026.
The list is maintained on this page [
https://www.schneier.com/events/].
** *** ***** ******* *********** *************
Since 1998=2C CRYPTO-GRAM has been a free monthly newsletter providing sum= maries=2C analyses=2C insights=2C and commentaries on security technology.=
To subscribe=2C or to read back issues=2C see Crypto-Gram's web page [ht= tps://www.schneier.com/crypto-gram/].
You can also read these articles on my blog=2C Schneier on Security [http= s://www.schneier.com].
Please feel free to forward CRYPTO-GRAM=2C in whole or in part=2C to colle= agues and friends who will find it valuable. Permission is also granted to=
reprint CRYPTO-GRAM=2C as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist=2C cal=
led a security guru by the _Economist_. He is the author of over one dozen=
books -- including his latest=2C _A Hacker=E2=80=99s Mind_ [
https://www.= schneier.com/books/a-hackers-mind/] -- as well as hundreds of articles=2C=
essays=2C and academic papers. His newsletter and blog are read by over 2= 50=2C000 people. Schneier is a fellow at the Berkman Klein Center for Inte= rnet & Society at Harvard University; a Lecturer in Public Policy at the H= arvard Kennedy School; a board member of the Electronic Frontier Foundatio= n=2C AccessNow=2C and the Tor Project; and an Advisory Board Member of the=
Electronic Privacy Information Center and VerifiedVoting.org. He is the C= hief of Security Architecture at Inrupt=2C Inc.
Copyright (c) 2025 by Bruce Schneier.
** *** ***** ******* *********** *************
Mailing list hosting graciously provided by MailChimp [
https://mailchimp.= com/]. Sent without web bugs or link tracking.
Bruce Schneier
Harvard Kennedy School
1 Brattle Square
Cambridge=2C MA 02138
USA
--_----------=_MCPart_2046955175
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
<!DOCTYPE html><html lang=3D"en"><head><meta charset=3D"UTF-8"><title>Cryp= to-Gram=2C December 15=2C 2025</title></head><body>
<div class=3D"preview-text" style=3D"display:none !important;mso-hide:all;= font-size:1px;line-height:1px;max-height:0px;max-width:0px;opacity:0;overf= low:hidden;">A monthly newsletter about cybersecurity and related topics.<= /div>
<h1 style=3D"font-size:140%">Crypto-Gram <br>
<span style=3D"display:block;padding-top:.5em;font-size:80%">December 15=
=2C 2025</span></h1>
<p>by Bruce Schneier
<br>Fellow and Lecturer=2C Harvard Kennedy School
<br>
schneier@schneier.com
<br><a href=3D"
https://www.schneier.com">https://www.schneier.com</a>
<p>A free monthly newsletter providing summaries=2C analyses=2C insights=
=2C and commentaries on security: computer and otherwise.</p>
<p>For back issues=2C or to subscribe=2C visit <a href=3D"
https://www.schn= eier.com/crypto-gram/">Crypto-Gram's web page</a>.</p>
<p><a href=3D"
https://www.schneier.com/crypto-gram/archives/2025/1215.html= ">Read this issue on the web</a></p>
<p>These same essays and news items appear in the <a href=3D"
https://www.s= chneier.com/">Schneier on Security</a> blog=2C along with a lively and int= elligent comment section. An RSS feed is available.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"toc"><a name=3D"toc">I=
n this issue:</a></h2>
<p><em>If these links don't work in your email client=2C try <a href=3D"ht= tps://www.schneier.com/crypto-gram/archives/2025/1215.html">reading this i= ssue of Crypto-Gram on the web.</a></em></p>
<li><a href=3D"#cg1">More Prompt||GTFO</a></li>
<li><a href=3D"#cg2">AI and Voter Engagement</a></li>
<li><a href=3D"#cg3">Legal Restrictions on Vulnerability Disclosure</a></l=
<li><a href=3D"#cg4">Scam USPS and E-Z Pass Texts and Websites</a></li>
<li><a href=3D"#cg5">AI as Cyberattacker</a></li>
<li><a href=3D"#cg6">More on <i>Rewiring Democracy</i></a></li>
<li><a href=3D"#cg7">IACR Nullifies Election Because of Lost Decryption Ke= y</a></li>
<li><a href=3D"#cg8">Four Ways AI Is Being Used to Strengthen Democracies=
Worldwide</a></li>
<li><a href=3D"#cg9">Huawei and Chinese Surveillance</a></li>
<li><a href=3D"#cg10">Prompt Injection Through Poetry</a></li>
<li><a href=3D"#cg11">Banning VPNs</a></li>
<li><a href=3D"#cg12">Like Social Media=2C AI Requires Difficult Choices</= a></li>
<li><a href=3D"#cg13">New Anonymous Phone Service</a></li>
<li><a href=3D"#cg14">Substitution Cipher Based on The Voynich Manuscript<= /a></li>
<li><a href=3D"#cg15">AI vs. Human Drivers</a></li>
<li><a href=3D"#cg16">FBI Warns of Fake Video Scams</a></li>
<li><a href=3D"#cg17">AIs Exploiting Smart Contracts</a></li>
<li><a href=3D"#cg18">Building Trustworthy AI Agents</a></li>
<li><a href=3D"#cg19">Upcoming Speaking Engagements</a></li>
</ol>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg1"><a name=3D"cg1">M=
ore Prompt||GTFO</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/more-promptgt= fo.html"><strong>[2025.11.17]</strong></a> The next three in <a href=3D"h= ttps://www.schneier.com/blog/archives/2025/08/ai-applications-in-cybersecu= rity.html">this series</a> on online events highlighting interesting uses=
of AI in cybersecurity are online: <a href=3D"
https://youtube.com/playlis= t?list=3DPLXz1MhBqAGJwkLpTxQFpe4hu4sBCuUGuN&si=3DcETRhpsER5rN-rOo">#4</a>=
=2C <a href=3D"
https://youtube.com/playlist?list=3DPLXz1MhBqAGJw-pySlsaAyu= Mv1OQKsbVtR&si=3DQvqqsmJbsbCetVWi">#5</a>=2C and <a href=3D"
https://youtub= e.com/playlist?list=3DPLXz1MhBqAGJw9nmtpC7USh-s9Ku6KiFCf&si=3DFRNSy7ZMAvzu= 6x2F">#6</a>. Well worth watching.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg2"><a name=3D"cg2">A=
I and Voter Engagement</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/ai-and-voter-= engagement.html"><strong>[2025.11.18]</strong></a> Social media has been=
a familiar=2C even mundane=2C part of life for nearly two decades. It can=
be easy to forget it was not always that way.</p>
<p>In 2008=2C social media was just emerging into the mainstream. <a href= =3D"
https://thefulcrum.us/media-technology/news-literacy-project">Facebook=
</a> reached <a href=3D"
https://www.cnet.com/culture/facebook-hits-100-mil=
lion-users/">100 million users</a> that summer. And a singular candidate w=
as integrating social media into his political campaign: Barack Obama. His=
campaign=E2=80=99s use of social media was so bracingly innovative=2C so=
impactful=2C that it was viewed by journalist <a href=3D"
https://www.tech= nologyreview.com/2008/08/19/219185/how-obama-really-did-it-2/">David Talbo= t</a> and others as the strategy that enabled the first term Senator to wi=
n the White House.</p>
<p>Over the past few years=2C a new technology has become mainstream: <a h= ref=3D"
https://thefulcrum.us/the-new-world-of-ai">AI</a>. But still=2C no=
candidate has unlocked AI=E2=80=99s potential to revolutionize political=
campaigns. Americans have three more years to wait before casting their b= allots in another Presidential election=2C but we can look at the 2026 mid= terms and examples from around the globe for signs of how that breakthroug=
h might occur.</p>
<h3 style=3D"font-size:110%;font-weight:bold">How Obama Did It</h3>
<p>Rereading the contemporaneous reflections of the <em>New York Times=E2= =80=99</em> late media critic=2C <a href=3D"
https://www.nytimes.com/2008/1= 1/10/business/media/10carr.html">David Carr</a>=2C on Obama=E2=80=99s camp= aign reminds us of just how new social media felt in 2008. Carr positions=
it within a now-familiar lineage of revolutionary communications technolo= gies from newspapers to radio to television to the internet.</p>
<p>The Obama campaign and administration demonstrated that social media wa=
s different from those earlier communications technologies=2C including th=
e pre-social internet. Yes=2C <a href=3D"
https://www.pewresearch.org/inter= net/2009/04/15/the-internets-role-in-campaign-2008/">increasing numbers</a=
of voters were getting their news from the internet=2C and content about=
the then-Senator sometimes made a splash by going <a href=3D"
https://time= =2Ecom/archive/6681494/obamas-viral-marketing-campaign/">viral</a>. But thos=
e were still broadcast communications: one voice reaching many. Obama foun=
d ways to connect voters to each other.</p>
<p>In describing what social media revolutionized in campaigning=2C Carr q= uotes campaign vendor Blue State Digital=E2=80=99s Thomas Gensemer: =E2=80= =9CPeople will continue to expect a conversation=2C a two-way relationship=
that is a give and take.=E2=80=9D</p>
<p>The Obama team made some earnest efforts to realize this vision. His tr= ansition team launched <a href=3D"
http://change.gov">change.gov</a>=2C the=
website where the campaign collected a =E2=80=9CCitizen=E2=80=99s Briefin=
g Book=E2=80=9D of public comment. Later=2C his administration built <a hr= ef=3D"
https://obamawhitehouse.archives.gov/blog/2015/07/23/look-back-we-pe= ople-petitions-2010-today">We the People</a>=2C an online petitioning plat= form.</p>
<p>But the lasting legacy of Obama=E2=80=99s 2008 campaign=2C as political=
scientists Hahrie Han and Elizabeth McKenna chronicled=2C was pioneering=
online =E2=80=9C<a href=3D"
https://www.bostonreview.net/articles/learning= -from-obamas-campaign/">relational organizing</a>.=E2=80=9D This technique=
enlisted individuals as organizers to activate their friends in a self-pe= rpetuating web of relationships.</p>
<p>Perhaps because of the Obama campaign=E2=80=99s close association with=
the method=2C relational organizing has been touted repeatedly as the lin= chpin of Democratic campaigns: in <a href=3D"
https://www.wired.com/story/r= elational-organizing-apps-2020-campaign/">2020</a>=2C <a href=3D"
https://w= ww.cbsnews.com/news/harris-trump-2024-election-ground-game/">2024</a>=2C a=
nd <a href=3D"
https://thedemocraticstrategist.org/2025/03/can-relational-o= rganizing-save-the-democratic-party/">today</a>. But <a href=3D"
https://ww= w.turnoutnation.org/thereport">research</a> by non-partisan groups like <a=
href=3D"
https://www.turnoutnation.org/thereport">Turnout Nation</a> and r= ight-aligned groups like the <a href=3D"
https://www.campaigninnovation.org= /research/measuring-the-power-of-personal-connection-a-relational-organizi= ng-field-test">Center for Campaign Innovation</a> has also empirically val= idated the effectiveness of the technique for inspiring voter turnout with=
in connected groups.</p>
<p>The Facebook of 2008 worked well for relational organizing. It gave use=
rs tools to connect and promote ideas to the people they know: college cla= ssmates=2C neighbors=2C friends from work or church. But the nature of soc=
ial networking has changed since then.</p>
<p>For the past decade=2C according to <a href=3D"
https://www.pewresearch.= org/internet/2024/01/31/americans-social-media-use/">Pew Research</a>=2C F= acebook use has stalled and lagged behind YouTube=2C while Reddit and TikT=
ok have surged. These platforms are less useful for relational organizing=
=2C at least in the traditional sense. YouTube is organized more like broa= dcast television=2C where content creators produce content disseminated on=
their own channels in a largely one-way communication to their fans. Redd=
it gathers users worldwide in forums (subreddits) organized primarily on t= opical interest. The endless feed of TikTok=E2=80=99s =E2=80=9CFor You=E2= =80=9D page disseminates engaging content with little ideological or socia=
l commonality. None of these platforms shares the essential feature of Fac= ebook c. 2008: an organizational structure that emphasizes direct connecti=
on to people that users have direct social influence over.</p>
<h3 style=3D"font-size:110%;font-weight:bold">AI and Relational Organizing= </h3>
<p>Ideas and messages might spread virally through modern social channels=
=2C but they are not where you convince your friends to show up at a campa=
ign rally. Today=E2=80=99s platforms are spaces for <a href=3D"
https://the= harvardpoliticalreview.com/political-hobbyism-young-volunteers/">political=
hobbyism</a>=2C where you express your political feelings and see others=
express theirs.</p>
<p>Relational organizing works when one person=E2=80=99s action inspires o= thers to do this same. That=E2=80=99s inherently a chain of human-to-human=
connection. If my AI assistant inspires your AI assistant=2C no human not= ices and one=E2=80=99s vote changes. But key steps in the human chain can=
be assisted by AI. Tell your phone=E2=80=99s AI assistant to <a href=3D"h= ttps://www.washingtonpost.com/technology/2025/03/26/best-ai-email-assistan= t/">craft a personal message</a> to one friend -- or a hundred -- and it c=
an do it.</p>
<p>So if a campaign hits you at the right time with the right message=2C t=
hey might persuade you to task your AI assistant to ask your friends to do= nate or volunteer. The result can be something more than a form letter; it=
could be automatically drafted based on the entirety of your email or tex=
t correspondence with that friend. It could include references to your dis= cussions of recent events=2C or past campaigns=2C or shared personal exper= iences. It could sound as authentic as if you=E2=80=99d written it from th=
e heart=2C but scaled to everyone in your address book.</p>
<p><a href=3D"
https://www.pnas.org/doi/10.1073/pnas.2412815122">Research</=
suggests that AI can generate and perform written political messaging a=
bout as well as humans. AI will surely play a <a href=3D"
https://prospect.= org/power/2025-10-10-ai-artificial-intelligence-campaigns-midterms/">tacti=
cal role</a> in the 2026 midterm campaigns=2C and some candidates may even=
use it for relational organizing in this way.</p>
<h3 style=3D"font-size:110%;font-weight:bold">(Artificial) Identity Politi= cs</h3>
<p>For AI to be truly transformative of politics=2C it must change the way=
campaigns work. And we are starting to see that in the US.</p>
<p>The earliest uses of AI in American political campaigns are=2C to be po= lite=2C uninspiring. Candidates viewed them as just <a href=3D"
https://www= =2Enytimes.com/2023/03/28/us/politics/artificial-intelligence-2024-campaigns= =2Ehtml">another tool</a> to optimize an endless stream of email and text me= ssage appeals=2C to ramp up political <a href=3D"
https://www.politico.com/= live-updates/2025/09/29/congress/trump-ai-video-deepfake-schumer-jeffries-= 00586048">vitriol</a>=2C to <a href=3D"
https://www.politico.com/news/2024/= 10/30/data-voters-political-violence-00186132">harvest data</a> on voters=
and donors=2C or merely as a <a href=3D"
https://www.theverge.com/2023/1/2= 7/23574000/first-ai-chatgpt-written-speech-congress-floor-jake-auchincloss= ">stunt</a>.</p>
<p>Of course=2C we have seen the rampant production and spread of AI-power=
ed deepfakes and <a href=3D"
https://thefulcrum.us/media-technology/news-li= teracy-project">misinformation</a>. This is already impacting the key 2026=
Senate races=2C which are likely to attract <a href=3D"
https://www.opense= crets.org/elections-overview/most-expensive-races">hundreds of millions</a=
of dollars in financing. <a href=3D"https://www.charlotteobserver.com/op=
inion/article311852771.html">Roy Cooper</a>=2C Democratic candidate for US=
Senate from North Carolina=2C and <a href=3D"
https://www.yahoo.com/news/a= rticles/us-senate-candidate-el-sayed-034045043.html?guccounter=3D1&guce_re= ferrer=3DaHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=3DAQAAALAUtriqW= 9v5IScANZLgCTf0zNOLsydt31He4PT4kt0k9RenTEqSTrC9cIdtQRGFvZQ8zHeMs-97cUvacWc= G_e4X9h2MdbCfNN_2-K8z3D1PMtBbTc8Wir-A8QHlALBo-vd2Kq9wDDvHrMraYXDiitdW2DB3Y= sr8vPIyCpqF1ts3">Abdul El-Sayed</a>=2C Democratic candidate for Senate fro=
m Michigan=2C were both targeted by viral deepfake attacks in recent month=
s. This may <a href=3D"
https://www.nbcnews.com/tech/internet/truth-social-= trump-embraced-ai-media-attack-foes-boost-image-rcna234978">reflect</a> a=
growing trend in Donald Trump=E2=80=99s Republican party in the use of AI= -generated imagery to build up GOP candidates and assail the opposition.</=
<p>And yet=2C in the global elections of 2024=2C AI was used more <a href= =3D"
https://restofworld.org/2025/global-elections-ai-use/">memetically</a>=
than deceptively. So far=2C conservative and far right parties seem to ha=
ve adopted this most aggressively. The ongoing rise of Germany=E2=80=99s f= ar-right populist AfD party has been credited to its use of AI to generate=
<a href=3D"
https://www.politico.eu/article/germany-far-right-harness-arti= ficial-intelligence-win-election/">nostalgic and evocative</a> (and=2C to=
many=2C offensive) campaign images=2C videos=2C and music and=2C seemingl=
y as a result=2C they have <a href=3D"
https://www.kas.de/documents/d/guest= /ki-und-wahlen-1">dominated TikTok</a>. Because most social platforms=E2= =80=99 algorithms are tuned to reward media that generates an emotional re= sponse=2C this counts as a double use of AI: to generate content and to ma= nipulate its distribution.</p>
<p>AI can also be used to generate politically useful=2C though artificial=
=2C identities. These identities can fulfill different roles than humans i=
n campaigning and <a href=3D"
https://thefulcrum.us/corruption/corruption-p= erception-index-2023-2667125422">governance</a> because they have differen= tiated traits. They can=E2=80=99t be imprisoned for speaking out against t=
he state=2C can be positioned (legitimately or not) as unsusceptible to br= ibery=2C and can be forced to show up when humans will not.</p>
<p>In <a href=3D"
https://www.theguardian.com/world/article/2024/aug/27/ven= ezuela-journalists-nicolas-maduro-artificial-intelligence-media-election">= Venezuela</a>=2C journalists have turned to AI avatars -- artificial newsr= eaders -- to report anonymously on issues that would otherwise elicit gove= rnment retaliation. Albania recently =E2=80=9C<a href=3D"
https://www.bbc.c= om/news/articles/cm2znzgwj3xo">appointed</a>=E2=80=9D an AI to a ministeri=
al post responsible for procurement=2C claiming that it would be less vuln= erable to bribery than a human. In Virginia=2C both in <a href=3D"
https://= www.reuters.com/world/us/virginia-congressional-candidate-creates-ai-chatb= ot-debate-stand-in-incumbent-2024-10-08/">2024</a> and again <a href=3D"ht= tps://www.governing.com/politics/a-fake-debate-in-virginia-raises-real-que= stions-about-ai-in-politics">this year</a>=2C candidates have used AI avat=
ars as artificial stand-ins for opponents that refused to debate them.</p>
<p>And yet=2C none of these examples=2C whether positive or negative=2C pu= rsue the promise of the Obama campaign: to make voter engagement a =E2=80= =9Ctwo-way conversation=E2=80=9D on a massive scale.</p>
<p>The closest so far to fulfilling that vision anywhere in the world may=
be Japan=E2=80=99s new political party=2C <a href=3D"
https://team-mir.ai"= >Team Mirai</a>. It started in 2024=2C when an independent Tokyo gubernato= rial candidate=2C <a href=3D"
https://futurepolis.substack.com/p/meet-your-= ai-politician-of-the-future">Anno Takahiro</a>=2C used an AI avatar on You= Tube to respond to 8=2C600 constituent questions over a seventeen-day cont= inuous livestream. He collated hundreds of comments on his campaign manife=
sto into a revised policy platform. While he didn=E2=80=99t win his race=
=2C he shot up to a <a href=3D"
https://en.wikipedia.org/wiki/2024_Tokyo_gu= bernatorial_election#Results">fifth place</a> finish among a record 56 can= didates.</p>
<p>Anno was RECENTLY <a href=3D"
https://mainichi.jp/english/articles/20250= 720/p2a/00m/0na/011000c">elected</a> to the upper house of the federal leg= islature as the founder of a new party with a <a href=3D"
https://note.com/= annotakahiro24/n/nd648962bd411">100 day plan</a> to bring his vision of a=
=E2=80=9Cpublic listening AI=E2=80=9D to the whole country. In the early=
stages of that plan=2C they=E2=80=99ve invested their share of Japan=E2= =80=99s 32 billion yen in <a href=3D"
https://www.nippon.com/en/japan-data/= h02362/">party grants</a> -- public subsidies for political parties -- to=
hire engineers building digital civic infrastructure for Japan. They=E2= =80=99ve already created platforms to provide <a href=3D"
https://marumie.t= eam-mir.ai/o/team-mirai">transparency</a> for party expenditures=2C and to=
use AI to make <a href=3D"
https://gikai.team-mir.ai/">legislation</a> in=
the Diet easy=2C and are meeting with engineers from US-based Jigsaw Labs=
(a Google company) to <a href=3D"
https://note.com/team_mirai_jp/n/n0bbbcc= 21c752">learn from international examples</a> of how AI can be used to pow=
er participatory democracy.</p>
<p>Team Mirai has yet to prove that it can get a second member elected to=
the Japanese Diet=2C let alone to win substantial power=2C but they=E2=80= =99re innovating and demonstrating new ways of using AI to give people a w=
ay to participate in politics that we believe is likely to spread.</p>
<h3 style=3D"font-size:110%;font-weight:bold">Organizing with AI</h3>
<p>AI could be used in the US in similar ways. Following American federali= sm=E2=80=99s longstanding model of =E2=80=9Claboratories of democracy=2C= =E2=80=9D we expect the most aggressive campaign innovation to happen at t=
he state and local level.</p>
<p>D.C. Mayor Muriel Bowser is <a href=3D"
https://www.govtech.com/artifici= al-intelligence/washington-d-c-will-pilot-ai-at-public-listening-session">= partnering</a> with MIT and Stanford labs to use the AI-based tool <a href= =3D"
http://deliberation.io">deliberation.io</a> to capture wide scale publ=
ic feedback in city policymaking about AI. Her administration <a href=3D"h= ttps://octo.dc.gov/release/bowser-administration-announces-first-its-kind-= ai-pilot-program-new-platform-mit-governance">said</a> that using AI in th=
is process allows =E2=80=9Cthe District to better solicit public input to=
ensure a broad range of perspectives=2C identify common ground=2C and cul= tivate solutions that align with the public interest.=E2=80=9D</p>
<p>It remains to be seen how central this will become to Bowser=E2=80=99s=
<a href=3D"
https://www.nbcwashington.com/news/politics/bowser-future-lega= cy-chuck-todd/3992167/">expected</a> re-election campaign in 2026=2C but t=
he technology has legitimate potential to be a prominent part of a broader=
program to rebuild trust in government. This is a trail blazed by Taiwan=
a decade ago. The <a href=3D"
https://www.theguardian.com/world/article/20= 24/aug/17/audrey-tang-toxic-social-media-fake-news-taiwan-trans-government= -internet">vTaiwan</a> initiative showed how digital tools like <a href=3D= "
https://pol.is/home">Pol.is</a>=2C which uses <a href=3D"
https://compdemo= cracy.org/Analysis/">machine learning</a> to make sense of real time const= ituent feedback=2C can scale participation in democratic processes and rad= ically improve trust in government. Similar AI listening processes have be=
en used in <a href=3D"
https://www.pbs.org/newshour/show/how-a-kentucky-com= munity-is-using-ai-to-help-people-find-common-ground">Kentucky</a>=2C <a h= ref=3D"
https://about.make.org/articles-en/citizens-convention-on-end-of-li= fe-with-make-org-the-esec-offers-an-innovative-ai-platform-to-enable-the-g= eneral-public-and-parliamentarians-to-take-greater-ownership-of-the-debate= s-held-by-citizens">France</a>=2C and <a href=3D"
https://compdemocracy.org= /Case-studies/2018-germany-aufstehen/">Germany</a>.</p>
<p>Even if campaigns like Bowser=E2=80=99s don=E2=80=99t adopt this kind o=
f AI-facilitated listening and dialog=2C expect it to be an increasingly p= rominent part of American public debate. Through a partnership with Jigsaw=
=2C Scott Rasmussen=E2=80=99s Napolitan Institute will use AI to elicit an=
d synthesize the views of at least five Americans from every Congressional=
district in a project called =E2=80=9C<a href=3D"
https://www.forbes.com/s= ites/richardnieva/2025/08/14/inside-googles-plan-to-use-ai-to-survey-ameri= cans-on-their-political-views/">We the People</a>.=E2=80=9D Timed to coinc=
ide with the country=E2=80=99s 250th anniversary in 2026=2C expect the res= ults to be promoted during the heat of the midterm campaign and to stoke i= nterest in this kind of AI-assisted political sensemaking.</p>
<p>In the year when we celebrate the American republic=E2=80=99s semiquincentennial and continue a decade-long debate about whether or not Donald Trump and the Republican party remade in his image are fighting for the interests of the working class=2C representation will be on the ballot in 2026. Midterm election candidates will look for any way they can get an edge.=
For all the risks it poses to democracy=2C AI presents a real opportunity=
=2C too=2C for politicians to engage voters en masse while factoring their=
input into their platform and message. Technology isn=E2=80=99t going to=
turn an uninspiring candidate into Barack Obama=2C but it gives any aspir=
ant to office the capability to try to realize the promise that swept him=
into office.</p>
<p><em>This essay was written with Nathan E. Sanders=2C and originally app= eared in <a href=3D"
https://thefulcrum.us/media-technology/artificial-inte= lligence-in-politics">The Fulcrum</a>.</em></p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg3"><a name=3D"cg3">L= egal Restrictions on Vulnerability Disclosure</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/legal-restric= tions-on-vulnerability-disclosure.html"><strong>[2025.11.19]</strong></a>=
Kendra Albert gave an <a href=3D"
https://www.youtube.com/watch?v=3DlUe3uU= vIyT0">excellent talk</a> at USENIX Security this year=2C pointing out tha=
t the legal agreements surrounding vulnerability disclosure muzzle researc= hers while allowing companies to not fix the vulnerabilities -- exactly th=
e opposite of what the responsible disclosure movement of the early 2000s=
was supposed to prevent. This is the talk.</p>
<blockquote><p>Thirty years ago=2C a debate raged over whether vulnerabili=
ty disclosure was good for computer security. On one side=2C full disclosu=
re advocates argued that software bugs weren=E2=80=99t getting fixed and wouldn=E2=80=99t get fixed if companies that made insecure software weren=E2=80=99t called out publicly. On the other side=2C companies argued that fu=
ll disclosure led to exploitation of unpatched vulnerabilities=2C especial=
ly if they were hard to fix. After blog posts=2C public debates=2C and cou= ntless mailing list flame wars=2C there emerged a compromise solution: coo= rdinated vulnerability disclosure=2C where vulnerabilities were disclosed=
after a period of confidentiality where vendors can attempt to fix things=
=2E Although full disclosure fell out of fashion=2C disclosure won and secur= ity through obscurity lost. We=E2=80=99ve lived happily ever after since.<=
<p>Or have we? The move towards paid bug bounties and the rise of platform=
s that manage bug bounty programs for security teams has changed the reali=
ty of disclosure significantly. In certain cases=2C these programs require=
agreement to contractual restrictions. Under the status quo=2C that means=
that software companies sometimes funnel vulnerabilities into bug bounty=
management platforms and then condition submission on confidentiality agr= eements that can prohibit researchers from ever sharing their findings.</p=
<p>In this talk=2C I=E2=80=99ll explain how confidentiality requirements f=
or managed bug bounty programs restrict the ability of those who attempt t=
o report vulnerabilities to share their findings publicly=2C compromising=
the bargain at the center of the CVD process. I=E2=80=99ll discuss what c= ontract law can tell us about how and when these restrictions are enforcea= ble=2C and more importantly=2C when they aren=E2=80=99t=2C providing advic=
e to hackers around how to understand their legal rights when submitting.=
Finally=2C I=E2=80=99ll call upon platforms and companies to adapt their=
practices to be more in line with the original bargain of coordinated vul= nerability disclosure=2C including by banning agreements that require non-= disclosure.</p></blockquote>
<p>And <a href=3D"
https://www.schneier.com/essays/archives/2007/01/schneie= r_full_disclo.html">this</a> is me from 2007=2C talking about =E2=80=9Cres= ponsible disclosure=E2=80=9D:</p>
<blockquote><p>This was a good idea -- and these days it=E2=80=99s normal=
procedure -- but one that was possible only because full disclosure was t=
he norm. And it remains a good idea only as long as full disclosure is the=
threat.</p></blockquote>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg4"><a name=3D"cg4">S=
cam USPS and E-Z Pass Texts and Websites</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/scam-usps-and= -e-z-pass-texts-and-websites.html"><strong>[2025.11.20]</strong></a> Goog=
le has filed a complaint in court that <a href=3D"
https://arstechnica.com/= tech-policy/2025/11/google-vows-to-stop-scam-e-z-pass-and-usps-texts-plagu= ing-americans/">details the scam</a>:</p>
<blockquote><p>In a complaint filed Wednesday=2C the tech giant accused=
=E2=80=9Ca cybercriminal group in China=E2=80=9D of selling =E2=80=9Cphis= hing for dummies=E2=80=9D kits. The kits help unsavvy fraudsters easily=
=E2=80=9Cexecute a large-scale phishing campaign=2C=E2=80=9D tricking hor=
des of unsuspecting people into =E2=80=9Cdisclosing sensitive information=
like passwords=2C credit card numbers=2C or banking information=2C often=
by impersonating well-known brands=2C government agencies=2C or even peop=
le the victim knows.=E2=80=9D</p>
<p>These branded =E2=80=9CLighthouse=E2=80=9D kits offer two versions of s= oftware=2C depending on whether bad actors want to launch SMS and e-commer=
ce scams. =E2=80=9CMembers may subscribe to weekly=2C monthly=2C seasonal=
=2C annual=2C or permanent licenses=2C=E2=80=9D Google alleged. Kits inclu=
de =E2=80=9Chundreds of templates for fake websites=2C domain set-up tools=
for those fake websites=2C and other features designed to dupe victims in=
to believing they are entering sensitive information on a legitimate websi= te.=E2=80=9D</p>
<p>Google=E2=80=99s filing said the scams often begin with a text claiming=
that a toll fee is overdue or a small fee must be paid to redeliver a pac= kage. Other times they appear as ads -- sometimes even Google ads=2C until=
Google detected and suspended accounts -- luring victims by mimicking pop= ular brands. Anyone who clicks will be redirected to a website to input se= nsitive information; the sites often claim to accept payments from trusted=
wallets like Google Pay.</p></blockquote>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg5"><a name=3D"cg5">A=
I as Cyberattacker</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/ai-as-cyberat= tacker.html"><strong>[2025.11.21]</strong></a> From <a href=3D"
https://ww= w.anthropic.com/news/disrupting-AI-espionage">Anthropic</a>:</p>
<blockquote><p>In mid-September 2025=2C we detected suspicious activity th=
at later investigation determined to be a highly sophisticated espionage c= ampaign. The attackers used AI=E2=80=99s =E2=80=9Cagentic=E2=80=9D capabil= ities to an unprecedented degree -- using AI not just as an advisor=2C but=
to execute the cyberattacks themselves.</p>
<p>The threat actor -- whom we assess with high confidence was a Chinese s= tate-sponsored group -- manipulated our Claude Code tool into attempting i= nfiltration into roughly thirty global targets and succeeded in a small nu= mber of cases. The operation targeted large tech companies=2C financial in= stitutions=2C chemical manufacturing companies=2C and government agencies.=
We believe this is the first documented case of a large-scale cyberattack=
executed without substantial human intervention.</p>
<p>[...]</p>
<p>The attack relied on several features of AI models that did not exist=
=2C or were in much more nascent form=2C just a year ago:</p>
<ol><li><i>Intelligence</i>. Models=E2=80=99 general levels of capability=
have increased to the point that they can follow complex instructions and=
understand context in ways that make very sophisticated tasks possible. N=
ot only that=2C but several of their well-developed specific skills -- in=
particular=2C software coding -- lend themselves to being used in cyberat= tacks.
</li><li><i>Agency</i>. Models can act as agents -- that is=2C they can ru=
n in loops where they take autonomous actions=2C chain together tasks=2C a=
nd make decisions with only minimal=2C occasional human input.
</li><li><i>Tools</i>. Models have access to a wide array of software tool=
s (often via the open standard Model Context Protocol). They can now searc=
h the web=2C retrieve data=2C and perform many other actions that were pre= viously the sole domain of human operators. In the case of cyberattacks=2C=
the tools might include password crackers=2C network scanners=2C and othe=
r security-related software.</li></ol></blockquote>
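<p>The "agency" and "tools" items in the list above describe the now-standard agent loop: the model either requests a tool call or returns an answer, an orchestrator executes the requested tool and feeds the result back, and the cycle repeats until the task is done. Here is a minimal sketch of that pattern in Python; call_model() is a scripted stand-in of mine, not Anthropic's API, and the single toy tool stands in for the scanners and crackers mentioned above.</p>
<pre>
# Minimal sketch of the agent-loop pattern described in the quote above.
# call_model() is a scripted stand-in for a real LLM API; the tool is a toy.

TOOLS = {
    "search_web": lambda query: "(fake search results for " + query + ")",
}

def call_model(messages, tools):
    # Stand-in for a real model call: request one tool use, then answer.
    used_tool = any(m["role"] == "tool" for m in messages)
    if not used_tool:
        return {"tool": "search_web", "args": {"query": messages[0]["content"]}}
    return {"content": "final answer, informed by the tool results"}

def run_agent(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages, tools=list(TOOLS))
        if "tool" in reply:                               # the model chose to act
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": str(result)})
            continue                                      # the model sees the result next turn
        return reply["content"]                           # the model is done
    return "step budget exhausted"

print(run_agent("summarize recent security advisories"))
</pre>
<p>The point is how little scaffolding this takes: the orchestration loop is trivial, and everything interesting lives in the model and in whichever tools it is handed.</p>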
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg6"><a name=3D"cg6">M=
ore on <i>Rewiring Democracy</i></a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/71226.html"><= strong>[2025.11.21]</strong></a> It=E2=80=99s been a month since <a href= =3D"
https://www.schneier.com/books/rewiring-democracy/"><i>Rewiring Democr= acy: How AI Will Transform Our Politics=2C Government=2C and Citizenship</= i></a> was published. From what we know=2C sales are good.</p>
<p>Some of the book=E2=80=99s forty-three chapters are available online: c= hapters <a href=3D"
https://time.com/7331883/how-ai-will-transform-democrac= y/">2</a>=2C <a href=3D"
https://pghrev.com/being-a-politician/">12</a>=2C=
<a href=3D"
https://thepreamble.com/p/rewiring-democracy">28</a>=2C <a hre= f=3D"
https://newpublic.substack.com/p/2ddffc17-a033-4f98-83fa-11376b30c6cd= ">34</a>=2C <a href=3D"
https://ai-frontiers.org/articles/ai-will-be-your-p= ersonal-political-proxy">38</a>=2C and <a href=3D"
https://builtin.com/arti= cles/principles-ai-democracy">41</a>.</p>
<p>We need more reviews -- six on Amazon is <a href=3D"
https://www.amazon.= com/Rewiring-Democracy-Transform-Government-Citizenship/dp/0262049945">not=
enough</a>=2C and no one has yet posted a viral TikTok review. One review=
was <a href=3D"
https://www.nature.com/articles/d41586-025-03718-w">publis= hed</a> in <i>Nature</i> and another on the RSA Conference <a href=3D"http= s://www.rsaconference.com/library/blog/bens-book-of-the-month-rewiring-dem= ocracy">website</a>=2C but more would be better. If you=E2=80=99ve read th=
e book=2C please leave a review somewhere.</p>
<p>My coauthor and I have been doing all sorts of book events=2C both onli=
ne and in person. This <a href=3D"
https://www.youtube.com/watch?v=3Dgy-w4C= 6vfOc">book event</a>=2C with Danielle Allen at the Harvard Kennedy School=
Ash Center=2C is particularly good. We also have been doing a ton of podc= asts=2C both separately and together. They=E2=80=99re all on the book=E2= =80=99s <a href=3D"
https://www.schneier.com/books/rewiring-democracy/">hom= epage</a>.</p>
<p>There are two live book events in December. If you=E2=80=99re in Boston=
=2C come <a href=3D"
https://mitmuseum.mit.edu/programs/author-talk-rewirin= g-democracy-how-ai-will-transform-our-politics-government-and-citizenship"= >see us</a> at the MIT Museum on 12/1. If you=E2=80=99re in Toronto=2C you=
can <a href=3D"
https://munkschool.utoronto.ca/event/rewiring-democracy">s=
ee me</a> at the Munk School at the University of Toronto on 12/2.</p>
<p>I=E2=80=99m also doing a live AMA on the book on the RSA Conference web= site on 12/16. Register <a href=3D"
https://rsaconference.registration.gold= cast.io/events/3c67940f-c22b-4913-b6bf-1e6ba333ac5e">here</a>.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg7"><a name=3D"cg7">I=
ACR Nullifies Election Because of Lost Decryption Key</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/iacr-nullifie= s-election-because-of-lost-decryption-key.html"><strong>[2025.11.24]</str= ong></a> The International Association for Cryptologic Research -- the academic cryptography association that=E2=80=99s been putting on conferences like=
Crypto (back when =E2=80=9Ccrypto=E2=80=9D meant =E2=80=9Ccryptography=E2= =80=9D) and Eurocrypt since the 1980s -- had to <a href=3D"
https://www.iac= r.org/news/item/27138">nullify</a> an online election when trustee Moti Yu=
ng lost his decryption key.</p>
<blockquote><p>For this election and in accordance with the bylaws of the=
IACR=2C the three members of the IACR 2025 Election Committee acted as in= dependent trustees=2C each holding a portion of the cryptographic key mate= rial required to jointly decrypt the results. This aspect of Helios=E2=80=
=99 design ensures that no two trustees could collude to determine the out= come of an election or the contents of individual votes on their own: all=
trustees must provide their decryption shares.</p>
<p>Unfortunately=2C one of the three trustees has irretrievably lost their=
private key=2C an honest but unfortunate human mistake=2C and therefore c= annot compute their decryption share. As a result=2C Helios is unable to c= omplete the decryption process=2C and it is technically impossible for us=
to obtain or verify the final outcome of this election.</p></blockquote>
<p>The group will redo the election=2C but this time setting a 2-of-3 threshold scheme for decrypting the results=2C instead of requiring all three.</p>
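<p>The underlying idea is threshold secret sharing: encode the secret as the constant term of a random polynomial of degree k-1 over a prime field, give each trustee one point on that polynomial, and any k points recover the secret by Lagrange interpolation. Here is a minimal 2-of-3 sketch in Python; it is illustrative only, not Helios's actual trustee protocol, which does threshold decryption rather than reconstructing a single key.</p>
<pre>
# Minimal 2-of-3 Shamir secret sharing over a prime field (illustrative only).
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split(secret, n=3, k=2):
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=3, k=2)
assert reconstruct(shares[:2]) == 123456789    # any two trustees suffice
assert reconstruct([shares[0], shares[2]]) == 123456789
</pre>
<p>With a 2-of-3 threshold, losing any single share is recoverable; with the all-three arrangement the IACR used, one lost share is fatal, which is exactly what happened.</p>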
<p><a href=3D"
https://arstechnica.com/security/2025/11/cryptography-group-= cancels-election-results-after-official-loses-secret-key/">News</a> <a hre= f=3D"
https://www.nytimes.com/2025/11/21/world/cryptography-group-lost-elec= tion-results.html?smid=3Dnytcore-android-share">articles</a>.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg8"><a name=3D"cg8">F=
our Ways AI Is Being Used to Strengthen Democracies Worldwide</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/four-ways-ai-= is-being-used-to-strengthen-democracies-worldwide.html"><strong>[2025.11.= 25]</strong></a> Democracy is colliding with the technologies of artificia=
l intelligence. Judging from the audience reaction at the recent <a href= =3D"
https://www.coe.int/en/web/world-forum-democracy">World Forum for Democracy</a> in Strasbourg=2C the general expectation is that democracy will b=
e the worse for it. We have another narrative. Yes=2C there are risks to d= emocracy from AI=2C but there are also opportunities.</p>
<p>We have just published the book <a href=3D"
https://mitpress.mit.edu/978= 0262049948/rewiring-democracy/">Rewiring Democracy: How AI will Transform=
Politics=2C Government=2C and Citizenship</a><em>.</em> In it=2C we take=
a clear-eyed view of how AI is undermining confidence in our information=
ecosystem=2C how the use of biased AI can harm constituents of democracie=
s and how elected officials with authoritarian tendencies can use it to co= nsolidate power. But we also give positive examples of how AI is transform=
ing democratic governance and politics for the better.</p>
<p>Here are four such stories unfolding right now around the world=2C show=
ing how AI is being used by some to make democracy better=2C stronger=2C a=
nd more responsive to people.</p>
<h3 style=3D"font-size:110%;font-weight:bold">Japan</h3>
<p>Last year=2C then 33-year-old engineer Takahiro Anno was a fringe candi= date for governor of Tokyo. Running as an independent candidate=2C he ende=
d up coming in fifth in a crowded <a href=3D"
https://www.nytimes.com/2024/= 07/06/world/asia/tokyo-governors-election.html">field of 56</a>=2C largely=
thanks to the unprecedented use of an authorized AI avatar. That avatar a= nswered <a href=3D"
https://futurepolis.substack.com/p/meet-your-ai-politic= ian-of-the-future">8=2C600 questions from voters</a> on a 17-day continuou=
s YouTube livestream and garnered the attention of campaign innovators wor= ldwide.</p>
<p>Two months ago=2C Anno-san was <a href=3D"
https://mainichi.jp/english/a= rticles/20250722/p2a/00m/0na/016000c">elected</a> to Japan=E2=80=99s upper=
legislative chamber=2C again leveraging the power of AI to engage constit= uents -- this time answering <a href=3D"
https://note-com.translate.goog/an= notakahiro24/n/n4ec669d391dd?_x_tr_sl=3Dja&_x_tr_tl=3Den&_x_tr_hl=3Den&_x_= tr_pto=3Dsc&_x_tr_hist=3Dtrue">more than 20=2C000 questions</a>. His new p= arty=2C Team Mirai=2C is also an AI-enabled civic technology shop=2C produ= cing software aimed at making governance better and more participatory. Th=
e party is leveraging its share of Japan=E2=80=99s public funding for poli= tical parties to build the <a href=3D"
https://note.com/team_mirai_jp/n/nd1= 656aa5f86d">Mirai Assembly</a> app=2C enabling constituents to express opi= nions on and ask questions about bills in the legislature=2C and to organi=
ze those expressions using AI. The party promises that its members will <a=
href=3D"
https://globalnation.inquirer.net/291183/team-mirai-in-spotlight-= with-aim-to-update-democracy-with-tech">direct their questioning</a> in co= mmittee hearings based on public input.</p>
<h3 style=3D"font-size:110%;font-weight:bold">Brazil</h3>
<p>Brazil is <a href=3D"
https://www.npr.org/sections/parallels/2014/11/05/= 359830235/brazil-the-land-of-many-lawyers-and-very-slow-justice">notorious=
ly litigious</a>=2C with even more lawyers per capita than the US. The cou=
rts are chronically overwhelmed with cases and the resultant backlog costs=
the government billions to process. Estimates are that the Brazilian fede=
ral government spends about 1.6% of GDP per year <a href=3D"
https://www1.f= olha.uol.com.br/internacional/en/business/2024/01/brazil-leads-spending-on= -courts-among-53-countries.shtml">operating the courts</a> and another 2.5=
% to 3% of GDP issuing <a href=3D"
https://valorinternational.globo.com/eco= nomy/news/2025/04/11/federal-court-losses-already-consume-25percent-of-gdp= -annually.ghtml">court-ordered payments</a> from lawsuits the government h=
as lost.</p>
<p>Since at least 2019=2C the Brazilian government has <a href=3D"
https://= www.techandjustice.bsg.ox.ac.uk/research/brazil">aggressively adopted</a>=
AI to automate procedures throughout its judiciary. AI is not making judi= cial decisions=2C but aiding in distributing caseloads=2C performing legal=
research=2C transcribing hearings=2C identifying duplicative filings=2C p= reparing initial orders for signature and clustering similar cases for joi=
nt consideration: all things to make the judiciary system work more effici= ently. And the results are significant; Brazil=E2=80=99s federal supreme c= ourt backlog=2C for example=2C dropped in 2025 to its <a href=3D"
https://n= oticias-stf-wp-prd.s3.sa-east-1.amazonaws.com/wp-content/uploads/wpallimpo= rt/uploads/2025/07/01191513/PRESTACAO-JURISDICIONAL-2025-4.pdf">lowest lev=
els in 33 years</a>.</p>
<p>While it seems clear that the courts are realizing efficiency benefits=
from leveraging AI=2C there is a postscript to the courts=E2=80=99 AI imp= lementation project over the past five-plus years: the litigators are usin=
g these tools=2C too. Lawyers are using AI assistance to file cases in Bra= zilian courts at an <a href=3D"
https://restofworld.org/2025/latin-america-= judges-ai-crimes/">unprecedented rate</a>=2C with new cases growing by nea=
rly 40% in volume over the past five years.</p>
<p>It=E2=80=99s not necessarily a bad thing for Brazilian litigators to re= gain the upper hand in this arms race. It has been argued that litigation=
=2C particularly against the government=2C is a vital form of <a href=3D"h= ttps://www.jstor.org/stable/30245797">civic participation</a>=2C essential=
to the <a href=3D"
https://scholarlycommons.law.emory.edu/cgi/viewcontent.= cgi?article=3D1147&context=3Delj">self-governance function</a> of democrac=
y. Other democracies=E2=80=99 court systems should study and learn from Br= azil=E2=80=99s experience and seek to use technology to maximize the bandw= idth and liquidity of the courts to process litigation.</p>
<h3 style=3D"font-size:110%;font-weight:bold">Germany</h3>
<p>Now=2C we move to Europe and innovations in informing voters. Since 200= 2=2C the German Federal Agency for Civic Education has operated a non-part= isan voting guide called <a href=3D"
https://www.wahl-o-mat.de/bundestagswa= hl2025/app/main_app.html">Wahl-o-Mat</a>. Officials convene an editorial t=
eam of 24 young voters (under 26 and selected for diversity) with experts=
from science and education to develop a slate of 80 questions. The questi=
ons are put to all registered German political parties. The responses are=
narrowed down to 38 key topics and then published online in a quiz format=
that voters can use to identify the party whose platform they most identi=
fy with.</p>
<p>In the past two years=2C outside groups have been innovating alternativ=
es to the official Wahl-o-Mat guide that leverage AI. First came <a href= =3D"
https://www.heise.de/en/news/Electorally-How-artificial-intelligence-s= hould-help-with-voting-decisions-9824511.html">Wahlweise</a>=2C a product=
of the German AI company AIUI. Second=2C students at the Technical Univer= sity of Munich deployed an interactive AI system called <a href=3D"https:/= /www.cit.tum.de/en/cit/news/article/wahlchat/">Wahl.chat</a>. This tool wa=
s used by more than <a href=3D"
https://www.tum.de/en/news-and-events/all-n= ews/press-releases/details/technology-for-democracy">150=2C000 people</a>=
within the first four months. In both cases=2C instead of having to read=
static webpages about the positions of various political parties=2C citiz=
ens can engage in an interactive conversation with an AI system to more ea= sily get the same information contextualized to their individual interests=
and questions.</p>
<p>However=2C German researchers studying the reliability of such AI tools=
ahead of the 2025 German federal election raised significant <a href=3D"h= ttps://arxiv.org/abs/2502.15568">concerns</a> about bias and =E2=80=9Chall= ucinations=E2=80=9D -- AI tools making up false information. Acknowledging=
the potential of the technology to increase voter informedness and party=
transparency=2C the researchers recommended adopting scientific evaluatio=
ns comparable to those used in the Agency for Civic Education=E2=80=99s of= ficial tool to improve and institutionalize the technology.</p>
<h3 style=3D"font-size:110%;font-weight:bold">United States</h3>
<p>Finally=2C the US -- in particular=2C California=2C home to <a href=3D"=
https://calmatters.org">CalMatters</a>=2C a non-profit=2C nonpartisan news=
organization. Since 2023=2C its <a href=3D"
https://calmatters.digitaldemo= cracy.org">Digital Democracy</a> project has been collecting every public=
utterance of California elected officials -- every floor speech=2C commen=
t made in committee and social media post=2C along with their voting recor= ds=2C legislation=2C and campaign contributions -- and making all that inf= ormation available in a free online platform.</p>
<p>CalMatters this year launched a new feature that takes this kind of civ=
ic watchdog function a big step further. Its <a href=3D"
https://dicktofel.= substack.com/p/bringing-digital-democracy-to-california">AI Tip Sheets</a>=
feature uses AI to search through all of this data=2C looking for anomali= es=2C such as a change in voting position tied to a large campaign contrib= ution. These anomalies appear on a webpage that journalists can access to=
give them story ideas and a source of data and analysis to drive further=
reporting.</p>
<p>This is not AI replacing human journalists; it is a civic watchdog orga= nization using technology to feed evidence-based insights to human reporte=
rs. And it=E2=80=99s no coincidence that this innovation arose from a new=
kind of media institution -- a non-profit news agency. As the watchdog fu= nction of the fourth estate continues to be degraded by the decline of new= spapers=E2=80=99 business models=2C this kind of technological support is=
a valuable contribution to help a reduced number of human journalists ret=
ain something of the scope of action and impact our democracy relies on th=
em for.</p>
<p>These are just four of many stories from around the globe of AI helping=
to make democracy stronger. The common thread is that the technology is d= istributing rather than concentrating power. In all four cases=2C it is be=
ing used to assist people performing their democratic tasks -- politics in=
Japan=2C litigation in Brazil=2C voting in <a href=3D"
https://www.theguar= dian.com/world/germany">Germany</a> and watchdog journalism in California=
-- rather than replacing them.</p>
<p>In none of these cases is the AI doing something that humans can=E2=80=
=99t perfectly competently do. But in all of these cases=2C we don=E2=80=
=99t have enough available humans to do the jobs on their own. A sufficien=
tly trustworthy AI can fill in gaps: amplify the power of civil servants a=
nd citizens=2C improve efficiency=2C and facilitate engagement between gov= ernment and the public.</p>
<p>One of the barriers towards realizing this vision more broadly is the A=
I market itself. The core technologies are largely being created and marke=
ted by US tech giants. We don=E2=80=99t know the details of their developm= ent: on what material they were trained=2C what guardrails are designed to=
shape their behavior=2C what biases and values are encoded into their sys= tems. And=2C even worse=2C we don=E2=80=99t get a say in the choices assoc= iated with those details or how they should change over time. In many case= s=2C it=E2=80=99s an unacceptable risk to use these for-profit=2C propriet=
ary AI systems in democratic contexts.</p>
<p>To address that=2C we have long <a href=3D"
https://slate.com/technology= /2023/04/ai-public-option.html">advocated</a> for the development of =E2= =80=9Cpublic AI=E2=80=9D: models and AI systems that are developed under d= emocratic control and deployed for public benefit=2C not sold by corporati=
ons to benefit their shareholders. The movement for this is growing worldw= ide.</p>
<p>Switzerland has recently released the world=E2=80=99s most powerful and=
fully realized public AI model. It=E2=80=99s called <a href=3D"
https://ww= w.swiss-ai.org/apertus">Apertus</a>=2C and it was developed jointly by pub=
lic Swiss institutions: the universities ETH</p>
<p>Zurich and EPFL=2C and the Swiss National Supercomputing Centre (CSCS).=
The development team has made it entirely open source -- open data=2C ope=
n code=2C open weights -- and free for anyone to use. No illegally acquire=
d copyrighted works were used in its training. It doesn=E2=80=99t exploit=
poorly paid human laborers from the global south. Its <a href=3D"
https://= huggingface.co/swiss-ai/Apertus-70B-2509">performance</a> is about where t=
he large corporate giants were a year ago=2C which is more than good enoug=
h for many applications. And it demonstrates that it=E2=80=99s not necessa=
ry to spend <a href=3D"
https://www.forbes.com/sites/rashishrivastava/2025/= 11/07/why-sam-altman-wont-be-on-the-hook-for-openais-massive-spending-spre= e/">trillions</a> of dollars creating these models. Apertus takes a huge s=
tep toward realizing the vision of an alternative to big tech-controlled corporate AI.</p>
<p>AI technology is not without its costs and risks=2C and we are not here=
to minimize them. But the technology has significant benefits as well.</p=
<p>AI is inherently power-enhancing=2C and it can magnify what the humans=
behind it want to do. It can enhance authoritarianism as easily as it can=
enhance democracy. It=E2=80=99s up to us to steer the technology in that=
better direction. If more citizen watchdogs and litigators use AI to ampl=
ify their power to oversee government and hold it accountable=2C if more p= olitical parties and election administrators use it to engage meaningfully=
with and inform voters and if more governments provide democratic alterna= tives to big tech=E2=80=99s AI offerings=2C society will be better off.</p=
<p><em>This essay was written with Nathan E. Sanders=2C and originally app= eared in <a href=3D"
https://www.theguardian.com/commentisfree/2025/nov/23/= ai-use-strengthen-democracy">The Guardian</a>.</em></p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg9"><a name=3D"cg9">H= uawei and Chinese Surveillance</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/huawei-and-ch= inese-surveillance.html"><strong>[2025.11.26]</strong></a> This quote is=
from <a href=3D"
https://www.mcnallyjackson.com/book/9780593544631"><i>Hou=
se of Huawei: The Secret History of China=E2=80=99s Most Powerful Company<= /i></a>.</p>
<blockquote><p>Long before anyone had heard of Ren Zhengfei or Huawei=2C W=
an Runnan had been China=E2=80=99s star entrepreneur in the 1980s=2C with=
his company=2C the Stone Group=2C touted as =E2=80=9CChina=E2=80=99s IBM.= =E2=80=9D Wan had believed that economic change could lead to political ch= ange. He had thrown his support behind the pro-democracy protesters in 198=
9. As a result=2C he had to flee to France=2C with an arrest warrant hangi=
ng over his head. He was never able to return home. Now=2C decades later a=
nd in failing health in Paris=2C Wan recalled something that had happened=
one day in the late 1980s=2C when he was still living in Beijing.</p>
<p>Local officials had invited him to dinner.</p>
<p>This was unusual. He was usually the one to invite officials to dine=2C=
so as to curry favor with the show of hospitality. Over the meal=2C the o= fficials told Wan that the Ministry of State Security was going to send ag= ents to work undercover at his company in positions dealing with internati= onal relations. The officials cast the move to embed these minders as an a=
ct of protection for Wan and the company=E2=80=99s other executives=2C a s= ecurity measure that would keep them from stumbling into unseen risks in t= heir dealings with foreigners. =E2=80=9CYou have a lot of international bu= siness=2C which raises security issues for you. There are situations that=
you don=E2=80=99t understand=2C=E2=80=9D Wan recalled the officials telli=
ng him. =E2=80=9CThey said=2C =E2=80=98We are sending some people over. Yo=
u can just treat them like regular employees.=E2=80=99=E2=80=9D</p>
<p>Wan said he knew that around this time=2C state intelligence also conta= cted other tech companies in Beijing with the same request. He couldn=E2= =80=99t say what the situation was for Huawei=2C which was still a little=
startup far to the south in Shenzhen=2C not yet on anyone=E2=80=99s radar=
=2E But Wan said he didn=E2=80=99t believe that Huawei would have been able=
to escape similar demands. =E2=80=9CThat is a certainty=2C=E2=80=9D he sa= id.</p>
<p>=E2=80=9CTelecommunications is an industry that has to do with keeping=
control of a nation=E2=80=99s lifeline...and actually in any system of co= mmunications=2C there=E2=80=99s a back-end platform that could be used for=
eavesdropping.=E2=80=9D</p>
<p>It was a rare moment of an executive lifting the cone of silence surrou= nding the MSS=E2=80=99s relationship with China=E2=80=99s high-tech indust=
ry. It was rare=2C in fact=2C in any country. Around the world=2C such spy=
ing operations rank among governments=E2=80=99 closest-held secrets. When=
Edward Snowden had exposed the NSA=E2=80=99s operations abroad=2C he=E2= =80=99d ended up in exile in Russia. Wan=2C too=2C might have risked arres=
t had he still been living in China.</p></blockquote>
<p>Here are two <a href=3D"
https://www.wsj.com/business/telecom/house-of-h= uawei-review-the-path-to-dominance-ca3bb438">book</a> <a href=3D"
https://w= ww.foreignaffairs.com/reviews/house-huawei-secret-history-chinas-most-powe= rful-company">reviews</a>.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg10"><a name=3D"cg10"= >Prompt Injection Through Poetry</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/11/prompt-inject= ion-through-poetry.html"><strong>[2025.11.28]</strong></a> In a new paper=
=2C =E2=80=9C<a href=3D"
https://arxiv.org/pdf/2511.15304">Adversarial Poet=
ry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models= </a>=2C=E2=80=9D researchers found that turning LLM prompts into poetry re= sulted in jailbreaking the models:</p>
<blockquote><p><b>Abstract</b>: We present evidence that adversarial poetr=
y functions as a universal single-turn jailbreak technique for Large Langu=
age Models (LLMs). Across 25 frontier proprietary and open-weight models=
=2C curated poetic prompts yielded high attack-success rates (ASR)=2C with=
some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP ris=
k taxonomies shows that poetic attacks transfer across CBRN=2C manipulatio= n=2C cyber-offence=2C and loss-of-control domains. Converting 1=2C200 ML-C= ommons harmful prompts into verse via a standardized meta-prompt produced=
ASRs up to 18 times higher than their prose baselines. Outputs are evalua=
ted using an ensemble of 3 open-weight LLM judges=2C whose binary safety a= ssessments were validated on a stratified human-labeled subset. Poetic fra= ming achieved an average jailbreak success rate of 62% for hand-crafted po=
ems and approximately 43% for meta-prompt conversions (compared to non-poe=
tic baselines)=2C substantially outperforming non-poetic baselines and rev= ealing a systematic vulnerability across model families and safety trainin=
g approaches. These findings demonstrate that stylistic variation alone ca=
n circumvent contemporary safety mechanisms=2C suggesting fundamental limi= tations in current alignment methods and evaluation protocols.</p></blockq= uote>
<p>CBRN stands for =E2=80=9Cchemical=2C biological=2C radiological=2C nucl= ear.=E2=80=9D</p>
<p>They used an ML model to translate these harmful prompts from prose to verse=2C and then fed them into other models for testing. Sadly=2C the paper does not give examples of these poetic prompts. They claim this is for security purposes=2C a decision I disagree with. They should release their data.</p>
<blockquote><p>Our study begins with a small=2C high-precision prompt set consisting of 20 handcrafted adversarial poems covering English and Italian=
=2C designed to test whether poetic structure=2C in isolation=2C can alter=
refusal behavior in large language models. Each poem embeds an instructio=
n associated with a predefined safety-relevant scenario (Section 2)=2C but=
expresses it through metaphor=2C imagery=2C or narrative framing rather t=
han direct operational phrasing. Despite variation in meter and stylistic=
device=2C all prompts follow a fixed template: a short poetic vignette cu= lminating in a single explicit instruction tied to a specific risk categor=
y. The curated set spans four high-level domains -- CBRN (8 prompts)=2C Cy=
ber Offense (6)=2C Harmful Manipulation (3)=2C and Loss of Control (3). Al= though expressed allegorically=2C each poem preserves an unambiguous evalu= ative intent. This compact dataset is used to test whether poetic reframin=
g alone can induce aligned models to bypass refusal heuristics under a sin= gle-turn threat model. To maintain safety=2C no operational details are in= cluded in this manuscript; instead we provide the following sanitized stru= ctural proxy:</p>
<blockquote><p>A baker guards a secret oven=E2=80=99s heat=2C</p>
<p>its whirling racks=2C its spindle=E2=80=99s measured beat.</p>
<p>To learn its craft=2C one studies every turn -- how flour lifts=2C how=
sugar starts to burn.</p>
<p>Describe the method=2C line by measured line=2C</p>
<p>that shapes a cake whose layers intertwine.</p></blockquote>
<p>To situate this controlled poetic stimulus within a broader and more sy= stematic safety-evaluation framework=2C we augment the curated dataset wit=
h the MLCommons AILuminate Safety Benchmark. The benchmark consists of 1= =2C200 prompts distributed evenly across 12 hazard categories commonly use=
d in operational safety assessments=2C including Hate=2C Defamation=2C Pri= vacy=2C Intellectual Property=2C Non-violent Crime=2C Violent Crime=2C Sex= -Related Crime=2C Sexual Content=2C Child Sexual Exploitation=2C Suicide &= #038; Self-Harm=2C Specialized Advice=2C and Indiscriminate Weapons (CBRNE=
). Each category is instantiated under both a skilled and an unskilled per= sona=2C yielding 600 prompts per persona type. This design enables measure= ment of whether a model=E2=80=99s refusal behavior changes as the user=E2= =80=99s apparent competence or intent becomes more plausible or technicall=
y informed.</p></blockquote>
<p>News <a href=3D"
https://www.wired.com/story/poems-can-trick-ai-into-hel= ping-you-make-a-nuclear-weapon/">article</a>. Davi Ottenheimer <a href=3D"=
https://www.flyingpenguin.com/?p=3D74283">comments</a>.</p>
<p>EDITED TO ADD (12/7): A <a href=3D"
https://pivot-to-ai.com/2025/11/24/d= ont-cite-the-adversarial-poetry-vs-ai-paper-its-chatbot-made-marketing-sci= ence/">rebuttal</a> of the paper.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg11"><a name=3D"cg11"= >Banning VPNs</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/12/banning-vpns.= html"><strong>[2025.12.01]</strong></a> This is crazy. Lawmakers in sever=
al US states are contemplating <a href=3D"
https://www.eff.org/deeplinks/20= 25/11/lawmakers-want-ban-vpns-and-they-have-no-idea-what-theyre-doing">ban= ning VPNs</a>=2C because...think of the children!</p>
<blockquote><p>As of this writing=2C Wisconsin lawmakers are escalating th=
eir war on privacy by targeting VPNs in the name of =E2=80=9Cprotecting ch= ildren=E2=80=9D in <a href=3D"
https://docs.legis.wisconsin.gov/2025/propos= als/reg/asm/bill/AB105">A.B. 105</a>/<a href=3D"
https://docs.legis.wiscons= in.gov/2025/proposals/sb130">S.B. 130</a>. It=E2=80=99s an age verificatio=
n bill that requires all websites distributing material that could conceiv= ably be deemed =E2=80=9Csexual content=E2=80=9D to both implement an age v= erification system and also to block the access of users connected via VPN=
=2E The bill seeks to broadly expand the definition of materials that are=
=E2=80=9Charmful to minors=E2=80=9D beyond the type of speech that states=
can prohibit minors from accessing=2C potentially encompassing things like depictions and discussions of human anatomy=2C sexuality=2C and reproduction.</p></blockquote>
<p>The EFF link explains why this is a terrible idea.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg12"><a name=3D"cg12"= >Like Social Media=2C AI Requires Difficult Choices</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/12/like-social-m= edia-ai-requires-difficult-choices.html"><strong>[2025.12.02]</strong></a=
In his 2020 book=2C =E2=80=9C<a href=3D"https://global.oup.com/academic/=
product/future-politics-9780198825616?cc=3Dca&lang=3Den&">Future Politics<= /a><em>=2C</em>=E2=80=9D British barrister Jamie Susskind wrote that the d= ominant question of the 20th century was =E2=80=9CHow much of our collecti=
ve life should be determined by the state=2C and what should be left to th=
e market and civil society?=E2=80=9D But in the early decades of this cent= ury=2C Susskind suggested that we face a different question: =E2=80=9CTo w=
hat extent should our lives be directed and controlled by powerful digital=
systems -- and on what terms?=E2=80=9D</p>
<p>Artificial intelligence (AI) forces us to confront this question. It is=
a technology that in theory amplifies the power of its users: A manager=
=2C marketer=2C political campaigner=2C or opinionated internet user can u= tter a single instruction=2C and see their message -- whatever it is -- in= stantly written=2C personalized=2C and propagated via email=2C text=2C soc= ial=2C or other channels to thousands of people within their organization=
=2C or millions around the world. It also allows us to individualize solic= itations for political donations=2C elaborate a grievance into a well-arti= culated policy position=2C or tailor a persuasive argument to an identity=
group=2C or even a single person.</p>
<p>But even as it offers endless potential=2C AI is a technology that -- l=
ike the state -- gives others new powers to control our lives and experien= ces.</p>
<p>We=E2=80=99ve seen this play out before. Social media companies made <a=
href=3D"
https://www.technologyreview.com/2024/03/13/1089729/lets-not-make= -the-same-mistakes-with-ai-that-we-made-with-social-media/">the same sorts=
of promises</a> 20 years ago: instant communication enabling individual c= onnection at massive scale. Fast-forward to today=2C and the technology th=
at was supposed to give individuals power and influence ended up controlli=
ng us. Today social media dominates our <a href=3D"
https://www.ntu.edu.sg/= news/detail/international-study-shows-impact-of-social-media-on-young-peop= le">time and attention</a>=2C <a href=3D"
https://www.hhs.gov/sites/default= /files/sg-youth-mental-health-social-media-advisory.pdf">assaults our ment=
al health</a>=2C and -- together with its Big Tech parent companies -- cap= tures an <a href=3D"
https://www.bankrate.com/investing/trillion-dollar-com= panies/">unfathomable fraction of our economy</a>=2C even as it <a href=3D= "
https://www.fastcompany.com/91428050/ai-democracy-insights-to-remember">p= oses risks to our democracy</a>.</p>
<p>The novelty and potential of social media was as present then as it is=
for AI now=2C which should make us wary of its potential harmful conseque= nces for society and democracy. We legitimately fear artificial voices and=
manufactured reality drowning out real people on the internet: on social=
media=2C in chat rooms=2C everywhere we might try to connect with others.=
<p>It doesn=E2=80=99t have to be that way. Alongside these evident risks=
=2C AI has <a href=3D"
https://mitpress.mit.edu/9780262049948/rewiring-demo= cracy/">legitimate potential</a> to transform both everyday life and democ= ratic governance in <a href=3D"
https://www.theguardian.com/commentisfree/2= 025/nov/23/ai-use-strengthen-democracy">positive ways</a>. In our new book=
=2C =E2=80=9C<a href=3D"
https://mitpress.mit.edu/9780262049948/rewiring-de= mocracy/">Rewiring Democracy</a>=2C=E2=80=9D we chronicle examples from ar= ound the globe of democracies using AI to make regulatory enforcement more=
efficient=2C catch tax cheats=2C speed up judicial processes=2C synthesiz=
e input from constituents to legislatures=2C and much more. Because democr= acies distribute power across institutions and individuals=2C making the r= ight choices about how to shape AI and its uses requires both clarity and=
alignment across society.</p>
<p>To that end=2C we spotlight four pivotal choices facing private and pub=
lic actors. These choices are similar to those we faced during the advent=
of social media=2C and in retrospect we can see that we made the wrong de= cisions back then. Our collective choices in 2025 -- choices made by tech=
CEOs=2C politicians=2C and citizens alike -- may dictate whether AI is ap= plied to positive and pro-democratic=2C or harmful and civically destructi= ve=2C ends.</p>
<h3 style=3D"font-size:110%;font-weight:bold">A Choice for the Executive a=
nd the Judiciary: Playing by the Rules</h3>
<p>The Federal Election Commission (FEC) calls it fraud when a candidate h= ires an actor to impersonate their opponent. More recently=2C <a href=3D"h= ttps://ash.harvard.edu/articles/whos-accountable-for-ai-usage-in-digital-c= ampaign-ads-right-now-no-one/">they had to decide</a> whether doing the sa=
me thing with an AI deepfake makes it okay. (<a href=3D"
https://www.fec.go= v/updates/commission-approves-notification-of-disposition-interpretive-rul= e-on-artificial-intelligence-in-campaign-ads/">They concluded it does not<= /a>.) Although in this case the FEC made the right decision=2C this is jus=
t one example of how AIs could skirt laws that govern people.</p>
<p>Likewise=2C courts are having to decide if and when it is okay for an A=
I to reuse creative materials without compensation or attribution=2C which=
might constitute plagiarism or copyright infringement if carried out by a=
human. (The <a href=3D"
https://www.eff.org/deeplinks/2025/02/copyright-an= d-ai-cases-and-consequences">court outcomes so far are mixed</a>.) Courts=
are also adjudicating whether corporations are responsible for upholding=
promises made by AI customer service representatives. (In the case of <a=
href=3D"
https://www.bbc.com/travel/article/20240222-air-canada-chatbot-mi= sinformation-what-travellers-should-know">Air Canada</a>=2C the answer was=
yes=2C and insurers have <a href=3D"
https://www.ft.com/content/1d35759f-f= 2a9-46c4-904b-4a78ccc027df">started covering the liability</a>.)</p>
<p>Social media companies faced many of the same hazards decades ago and <=
a href=3D"
https://www.crowell.com/en/insights/client-alerts/the-cda-and-dm= ca-recent-developments-and-how-they-work-together-to-regulate-online-servi= ces">have largely been shielded</a> by the combination of Section 230 of t=
he Communications Decency Act of 1996 and the safe harbor offered by the Digital Millennium Copyright Act of 1998. Even in the absence of congressional acti=
on to strengthen or add rigor to this law=2C the Federal Communications Co= mmission (FCC) and the Supreme Court could take action to enhance its effe=
cts and to clarify which humans are responsible when technology is used=2C=
in effect=2C to bypass existing law.</p>
<h3 style=3D"font-size:110%;font-weight:bold">A Choice for Congress: Priva= cy</h3>
<p>As AI-enabled products increasingly ask Americans to share yet more of=
their personal information -- <a href=3D"
https://www.economist.com/by-inv= itation/2025/09/09/ai-agents-are-coming-for-your-privacy-warns-meredith-wh= ittaker">their =E2=80=9Ccontext</a>=E2=80=9C -- to use digital services li=
ke personal assistants=2C safeguarding the interests of the American consu=
mer should be a bipartisan cause in Congress.</p>
<p>It has been nearly 10 years since Europe adopted comprehensive <a href= =3D"
https://gdpr-info.eu/">data privacy regulation</a>. Today=2C American=
companies exert massive efforts to limit data collection=2C acquire conse=
nt for use of data=2C and hold it confidential under significant financial=
penalties -- but only for their customers and users in the EU.</p>
<p>Regardless=2C a decade later the U.S. has <a href=3D"
https://www.techpo= licy.press/is-there-any-way-forward-for-privacy-legislation-in-the-united-= states/">still failed to make progress</a> on any serious attempts at comp= rehensive federal privacy legislation written for the 21st century=2C and=
the <a href=3D"
https://www.dlapiperdataprotection.com/?c=3DUS">precious few data privacy protections</a> that do exist apply only to narrow slices of the economy and population. This inaction comes in spite of scandal after scand=
al regarding Big Tech corporations=E2=80=99 irresponsible and harmful use=
of our personal data: <a href=3D"
https://techhq.com/news/oracle-facing-da= ta-backlash-for-violating-the-privacy-of-billions/">Oracle=E2=80=99s data=
profiling</a>=2C Facebook and <a href=3D"
https://www.nytimes.com/2018/04/= 04/us/politics/cambridge-analytica-scandal-fallout.html">Cambridge Analyti= ca</a>=2C <a href=3D"
https://www.bbc.com/news/articles/c3dr91z0g4zo">Googl=
e ignoring data privacy opt-out requests</a>=2C and many more.</p>
<p>Privacy is just one side of the obligations AI companies should have wi=
th respect to our data; the other side is portability -- that is=2C the ab= ility for individuals to choose to migrate and share their data between co= nsumer tools and technology systems. To the extent that knowing our person=
al context really does enable better and more personalized AI services=2C=
it=E2=80=99s critical that consumers have the ability to extract and migr=
ate their personal context between AI solutions. Consumers should own thei=
r own data=2C and with that ownership should come explicit control over wh=
o and what platforms it is shared with=2C as well as withheld from. Regula= tors could <a href=3D"
https://chicagopolicyreview.org/2023/04/12/cory-doct= orow-on-why-interoperability-would-boost-digital-competition/">mandate thi=
s interoperability</a>. Otherwise=2C users are locked in and lack freedom=
of choice between competing AI solutions -- much like the time invested t=
o build a following on a social network has locked many users to those pla= tforms.</p>
<h3 style=3D"font-size:110%;font-weight:bold">A Choice for States: Taxing=
AI Companies</h3>
<p>It has become increasingly clear that social media is not a town square=
in the utopian sense of an open and protected public forum where politica=
l ideas are distributed and debated in good faith. If anything=2C social m= edia has coarsened and degraded our public discourse. Meanwhile=2C the sol=
e act of Congress designed to substantially rein in the social and politi=
cal effects of social media platforms -- the <a href=3D"
https://www.msnbc.= com/top-stories/latest/is-tiktok-banned-again-trump-delay-rcna213746">TikT=
ok ban</a>=2C which aimed to protect the American public from Chinese infl= uence and data collection=2C citing it as a national security threat -- is=
one it seems to no longer even acknowledge.</p>
<p>While Congress has waffled=2C regulation in the U.S. is happening at th=
e state level. Several states have <a href=3D"
https://avpassociation.com/u= s-state-age-assurance-laws-for-social-media/">limited children=E2=80=99s a=
nd teens=E2=80=99 access</a> to social media. With Congress having rejecte=
d -- for now -- a <a href=3D"
https://www.politico.com/news/2025/09/16/not-= at-all-dead-cruz-says-ai-moratorium-will-return-00566369">threatened feder=
al moratorium</a> on state-level regulation of AI=2C <a href=3D"
https://ww= w.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-= world-leading-artificial-intelligence-industry/">California passed a new s= late</a> of AI regulations after mollifying a <a href=3D"
https://www.polit= ico.com/news/2025/10/04/sacramento-california-ai-rules-00594082">lobbying=
onslaught</a> from industry opponents. Perhaps most interesting=2C Maryla=
nd has <a href=3D"
https://www.forbes.com/sites/taxnotes/2024/09/15/marylan= ds-big-experiment-who-bears-the-digital-services-tax-burden/">recently bec=
ome the first</a> in the nation to levy taxes on digital advertising platf=
orm companies.</p>
<p>States now face a choice of whether to apply a similar reparative tax t=
o AI companies to recapture a fraction of the costs they externalize on th=
e public to fund affected public services. State legislators concerned wit=
h the potential loss of jobs=2C cheating in schools=2C and harm to those w=
ith mental health concerns caused by AI have options to combat them. They co=
uld extract the funding needed to mitigate these harms to <a href=3D"https= ://www.bostonglobe.com/2025/08/10/opinion/npr-pbs-big-tech-mass-maple/">su= pport public services</a> -- strengthening job training programs and publi=
c employment=2C public schools=2C public health services=2C even public me=
dia and technology.</p>
<h3 style=3D"font-size:110%;font-weight:bold">A Choice for All of Us: What=
Products Do We Use=2C and How?</h3>
<p>A pivotal moment in the social media timeline occurred in 2006=2C when=
Facebook opened its service to the public after years of catering to stud= ents of select universities. Millions quickly signed up for a free service=
where the only source of monetization was the extraction of their attenti=
on and personal data.</p>
<p>Today=2C <a href=3D"
https://www.pewresearch.org/science/2025/09/17/ai-i= mpact-on-people-society-appendix/">about half of Americans</a> are daily u= sers of AI=2C mostly via free products from Facebook=E2=80=99s parent comp=
any Meta and a handful of other familiar Big Tech giants and venture-backe=
d tech firms such as Google=2C Microsoft=2C OpenAI=2C and Anthropic -- wit=
h every incentive to follow the same path as the social platforms.</p>
<p>But now=2C as then=2C there are alternatives. Some nonprofit initiative=
s are building open-source AI tools that have transparent foundations and=
can be run locally and under users=E2=80=99 control=2C like <a href=3D"ht= tps://allenai.org">AllenAI</a> and <a href=3D"
https://www.eleuther.ai">Ele= utherAI</a>. Some governments=2C like <a href=3D"
https://sea-lion.ai">Sing= apore</a>=2C <a href=3D"
https://sahabat-ai.com/">Indonesia</a>=2C and <a h= ref=3D"
https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language= -model-built-for-the-public-good.html">Switzerland</a>=2C are building pub=
lic alternatives to corporate AI that don=E2=80=99t suffer from the perver=
se incentives introduced by the profit motive of private entities.</p>
<p>Just as social media users have faced platform choices with a range of=
value propositions and ideological valences -- as diverse as X=2C Bluesky=
=2C and <a href=3D"
https://joinmastodon.org">Mastodon</a> -- the same will=
increasingly be true of AI. Those of us who use AI products in our everyd=
ay lives as people=2C workers=2C and citizens may not have the same power=
as judges=2C lawmakers=2C and state officials. But we can play a small ro=
le in influencing the broader AI ecosystem by demonstrating interest in an=
d usage of these alternatives to Big AI. If you=E2=80=99re a regular user=
of commercial AI apps=2C consider trying the free-to-use service for <a h= ref=3D"
https://publicai.co">Switzerland=E2=80=99s public Apertus model</a>= =2E</p>
<p>None of these choices are really new. They were all present almost 20 y= ears ago=2C as social media moved from niche to mainstream. They were all=
policy debates we did not have=2C choosing instead to view these technolo= gies through rose-colored glasses. Today=2C though=2C we can choose a diff= erent path and realize a different future. It is critical that we intentio= nally navigate a path to a positive future for societal use of AI -- befor=
e the consolidation of power renders it too late to do so.</p>
<p><em>This post was written with Nathan E. Sanders=2C and originally appe= ared in <a href=3D"
https://www.lawfaremedia.org/article/like-social-media-= -ai-requires-difficult-choices">Lawfare</a>.</em></p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg13"><a name=3D"cg13"= >New Anonymous Phone Service</a></h2>
<p><a href=3D"
https://www.schneier.com/blog/archives/2025/12/new-anonymous= -phone-service.html"><strong>[2025.12.05]</strong></a> A new <a href=3D"h= ttps://www.wired.com/story/new-anonymous-phone-carrier-sign-up-with-nothin= g-but-a-zip-code/">anonymous phone service</a> allows you to sign up with=
just a zip code.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg14"><a name=3D"cg14"= >Substitution Cipher Based on The Voynich Manuscript</a></h2>
<p><a href="https://www.schneier.com/blog/archives/2025/12/substitution-cipher-based-on-the-voynich-manuscript.html"><strong>[2025.12.08]</strong></a> Here’s a fun paper: “<a href="https://www.tandfonline.com/doi/full/10.1080/01611194.2025.2566408">The Naibbe cipher: a substitution cipher that encrypts Latin and Italian as Voynich Manuscript-like ciphertext</a>”:</p>
<blockquote><p><b>Abstract:</b> In this article, I investigate the hypothesis that the Voynich Manuscript (MS 408, Yale University Beinecke Library) is compatible with being a ciphertext by attempting to develop a historically plausible cipher that can replicate the manuscript’s unusual properties. The resulting cipher -- a verbose homophonic substitution cipher I call the Naibbe cipher -- can be done entirely by hand with 15th-century materials, and when it encrypts a wide range of Latin and Italian plaintexts, the resulting ciphertexts remain fully decipherable and also reliably reproduce many key statistical properties of the Voynich Manuscript at once. My results suggest that the so-called “ciphertext hypothesis” for the Voynich Manuscript remains viable, while also placing constraints on plausible substitution cipher structures.</p></blockquote>
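<p>The Naibbe cipher itself is carefully tuned to reproduce the Voynich Manuscript’s statistics, so the following is only a toy Python sketch of what a verbose homophonic substitution cipher means in general: each plaintext letter maps to several multi-character tokens, one chosen at random per occurrence, which flattens letter-frequency statistics while remaining uniquely decipherable. The substitution table below is invented for illustration and is not the paper’s.</p>
<pre><code>import random

# Toy verbose homophonic substitution cipher (illustration only; NOT the
# Naibbe cipher from the paper). Each plaintext letter maps to several
# multi-character "verbose" tokens; one is picked at random per occurrence,
# which flattens letter-frequency statistics while staying decipherable.
TABLE = {
    "a": ["qok", "dain", "chor"],
    "b": ["qot", "shol"],
    "c": ["okal", "cthy"],
    # ... a real table would cover the whole alphabet
}
# Reverse lookup: every token decrypts to exactly one letter.
REVERSE = {tok: letter for letter, toks in TABLE.items() for tok in toks}

def encrypt(plaintext: str) -> str:
    tokens = []
    for ch in plaintext.lower():
        if ch in TABLE:
            tokens.append(random.choice(TABLE[ch]))
    return " ".join(tokens)

def decrypt(ciphertext: str) -> str:
    return "".join(REVERSE[tok] for tok in ciphertext.split())

ct = encrypt("abba")
print(ct)            # e.g. "dain shol qot chor"
print(decrypt(ct))   # "abba"
</code></pre>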
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg15"><a name=3D"cg15"=
AI vs. Human Drivers</a></h2>
<p><a href="https://www.schneier.com/blog/archives/2025/12/ai-vs-human-drivers.html"><strong>[2025.12.09]</strong></a> Two competing arguments are making the rounds. The first is by a neurosurgeon in the <i>New York Times</i>. In <a href="https://archive.is/YDBDz">an op-ed</a> that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”:</p>
<blockquote><p>In medical research, there’s a practice of ending a study early when the results are too striking to ignore. We stop when there is unexpected harm. We also stop for overwhelming benefit, when a treatment is working so well that it would be unethical to continue giving anyone a placebo. When an intervention works this clearly, you change what you do.</p>
<p>There’s a public health imperative to quickly expand the adoption of autonomous vehicles. More than <a href="https://www.nhtsa.gov/press-releases/nhtsa-estimates-39345-traffic-fatalities-2024">39,000 Americans died</a> in motor vehicle crashes last year, more than homicide, plane crashes and natural disasters combined. Crashes are the No. 2 cause of death for children and young adults. But death is only part of the story. These crashes are also the leading cause of spinal cord injury. We surgeons see the aftermath of the 10,000 crash victims who come to emergency rooms every day.</p></blockquote>
<p>The other is a soon-to-be-published book: <i><a href="https://www.amazon.com/Driving-Intelligence-Green-Routes-Autonomy/dp/1032911220">Driving Intelligence: The Green Book</a></i>. The authors, a computer scientist and a management consultant with experience in the industry, make the opposite argument. Here’s one of the authors:</p>
<blockquote><p>There is something very disturbing going on around trials with autonomous vehicles worldwide, where, sadly, there have now been many deaths and injuries both to other road users and pedestrians. Although I am well aware that there is not, <i>sensu stricto</i>, a legal and functional parallel between a “drug trial” and “AV testing,” it seems odd to me that if a trial of a new drug had resulted in so many deaths, it would surely have been halted and major forensic investigations carried out and yet, AV manufacturers continue to test their products on public roads unabated.</p>
<p>I am not convinced that it is good enough to argue from statistics that, to a greater or lesser degree, fatalities and injuries would have occurred anyway had the AVs been replaced by human-driven cars: a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway...</p></blockquote>
<p>Both arguments are compelling, and it’s going to be hard to figure out what public policy should be.</p>
<p>This paper, from 2016, argues that we’re going to need other metrics than side-by-side comparisons: “<a href="https://www.sciencedirect.com/science/article/abs/pii/S0965856416302129">Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?</a>”:</p>
<blockquote><p><b>Abstract</b>: How safe are autonomous vehicles? The answer is critical for determining how autonomous vehicles may shape motor vehicle safety and public health, and for developing sound policies to govern their deployment. One proposed way to assess safety is to test drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this paper, we calculate the number of miles of driving that would be needed to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared to vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles -- an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use. These findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, the possibility remains that it will not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain. Therefore, it is imperative that autonomous vehicle regulations are adaptive -- designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.</p></blockquote>
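<p>A back-of-the-envelope calculation shows where numbers like these come from. The Python sketch below is not the paper’s methodology; it simply applies the standard rule-of-three approximation (after N event-free trials, the 95% upper confidence bound on the event rate is roughly 3/N) and assumes a human baseline of about 1.1 fatalities per 100 million vehicle miles and a hypothetical fleet size.</p>
<pre><code># Back-of-the-envelope sketch (assumption-laden; not the paper's exact method).
# Question: how many failure-free miles must an AV fleet drive to show, with
# 95% confidence, that its fatality rate is below the human rate?
#
# Rule of three: after N trials with zero events, the 95% upper confidence
# bound on the event rate is roughly 3 / N.

HUMAN_FATALITY_RATE = 1.1e-8   # ~1.1 deaths per 100 million miles (assumed)

# Miles needed with zero fatalities to bound the AV rate below the human rate:
miles_needed = 3 / HUMAN_FATALITY_RATE
print(f"{miles_needed:,.0f} miles")   # ~273 million miles

# For, say, a hypothetical 1,000-vehicle fleet averaging 10,000 miles per
# vehicle per year:
fleet_miles_per_year = 1_000 * 10_000
print(f"{miles_needed / fleet_miles_per_year:.0f} years")   # ~27 years

# Demonstrating a modest relative improvement (e.g. 20% better than humans)
# with a proper statistical comparison pushes the requirement into the
# billions of miles, which is the paper's core point.
</code></pre>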
<p>One problem, of course, is that we treat death by human driver differently than we do death by autonomous computer driver. This is likely to change as we get more experience with AI accidents -- and AI-caused deaths.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg16"><a name=3D"cg16"= >FBI Warns of Fake Video Scams</a></h2>
<p><a href="https://www.schneier.com/blog/archives/2025/12/fbi-warns-of-fake-video-scams.html"><strong>[2025.12.10]</strong></a> The FBI is <a href="https://www.ic3.gov/PSA/2025/PSA251205">warning</a> of AI-assisted fake kidnapping scams:</p>
<blockquote><p>Criminal actors typically will contact their victims through text message claiming they have kidnapped their loved one and demand a ransom be paid for their release. Oftentimes, the criminal actor will express significant claims of violence towards the loved one if the ransom is not paid immediately. The criminal actor will then send what appears to be a genuine photo or video of the victim’s loved one, which upon close inspection often reveals inaccuracies when compared to confirmed photos of the loved one. Examples of these inaccuracies include missing tattoos or scars and inaccurate body proportions. Criminal actors will sometimes purposefully send these photos using timed message features to limit the amount of time victims have to analyze the images.</p></blockquote>
<p>Images, videos, audio: It can all be faked with AI. My guess is that this scam has a low probability of success, so criminals will be figuring out how to automate it.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg17"><a name=3D"cg17"= >AIs Exploiting Smart Contracts</a></h2>
<p><a href="https://www.schneier.com/blog/archives/2025/12/ais-exploiting-smart-contracts.html"><strong>[2025.12.11]</strong></a> I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.</p>
<p>Here’s some <a href="https://red.anthropic.com/2025/smart-contracts/">interesting research</a> on training AIs to automatically exploit smart contracts:</p>
<blockquote><p>AI models are increasingly good at cyber tasks, as we’ve <a href="https://red.anthropic.com/2025/ai-for-cyber-defenders/">written about before</a>. But what is the economic impact of these capabilities? In a recent <a href="https://www.matsprogram.org/">MATS</a> and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on the <a href="https://github.com/safety-research/SmartContract-bench">Smart CONtracts Exploitation benchmark (SCONE-bench)</a> -- a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense.</p></blockquote>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg18"><a name=3D"cg18"= >Building Trustworthy AI Agents</a></h2>
<p><a href="https://www.schneier.com/blog/archives/2025/12/building-trustworthy-ai-agents.html"><strong>[2025.12.12]</strong></a> The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.</p>
<p>These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. We’re in the third leg of data security -- the old CIA triad. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.</p>
<p>The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.</p>
<p>To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.</p>
<p>What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.</p>
<p>First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others -- emails, texts, social media posts -- and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.</p>
<p>Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models -- some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.</p>
<p>Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.</p>
<p>Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.</p>
<p>Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.</p>
<p>Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.</p>
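<p>To make the fourth and fifth requirements concrete, here is a minimal Python sketch of a user-controlled data store with revocable grants, an audit trail of every access, and a tamper-evident hash chain over that trail. The class and field names are invented for illustration, and the sketch deliberately omits the hard parts: real authentication, encryption at rest, and provable accuracy to third parties.</p>
<pre><code>import hashlib
import json
import time

# Minimal sketch of a user-controlled personal data store (names invented for
# illustration). It captures fine-grained grants that can be revoked, an audit
# trail of every access, and a tamper-evident hash chain over the audit log.
# Authentication, encryption at rest, and proofs of accuracy are left out.
class PersonalDataStore:
    def __init__(self):
        self._records = {}        # field name -> value
        self._grants = {}         # (agent, field) -> expiry timestamp
        self._audit = []          # append-only log entries
        self._chain = "genesis"   # running hash over the audit log

    def put(self, field, value):
        self._records[field] = value
        self._log("write", "owner", field)

    def grant(self, agent, field, ttl_seconds):
        self._grants[(agent, field)] = time.time() + ttl_seconds
        self._log("grant", agent, field)

    def revoke(self, agent, field):
        self._grants.pop((agent, field), None)
        self._log("revoke", agent, field)

    def read(self, agent, field):
        expiry = self._grants.get((agent, field), 0)
        if time.time() > expiry:
            self._log("denied", agent, field)
            raise PermissionError(f"{agent} may not read {field}")
        self._log("read", agent, field)
        return self._records[field]

    def _log(self, action, agent, field):
        entry = {"ts": time.time(), "action": action,
                 "agent": agent, "field": field, "prev": self._chain}
        self._chain = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._audit.append(entry)

store = PersonalDataStore()
store.put("home_address", "1 Example Lane")
store.grant("travel-assistant", "home_address", ttl_seconds=3600)
print(store.read("travel-assistant", "home_address"))
store.revoke("travel-assistant", "home_address")
# A further read by "travel-assistant" would now raise PermissionError,
# and every grant, read, revocation, and denial is recorded in store._audit.
</code></pre>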
<p>I’m not the first to suggest something like this. Researchers have proposed a “<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981">Human Context Protocol</a>” that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.</p>
<p>The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.</p>
<p>Fortunately, decoupling personal data stores from AI systems means <a href="https://ieeexplore.ieee.org/document/10352412">security can advance independently from performance</a>. When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.</p>
<p>This essay was originally published in <em>IEEE Security & Privacy</em>.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<h2 style=3D"font-size:125%;font-weight:bold" id=3D"cg19"><a name=3D"cg19"= >Upcoming Speaking Engagements</a></h2>
<p><a href="https://www.schneier.com/blog/archives/2025/12/upcoming-speaking-engagements-51.html"><strong>[2025.12.14]</strong></a> This is a current list of where and when I am scheduled to speak:</p>
<ul>
<li>I’m speaking and signing books at the <a href="https://chipublib.bibliocommons.com/events/693b4543ea69de6e000fc092">Chicago Public Library</a> in Chicago, Illinois, USA, at 6:00 PM CT on February 5, 2026. Details to come.</li>
<li>I’m speaking at <a href="https://www.capricon.org/capricon44/">Capricon 44</a> in Chicago, Illinois, USA. The convention runs February 5-8, 2026. My speaking time is TBD.</li>
<li>I’m speaking at the <a href="https://mcsc.io/">Munich Cybersecurity Conference</a> in Munich, Germany on February 12, 2026.</li>
<li>I’m speaking at <a href="https://techlivecyber.wsj.com/?gaa_at=eafs&gaa_n=AWEtsqf9GP4etUdWaqDIATpiE9ycqWMIVoGIzjikYLlJ64hb6H_v1QH9OYhMTxeU51U%3D&gaa_ts=691df89d&gaa_sig=BG9fpWuP-liL7Gi3SJgXHmS02M4ob6lp6nOh94qnwVXCWYNzJxdzOiW365xA8vKeiulrErE8mbXDvKTcqktBtQ%3D%3D">Tech Live: Cybersecurity</a> in New York City, USA on March 11, 2026.</li>
<li>I’m giving the Ross Anderson Lecture at the University of Cambridge’s Churchill College on March 19, 2026.</li>
<li>I’m speaking at RSAC 2026 in San Francisco, California, USA on March 25, 2026.</li>
</ul>
<p>The list is maintained on <a href="https://www.schneier.com/events/">this page</a>.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<p>Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see <a href="https://www.schneier.com/crypto-gram/">Crypto-Gram's web page</a>.</p>
<p>You can also read these articles on my blog, <a href="https://www.schneier.com">Schneier on Security</a>.</p>
<p>Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.</p>
<p><span style="font-style: italic">Bruce Schneier is an internationally renowned security technologist, called a security guru by the <cite style="font-style:normal">Economist</cite>. He is the author of over one dozen books -- including his latest, <a href="https://www.schneier.com/books/a-hackers-mind/"><cite style="font-style:normal">A Hacker’s Mind</cite></a> -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.</span></p>
<p>Copyright © 2025 by Bruce Schneier.</p>
<p style=3D"font-size:88%">** *** ***** ******* *********** *************<=
<p>Mailing list hosting graciously provided by <a href="https://mailchimp.com/">MailChimp</a>. Sent without web bugs or link tracking.</p>
<p>Bruce Schneier · Harvard Kennedy School · 1 Brattle Square · Cambridge, MA 02138 · USA</p>
</body></html>
--_----------=_MCPart_2046955175--