• Risks Digest 34.45 (1/2)

    From RISKS List Owner@21:1/5 to All on Sun Sep 15 00:15:12 2024
    RISKS-LIST: Risks-Forum Digest Saturday 14 Sep 2024 Volume 34 : Issue 45

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.45>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    The Social Impact of those Little Computers in Our Pockets
    (Peter Bernard Ladkin)
    The U.S. Military Is Not Ready for the New Era of Warfare
    (NYTimes via Susmit Jha)
    The AI nightmare is already here, thanks to our own governments
    (Lauren Weinstein)
    Hacker tricks ChatGPT into giving out detailed instructions for
    making homemade bombs (TechCrunch)
    AI Wants to Be Free -- Or at least very, very cheap (NYMag)
    Tech giants fight plan to make them pay more for electric grid upgrades
    (WashPost)
    A tech firm stole our voices: then cloned and sold them (BBC)
    The Bands and the Fans Were Fake. The $10 Million Was Real. (NYTimes)
    Authors fighting deluge of fake writers and AI-generated books (CBC)
    AI + Script-Kiddies: Malware/Ransomware explosion? (Henry Baker)
    Insurance company spied on house from the sky. Then the real nightmare
    began. (via GG)
    AI worse than humans in every way at summarising information, government
    trial finds (Crikey)
    Generative AI Transformed English Homework. Math Is Next
    (WiReD)
    The national security threats in U.S. election software -- hiding in
    plain sight (Politico)
    He’s Known as *Ivan the Troll*. His 3D-Printed Guns Have Gone Viral.
    (NYTimes)
    Quantum Computer Corrected Its Own Errors, Improving Its
    Calculations (Emily Conover)
    Debloating Windows made me realize how packed with useless features
    it is (XDA Developers)
    50,000 gallons of water needed to put out Tesla Semi fire (AP News)
    See How Humans Help Self-Driving Cars Navigate City Streets
    (The New York Times)
    Love (of cybersecurity) is a battlefield (ArsTechnica)
    Senate Proposal for Crypto Tax Exemption Is Long Overdue (Cato Institute)
    More on tariffs and bans against Chinese or other countries' goods
    Signal Is More Than Encrypted Messaging. Under Meredith Whittaker,
    It’s Out to Prove Surveillance Capitalism Wrong (WiReD)
    The For-Profit City That Might Come Crashing Down (NYTimes)
    ``It just exploded.'' Springfield woman claims she never meant to spark false
    rumors about Haitians (NBC News)
    Re: Feds sue Georgia Tech for lying bigly about computer security
    (Cliff Kilby, Dylan Northrup, Cliff Kilby)
    Re: Standard security policies and variances (Cliff Kilby)
    Re: How Navy chiefs conspired to get themselves illegal warship WiFi
    (Shapir, Stan Brown)
    Re: Former Tesla Autopilot Head And Ex-OpenAI Researcher Says
    'Programming Is Changing So Fast' That He Cannot Think Of Going Back To
    Coding Without AI (Steve Bacher)
    Re: Moscow's Spies Were Stealing U.S. Tech, Until the FBI Started a Sabotage
    Campaign (djc)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Mon, 9 Sep 2024 17:06:01 +0200
    From: "Prof. Dr. Peter Bernard Ladkin" <ladkin@causalis.com>
    Subject: The Social Impact of those Little Computers in Our Pockets

    On Monday, 29 July, a Taylor-Swift-themed dance lesson for young kids was
    being held in Southport near Liverpool. A knife-wielding youth entered,
    stabbed and killed three young girls, aged 6, 7 and 9, and injured 8 other
    kids and 2 adults.

    Rumors were apparently spread on "social media" that the perpetrator was an asylum seeker from -- I dunno -- the Middle East or Africa or somewhere. He isn't -- he is a born and bred Brit. An outdoor commemoration for the poor
    kids the next day was overrun by a group of thugs who then attacked a
    mosque. Hotels housing asylum seekers elsewhere were attacked, with attempts
    to set them on fire (with the occupants present). A series of "flash riots",
    so to speak.

    Such riots spread in a couple of days to lots of cities. The government response was rapid (PM Sir Keir Starmer had dealt with the 2011 riots and we can presume he has his ideas about what went right and wrong with that response). There were plenty of onlookers and there was lots of video on those little computers we nowadays carry along in our pockets. The police
    set a massive rapid evaluation program in motion; courts and prisons were
    put on standby (I understate; some prisons released inmates early to make
    space for the expected influx). Rioters and those who encouraged them were identified, arraigned, and sent to prison extremely rapidly, the first ones within eight days from offence to prison: https://www.theguardian.com/uk-news/article/2024/aug/07/rioter-southport-jailed-far-right
    . Not only active rioters were jailed, but those who incited them in "social media" messages.

    It quietened down relatively quickly. There has been talk of some 1,000 or
    so offences that were to be prosecuted and likely to lead to jail terms (it
    is generally unwise to plead "not guilty" when the police have veridical
    film of you doing what you did).

    The "flash riots" were organised through those little digital computers that everyone carries around in their pockets. They were encouraged -- "fed" is probably an appropriate word -- by a well-known "extreme rightist" and ex-football hooligan, Stephen Yaxley-Lennon, from, of ael places,
    Cyprus. And by a cameo from new MP Nigel Farage, he of Brexit fame, who said there were "questions" about the attack and attacker that needed answers
    that did not appear to be forthcoming. He was, of course, "just asking questions" but the intent seemed to be to suggest something was being
    covered up by the "authorities".

    Neither Yaxley-Lennon nor Farage could have done what they did with the
    effects it had without those little computers in everyone's pockets. It used
    to be that political actors had to pretty much persuade (or own!) a professional journalist to print their words in a newspaper. And those words wouldn't necessarily be printed the way the actor wants. For example, if
    Farage had spoken to a journalist about his "questions", the journalist
    would have been able quickly to ascertain the reality and it is unlikely
    that there would have been anything there worth publishing. I can't recall newspapers ever printing verbatim much of what Yaxley-Lennon seems to want
    to say (although there are quite a few words about what he's done and was doing).

    On the other hand, those little computers were also used by others to make videos of what was going on, which led from offence to prison so rapidly.

    There are significant personal consequences in this technology-fueled behaviour. If you are going to punch a policeman at such a gathering (see below), someone likely has you on film. If you are identifiable, that could well lead to your arrest and conviction. Lay people would be surprised by
    the ways there are of identifying people, even masked people, from film. 
    The police can appeal to the public for information, and there are often
    plenty of people, not all of them your friends, who know what you look
    like. (We can note in addition that such techniques surely can also be used
    by authoritarian states pursuing critics just as well as they can be used by British police pursuing rioters.)

    We can and probably should also remark on what people not otherwise involved
    can be led to do. There is a 53-year-old woman who lives a "sheltered life"
    in a smallish village (2,201 inhabitants) miles away from any riots, caring
    for her ill husband at home, who lost her cool once on a Facebook group and
    has been sentenced to 15 months in jail for it: https://www.theguardian.com/uk-news/article/2024/aug/14/woman-53-jailed-over-blow-the-mosque-up-facebook-post-after-southport-riots

    What of the future? The videos the police assessed here were most likely to
    be veridical. We might have to think much harder in the future about
    deepfake videos, and how videos should be assessed. Are we really so
    certain that we technologists will still be able to tell the real from the fake? There have been a few hefty scandals in the UK involving politicians
    and alleged sexual abuse of minors. Here is one: https://en.wikipedia.org/wiki/Cyril_Smith . But there are dissimulators, such
    as Carl Beech https://www.theguardian.com/uk-news/2019/jul/22/how-nick-the-serial-child-abuse-accuser-became-the-accused who falsely accused Lords Bramall, Brittan and
    former MP Harvey Proctor of abusing him. What happens when such people have film? Are we going to be able to tell the veridical from the fake?

    ------------------------------

    Date: Fri, 13 Sep 2024 8:51:34 PDT
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: The U.S. Military Is Not Ready for the New Era of Warfare
    (The New York Times)

    Possible URL:
    https://www.NewYorkTimes.com/ai-drones-robot-war-pentagon

    [Thanks to Susmit Jha. For some unknown reason, I cannot find who wrote
    it, or when it ran. PGN]

    Techno-skeptics who argue against the use of AI in warfare are oblivious to
    the reality that autonomous systems are already everywhere -- and the technology is increasingly being deployed to these systems' benefit. Hezbollah's alleged use of explosive-laden drones has displaced at least
    60,000 Israelis south of the Lebanon border. Houthi rebels are using
    remotely controlled sea drones to threaten the 12 percent of global shipping value that passes through the Red Sea, including the supertanker Sounion,
    now abandoned, adrift and aflame, with four times as much oil as was carried
    by the Exxon Valdez.

    Yet as this is happening, the Pentagon still overwhelmingly spends its
    dollars on legacy weapons systems. It continues to rely on an outmoded and costly technical production system to buy tanks, ships and aircraft carriers that new generations of weapons -- autonomous and hypersonic -- can demonstrably kill. Take for example the F-35, the apex predator of the
    sky. The fifth-generation stealth fighter is known as a "flying
    computer" for its ability to fuse sensor data with advanced weapons. Yet this $2 trillion program has fielded fighter airplanes with less processing power than many smartphones.

    The history of failure in war can almost be summed up in two words:
    *too late*, Douglas MacArthur declared hauntingly in 1940.

    ------------------------------

    Date: Tue, 10 Sep 2024 07:03:59 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: The AI nightmare is already here, thanks to our own governments

    It's important to understand the specifics of the AI nightmare that
    yes, has now arrived. We know that these AI systems being pushed by
    Google and other firms are not actually intelligent, and that they
    frequently misunderstand input data and spew forth answers or other
    output that often appear reasonable even when completely wrong or
    riddled with errors.

    Government agencies rushing to use these systems to cut their
    workloads -- processing unemployment applications, creating
    transcripts of police encounters from body-camera audio, and so on --
    are creating a perfect storm for these AI systems, hyped to the hilt
    by desperate Big Tech firms, to horribly impact people's lives in
    major ways. THIS is the real danger of AI today -- much more
    so than (bad enough!) inaccurate and nonsensical Google Search AI
    Overview answers. -L

    ------------------------------

    Date: Thu, 12 Sep 2024 08:29:21 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Hacker tricks ChatGPT into giving out detailed instructions for
    making homemade bombs (TechCrunch)

    https://techcrunch.com/2024/09/12/hacker-tricks-chatgpt-into-giving-out-detailed-instructions-for-making-homemade-bombs/

    ------------------------------

    Date: Thu, 12 Sep 2024 10:06:39 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: AI Wants to Be Free -- Or at least very, very cheap
    (NYMag)

    The tech companies banking on AI are already scrambling to find ways to
    offset their skyrocketing costs, but the public may not be willing to pay
    for something they’re already used to getting on the cheap.

    https://nymag.com/intelligencer/article/ai-wants-to-be-free.html

    [This would be a good thing, if users who don't pay would no longer get
    inaccurate summaries and other AI junk from Google et al. SB]

    [I suppose if the commercial AI shysters were to adopt and seriously use
    some of the evidence-based trustworthiness now coursing through the
    naturally intelligent research sub-community, they might justifiably
    charge for their efforts. They really do not deserve to charge for
    crapware that is riddled with errors and sloppiness. Just a thought.

    Please note that AI encompasses a huge variety of techniques and
    applications, and is a very broad-brush term. Some of it has random
    elements, which tend to make it non-deterministic, and therefore
    non-repeatable. Military uses should demand evidence-based
    trustworthiness in AI systems with respect to carefully stated
    requirements and assumptions. However, I believe all AI systems
    intended for critical uses could benefit from the same discipline. PGN]

    ------------------------------

    Date: Fri, 13 Sep 2024 18:48:58 -0700
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: Tech giants fight plan to make them pay more for
    electric grid upgrades

    https://www.washingtonpost.com/technology/2024/09/13/data-centers-power-grid-ohio/

    A regulatory dispute in Ohio may help answer one of the toughest questions hanging over the nation's power grid: Who will pay for the huge upgrades
    needed to meet soaring energy demand from the data centers powering the
    modern Internet and artificial intelligence revolution?

    The power company said projected energy demand in central Ohio forced it to stop approving new data center deals there last year while it figured out
    how to pay for the new transmission lines and additional infrastructure they would require.

    The energy demands of data centers have created similar concerns in other
    hot spots such as Northern Virginia, Atlanta and Maricopa County, Ariz., leaving experts concerned that the U.S. power grid may not be capable of dealing with the combined needs of the green energy transition and the computing boom that artificial intelligence companies say is coming.

    ------------------------------

    Date: Sat, 31 Aug 2024 22:32:03 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: A tech firm stole our voices: then cloned and sold them (BBC)

    https://www.bbc.com/news/articles/c3d9zv50955o

    The notion that artificial intelligence could one day take our jobs is a message many of us will have heard in recent years.

    But, for Paul Skye Lehrman, that warning has been particularly personal, chilling and unexpected: he heard his own voice deliver it.

    In June 2023, Paul and his partner Linnea Sage were driving near their home
    in New York City, listening to a podcast about the ongoing strikes in
    Hollywood and how artificial intelligence (AI) could affect the industry.

    The episode was of interest because the couple are voice-over performers
    and -- like many other creatives -- fear that human-sounding voice generators could soon be used to replace them.

    This particular podcast had a unique hook -- they interviewed an
    AI-powered chat bot, equipped with text-to-speech software, to ask how it
    thought the use of AI would affect jobs in Hollywood.

    But, when it spoke, it sounded just like Mr Lehrman.

    ------------------------------

    Date: Thu, 5 Sep 2024 08:22:45 -0700
    From: Jim <jgeissman@socal.rr.com>
    Subject: The Bands and the Fans Were Fake. The $10 Million Was Real.

    A North Carolina man used artificial intelligence to create hundreds of thousands of fake songs by fake bands, then put them on streaming services where they were enjoyed by an audience of fake listeners, prosecutors said.

    Penny by penny, he collected a very real $10 million, they said when they charged him with fraud.

    The man, Michael Smith, 52, was accused in a federal indictment unsealed on Wednesday of stealing royalty payments from digital streaming platforms for seven years. Mr. Smith, a flesh-and-blood musician, produced A.I.-generated music and played it billions of times using bots he had programmed,
    according to the indictment.

    The supposed artists had names like "Callous Post," "Calorie Screams" and "Calvinistic Dust" and produced tunes like "Zygotic Washstands," "Zymotechnical" and "Zygophyllum" that were top performers on Amazon Music, Apple Music and Spotify, according to the charges.

    "Smith stole millions in royalties that should have been paid to musicians, songwriters, and other rights holders whose songs were legitimately
    streamed," Damian Williams, the U.S. attorney for the Southern District of
    New York, said in a statement on Wednesday.

    https://www.nytimes.com/2024/09/05/nyregion/nc-man-charged-ai-fake-music.html?unlocked_article_code=1.IU4.qG5j.YQ5cIffWwcJP

    ------------------------------

    Date: Thu, 12 Sep 2024 06:54:49 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Authors fighting deluge of fake writers and AI-generated books
    (CBC)

    https://www.cbc.ca/news/entertainment/ai-generated-books-amazon-1.7319018

    ------------------------------

    Date: Tue, 10 Sep 2024 16:36:40 +0000
    From: Henry Baker <hbaker1@pipeline.com>
    Subject: AI + Script-Kiddies: Malware/Ransomware explosion?

    While this paper is only 2 years old, the quality of AIs/Copilots has gotten much, much better.

    The "highest and best use" of Copilot/AI during the next several years may = well be thee production of malware and ransomware *at scale* by relatively unskilled individuals.

    The dream of the White House in turning unemployed coal miners into
    computer programmers may have finally been realized. :-)

    https://ieeexplore.ieee.org/document/10284976

    GitHub Copilot: A Threat to High School Security?
    Exploring GitHub Copilot's Proficiency in Generating Malware from Simple
    User Prompts

    Eric Burton Martin; Sudipto Ghosh
    24 October 2023

    This paper examines the potential implications of script kiddies and novice programmers with malicious intent having access to GitHub Copilot, an artificial intelligence tool developed by GitHub and OpenAI. The study assesses how easily one can utilize this tool to generate various common types of malware ranging from ransomware to spyware, and attempts to
    quantify the functionality of the produced code. Results show that with a single user prompt, malicious software such as DoS programs, spyware, ransomware, trojans, and wiperware can be created with ease. Furthermore, uploading the generated executables to VirusTotal revealed an average of
    7/72 security vendors flagging the programs as malicious. This study has shown that ***novice programmers and script kiddies with access to Copilot can readily create functioning malicious software with very little coding experience.***

    https://www.nytimes.com/2023/06/02/opinion/ai-coding.html

    Farhad Manjoo June 2, 2023
    It's the End of Computer Programming as We Know It.
    "Wait a second, though -- wasn't coding supposed to be one of the =
    can't-miss careers of the digital age? ... Joe Biden to coal miners: Learn
    to code!"

    ------------------------------

    Date: Fri, 30 Aug 2024 16:43:06 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Insurance company spied on house from the sky. Then the
    real nightmare began. (via GG)

    Author: Part of what is so disturbing about the whole episode is how opaque
    it was. When Travelers took aerial footage of my house, I never knew. When
    it decided I was too much of a risk, I had no way of knowing why or how. As more and more companies use more and more opaque forms of AI to decide the course of our lives, we're all at risk. AI may give companies a quick way to save some money, but when these systems use our data to make decisions about our lives, we're the ones who bear the risk. Maddening as dealing with a
    human insurance agent is, it's clear that AI and surveillance are not the right replacements. And unless lawmakers take action, the situation will
    only get worse.

    https://www.msn.com/en-us/news/technology/ar-AA1onU5O

    Travelers clarified that any "high-resolution aerial imagery" it used did
    not come from a drone.

    Whether drone or ... what? The risk is remote surveillance interpreted by AI, no humans needed for underwriting. What could go wrong?

    ------------------------------

    Date: Tue, 3 Sep 2024 07:40:49 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: AI worse than humans in every way at summarising information,
    government trial finds (Crikey)

    https://www.crikey.com.au/2024/09/03/ai-worse-summarising-information-humans-government-trial/

    ------------------------------

    Date: Sat, 31 Aug 2024 07:47:19 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Generative AI Transformed English Homework. Math Is Next (WiReD)

    ByteDance’s Gauth app scans math homework and provides thorough, often correct, answers using AI. Millions have already downloaded it for free.

    https://www.wired.com/story/gauth-ai-math-homework-app/

    ------------------------------

    Date: Sun, 1 Sep 2024 08:25:21 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: The national security threats in U.S. election software -- hiding in
    plain sight (Politico)

    Hacking blind spot: States struggle to vet coders of election software.

    In New Hampshire, a cybersecurity firm found troubling security bugs -- and the Ukrainian national anthem -- written into a voter database built with
    the help of an overseas subcontractor.

    When election officials in New Hampshire decided to replace the state’s
    aging voter registration database before the 2024 election, they knew that
    the smallest glitch in Election Day technology could become fodder for conspiracy theorists.

    So they turned to one of the best -- and only -- choices on the market: a
    small, Connecticut-based IT firm that was just getting into election
    software.

    But last fall, as the new company, WSD Digital, raced to complete the
    project, New Hampshire officials made an unsettling discovery: The firm had offshored part of the work. That meant unknown coders outside the U.S. had access to the software that would determine which New Hampshirites would be welcome at the polls this November.

    The revelation prompted the state to take a precaution that is rare among election officials: It hired a forensic firm to scour the technology for
    signs that hackers had hidden malware deep inside the coding supply chain.

    The probe unearthed some unwelcome surprises: software misconfigured to
    connect to servers in Russia and the use of open-source code -- which is freely available online -- overseen by a Russian computer engineer convicted of manslaughter, according to a person familiar with the examination and granted anonymity because they were not authorized to speak about it. [...]

    https://www.politico.com/news/2024/09/01/us-election-software-national-security-threats-00176615

    ------------------------------

    Date: Tue, 10 Sep 2024 10:16:30 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: He's Known as *Ivan the Troll*. His 3D-Printed Guns Have Gone
    Viral. (NYTimes)

    From his Illinois home, he champions guns for all. The Times confirmed his real name and linked the firearm he helped design to terrorists, drug
    dealers and freedom fighters in at least 15 countries.

    https://www.nytimes.com/2024/09/10/world/europe/ivan-troll-3d-printed-homemade-guns-fgc9.html

    ------------------------------

    Date: Fri, 13 Sep 2024 11:21:37 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Quantum Computer Corrected Its Own Errors, Improving Its
    Calculations (Emily Conover)

    Emily Conover, *Science News*, 10 Sep 2024, via ACM Technews

    Microsoft and Quantinuum researchers demonstrated a quantum computer that
    uses quantum error correction to fix its own mistakes mid-calculation. The researchers were able to perform operations and error correction repeatedly
    on eight logical qubits, with the corrected calculation having an error rate one-tenth that of the original physical qubits. The researchers also
    achieved a record entanglement of 12 logical qubits, with an error rate less than one-twentieth that of the original physical qubits.

    ------------------------------

    Date: Sun, 1 Sep 2024 04:49:15 -0400
    From: "Gabe Goldberg" <gabe@gabegold.com>
    Subject: Debloating Windows made me realize how packed with useless features
    it is (XDA Developers)

    Windows 11 comes with lots of unnecessary bloatware that can slow down
    your system.

    Win11Debloat significantly improves system performance by removing
    unnecessary background processes.

    Consider using Win11Debloat or Tiny11 to customize your Windows install
    and remove unwanted applications for a cleaner experience.

    https://www.xda-developers.com/debloat-windows-packed-useless-features/

    ------------------------------

    Date: Fri, 13 Sep 2024 11:53:28 +0000
    From: Henry Baker <hbaker1@pipeline.com>
    Subject: 50,000 gallons of water needed to put out Tesla Semi fire (AP News)

    WASHINGTON (AP) -- California firefighters had to douse a flaming battery in a Tesla Semi with about 50,000 gallons (190,000 liters) of water to extinguish flames after a crash, the National Transportation Safety Board said
    Thursday. In addition to the huge amount of water, firefighters used an aircraft to drop fire retardant on the immediate area of the electric truck
    as a precautionary measure, the agency said in a preliminary report. The freeway was closed for about 15 hours as firefighters made sure the batteries were cool enough to recover the truck. If a home fireplace
    generates ~1.5 kW and a Tesla Semi has a battery which stores ~900 kWh, then it could "burn" for *600 hours* -- i.e., 25 days.

    https://apnews.com/article/tesla-semi-fire-battery-crash-water-firefighters-7ff04a61e562b80b73e057cfd82b6165

    (Alternatively, 309,597 *AI inferences* could be performed with this same
    900kWh.)
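
    A quick sanity check of that arithmetic, sketched here in Python (the
    ~1.5 kW fireplace output and the ~900 kWh battery capacity are the
    figures assumed above, not independently verified):

      battery_kwh = 900          # assumed Tesla Semi battery capacity (above)
      fireplace_kw = 1.5         # assumed home-fireplace heat output (above)
      burn_hours = battery_kwh / fireplace_kw     # 600.0 hours
      burn_days = burn_hours / 24                 # 25.0 days
      wh_per_inference = battery_kwh * 1000 / 309_597   # roughly 2.9 Wh each

    which reproduces the 600-hour (25-day) estimate and implies roughly
    2.9 Wh per AI inference.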

    ------------------------------

    Date: Thu, 5 Sep 2024 19:00:07 -0400
    From: "Gabe Goldberg" <gabe@gabegold.com>
    Subject: See How Humans Help Self-Driving Cars Navigate City Streets
    (The New York Times)

    In places like San Francisco, Phoenix and Las Vegas, robot taxis are
    navigating city streets, each without a driver behind the steering
    wheel. Some don’t even have steering wheels.

    But cars like this one in Las Vegas are sometimes guided by someone
    sitting here: [image in the original article: an office scene with a
    person standing in front of workstations with people seated at
    computers].

    This is a command center in Foster City, Calif., operated by Zoox, a self-driving car company owned by Amazon. Like other robot taxis, the company’s self-driving cars sometimes struggle to drive themselves, so
    they get help from human technicians sitting in a room about 500 miles away.

    Inside companies like Zoox, this kind of human assistance is taken for
    granted. Outside such companies, few realize that autonomous vehicles
    are not completely autonomous.

    https://www.nytimes.com/interactive/2024/09/03/technology/zoox-self-driving-cars-remote-control.html?smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb&ngrp=mnn&pvid=77CAD2CB-56B4-4A3A-B6BA-B83FDD381C21

    ------------------------------

    Date: Fri, 30 Aug 2024 19:46:08 -0400
    From: Cliff Kilby <cliffjkilby@gmail.com>
    Subject: Love (of cybersecurity) is a battlefield (ArsTechnica)

    https://arstechnica.com/security/2024/08/city-of-columbus-sues-man-after-he-discloses-severity-of-ransomware-attack/

    It is difficult to work in cybersecurity if the response to a dispute of an entity's public audit result is a lawsuit against the researcher.

    If this job made any sense, I would have expected this article to read 'City
    of Columbus issues dispute with auditor'.

    How did the researcher locate any usable material from any source if no material was released? How can you illegally possess something that was
    attested not to exist?

    Arguing that the dark web is inherently criminal is a very foolish path to tread. What makes the dark web dark? The lack of dependence on the public
    DNS root?

    By that definition every company webfilter and pinhole creates a dark web. I refuse to allow anything on .xyz to resolve in my home. Did I require special tools or knowledge to accomplish that? Is my act of disconnecting from root
    DNS a criminal one? Am I now obligated to serve malware-associated domains because filtering them would create a dark web?
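
    The "no special tools" point is easy to make concrete. Here is a
    minimal, purely illustrative sketch in Python (the block list and
    helper are hypothetical, not any particular resolver's interface):

      import socket

      BLOCKED_TLDS = {"xyz"}   # hypothetical local policy

      def resolve(hostname):
          # Refuse names whose TLD is locally blocked; otherwise fall
          # through to the ordinary system resolver.
          tld = hostname.rstrip(".").rsplit(".", 1)[-1].lower()
          if tld in BLOCKED_TLDS:
              raise socket.gaierror(hostname + ": refused by local policy")
          return socket.getaddrinfo(hostname, None)

    A company webfilter or a home blocklist is doing some variant of
    exactly this.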

    If you are doing contract pen-testing, follow your contract to the letter.
    If you're doing open research, probably best to be active with a reputable
    org (I like ACM), and you'll probably want to find a lawyer.

    Above all, keep pointing out lies.

    (I can call a published falsehood a lie still, right? I might need to call
    my lawyer.)

    [with my respects to Pat Benatar.]

    ------------------------------

    Date: Sun, 1 Sep 2024 17:13:49 -0400
    From: "Gabe Goldberg" <gabe@gabegold.com>
    Subject: Senate Proposal for Crypto Tax Exemption Is Long Overdue (Cato Institute)

    Using Cryptocurrency as a Form of Payment Could Become Practical with
    Proposed Legislation

    Senate Proposal for Crypto Tax Exemption Is Long Overdue

    Four senators are fighting to exempt low-value crypto transactions from
    federal taxation. Congressional approval for their proposal is long overdue.

    Bitcoin policy has been the talk of the town in Washington, D.C. ever
    since former President Donald Trump, Republican Senator Cynthia Lummis
    (WY), and presidential candidate Robert F. Kennedy Jr. all announced
    their support for a strategic Bitcoin reserve at the Bitcoin 2024
    conference in Nashville. Yet, the renewed introduction of another
    proposal in Congress flew under the radar, and it’s long overdue:
    creating a tax exemption for taxpayers who pay with cryptocurrency.

    https://www.cato.org/commentary/senate-proposal-crypto-tax-exemption-long-overdue

    These too, surely -- https://en.wikipedia.org/wiki/Doubloon

    ------------------------------

    Date: Fri, 13 Sep 2024 07:12:21 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: More on tariffs and bans against Chinese or other countries'
    goods

    China is a repressive Communist regime. However, for decades U.S.
    firms have voluntarily handed manufacturing dominance to China, in
    order to benefit their own profits. U.S. consumers have become
    dependent on this arrangement, and U.S. manufacturers overall have
    shown little interest in "making quality stuff" again at reasonable
    prices.

    When import restrictions and/or tariffs are applied to a foreign
    country, it should be for valid reasons that will promote valid
    outcomes. Not for political reasons (especially fake national security
    claims).

    There are certainly valid discussions to be had regarding our relationship
    with China. But when you dig down into the motives behind these kinds of tariffs and bans, you find very little but politics in play. -L

    ------------------------------

    Date: Fri, 6 Sep 2024 17:11:28 -0400
    From: "Gabe Goldberg" <gabe@gabegold.com>
    Subject: Signal Is More Than Encrypted Messaging. Under Meredith Whittaker,
    It’s Out to Prove Surveillance Capitalism Wrong (WiReD)

    On its 10th anniversary, Signal’s president wants to remind you that the world’s most secure communications platform is a nonprofit. It’s free.
    It doesn’t track you or serve you ads. It pays its engineers very well.
    And it’s a go-to app for hundreds of millions of people. [...]

    Yeah. I don’t think anyone else at Signal has ever tried, at least so vocally, to emphasize this definition of Signal as the opposite of
    everything else in the tech industry, the only major communications
    platform that is not a for-profit business.

    Yeah, I mean, we don’t have a party line at Signal. But I think we
    should be proud of who we are and let people know that there are clear

    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)