From Newsgroup: comp.ai.philosophy
On 30 Sep 2025, Baxter <bax02_spamblock@baxcode.com> posted some news:10bgs7f$3otvb$2@dont-email.me:
Klaus Schadenfreude <klaus.schadenfreude.Zwergentöter.@gmail.com>
wrote in news:cihndktbkfpbs6hc80r824udj2o66nuofk@Rudy.Canoza.is.a.forging.cocksucking.dwarf.com:
Until the late 19th century, there wasn't any such thing as "illegal"
or "legal" immigration to the United States. That's because before you
can immigrate somewhere illegally, there has to be a law for you to
break.
Being denied a visa or denied entry is not immigration.
============
AI Overview
"Illegal immigrant" is a term for an individual who is present in a
country without legal authorization, whether from entering without
official inspection or from overstaying a temporary visa.
- Note: "present in a country"
============
AI Overview
How often AI is wrong depends on the model, the task, and the data it
was trained on, but studies consistently show significant error rates,
especially for generative AI. Many AI models are trained to prioritize
answering a query over admitting a lack of knowledge, which causes them
to "hallucinate" incorrect but plausible-sounding information.
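That answer-at-all-costs incentive is easy to picture as a thresholding
decision. A minimal sketch in Python (the confidence scores, the 0.7
threshold, and the answer_or_abstain helper are all invented for
illustration, not taken from any real system):

# Toy illustration (all numbers invented): a model rewarded only for
# answering behaves like threshold = 0.0, so low-confidence guesses
# ("hallucinations") go out as confidently as real answers.

def answer_or_abstain(confidence, answer, threshold=0.7):
    """Give the answer only when confidence clears the threshold."""
    return answer if confidence >= threshold else "I don't know."

queries = [
    ("capital of France", "Paris", 0.99),
    ("obscure 1887 statute", "plausible-sounding guess", 0.31),
]

for topic, answer, conf in queries:
    always = answer_or_abstain(conf, answer, threshold=0.0)
    calibrated = answer_or_abstain(conf, answer)
    print(topic, "| always-answer:", always, "| calibrated:", calibrated)

An always-answer policy emits the guess on the obscure query; a
calibrated policy trades coverage for accuracy and says "I don't know."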
Study findings on AI inaccuracies:
- AI search tools: Popular AI search tools, including ChatGPT and
Gemini, gave incorrect or misleading information over 60% of the time
in a March 2025 study. Another study from March 2025 reported that AI
search engines invented sources for about 60% of queries.
- AI assistants: When generative AI tools were asked about the world's
elections, even the most accurate models got 1 in 5 responses wrong,
according to a 2025 study.
- News-related queries: A February 2025 study found that 90% of AI
chatbot responses about news contained some inaccuracies, with 51%
containing "significant" inaccuracies.
- Bias: A 2022 study by USC researchers found that biased "facts" made
up 3.4% to 38.6% of the data used by some AI systems, and algorithms
can then amplify those biases (a toy sketch of that amplification
follows the list).
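One simple way an algorithm can exaggerate a skew like that is
majority-label collapse: a model that just predicts the most frequent
label turns a 60/40 imbalance in its training data into a 100/0
imbalance in its output. A minimal sketch, with made-up numbers:

from collections import Counter

# Hypothetical skewed training set: 60 of 100 examples carry the
# biased label.
training_labels = ["biased"] * 60 + ["neutral"] * 40

# A degenerate "model" that always predicts the most common
# training label.
majority_label, _ = Counter(training_labels).most_common(1)[0]
predictions = [majority_label] * 100

print("training skew:  ", training_labels.count("biased"), "%")  # 60
print("prediction skew:", predictions.count("biased"), "%")      # 100

Real models are not this degenerate, but the same pressure toward the
majority pattern is how a modest skew in the data can become a much
larger skew in the output.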
--- Synchronet 3.21a-Linux NewsLink 1.2