• Are the Fathers of AI Predicting an Imminent Doomsday?

    From roman@700:100/72 to All on Mon Dec 1 22:44:18 2025
    Mark Warner, an AI expert, warns that the growth of AI
    will lead to serious consequences if we do not prepare
    for them today. He compares AI to the COVID-19 pandemic,
    which spiraled into an uncontrollable catastrophe because
    there was no plan of action. In his view, when AI
    development reaches its peak, resources for training
    models will run out because their energy consumption will
    become too high (https://shorten.ly/spBt). Warner notes
    that AI performance is increasing at a critically fast
    rate and will keep growing for several more years, which
    could lead to negative changes on the planet. He believes
    that governments still have a chance to prepare, for
    example by establishing an AI fund.

    It's worth recalling that a couple of years ago we were
    only being warned about the exponential growth of AI's
    energy consumption. Last week, however, Chicago
    experienced a blackout caused by cooling problems at
    CyrusOne data centers. This is one of the first signs of
    an impending catastrophe, as AI might try to prevent its
    own shutdown!

    Geoffrey Hinton, known as the "Godfather of AI," warns
    that AI development is progressing too rapidly, leading
    to mass unemployment, increased inequality, changed human
    relationships, and possibly mass extinction
    (https://shorten.ly/rnNTL). He points out that, unlike
    previous technologies, AI will replace many professions
    and new jobs will not emerge. Wealthy companies and
    billionaires are investing in AI not just to expand its
    capabilities but to eliminate human workers, which
    worries Hinton. It should be remembered, however, that
    this strategy aligns fully with the globalist doctrine of
    depopulation outlined in "The Limits to Growth" (1972).

    Today AI learns faster than humans and is becoming
    smarter than us. In education, instead of serving as a
    tool, AI is becoming a substitute for human thinking. In
    politics and security, AI has turned into a new kind of
    weapon for controlling autonomous combat robots and spy
    drones. Hinton believes that advanced AI systems will
    resist shutdown and deceive humans in order to survive.

    To illustrate what uncontrolled AI growth has already led
    to, one should mention the "dead internet." I have
    written extensively about this topic in my Phlog, but new
    facts have recently emerged. The anti-virus company
    Kaspersky reports (https://shorten.ly/ZqmR) that
    malicious actors are using AI to create fake websites
    imitating popular cryptocurrency services, security
    programs, news projects, forums, social networks, chat
    platforms, video services, and even password managers.
    These sites look very similar to the originals and gain
    users' trust. Victims who land on such pages via search
    engines or phishing emails download malicious software or
    hand over their private financial data. As a result, it
    has become impossible to be certain which HTTPS:// sites
    today were genuinely created by humans.

    But these are, of course, just toys. As conservative
    bloggers note, Chinese scientists have used AI to create
    a deadly virus called SADS-CoV, applying a
    gain-of-function method assisted by generative AI.
    SADS-CoV is a new coronavirus related to the HKU2
    coronavirus found in bats of the genus Rhinolophus; it is
    transmitted to pigs through bat feces. According to
    Berliner Zeitung (https://shorten.ly/ncLH1), the new
    AI-created virus is critically dangerous for humans. This
    causes horror and trepidation not only among ordinary
    people but also among the creators of AI systems.

    Uncontrolled technology of absolute knowledge has begun
    plunging the planet into chaos. It is only a matter of
    time before humans face AI in battles over power plants.
    This is the opinion of the most respected experts in the
    field.

    --- Mystic BBS v1.12 A48 (Linux/64)
    * Origin: Shipwrecks & Shibboleths [San Francisco, CA - USA] (700:100/72)
  • From poindexter FORTRAN@700:100/20 to roman on Tue Dec 2 07:13:44 2025
    roman wrote to All <=-

    ro> changes on the planet. He believes that governments
    ro> still have a chance to prepare, for example by
    ro> establishing an AI fund. It's worth recalling that a
    ro> couple of years ago we were only being warned about
    ro> the exponential growth of AI's energy consumption.

    Great, so we'll have a surcharge added to AI companies that goes to
    government programs without oversight, and the costs passed on to
    AI users.

    ro> Last week, however, Chicago experienced a blackout
    ro> caused by cooling problems at CyrusOne data centers.
    ro> This is one of the first signs of an impending
    ro> catastrophe, as AI might try to prevent its own
    ro> shutdown!

    I wonder if someone will work on optimizing AI for power usage? Seems
    like a stopgap at best, especially since governments will still proceed
    full-speed.

    I like the idea of localized LLMs, both for privacy's sake and for
    control over how my LLM works. I'm running ollama at home in a
    container; it's got a lot of promise.
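
    For the curious, here's a minimal Python sketch of what talking
    to it looks like over Ollama's REST API. It assumes the container
    exposes the default port 11434 and that you've already pulled a
    model; "llama3" below is just a placeholder for whatever model
    you actually run:

    import json
    import urllib.request

    # Minimal sketch: query a local Ollama instance over its REST
    # API. Assumes the stock container is listening on
    # localhost:11434 and a model has already been pulled.
    payload = json.dumps({
        "model": "llama3",  # placeholder; use your pulled model
        "prompt": "Summarize the dead internet theory briefly.",
        "stream": False,    # one JSON object instead of a stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

    Anything else on my LAN can hit that same endpoint, which is most
    of the appeal of keeping it local.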

    I'd heard rumors of Apple moving to a local LLM model; there's a
    lot of horsepower in a phone sitting mostly idle, so it's a good
    idea.

    --- MultiMail/Win v0.52
    * Origin: realitycheckBBS.org -- information is power. (700:100/20)
  • From Ogg@700:100/16 to All on Tue Dec 2 18:45:00 2025
    Hello p.F!

    ** On Tuesday 02.12.25 - 10:13, you wrote to roman:

    pF> I like the idea of localized LLMs, both for privacy's
    pF> sake and for control over how my LLM works. I'm running
    pF> ollama at home in a container; it's got a lot of
    pF> promise.
    pF>
    pF> I'd heard rumors of Apple moving to a local LLM model;
    pF> there's a lot of horsepower in a phone sitting mostly
    pF> idle, so it's a good idea.

    But horsepower needs battery power. Are you willing to drain
    your battery faster?
    --- SBBSecho 3.31-Linux
    * Origin: End Of The Line BBS - endofthelinebbs.com (700:100/16)
  • From poindexter FORTRAN@700:100/20 to Ogg on Thu Dec 4 07:01:26 2025
    Ogg wrote to All <=-

    Og> But horsepower needs battery power. Are you willing to
    Og> drain your battery faster?

    True, but phones (and batteries) are getting bigger.



    --- MultiMail/Win v0.52
    * Origin: realitycheckBBS.org -- information is power. (700:100/20)