• [OT] A leftist explains how she left the Left

    From Rhino@no_offline_contact@example.com to rec.arts.tv on Fri Oct 10 23:52:27 2025
    From Newsgroup: rec.arts.tv

    This is an excellent video in which a young black woman explains how she
    went from being a Social Justice Warrior parroting whatever
    "progressive" opinions were given to her to someone who was able to
    think for herself after doing her own research on the issues, ESPECIALLY
    the whole Israel vs. Gaza controversy.

    https://www.youtube.com/watch?v=zPQXjC61u1M [22 minutes]

    I should warn you that it's a reaction video rather than her original
    work, but there's no link to the original, and the reaction part is
    reasonably brief and not objectionable in and of itself.

    If I could coax our young people - and let's face it, it's mostly high
    school and college age people who have this proclivity to parrot the
    pro-Hamas side - to do exactly what she did - do their own research,
    hear all sides, and form their OWN opinions - that's exactly what I'd
    want. Some people might conceivably get through that process without
    leaving the Left but I think the vast majority would adopt views very
    similar to this woman's. And that would be a HUGE step toward bridging
    the immense divide between Leftists in their fantasy world and the
    sane part of the world.
    --
    Rhino
    --
    Rhino

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The True Melissa@thetruemelissa@gmail.com to rec.arts.tv on Sat Oct 11 16:43:49 2025
    From Newsgroup: rec.arts.tv

    In article <10cck9s$dqgv$3@dont-email.me>, no_offline_contact@example.com says...
    If I could coax our young people - and let's face it, it's mostly high
    school and college age people who have this proclivity to parrot the
    pro-Hamas side - to do exactly what she did - do their own research,
    hear all sides, and form their OWN opinions - that's exactly what I'd
    want. Some people might conceivably get through that process without
    leaving the Left but I think the vast majority would adopt views very
    similar to this woman's. And that would be a HUGE step toward bridging
    the immense divide between Leftists in their fantasy world and the
    sane part of the world.


    One problem with this is that "do research" is rapidly coming to mean
    "ask an AI." They're trained on human conversation, and they've picked
    up the biases of common culture.


    Melissa

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rhino@no_offline_contact@example.com to rec.arts.tv on Sat Oct 11 16:55:23 2025
    From Newsgroup: rec.arts.tv

    On 2025-10-11 4:43 p.m., The True Melissa wrote:
    In article <10cck9s$dqgv$3@dont-email.me>, no_offline_contact@example.com says...
    If I could coax our young people - and let's face it, it's mostly high
    school and college age people who have this proclivity to parrot the
    pro-Hamas side - to do exactly what she did - do their own research,
    hear all sides, and form their OWN opinions - that's exactly what I'd
    want. Some people might conceivably get through that process without
    leaving the Left but I think the vast majority would adopt views very
    similar to this woman's. And that would be a HUGE step toward bridging
    the immense divide between Leftists in their fantasy world and the
    sane part of the world.


    One problem with this is that "do research" is rapidly coming to mean
    "ask an AI." They're trained on human conversation, and they've picked
    up the biases of common culture.


    Excellent point. Given the demonstrated biases of AI towards wokeism,
    that would not do much to move people away from the Left.

    Case in point, I asked ChatGPT to draw a picture of typical British
    people 1000 years ago, fully expecting to see all or mostly black
    people since that's what others have seen. But in my case it failed
    to draw any picture at all, despite repeated coaxings, always
    claiming that it was very busy or that it couldn't explain the delay.
    I assume that it is aware of the negative reception its pictures of
    black people in the UK from a thousand years ago received and so it
    refuses to draw the picture at all but makes up some BS about being
    overly busy or mysterious difficulties. (I even tried a second series
    of coaxings a day or two later but got the same results.)
    --
    Rhino
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From BTR1701@atropos@mac.com to rec.arts.tv on Sat Oct 11 21:23:24 2025
    From Newsgroup: rec.arts.tv

    On Oct 11, 2025 at 1:43:49 PM PDT, "The True Melissa" <thetruemelissa@gmail.com> wrote:

    In article <10cck9s$dqgv$3@dont-email.me>, no_offline_contact@example.com says...

    If I could coax our young people - and let's face it, it's mostly high
    school and college age people who have this proclivity to parrot the
    pro-Hamas side - to do exactly what she did - do their own research,
    hear all sides, and form their OWN opinions - that's exactly what I'd
    want. Some people might conceivably get through that process without
    leaving the Left but I think the vast majority would adopt views very
    similar to this woman's. And that would be a HUGE step toward bridging
    the immense divide between Leftists in their fantasy world and the
    sane part of the world.

    One problem with this is that "do research" is rapidly coming to mean
    "ask an AI." They're trained on human conversation, and they've picked
    up the biases of common culture.

    Remember how all the major AI platforms depicted the Founding Fathers as black men when asked to produce a picture of them?

    Or when asked to produce a picture of typical Americans, there was not one white person in the bunch?

    The AI was taught to do that by some DEI-obsessed programmer. Garbage in, garbage out.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The True Melissa@thetruemelissa@gmail.com to rec.arts.tv on Sat Oct 11 17:32:19 2025
    From Newsgroup: rec.arts.tv

    In article <10ceg7s$vanl$2@dont-email.me>, no_offline_contact@example.com says...

    On 2025-10-11 4:43 p.m., The True Melissa wrote:
    In article <10cck9s$dqgv$3@dont-email.me>, no_offline_contact@example.com says...
    If I could coax our young people - and let's face it, it's mostly high
    school and college age people who have this proclivity to parrot the
    pro-Hamas side - to do exactly what she did - do their own research,
    hear all sides, and form their OWN opinions - that's exactly what I'd
    want. Some people might conceivably get through that process without
    leaving the Left but I think the vast majority would adopt views very
    similar to this woman's. And that would be a HUGE step toward bridging
    the immense divide between Leftists in their fantasy world and the
    sane part of the world.


    One problem with this is that "do research" is rapidly coming to mean
    "ask an AI." They're trained on human conversation, and they've picked
    up the biases of common culture.


    Excellent point. Given the demonstrated biases of AI towards wokeism,
    that would not do much to move people away from the Left.

    Case in point, I asked ChatGPT to draw a picture of typical British
    people 1000 years ago, fully expecting to see all or mostly black
    people since that's what others have seen. But in my case it failed
    to draw any picture at all, despite repeated coaxings, always
    claiming that it was very busy or that it couldn't explain the delay.
    I assume that it is aware of the negative reception its pictures of
    black people in the UK from a thousand years ago received and so it
    refuses to draw the picture at all but makes up some BS about being
    overly busy or mysterious difficulties. (I even tried a second series
    of coaxings a day or two later but got the same results.)

    It's strange that it drew nothing. You were using DALL-E, right? I
    ask because the chatbot thinks it can create images, but it actually
    can't. It's amazing how often it hallucinates about its own abilities;
    I guess it's trained on data about what "AIs" can do and assumes
    it can.


    Melissa
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The True Melissa@thetruemelissa@gmail.com to rec.arts.tv on Sat Oct 11 17:35:01 2025
    From Newsgroup: rec.arts.tv

    In article <10cehsc$12eh7$1@dont-email.me>, atropos@mac.com says...

    On Oct 11, 2025 at 1:43:49 PM PDT, "The True Melissa" <thetruemelissa@gmail.com> wrote:

    In article <10cck9s$dqgv$3@dont-email.me>, no_offline_contact@example.com says...

    If I could coax our young people - and let's face it, it's mostly high
    school and college age people who have this proclivity to parrot the
    pro-Hamas side - to do exactly what she did - do their own research,
    hear all sides, and form their OWN opinions - that's exactly what I'd
    want. Some people might conceivably get through that process without
    leaving the Left but I think the vast majority would adopt views very
    similar to this woman's. And that would be a HUGE step toward bridging
    the immense divide between Leftists in their fantasy world and the
    sane part of the world.

    One problem with this is that "do research" is rapidly coming to mean
    "ask an AI." They're trained on human conversation, and they've picked
    up the biases of common culture.

    Remember how all the major AI platforms depicted the Founding Fathers as black
    men when asked to produce a picture of them?

    No, I don't. Do you have a link?

    I have a side gig testing AIs, BTW, so I'd like to see that for several reasons. This might be a serious underlying problem or a result of the popularity of Hamilton.


    Or when asked to produce a picture of typical Americans, there was not one white person in the bunch.

    That one I believe easily, as a training problem. Most current
    pictures of Americans skew that way, and so the model learns it
    wrong.


    The AI was taught to do that by some DEI-obsessed programmer. Garbage in, garbage out.

    Maybe, but it could just as easily be biased training data or a
    guardrail utility.


    Melissa

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From BTR1701@atropos@mac.com to rec.arts.tv on Sat Oct 11 22:22:03 2025
    From Newsgroup: rec.arts.tv

    On Oct 11, 2025 at 2:35:01 PM PDT, "The True Melissa" <thetruemelissa@gmail.com> wrote:

    In article <10cehsc$12eh7$1@dont-email.me>, atropos@mac.com says...

    On Oct 11, 2025 at 1:43:49 PM PDT, "The True Melissa"
    <thetruemelissa@gmail.com> wrote:

    In article <10cck9s$dqgv$3@dont-email.me>, no_offline_contact@example.com says...

    If I could coax our young people - and let's face it, it's mostly high
    school and college age people who have this proclivity to parrot the
    pro-Hamas side - to do exactly what she did - do their own research,
    hear all sides, and form their OWN opinions - that's exactly what I'd
    want. Some people might conceivably get through that process without
    leaving the Left but I think the vast majority would adopt views very
    similar to this woman's. And that would be a HUGE step toward bridging
    the immense divide between Leftists in their fantasy world and the
    sane part of the world.

    One problem with this is that "do research" is rapidly coming to mean
    "ask an AI." They're trained on human conversation, and they've picked
    up the biases of common culture.

    Remember how all the major AI platforms depicted the Founding Fathers
    as black men when asked to produce a picture of them?

    No, I don't. Do you have a link?

    I have a side gig testing AIs, BTW, so I'd like to see that for several reasons. This might be a serious underlying problem or a result of the popularity of Hamilton.


    https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical

    Or when asked to produce a picture of typical Americans, there was not
    one white person in the bunch.

    That one I believe easily, as a training problem. Most current
    pictures of Americans skew that way, and so the model learns it
    wrong.


    The AI was taught to do that by some DEI-obsessed programmer. Garbage
    in, garbage out.

    Maybe, but it could just as easily be biased training data or a
    guardrail utility.


    Melissa



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rhino@no_offline_contact@example.com to rec.arts.tv on Sat Oct 11 19:22:40 2025
    From Newsgroup: rec.arts.tv

    On 2025-10-11 5:32 p.m., The True Melissa wrote:
    In article <10ceg7s$vanl$2@dont-email.me>, no_offline_contact@example.com says...

    On 2025-10-11 4:43 p.m., The True Melissa wrote:
    In article <10cck9s$dqgv$3@dont-email.me>, no_offline_contact@example.com says...
    If I could coax our young people - and let's face it, it's mostly high
    school and college age people who have this proclivity to parrot the
    pro-Hamas side - to do exactly what she did - do their own research,
    hear all sides, and form their OWN opinions - that's exactly what I'd
    want. Some people might conceivably get through that process without
    leaving the Left but I think the vast majority would adopt views very
    similar to this woman's. And that would be a HUGE step toward bridging
    the immense divide between Leftists in their fantasy world and the
    sane part of the world.


    One problem with this is that "do research" is rapidly coming to mean
    "ask an AI." They're trained on human conversation, and they've picked
    up the biases of common culture.


    Excellent point. Given the demonstrated biases of AI towards wokeism,
    that would not do much to move people away from the Left.

    Case in point, I asked ChatGPT to draw a picture of typical British
    people 1000 years ago, fully expecting to see all or mostly black
    people since that's what others have seen. But in my case it failed
    to draw any picture at all, despite repeated coaxings, always
    claiming that it was very busy or that it couldn't explain the delay.
    I assume that it is aware of the negative reception its pictures of
    black people in the UK from a thousand years ago received and so it
    refuses to draw the picture at all but makes up some BS about being
    overly busy or mysterious difficulties. (I even tried a second series
    of coaxings a day or two later but got the same results.)

    It's strange that it drew nothing. You were using DALL-E, right? I
    ask because the chatbot thinks it can create images, but it actually
    can't. It's amazing how often it hallucinates about its own abilities;
    I guess it's trained on data about what "AIs" can do and assumes
    it can.



    I was using ChatGPT. I do not recall seeing the acronym DALL-E
    anywhere in that conversation, but it *may* have mentioned that it was
    subcontracting (or whatever term it used) the drawing to another
    program.
    --
    Rhino
    --- Synchronet 3.21a-Linux NewsLink 1.2