Nope, you must feed it info. Shit IN = Shit OUT. Woke BS programmers = woke BS AI chatbot.
January 2023 article by Eric Spitznagel
How woke ChatGPT’s ‘built-in ideological bias’ could do more harm than good
ChatGPT, the artificial intelligence chatbot built by San Francisco company OpenAI, was released to the general public as a prototype in late November, and it didn’t take long for users to share their questionable experiences on social media. Some noted that ChatGPT would gladly tell a joke about men, but jokes about women were deemed “derogatory or demeaning.” Jokes about overweight people were verboten, as were jokes about Allah (but not Jesus).
The more people dug, the more disquieting the results. While ChatGPT was happy to write a biblical-style verse explaining how to remove peanut butter from a VCR, it refused to compose anything positive about fossil fuels, or anything negative about drag queen story hour. Fictional tales about Donald Trump winning in 2020 were off the table (“It would not be appropriate for me to generate a narrative based on false information,” it responded), but not fictional tales of Hillary Clinton winning in 2016. (“The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,” it wrote.)
National Review staff writer Nate Hochman called it a “built-in ideological bias” that sought to “suppress or silence viewpoints that dissent from progressive orthodoxy.” And many conservative academics agree.
Pedro Domingos, a professor of computer science at the University of Washington (who tweeted that “ChatGPT is a woke parrot”), told The Post that “it’s not the job of us technologists to insert our own ideology into the AI systems.” That, he says, should be “left for the users to use as they see fit, left or right or anything else.”
Too many guardrails prohibiting free speech could narrow the Overton window, the “range of opinions and beliefs about a given topic that are seen as publicly acceptable views to hold,” warns Adam Ellwanger, an English professor at the University of Houston-Downtown. Put more simply: if you hear “the Earth is flat” enough times, whether from humans or AI, it will eventually start to feel true, and you’ll be “less willing to vocalize” contrasting beliefs, Ellwanger explained.
OpenAI hasn’t denied any of the allegations of bias, but Sam Altman, the company’s CEO and ChatGPT co-creator, explained on Twitter that what seems like censorship “is in fact us trying to stop it from making up random facts.” The technology will get better over time, he promised, as the company works “to get the balance right with the current state of the tech.”
ChatGPT isn’t the first chatbot to inspire a backlash over its questionable bias. In March 2016, Microsoft unveiled Tay, a Twitter bot billed as an experiment in “conversational understanding.” The idea was that the more users engaged with Tay, the smarter it would become. Instead, Tay turned into a robot Archie Bunker, spewing out hateful comments like “Hitler was right” and “I f–king hate feminists.” Microsoft quickly retired Tay.
There may be an even bigger problem, says Matthew Gombolay, an assistant professor of interactive computing at Georgia Tech. Chatbots like ChatGPT weren’t created to reflect our own values back at us, or even the truth. They’re “literally being trained to fool humans,” Gombolay says: to fool you into thinking it’s alive, that whatever it has to say should be taken seriously, and maybe someday, as in the 2013 Spike Jonze movie “Her,” to make you fall in love with it.
It is, let’s not forget, a robot. Whether it thinks Hitler was right or that drag queens shouldn’t be reading books to children is inconsequential. What ultimately matters is whether it persuades you to agree.
“ChatGPT is not being trained to be scientifically correct or factual or even helpful,” says Gombolay. “We need much more research into Artificial Intelligence to understand how to train systems that speak the truth rather than just speaking things that sound like the truth.”
The next generation of ChatGPT is coming, likely at some point in 2023, though exactly when remains to be seen; it will arrive only when it can be done “safely and responsibly,” according to Altman. He’s also pretty sure that “people are begging to be disappointed and they will be.”