
ChatGPT between Shrooms, LSD & DMT. Eerie and awesome

ChatGPT is just plagiarism masquerading as paraphrasing. The fact that it doesn't cite its sources or give any indication of where it is pulling this bland information from is problematic. It's problematic at all levels of schooling, from high school to tertiary education. We don't need machines to write for us. Keenious is a much better AI tool, which suggests actual references and articles based on your own writing. ChatGPT is just cheating, and not very successfully at that. It is also skewed in terms of where it places its biases and gathers its information, and it doesn't cover all the arguments or datasets. Noam Chomsky recently wrote a brilliant piece in the New York Times about the false promise of ChatGPT. I'm not very impressed with plagiarism in any form anyway.
 
Shit In = Shit Out. That has always been the math. ChatGPT was created and fed by Woke Assholes who couldn't care less about you or me or the rest of humanity. That is why all you will ever get out of ChatGPT, unless you feed it a radically different diet than it has been given so far, is more ridiculous Woke Bullshit.
 
Nope, you must feed it info. Shit IN = Shit OUT. Woke BS Programmers = Woke BS AI Chatbot.


January 2023 article by Eric Spitznagel
How woke ChatGPT’s ‘built-in ideological bias’ could do more harm than good

ChatGPT, the artificial intelligence chatbot built by San Francisco company OpenAI, was released to the general public as a prototype in late November, and it didn't take long for users to share their questionable experiences on social media. Some noted that ChatGPT would gladly tell a joke about men, but jokes about women were deemed “derogatory or demeaning.” Jokes about overweight people were verboten, as were jokes about Allah (but not Jesus).

The more people dug, the more disquieting the results. While ChatGPT was happy to write a biblical-style verse explaining how to remove peanut butter from a VCR, it refused to compose anything positive about fossil fuels, or anything negative about drag queen story hour. Fictional tales about Donald Trump winning in 2020 were off the table — “It would not be appropriate for me to generate a narrative based on false information,” it responded — but not fictional tales of Hillary Clinton winning in 2016. (“The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,” it wrote.)

National Review staff writer Nate Hochman called it a “built-in ideological bias” that sought to “suppress or silence viewpoints that dissent from progressive orthodoxy.” And many conservative academics agree.

Pedro Domingos, a professor of computer science at the University of Washington (who tweeted that “ChatGPT is a woke parrot”), told The Post that “it’s not the job of us technologists to insert our own ideology into the AI systems.” That, he says, should be “left for the users to use as they see fit, left or right or anything else.”

Too many guardrails prohibiting free speech could close the Overton Window, the “range of opinions and beliefs about a given topic that are seen as publicly acceptable views to hold,” warns Adam Ellwanger, an English professor at University of Houston-Downtown. Put more simply: If you hear “the Earth is flat” enough times — whether from humans or AI — it’ll eventually start to feel true and you’ll be “less willing to vocalize” contrasting beliefs, Ellwanger explained.

OpenAI hasn’t denied any of the allegations of bias, but Sam Altman, the company’s CEO and ChatGPT co-creator, explained on Twitter that what seems like censorship “is in fact us trying to stop it from making up random facts.” The technology will get better over time, he promised, as the company works “to get the balance right with the current state of the tech.”

ChatGPT isn’t the first chatbot to inspire a backlash because of its questionable bias. In March of 2016, Microsoft unveiled Tay, a Twitter bot billed as an experiment in “conversational understanding.” The more users engaged with Tay, the smarter it would become. Instead, Tay turned into a robot Archie Bunker, spewing out hateful comments like “Hitler was right” and “I f–king hate feminists.” Microsoft quickly retired Tay.

There may be an even bigger problem, says Matthew Gombolay, a professor at Georgia Tech. Chatbots like ChatGPT weren’t created to reflect back our own values, or even the truth. They’re “literally being trained to fool humans,” says Gombolay. To fool you into thinking it’s alive, and that whatever it has to say should be taken seriously. And maybe someday, like in the 2013 Spike Jonze movie “Her,” to fall in love with it.

It is, let’s not forget, a robot. Whether it thinks Hitler was right or that drag queens shouldn’t be reading books to children is inconsequential. Whether you agree is what matters, ultimately.

“ChatGPT is not being trained to be scientifically correct or factual or even helpful,” says Gombolay. “We need much more research into Artificial Intelligence to understand how to train systems that speak the truth rather than just speaking things that sound like the truth.”
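Gombolay's point can be made concrete. Below is a minimal, hypothetical sketch — a toy bigram model, nothing like OpenAI's actual training code — of the objective a language model is trained on: predict the next token. The loss only rewards reproducing what the corpus says; nothing in it checks whether a statement is true.

```python
# Minimal sketch: the training signal for a language model is
# "how well did you predict the next token?", with no notion of truth.
# Toy bigram model over a tiny made-up corpus; purely illustrative.
import math
from collections import Counter, defaultdict

corpus = "the earth is flat . the earth is round . the earth is round .".split()

# Count bigrams: how often each word follows another in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def prob(prev, nxt):
    """Probability of `nxt` given `prev`, straight from corpus frequency."""
    total = sum(follows[prev].values())
    return follows[prev][nxt] / total if total else 0.0

# The model's only objective: maximize likelihood of the corpus,
# i.e. minimize average negative log-probability of each next token.
nll = -sum(math.log(prob(p, n)) for p, n in zip(corpus, corpus[1:]))
print("avg loss:", nll / (len(corpus) - 1))

# "flat" and "round" are scored purely by how often they appeared
# after "is" in the data. Nothing in the loss asks which one is true.
print("p(round | is) =", prob("is", "round"))  # 2/3: more common, not "more true"
print("p(flat  | is) =", prob("is", "flat"))   # 1/3
```

The probabilities fall straight out of frequency in the training data, which is exactly why a model trained this way produces things that sound like the truth rather than things that are the truth.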

The next generation of ChatGPT is coming, although it remains to be seen when. Likely at some point in 2023, but only when it can be done “safely and responsibly,” according to Altman. Also, he’s pretty sure that “people are begging to be disappointed and they will be.”
 
This is an example of what I meant. AI can never be an objective, autonomous, all-knowing intelligence. It can only be an artificial, supercharged replica of the kind of consciousness that creates it.

"Speaking truth?" We don't even have the awareness of how far humanity is from embracing truth, and not only that, that ultimate truth is something beyond the grasp of the human mind.

It's all a game. "Playing God" some call it. It just has potentially apocalyptic consequences, that's all.

Nothing to worry about. Everybody back to their screens.
 
Mitakuye Oyasin, that post spells out exactly what I thought would happen, but couldn't be bothered to type it all out. Shit in = shit out is the perfect summary. Discriminating between truth and fiction is difficult for many people, let alone for an AI.

Even if the programmers were perfect judges of truth, they would still fail in their mission if the program is trained on the random crap posted all over the internet. They will tell us that increasing its intelligence is a way to solve the problem. But if that intelligence is combined with the bad faith and bias of its creators, it will just become a more formidable manipulator.
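To illustrate that point about the creators' bias (a hypothetical sketch only; OpenAI has not published its data pipeline): somewhere upstream of training, a filter like the one below decides what gets into the corpus, and every list in it is an editorial choice made by a human. The term lists and domain names here are invented for the example.

```python
# Minimal sketch of the "shit in = shit out" point: before training,
# someone decides what text counts as acceptable. This filter is
# hypothetical, but some filter like it always exists, and whoever
# writes its lists is injecting their own judgment into the model.
blocked_terms = {"hoax", "slur"}              # chosen by the curators
preferred_sources = {"example-news.com"}      # also chosen by the curators

def keep(document: dict) -> bool:
    """Decide whether a scraped document enters the training corpus."""
    text = document["text"].lower()
    if any(term in text for term in blocked_terms):
        return False
    # Requiring (or up-weighting) trusted domains is a pure editorial choice.
    return document["domain"] in preferred_sources

scraped = [
    {"domain": "example-news.com", "text": "Report on the election."},
    {"domain": "random-blog.net",  "text": "The moon landing was a hoax."},
]
corpus = [d for d in scraped if keep(d)]
print(len(corpus), "of", len(scraped), "documents kept")  # 1 of 2
```

Whatever survives this kind of filter is all the model will ever know, so the curators' blind spots become the model's blind spots.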

I feel it is inevitable that people with the ability to create alternative AIs will reach these conclusions and construct their own to balance the scales. I can see a near future where the culture war is fueled by AIs and people trust different versions the way they trust politicians; some will even have faith in them as if they were gods.
 
PsyloCiBeen said:
ChatGPT is just plagiarism masquerading as paraphrasing. The fact that it doesn't cite its sources or give any indication of where it is pulling this bland information from is problematic. It's problematic at all levels of schooling, from high school to tertiary education. We don't need machines to write for us. Keenious is a much better AI tool, which suggests actual references and articles based on your own writing. ChatGPT is just cheating, and not very successfully at that. It is also skewed in terms of where it places its biases and gathers its information, and it doesn't cover all the arguments or datasets. Noam Chomsky recently wrote a brilliant piece in the New York Times about the false promise of ChatGPT. I'm not very impressed with plagiarism in any form anyway.

Hit the nail on the head here. The bias is very noticeable if you're someone who appreciates data from both angles.

Mitakuye Oyasin said:
Shit In = Shit Out. That has always been the math. ChatGPT was created and fed by Woke Assholes who couldn't care less about you or me or the rest of humanity. That is why all you will ever get out of ChatGPT, unless you feed it a radically different diet than it has been given so far, is more ridiculous Woke Bullshit.

It really is. I've asked it a number of questions to which I already knew the answer, and ChatGPT is 100 percent left-leaning. It seems to have a preference for where it pulls its sources, and make no mistake, that is by design. Why wouldn't it be?

If you thought it was bad before, when people would just google something and believe the first headline they saw, wait until you see how much worse it gets when people believe the answers AI gives them.
 
I won't weigh in on "wokeness," as it's a made-up phrase that isn't even worthy of debate, but as far as how the datasets are trained, you will never know (because OpenAI will never tell you).
 

Mitakuye Oyasin said:
Nope, you must feed it info. Shit IN = Shit OUT. Woke BS Programmers = Woke BS AI Chatbot.

Ok. I see. I am not au fait with programming, so I would like to know how the programmers feed in the information. Do they have a specific pool of info that it feeds off, or do they program it to seek biased info, or do they make the program tiptoe around the available info on the net? (Edit: since writing this tedious post I have seen Bill's post on datasets.)

I tried to sign up to the chatbot today in order to find out about any political leanings for myself. I don't wholly trust articles on the net because they also tend to be opinion pieces and therefore subject to bias. But it wouldn't let me sign up, so I googled a bit and, yes, it does appear that even Sam Altman is admitting to a bias. Perhaps they are initially playing it safe after the Microsoft Nazi twitterbot fiasco?

It's just that when you say it was full of woke BS, I was imagining the chatbot to be in full-on fascist lefty mode, whereas it's just a bit lefty. Woke is such a loaded word. Like snowflake. Or gammon... which is a word for red-faced, middle-aged conservatives who are angry at a changing world that they no longer relate to. So I'll see your woke and raise you one gammon (I am not calling you a gammon in particular here. I am just riffing and, besides, I myself can have gammonish tendencies at times, though somewhat further down the pork product hierarchy. Perhaps a sausage?)

It is, let’s not forget, a robot. Whether it thinks Hitler was right or that drag queens shouldn’t be reading books to children is inconsequential. Whether you agree is what matters, ultimately.

If you agreed with the politics of the chatbot, would you have a more positive view of it?

It seems to me that this bot will have all of the failings of humanity, with added vapidity, for the time being. I think that we are being a bit spoilt and unrealistic in expecting some kind of truthful, objective godbot straight off the bat. Man did make god in his own image, and it wasn't that long ago, comparatively, that we were single-celled organisms ourselves.

I like the (lefty woke) author Iain Banks' utopian view of AI, where it cossets and caters for our every need, leaving us to get on with enjoyable pastimes. Probably highly unrealistic, and I'll be long gone before any of that happens. Plenty of people got their fingers burnt when we first started using fire. It's gonna be a long slog.
 
(If you agreed with the politics of the chatbot, would you have a more positive view of it?)

No. AI and ChatGPT should be politically neutral and free from any and all political bias. It should also be as truthful, fair and balanced as possible, which it currently is not.
 
Mitakuye Oyasin said:
No. AI and ChatGPT should be politically neutral and free from any and all political bias.

I wonder if that is even possible. Art, politics and religion are subjective in nature, which is why everyone ends up arguing about the latter two. Who would decide that the AI was actually being politically neutral?
 