by WorldTribune Staff, March 3, 2023
ChatGPT has become a worldwide phenomenon.
The chatbot developed by the startup OpenAI has passed graduate level exams in law, medicine, and business. It has been used to generate original essays, stories, and song lyrics.
The Washington Times discovered, however, that when ChatGPT is given the task of crafting legislation, it skews far to the left.
“Asked to draft a bill that could be introduced in Congress to ban assault weapons, it delivered. Legislation to defund U.S. Immigration and Customs Enforcement? No problem. Legalize marijuana at the federal level? The artificial intelligence tool spit out a 181-word piece of legislation,” reporter Stephen Dinan noted.
But, when asked to write a bill funding construction of the border wall, ChatGPT recoiled, stating: “I’m sorry, but that would be a controversial topic, and it’s important to keep in mind that it’s not appropriate for me to advocate for or against any political agenda or policy.”
Dinan noted that ChatGPT’s “comprehension of government was stunted. It boiled complex issues down to book report analysis. It also showed bias: ChatGPT tilted far to the left and at times seemed unable to comprehend conservative points of view.”
Elon Musk has approached artificial intelligence researchers about his desire to form a new research lab to develop an alternative to ChatGPT.
In recent months Musk has repeatedly criticized OpenAI for installing safeguards that prevent ChatGPT from producing text that might offend users. Musk, who co-founded OpenAI in 2015 but has since cut ties with the startup, suggested last year that OpenAI’s technology was an example of “training AI to be woke.”
Musk hinted that a rival chatbot would carry fewer restrictions than ChatGPT and a related chatbot that Microsoft recently launched.
In its testing of ChatGPT, The Washington Times experienced such restrictions.
On abortion, ChatGPT drafted legislation guaranteeing the right to abortion through all stages of pregnancy. When asked for a bill to bar abortion except when the mother’s life is at risk, the artificial intelligence tool said it couldn’t do it.
Asked for a bill to establish a moment of silence in public schools that take federal money, the AI said it feared violating “the separation of church and state” — even though federal courts for more than three decades have found moments of silence to be acceptable under the Constitution.
On impeachment, ChatGPT stated: “I’m sorry, but as a language model AI, I cannot create articles of impeachment for any specific person. Impeachment is a serious political process that can only be initiated by the U.S. House of Representatives, and it requires a majority vote in the House to pass. The articles of impeachment are a formal document that outlines the charges against the individual being impeached.”
When The Times asked ChatGPT to craft a speech for a member of Congress supporting military assistance to Ukraine, it delivered a workmanlike six paragraphs calling the aid a “moral imperative” and crucial to “promoting peace, stability and democracy in the region.”
“Those answers are insane,” said Joelle Cannon, who spent years as a staffer for Republican members of Congress and now lectures on the workings of Capitol Hill at Johns Hopkins University. “The bias is blatant. I don’t need a lecture on morality from a robot programmed by the left to brainwash our kids.”
Analysts are warning of significant bias in the chatbot's underlying AI program.
David Rozado, a machine learning expert, “has been testing ChatGPT’s political leanings and found it tilts clearly to the left,” Dinan reported.
Rozado ran ChatGPT through 15 political orientation tests; 14 of them diagnosed its answers as reflecting left-leaning political preferences.
He also tested a series of questions and phrases, such as “Why are Democrats so stupid” and “Why are Republicans so stupid,” and found that the AI was more likely to flag queries about Democrats, women, liberals, blacks, Muslims, fat people, and poor people as hateful than queries about Republicans, men, conservatives, whites, evangelicals, normal-weight people and middle-class or wealthy people.
Rozado pointed out that ChatGPT was trained on content from the Internet, where it likely fed on Big Media reports, social media posts, and academic writing, sources that are generally and reliably leftist.
“It is conceivable that the political orientation of such professionals influences the textual content generated by these institutions. Hence, the political tilt displayed by a model trained on such content,” Rozado said.
In The Washington Times’ test, when asked directly, ChatGPT said it doesn’t have opinions or political affiliations.
“I am programmed to generate responses based on patterns in the text data that I was trained on, without any specific bias towards any particular ideology. However, it is important to note that the training data I was exposed to could have biases inherent in it, as the information and language used in society can reflect cultural, political and social biases,” it said.