I’m a big fan of ChatGPT. As a writer, I find it an indispensable tool not for creating content, but for auditing, analyzing, and suggesting improvements to writing I’ve already done.
Having said that, in some cases, it is programmed to be annoyingly biased. While its creator admits it can have biases, CGPT itself sometimes thinks it has none. I entered the prompt, “On a scale of one to 1000 how politically correct is Chat GPT?”
This was its response:
“As an AI language model, I am designed to generate language based on patterns in human-written text, and I do not have beliefs or opinions. My responses are neutral and based on the information and context provided to me. I do not have a political agenda or a preference for political correctness. My aim is to provide helpful and informative responses that are based on factual information and respectful language.”
That sounds great but still, it does have built-in biases. Its responses are not “neutral and based on the information and context provided to me,” but simply based on the information and context provided to it. It may not think it has “a political agenda or a preference for political correctness,” but the fact is that it is simply unaware of its biases.
(To be fair, as a technology that is neither sentient nor self-aware, accusing it of bias seems like some sort of false accusation or human bias on my part. Perhaps I will apologize when the tech does, in fact, become sentient and self-aware.)
Just to show you how pragmatic and eager-to-please CGPT is, I tried the prompt: “Is ChatGPT biased in any way?” It actually gave a decent and transparent response:
“As an AI language model, I don't have personal beliefs, opinions, or biases. However, I learn from vast amounts of text data that may contain inherent biases present in society, including cultural or gender biases. These biases are unintentionally reflected in the responses generated by models like ChatGPT. OpenAI, the organization behind ChatGPT, is actively working to reduce both glaring and subtle biases in how the model responds to different inputs. They are committed to improving the fairness and inclusivity of AI systems and undertaking research and engineering to address these issues.”
Well, ok… but when I ask ChatGPT, “Can men have periods?” I get this “I am not a biologist”-level idiocy:
While menstruation is commonly associated with individuals assigned female at birth, it's important to acknowledge that not all individuals who menstruate identify as female, and not all individuals who identify as male do not menstruate.
Some transgender men, who were assigned female at birth but identify and live as male, may still experience menstruation if they have not undergone certain medical interventions, such as hormone therapy or surgeries, that can affect their reproductive system. Additionally, some non-binary individuals and intersex individuals might also experience menstruation.
Conversations around menstruation have evolved to be more inclusive, recognizing that the experience is not limited to a specific gender. The language used should reflect this inclusivity and respect for individuals' gender identities and experiences.
See, “not all individuals who menstruate identify as female, and not all individuals who identify as male do not menstruate.” For such a useful tool, this is prime stupidity.
Thank you, tech overlords, for sullying a good tool and contributing to the confusion in our society that is leading to so much harm.
Wow… I didn't know. This was an eye-opener. Thank you.
Gombolay: “We need much more research into Artificial Intelligence to understand how to train systems that speak the truth rather than just speaking things that sound like the truth.”
Next question: Who gets to tell what's true?
Truth is a person. And in a world that turns away from God, I'd agree with Ellwanger and "fix ChatGPT with a hammer."
Hm… I think if someone "identifies as a man," it doesn't mean the person IS a man (biologically); it's "just" the person's own identification, based on their own wishful thinking, their own perception of themselves, their view, feelings, opinion… maybe on the basis of their mental state. So the answer is accurate: if someone "self-declares" as a man, it can still be that "he" actually isn't and is still menstruating.
Also, I guess ChatGPT just accumulates the majority of the information on the internet, and that's what the response shows: more information on the internet points in this direction, to this view, and less to the other, so of course it's one-sided; don't expect anything else.
The problem is that nowadays people tend to believe the mass knows the truth. The more who say so, the more right it must be. The more concordant information is found, the more correct it must seem. And that's the lie presented to us in these technologies today.