ChatGPT, the artificial intelligence chatbot developed by OpenAI, falsely accused prominent criminal defense attorney and law professor Jonathan Turley of sexual harassment.
The chatbot made up a Washington Post article about a law school trip to Alaska in which Turley was accused of making sexually provocative statements and attempting to touch a student, even though Turley had never been on such a trip.
Turley’s reputation took a major hit after the damaging claims quickly went viral on social media.
“It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone,” he said.
Turley learned of the allegations after receiving an email from a fellow law professor who had used ChatGPT to research instances of sexual harassment by academics at American law schools.
The Need For Caution When Using AI-Generated Data
On his blog, the George Washington University professor said:
“Yesterday, President Joe Biden declared that ‘it remains to be seen’ whether Artificial Intelligence is ‘dangerous’. I would beg to differ…”
His experience has raised concerns about the reliability of ChatGPT and the likelihood of similar incidents in the future. Microsoft, whose Bing chatbot is built on the same OpenAI technology, said it has implemented upgrades to improve accuracy.
Is ChatGPT Hallucinating?
When AI produces results that are unexpected, incorrect, and not supported by real-world evidence, it is said to be having “hallucinations.”
These hallucinations can produce false content, news, or information about individuals, events, or facts. Cases like Turley’s show the far-reaching effects of AI-generated falsehoods spreading through the media and social networks.
OpenAI, the developer of ChatGPT, has acknowledged the need to educate the public about the limitations of AI tools and to reduce the likelihood of users encountering such hallucinations.
The company’s efforts to make its chatbot more accurate are welcome, but more work is needed to prevent similar incidents.
The incident has also brought attention to the value of ethical AI usage and the necessity for deeper understanding of its limitations.
Human Supervision Required
Although AI has the potential to greatly improve many aspects of our lives, it is still not perfect and must be supervised by humans to ensure accuracy and reliability.
As artificial intelligence becomes more integrated into our daily lives, it is crucial that we exercise caution and accountability when using such technologies.
Turley’s encounter with ChatGPT highlights the importance of exercising caution when dealing with AI-generated errors and falsehoods.
It is essential that this technology be used ethically and responsibly, with an awareness of its strengths and weaknesses, as it continues to transform the world around us.
Meanwhile, according to Microsoft’s senior communications director Katy Asher, the company has since taken steps to ensure the accuracy of its platform.
Turley wrote in response on his blog:
“You can be defamed by AI and these corporations will just shrug and say they attempt to be truthful.”
Jake Moore, global cybersecurity advisor at ESET, cautioned ChatGPT users not to take everything at face value, in order to prevent the harmful spread of misinformation.
-Featured image from Bizsiziz