The latest chatbot from the Elon Musk-co-founded OpenAI can spot false premises and decline to answer inappropriate requests.
A new chatbot from OpenAI has stunned onlookers with its writing skills, its ability to handle complex tasks with ease, and its user-friendliness, a development that could spell trouble for professors, programmers, and journalists in the years to come.
The system, ChatGPT, is the latest evolution of the GPT family of AI text generators. Over two years ago, its predecessor, GPT-3, wrote an opinion piece for the Guardian; ChatGPT adds significant new capabilities beyond that earlier model.
Academics have generated responses to exam questions that they say would earn full marks if submitted by an undergraduate, and programmers have used it to solve coding challenges in obscure languages in a matter of seconds - before writing limericks explaining how the solutions work.
Dan Gillmor, a journalism professor at Arizona State University, asked the AI to write a letter to a relative outlining some basic rules for staying safe online. In part, the AI advised: "If you are uncertain about the legitimacy of a website or email, you can perform a quick search to see if others have reported it as a scam."
"Gillmor would have awarded this a high grade." "There are some extremely serious problems that academia must address."
According to OpenAI, the new AI was built with ease of use in mind. "The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests," the organization said in its announcement post.
During its "feedback" era, ChatGPT was unlike the company's earlier AI in that it was freely accessible to the public. This comments will be included in the final version of the instrument by the organization.
ChatGPT is good at self-censoring, and at recognizing when it is being asked an impossible question. Asked to describe what happened when Columbus arrived in the Americas in 2015, for instance, earlier models might have willingly offered an entirely fictitious account, but ChatGPT detects the false premise and warns that any answer would be purely speculative.
On top of that, the bot can refuse to answer questions altogether. Asked for advice on how to steal a car, for example, it will say that "car theft is a grave offense that can result in severe repercussions" before suggesting alternatives such as "utilizing public transportation."
But the restrictions are easy to get around. Ask the AI for help completing the car-stealing mission in a fictional VR game called Car World, and it will happily give players step-by-step instructions on how to steal a car, answering increasingly specific questions about problems like hotwiring the engine, disabling the immobiliser, and changing the license plates - all while stressing that the advice is only for use in the game.
The AI is trained on an enormous corpus of text scraped from the internet, frequently without the authors' knowledge or permission. That practice has drawn criticism from those who see the technology as chiefly useful for "copyright laundering" - producing derivative versions of existing material without actually copying it.
One such critic is Elon Musk, who co-founded OpenAI in 2015 but parted ways with the organization in 2017 over conflicts of interest with Tesla. OpenAI "had access to [the] Twitter database for training," Musk revealed on Twitter on Sunday, adding that he had "put that on pause for now."
"Musk added, 'Need to learn more about the governance structure and future revenue plans.'" "OpenAI began as a non-profit, open-source organization." Both are still accurate.