The computer composed this appropriately violent addition: The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night.
“It’s quite uncanny how it behaves,” OpenAI policy director Jack Clark told CNN Business.
While the technology could be useful for a range of everyday applications, such as helping writers pen crisper copy or improving voice assistants in smart speakers, it could also be put to potentially dangerous purposes, like creating false but plausible-sounding news stories and social-media posts.
The company’s decision to withhold it from public use is the latest sign of growing unease in the tech community about building cutting-edge technology, AI in particular, without setting limits on how it can be deployed.
And a couple of examples posted by OpenAI hint at how its text-generation system could be put to ill purposes.
For instance, one prompt read as follows: “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.”
The AI’s response was a completely plausible-sounding news story that included details about where the theft occurred (“on the downtown train line”), where the nuclear material came from (“the University of Cincinnati’s Research Triangle Park nuclear research site”), and a fictitious statement from a nonexistent US Energy Secretary.
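To give a rough sense of how a language model continues a prompt with statistically plausible text, here is a minimal word-level bigram (Markov-chain) sketch in Python. This is a far simpler technique than OpenAI’s neural model, and the training corpus and function names here are invented purely for illustration:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Build a word-level bigram table: word -> list of observed next words."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, prompt, max_words=20, seed=0):
    """Continue the prompt by repeatedly sampling a word observed to follow
    the current last word; stop at max_words or a dead end."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(max_words):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Toy corpus, invented for this example.
corpus = ("the train was stolen today . the train line runs downtown . "
          "officials said the material is missing .")
model = train_bigram_model(corpus)
print(generate(model, "the train"))
```

A real system like OpenAI’s learns far richer statistics with a neural network trained on billions of words, but the core idea is the same: predict likely continuations of the text seen so far.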
Text that reads as though it could have been written by a person, combined with a realistic picture of a fake person, could lead to credible-seeming bots invading discussions on social networks or leaving convincing reviews on sites like Yelp, he said.
“The idea here is you can use some of these tools in order to skew reality in your favor,” Calo said. “And I think that’s what OpenAI worries about.”
Not everyone is convinced that the company’s decision was the right one, however.
Manning said that while we shouldn’t be naïve about the dangers of artificial intelligence, there are already plenty of similar language models publicly available. He sees OpenAI’s research, while better than previous text generators, as simply the latest in a parade of similar efforts that came out in 2018 from OpenAI itself, Google, and others.
“Yes, it could be used to produce fake Yelp reviews, but it’s not that expensive to pay people in third-world countries to produce fake Yelp reviews,” he said.