GPT-3: OpenAI’s New Text-Generating Neural Network Is Here

When the text-generating algorithm GPT-2 was created in 2019, it was labeled one of the "most dangerous" A.I. algorithms in history. In fact, some argued it was so dangerous that it should never be released to the public (spoiler: it was), lest it usher in the “robot apocalypse.” That, of course, never happened. GPT-2 was eventually released, and after it failed to destroy the world, its creators moved on to the next thing. But how do you follow up the most dangerous algorithm ever created?

The answer, at least on paper, is simple: Just like the sequel to a hit movie, you make something that’s bigger, badder, and more expensive. Just one xenomorph in the first Alien? Include a whole nest of them in the sequel, Aliens. Just a single nearly indestructible machine sent back from the future in Terminator? Give audiences two of them to grapple with in Terminator 2: Judgment Day.


The same goes for A.I. – in this case GPT-3, a recently released natural language processing neural network created by OpenAI, the artificial intelligence research lab that was once (but is no longer) sponsored by SpaceX and Tesla CEO Elon Musk.

GPT-3 is the latest in a series of text-generating neural networks. The name GPT stands for Generative Pretrained Transformer, referencing a 2017 Google innovation called the Transformer, which can figure out the likelihood that a particular word will appear among surrounding words. Fed a few sentences, such as the beginning of a news story, the pre-trained GPT language model can generate convincingly accurate continuations, even including the formulation of fabricated quotes.
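GPT-3 itself is not publicly downloadable, but its predecessor GPT-2 is, and the continuation behavior described above can be sketched with the open-source Hugging Face transformers library. The prompt and generation settings below are illustrative assumptions, not details from OpenAI:

```python
# A minimal sketch of language-model text continuation, using the openly
# available GPT-2 (GPT-3 itself is not publicly downloadable). The prompt
# and generation settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"
continuations = generator(prompt,
                          max_length=40,
                          do_sample=True,
                          num_return_sequences=2)

for out in continuations:
    print(out["generated_text"])
```

Sampling (do_sample=True) is what lets the model return multiple distinct, plausible continuations of the same prompt.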

For this reason, some feared it could prove dangerous, by helping to generate false text that, like deepfakes, could aid the spread of fake news online. Now, with GPT-3, it’s bigger and smarter than ever.

Tale of the tape

GPT-3 is, as a boxing-style "Tale of the Tape" comparison would make clear, a real heavyweight bruiser of a contender. OpenAI's original 2018 GPT had 110 million parameters, referring to the weights of the connections that enable a neural network to learn. 2019's GPT-2, which caused much of the earlier uproar over its potential malicious applications, had 1.5 billion parameters. Last month, Microsoft introduced what was then the world's largest similar pre-trained language model, with 17 billion parameters. By comparison, 2020's monstrous GPT-3 boasts an astonishing 175 billion parameters. Training it reportedly cost around $12 million.
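To put those numbers in perspective, here is a quick back-of-the-envelope sketch of the growth in scale and the memory a model of that size would need just to hold its weights. The two-bytes-per-parameter figure assumes half-precision storage and is an illustrative assumption, not something reported by OpenAI:

```python
# Back-of-the-envelope comparison of the parameter counts quoted above.
# Assumes 2 bytes per parameter (half-precision weights), an illustrative
# assumption rather than a figure reported by OpenAI.
models = {
    "GPT (2018)": 110e6,
    "GPT-2 (2019)": 1.5e9,
    "Microsoft's model (2020)": 17e9,
    "GPT-3 (2020)": 175e9,
}

BYTES_PER_PARAM = 2  # fp16 assumption

for name, params in models.items():
    growth = params / models["GPT (2018)"]
    weights_gb = params * BYTES_PER_PARAM / 1e9
    print(f"{name:26s} {params / 1e9:7.2f}B params, "
          f"~{growth:6.0f}x the original GPT, "
          f"~{weights_gb:7.1f} GB of weights")
```

That rough figure of 350 GB for GPT-3's weights alone, far beyond any single GPU of the time, is a big part of the practicality concerns raised below.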

"The strength of these models is that, to successfully predict the next word, they learn really powerful world models that can be used for all sorts of interesting things," said Nick Walton, Latitude's chief technology officer, the studio behind A.I. Dungeon, a text adventure game generated by A.I. with GPT-2, reported Digital Trends. "You can also tweak the base models to shape the generation in a particular direction while maintaining the knowledge that the model learned in pre-school."


Gwern Branwen, a commentator and researcher who writes about psychology, statistics, and technology, told Digital Trends that the kind of pre-trained language model GPT represents has become an "increasingly critical part of any machine learning task touching on text. In the same way that [the standard suggestion for] many image-related tasks has become 'use a [convolutional neural network],' many language-related tasks have become 'use a fine-tuned [language model].'"
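As a concrete illustration of what "use a fine-tuned language model" looks like in practice, here is a minimal sketch that fine-tunes the openly available GPT-2 on a plain-text corpus with the Hugging Face transformers library (circa 2020). The corpus path and training settings are placeholder assumptions:

```python
# A minimal sketch of fine-tuning a pretrained language model on in-domain
# text with the Hugging Face transformers library (circa 2020). GPT-3 itself
# is not openly available, so the freely downloadable GPT-2 stands in here.
# The corpus path and hyperparameters are illustrative placeholders.
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, TextDataset, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical plain-text file containing in-domain examples.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="domain_corpus.txt",
                            block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()

# After training, the tuned model generates text shaped by the new corpus
# while keeping what it learned in pre-training.
```

GPT-3 itself, by contrast, was made available only through OpenAI's hosted API rather than as downloadable weights.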

OpenAI – which declined to comment for this article – is not the only company doing impressive work with natural language processing. As mentioned, Microsoft has stepped up to the plate with some dazzling work of its own. Facebook, meanwhile, is investing heavily in the technology and has achieved breakthroughs like BlenderBot, the largest open-sourced, open-domain chatbot ever released, which outperforms others in terms of engagement and also feels more human, according to human evaluators. As anyone who has used a computer in the past few years will know, machines are getting better at understanding us than ever – and natural language processing is the reason why.

Size matters

However, OpenAI's GPT-3 still stands alone for its record-breaking scale. "GPT-3 is generating buzz primarily because of its size," Joe Davison, a research engineer at Hugging Face, a startup working to advance natural language processing by developing open-source tools and carrying out fundamental research, told Digital Trends.

The big question is what all of this will be used for. GPT-2 found its way into a myriad of applications, being employed in a variety of text-generating systems.

Davison expressed some caution that GPT-3 could be limited by its sheer size. "The OpenAI team has undoubtedly pushed the boundaries of how large these models can be and shown that expanding them reduces our reliance on task-specific data across the board," he said. "However, the computational resources required to actually use GPT-3 in the real world make it extremely impractical. While the work is certainly interesting and insightful, I wouldn't call it a major step forward for the field."


Others, however, disagree. "The [A.I.] community has long observed that combining ever-larger models with more and more data yields almost predictable improvements in the performance of these models, very similar to Moore's Law of scaling computing power," Yannic Kilcher, an A.I. researcher who runs a YouTube channel, told Digital Trends. "But, also like Moore's Law, many have speculated that we are at the end of being able to improve language models simply by scaling them up, and that to achieve higher performance we would need to make significant inventions in terms of new architectures or training methods. GPT-3 shows that this is not the case, and the ability to push performance simply through scale seems unbroken – with no end really in sight."

Passing the Turing test?

Branwen suggests that tools like GPT-3 could end up being a major disruptive force. "One way to think of it is: Which jobs involve taking a piece of text, transforming it, and emitting another piece of text?" Branwen said. "Any job described by that – such as medical coding, billing, receptionists, customer support, and more – would be a good target for fine-tuning GPT-3 on and replacing that person. A great many jobs are more or less 'copying fields from one spreadsheet or PDF to another spreadsheet or PDF,' and that sort of office automation, which is too chaotic to easily write a normal program to replace, would be vulnerable to GPT-3 because it can learn all the exceptions and different conventions and perform as well as the human would."
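As a hedged sketch of what such a text-in, text-out task might look like against OpenAI's hosted API as it existed at GPT-3's 2020 launch: the engine name, prompt format, and invoice example below are assumptions made for illustration, and access required an API key from OpenAI's private beta.

```python
# Illustrative sketch of GPT-3 as a "take text in, emit text out" worker,
# via the hosted OpenAI API as it existed in 2020. The engine name, prompt
# format, and invoice example are assumptions made for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; required OpenAI beta access

# Few-shot prompt: one worked example, then the new input to transform.
prompt = (
    "Extract the vendor and total from the invoice text.\n\n"
    "Invoice: ACME Corp. Services rendered, amount due USD 1,200.00\n"
    "Vendor: ACME Corp\n"
    "Total: 1200.00\n\n"
    "Invoice: Globex Ltd. bills you $340.50 for March consulting\n"
    "Vendor:"
)

response = openai.Completion.create(
    engine="davinci",   # assumed engine name from the 2020 beta
    prompt=prompt,
    max_tokens=20,
    temperature=0,
    stop=["\n\n"],
)

print(response.choices[0].text)
```

The few-shot prompt (one worked example followed by the new input) leans on GPT-3's in-context learning rather than on any task-specific fine-tuning.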

Ultimately, natural language processing may be just one part of A.I., but it arguably cuts to the core of the dream of artificial intelligence in a way few other disciplines in the field do. The famous Turing Test, one of the seminal debates that kick-started the field, is a natural language processing problem: Can you build an A.I. that can convincingly pass itself off as a person? OpenAI's latest work certainly advances that goal. Now it remains to be seen what applications researchers will find for it.

"I think it is the fact that GPT-2 text is so easy to pass on to humans that it becomes difficult to wave it off by hand as" just pattern recognition "or" just memorizing, "" said Branwen. "Anyone who was certain that what deep learning is doing has nothing to do with intelligence must have shaken their faith to see how far it has come."
