One of the biggest trends in machine learning right now is text generation. AI systems learn by absorbing billions of words scraped from the internet and generate text in response to all sorts of prompts. It sounds simple, but these machines can be turned to a wide variety of tasks, from writing fiction, to producing bad code, to letting you chat with historical figures.
The best-known AI text generator is OpenAI’s GPT-3, which the company recently announced is now being used in more than 300 different apps by “tens of thousands” of developers, producing 4.5 billion words per day. That is a lot of robot verbiage. The milestone may be a somewhat arbitrary one for OpenAI to celebrate, but it is also a useful indicator of the growing scale, impact, and commercial potential of AI text generation.
OpenAI started life as a non-profit organization, but in recent years it has been trying to make money, with GPT-3 as its first marketable product. The company has an exclusivity agreement with Microsoft that gives the technology giant unique access to the program.
As OpenAI is keen to advertise, hundreds of companies are now building on GPT-3. A startup called Viable uses it to analyze customer feedback, identifying “topics, feelings and emotions from surveys, helpdesk tickets, logs, live chat, reviews and more”; Fable Studio uses the program to create dialogue for VR experiences; and Algolia uses it to improve its web search products, which it in turn sells to other customers.
All of this is good news for OpenAI (and for Microsoft, whose Azure cloud computing platform powers OpenAI’s technology), but not everyone in the startup world is as upbeat. Some analysts have noted the folly of building a company on technology you do not actually own. Using GPT-3 to launch a startup is ridiculously easy, but it will be ridiculously easy for your competitors, too. And while there are ways to differentiate a GPT-powered startup through branding and user interfaces, no company stands to gain as much from widespread use of the technology as OpenAI itself.
Another concern about the rise of text-generating systems relates to the quality of their output. Like many algorithms, text generators can absorb and amplify harmful biases. They can also be incredibly dumb: in tests of a medical chatbot built with GPT-3, the model responded to a “suicidal patient” by encouraging them to kill themselves. These problems are not insurmountable, but they are certainly worth flagging in a world where algorithms already produce mistaken arrests, unfair school grades, and biased medical bills.
However, as OpenAI’s latest milestone suggests, GPT-3 will just keep talking, and we need to be ready for a world filled with robot chatter.