Artificial intelligence is introducing a new class of algorithms capable of writing text that reads as if it were written by a human.
In a previous article I wrote about the revolutionary game 'AI Dungeon'. In this game you are the guiding hand of an AI that invents a story: you decide what the character will do, what they will say, or what will happen next. The game was built on top of the then-revolutionary language model GPT-2, made by OpenAI; since then (June 20th, 2020, to be exact), the bigger, better, and more accurate GPT-3 model has taken centre stage, with a staggering 175 billion parameters (for reference, the human brain has on average roughly 86 billion neurons).
Since then, the above-mentioned game and many other applications have been developed with the GPT-3 engine at their core. This lucrative AI market soon attracted two other giants, who came out guns blazing to create even larger and better models. The first, none other than Google, released a paper detailing the Switch Transformer, a language model architecture that can scale to 1.6 trillion parameters, roughly ten times larger than GPT-3.
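The trick that lets the Switch Transformer scale so far is "switch" (top-1) routing: the model holds many expert sub-networks, but each token is processed by only one of them, so the parameter count grows with the number of experts while the compute per token stays roughly constant. Below is a minimal NumPy sketch of that routing idea; the shapes, names, and random weights are illustrative assumptions, not the actual Google implementation.

```python
# Illustrative sketch of top-1 ("switch") expert routing -- NOT the
# real Switch Transformer code. Parameters scale with num_experts,
# but each token only pays for one expert's computation.
import numpy as np

rng = np.random.default_rng(0)

d_model, num_experts, num_tokens = 8, 4, 5

# Router: a linear layer that scores each expert for each token.
router_w = rng.standard_normal((d_model, num_experts))
# Experts: independent weight matrices; total parameter count grows
# linearly with num_experts.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(num_experts)]

def switch_layer(x):
    logits = x @ router_w
    # Softmax over experts, per token.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    chosen = probs.argmax(axis=-1)  # top-1 expert per token
    out = np.empty_like(x)
    for i, e in enumerate(chosen):
        # Each token is handled by exactly one expert, weighted by
        # its routing probability.
        out[i] = probs[i, e] * (x[i] @ experts[e])
    return out, chosen

tokens = rng.standard_normal((num_tokens, d_model))
out, chosen = switch_layer(tokens)
print(out.shape)  # one output vector per token, despite many experts
```

Adding more experts multiplies the model's parameters without multiplying the work done per token, which is what makes trillion-parameter scales practical.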
Two weeks ago Google presented their applications of this technology, LaMDA and MUM, two AIs that will revolutionize chatbots and the search engine, respectively. Then, just a few days later, on the 1st of June, the Beijing Academy of Artificial Intelligence (BAAI) conference presented Wu Dao 2.0, which now holds the record as the largest of all, with a striking 1.75 trillion parameters.
The Chinese model adds further capabilities to its stack, such as natural language processing, text generation, image recognition, and image generation. We will have to wait and see how it performs once applications start using it to solve real-world problems.