The Rise of Metahuman Artificial Intelligence

By Denis Rothman

Transformer models have opened a new era of machine intelligence. Transformers with 175 billion parameters, such as OpenAI's GPT-3, trained on datasets of hundreds of billions of words, can exceed human baselines on many tasks.

This article first defines these models, then examines their scope, and finally shows how to implement ethical AI.

Defining Metahuman Foundation Models 

The paradigm shift of transformer models has taken the AI world by surprise. Few people yet fully understand their architecture, scope, and applications.

When you browse the web, purchase items, listen to music, or watch videos online, you are activating metahuman AI functions that include transformers.

The transformer architecture is a giant step forward. Instead of stacking many layers of different shapes and sizes, the model stacks identical layers.

Think of a transformer model as a century-old V8 engine in which every cylinder is the same size. In a transformer model, these cylinders are the attention “heads.”

You can now visualize each head running on its own, in parallel, like the cylinders of a V8 engine.
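The identical, parallel heads can be sketched in a few lines of numpy. This is a deliberately simplified illustration: the learned projection matrices of a real transformer are omitted, and the input is simply sliced into equal chunks to show the identical structure of the heads.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(q, k, v):
    """One 'cylinder': scaled dot-product attention."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    return softmax(scores) @ v

def multi_head_attention(x, num_heads):
    """Split the model dimension into identical heads that run independently."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    heads = []
    for h in range(num_heads):
        # A real model projects the input with learned weights per head;
        # here we slice it to keep the sketch self-contained.
        chunk = x[:, h * d_head:(h + 1) * d_head]
        heads.append(attention_head(chunk, chunk, chunk))
    # Recombine the heads, like cylinders driving one crankshaft.
    return np.concatenate(heads, axis=-1)

x = np.random.randn(4, 64)        # 4 tokens, model dimension 64
out = multi_head_attention(x, 8)  # 8 identical heads, like a V8
print(out.shape)                  # (4, 64)
```

Because each head only reads its own slice of the input, the loop could run on eight devices at once, which is exactly what makes the architecture so easy to scale.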

The industrial structure of Natural Language Processing (NLP) transformers allowed them to train on datasets of hundreds of billions of words, with 175 billion parameters (soon a trillion), on supercomputers with 10,000 GPUs and tens of thousands of CPUs.

The result can be summed up in two well-defined concepts by the Center for Research on Foundation Models (CRFM) at the Stanford Institute for Human-Centered Artificial Intelligence (HAI): emergence and homogenization.

Emergence allows transformer models that qualify as foundation models to perform hundreds of tasks for which they were never trained.

Homogenization allows foundation models to transfer their knowledge across domains, producing vision and multimodal transformers such as ViT, CLIP, and DALL-E.
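Homogenization works because the same core computation accepts any sequence of embeddings, whatever they represent. The ViT-style sketch below is hypothetical and drastically simplified (random instead of learned projections), but it shows the key idea: word embeddings and flattened image patches feed into one and the same attention function.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 32

def self_attention(x):
    """The same core computation, whatever the tokens represent."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Text: a sequence of 6 word embeddings.
word_embeddings = rng.normal(size=(6, d_model))

# Vision (ViT-style): split an 8x8 "image" into 2x2 patches, flatten each,
# and project it into the same embedding space as the words.
image = rng.normal(size=(8, 8))
patches = image.reshape(4, 2, 4, 2).transpose(0, 2, 1, 3).reshape(16, 4)
projection = rng.normal(size=(4, d_model))   # stand-in for a learned projection
patch_embeddings = patches @ projection      # 16 patch "tokens"

text_out = self_attention(word_embeddings)
image_out = self_attention(patch_embeddings)
print(text_out.shape, image_out.shape)  # (6, 32) (16, 32)
```

Nothing in `self_attention` knows whether its tokens came from a sentence or a photograph, which is why one pretrained architecture transfers so readily across domains.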

The performance of foundation models has opened an era of metahuman, or superhuman, machine intelligence.

The Scope of Transformer Models

The scope of transformer models needs to be clarified. The topic is recent, and few researchers and AI specialists realize the tremendous scope, and the limits, of transformers.

Transformer-driven AI can perform many tasks better than humans can.

However, machine intelligence has nothing to do with human intelligence. Human intelligence, in turn, has nothing to do with the way birds, insects, and plants use their intelligence to solve their problems.

Comparing machine intelligence to human intelligence makes no sense. We are biological beings with consciousness, self-awareness, and beliefs. Above all, we can produce something genuinely new, something found in no dataset and no past.

Machine intelligence is merely another form of pragmatic mathematical intelligence that solves repetitive problems. The repetition is reflected in datasets. The machine learns the statistics of those datasets and then makes predictions based on probabilities.

The goal of metahuman or superhuman AI is what I call Economic-Artificial General Intelligence, or E-AGI. E-AGI is limited to micro-tasks that are profitable in supply chains. I define a supply chain as any form of human activity in which one party provides something to another: products, services, and anything else one person supplies to another.

The global economy requires speed. 

Our era is that of the extinction of waiting time. Nobody wants to wait for anything anymore. 

Everything must be immediate anywhere at any time. Consumers want to listen to music, watch videos, and read resources anywhere, 24/7. Corporations want their products to be transported as quickly as possible. Online platform consumers expect continuous service.

The impact is that billions of micro-decisions must be made every day, and there are not enough humans available 24/7 to perform these trillions of tasks per year.

Without automation (bots and robots) and AI-driven micro-decisions, we would find no food in our supermarkets and none of the things that make our lives comfortable.

But AI abruptly stops there. Our emotions such as love, hate, empathy, jealousy, generosity, and greed remain in the realm of humans. Humans can use their abilities to create and destroy beyond the reach of machines, however efficient those machines are.

AI can thus be perceived as a gigantic pocket calculator with which humans can compute a patient's monthly medication, or how to aim missiles at other humans.

We have now defined transformers and understood their scope.

Ethical and Inclusive AI

One of the main challenges of AI is no longer in AI itself. Transformers have changed the paradigm. Yet even those who speak about AI don't realize that they now have the tools to control AI in depth. It is as if they were still living in the 2010s, not the disruptive 2020s.

If you wish to build an AI solution, it is now possible to deploy it in an ethical, inclusive, and sustainable way. Think of a trained transformer model as a child that has just learned a language. It is ridiculous to use it without integrating it into an ethical pipeline in two steps:

  1. Content filtering. Run 100% of the input through a content filter. The filter can itself be a transformer model that checks the content and returns a safety level. You can also add a dictionary to your pipeline and filter words classically, and you can fine-tune transformer engines with your own criteria. Apply the same method to the output before it reaches the user.
  2. Model-agnostic Explainable AI (XAI). Apply a model-agnostic XAI algorithm to detect and explain every output. Model-agnostic means the method applies to any model, AI or not, based only on its inputs and outputs.
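The two steps above can be sketched as follows. Everything here is a hypothetical placeholder: the blocklist is a stand-in dictionary, `black_box_model` stands for any deployed model, and in production the filter would itself be a fine-tuned transformer rather than a word list.

```python
# Stand-in dictionary for classic word filtering; fill with your own criteria.
BLOCKLIST = {"badword1", "badword2"}

def content_filter(text):
    """Step 1: dictionary filtering, applied to inputs AND outputs."""
    flagged = [w for w in text.lower().split() if w in BLOCKLIST]
    return {"safe": not flagged, "flagged": flagged}

def black_box_model(features):
    """Stand-in for any model: only its inputs and outputs are observed."""
    return 2.0 * features[0] + 0.5 * features[1]

def explain(model, features, delta=1.0):
    """Step 2: model-agnostic explanation by perturbing each input
    and measuring the change in the output (no access to internals)."""
    base = model(features)
    effects = {}
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += delta
        effects[f"feature_{i}"] = model(perturbed) - base
    return effects

check = content_filter("a perfectly harmless sentence")
effects = explain(black_box_model, [3.0, 4.0])
print(check)    # {'safe': True, 'flagged': []}
print(effects)  # feature_0 moves the output four times as much as feature_1
```

Because `explain` only calls the model and inspects its outputs, the same code works unchanged whether the model is a transformer, a decision tree, or a spreadsheet formula, which is exactly what "model-agnostic" means.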

Once you realize the wide range of tools available to produce ethical, inclusive, and sustainable AI, you ask yourself the following question:

If the metahuman artificial intelligence described in this article can exceed humans to a certain degree and be monitored, why isn’t AI under control?

The answer to that question goes beyond the scope of artificial intelligence and into the history of humanity, the conflicting motivations of humans, and the one thing that machines cannot predict: human behavior.

Recent publications

Transformers for Natural Language Processing: Build innovative deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and more (January 29, 2021)

Hands-On Explainable AI (XAI) with Python: Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps (July 31, 2020)

Artificial Intelligence By Example: Acquire advanced AI, machine learning, and deep learning design skills, 2nd Edition (February 28, 2020)


Denis Rothman

Denis Rothman graduated from Sorbonne University and Paris-Diderot University, writing one of the very first word2matrix embedding solutions. He began his career authoring one of the first AI cognitive natural language processing (NLP) chatbots, applied as a language teacher for Moët et Chandon and other companies. He authored an AI resource optimizer for IBM and apparel producers, and then an advanced planning and scheduling (APS) solution used worldwide. Denis is the author of artificial intelligence books such as Transformers for Natural Language Processing.