One of the most prominent pieces of AI news in August was Jack Clark’s “manifesto,” titled “Big models: What has happened, where are we going, and who gets to build them.” He delivered it on August 23rd at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Jack Clark is a co-founder of Anthropic, an AI safety and research company; co-chair of the AI Index; co-chair of the OECD’s working group on classifying and defining AI systems. Previously he was the Policy Director of OpenAI, an AI deployment company. Before that, he worked as the world’s only neural network reporter at Bloomberg and the world’s only distributed systems reporter at The Register. Clark is one of the most knowledgeable AI experts in the world.
He writes on his site that he believes “the greatest challenge of the 21st century is to make an increasingly fast-moving technical world ‘legible’ to many people. My belief is that by solving these information asymmetries, we will naturally build the infrastructures necessary to maintain stability in an era of great change and possibility. Things will be weird. Be not afraid.”
The fundamental theses of Clark’s report are:
• if AI development continues on its current mainstream trajectory, material and intangible benefits unprecedented in human history will flow only to the wealthiest businesspeople, high government officials, and leading AI developers;
• the overwhelming majority of people simply will not receive these enormously valuable benefits.
For those unfamiliar with the state of AI development, this forecast may sound like a dystopian bogeyman. Ordinary people count on the commoditization of AI applications, which, they believe, will over time drastically reduce the cost of those applications and make them available to most people (as with today’s free apps for translation, text generation, and all kinds of games with photos of themselves and others).
In contrast, Clark argues that no amount of commoditization will change this outcome.
(Commoditization is the process of converting products or services into standardized, marketable objects. This process tends to strip away the unique or identifying qualities of a commodity in favor of identical, lower-cost items that can be interchanged with one another.)
The logic behind Clark’s “manifesto” presented at Stanford HAI is simple: why would big business share such valuable things? The Big Tech monopolists have never done so before; they simply grow their market capitalization by tens and hundreds of billions of dollars a year.
What does Clark suggest?
A) The further development of AI depends decisively on scaling and improving “Big Models.” This is Clark’s term for foundation models trained on scalable “big data” that can be adapted (fine-tuned) for a wide range of downstream tasks, for example, BERT, GPT-3, and CLIP.
B) Training and fine-tuning big models is very expensive, which is why today only Big Tech corporations do it. They take academic research on such models and improve it without the academics.
C) The academics are thus pushed further out of the game. And the Big Tech players, albeit grudgingly, are still forced to share the achievements of big models with the government.
D) If everything is left as it is, the primary beneficiaries of AI development will remain the Big Tech corporations and high officials. Most people will be left to bear the negative consequences of ever more powerful applications (such as tools of total surveillance and control).
E) Only one thing can change this state of affairs: big models should be trained and improved not by Big Tech but in academia (and Clark describes how to go about this).
Most likely, the Big Tech corporations will ignore Clark’s “manifesto”: they will either fail to notice it or laugh it off (as was the case with the film “The Social Dilemma”).
[Chart: Big Tech enrichment rate]