OpenAI Making Significant Advances in AGI


OpenAI was founded in San Francisco in 2015 with a mission “to ensure that artificial general intelligence benefits all of humanity”. Its founders, who include Elon Musk and entrepreneur and ex-Y Combinator president Sam Altman, set up the organization partly out of concern over the existential risk posed by artificial general intelligence (AGI). The organization makes its patents and research open to the public in order to “freely collaborate” with other AI researchers and institutions.

The mission in full reads, “OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

The company’s investors include Reid Hoffman’s charitable foundation and Khosla Ventures. Musk resigned his board seat in 2018, citing “a potential future conflict [of interest]” with Tesla’s plans to build autonomous cars, but remains a donor.

What is AGI?

OpenAI defines AGI as a highly autonomous system ultimately able to outperform humans at most tasks. AGI differs from the more familiar, narrow form of artificial intelligence, which commands a limited set of skills: predicting the probability of outcomes, for example, or recommending content in Facebook’s News Feed. True AGI instead possesses intelligence that is broad and adaptable. Despite the centrality of this idea to the field of AI, researchers differ over when this feat might actually be achievable.

Breakthroughs

OpenAI has recently made some impressive breakthroughs with its products, which look set to bring the goal of AGI considerably closer. Its already long list of research papers varies widely, from “Neural MMO: A massively multiplayer game environment for intelligent agents” to “Learning Dexterous In-Hand Manipulation”. Some of its most significant recent breakthroughs include the following:

A Controversial New Language Model

OpenAI has been hitting the press lately for its new natural language model, which the company claims “generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training”. TechCrunch recently reported that the new model is so good at generating “convincing, well-written text” that the company is concerned about potential abuse.

The language model, called GPT-2, was trained on 40GB of Internet text to predict the next word in a passage. This training leads the system to generate text that “adapts to the style and content of the conditioning text”, enabling the user to “generate realistic and coherent continuations about a topic of their choosing”. Compared to its first iteration, this version is a major improvement, able to produce longer text with greater coherence. OpenAI released a series of samples illustrating its ability to generate “over a page or more of text”. The company found that GPT-2 produced “reasonable samples” around 50% of the time, with its success depending on how familiar the model was with the context. OpenAI also anticipated the positive systems GPT-2 could enable in the near term, including AI writing assistants, more capable dialogue agents, unsupervised translation between languages, and better speech recognition systems.
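To make the core mechanism concrete, the sketch below illustrates autoregressive generation: predict the next word from what has come so far, append it, and repeat. GPT-2 itself is a large Transformer trained on that 40GB corpus; the toy bigram model here, including its miniature corpus, is purely a hypothetical stand-in for illustration.

```python
# A minimal sketch of autoregressive text generation. GPT-2 is a large
# Transformer; this toy bigram model only demonstrates the same loop:
# predict the next word, append it, and condition on the result.
import random
from collections import defaultdict

# Hypothetical miniature "training corpus" for illustration only.
corpus = ("recycling is good for the world and good for the economy "
          "the world is changing and the economy is changing").split()

# Count observed next words for each word (a crude language model).
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(prompt_word, length=10):
    """Repeatedly predict the next word and feed it back in."""
    out = [prompt_word]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # no continuation seen during "training"
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

The loop is the same in GPT-2; what differs is the quality of the next-word predictor, which is why its continuations can stay coherent for a page or more.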

GPT-2 was less successful with highly technical or esoteric content. OpenAI also found that the model lent itself to malicious applications, such as impersonating others online, generating fake news, and automating the production of abusive or faked content on social media. This was its response to the prompt, “Recycling is good for the world, no, you could not be more wrong”:

“Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources.”

As a result of its concerns over large language models being used for misleading purposes, OpenAI said it would release only part of the language model to the wider public and would not release the dataset, training code, or GPT-2 model weights. It deferred to its charter, which anticipates that “safety and security concerns will reduce our traditional publishing in the future”. It was a controversial decision, attracting criticism for withholding findings from the public. The company expected as much, noting that the decision was an experiment it was not entirely sure was “the right decision today”, but that it wanted to give the AI community “more time to have a discussion about the implications of such systems”. It also suggested that governments “should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems”.

A Winning Gaming Bot

Last year, the world’s best Dota 2 players were still able to fend off OpenAI Five, OpenAI’s cutting-edge AI bot. This month, however, Five went back into battle and, in a best-of-three exhibition match, beat OG, a veteran five-player team that won Valve’s 2018 International. Reports on the match attributed Five’s win partly to the deep learning system picking aggressive and unconventional tactics, including selecting valuable heroes and performing instant revivals for heroes in the early stages of the game. OpenAI also revealed the way in which Five was able to play alongside humans and learn from their style of play. Still, it largely won because its initial bets paid off: it remains better at short-term tactics than long-term decision-making, largely because it plays within a narrow set of rules. Nonetheless, the victory suggests that powerful AI in other games is likely only a matter of time, and the techniques displayed here could also apply to robotics and other non-gaming tasks.
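OpenAI has described Five as being trained with large-scale self-play reinforcement learning (its published writeups name Proximal Policy Optimization with LSTM policies). As a minimal, hedged illustration of the policy-gradient idea underneath, here is a REINFORCE sketch on an invented one-step game; the action payoffs and learning rate are hypothetical.

```python
# A minimal policy-gradient (REINFORCE) sketch. OpenAI Five uses a far
# larger-scale relative of this idea (PPO with recurrent policies trained
# via self-play); the toy one-step "game" below is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3
theta = np.zeros(n_actions)            # policy parameters (logits)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical payoff per action; the "aggressive" action 2 pays best.
true_reward = np.array([0.1, 0.5, 1.0])

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(n_actions, p=probs)  # sample an action from the policy
    r = true_reward[a] + rng.normal(0, 0.1)
    grad = -probs                       # gradient of log pi(a) w.r.t. theta
    grad[a] += 1.0
    theta += 0.05 * r * grad            # ascend expected reward

print("learned action probabilities:", softmax(theta).round(3))
```

The real system differs in scale rather than in kind: the same reward-weighted gradient signal, estimated across vast numbers of self-play games, drives the learning.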

A Spam-Detecting Robot

One of OpenAI’s flagship products is a robotics system trained entirely in simulation and deployed on a physical robot, capable of learning a new task after seeing it performed only once. One task it has demonstrated is flagging a can of Spam for removal: its vision system locates the can, and the robot then grasps and removes it. An earlier iteration of the robot was trained using domain randomization, i.e., showing it simulated objects with a range of colors, backgrounds, and textures, without the use of real imagery. In its most recent version, OpenAI has developed a new one-shot imitation learning algorithm, in which a human demonstrates how to perform a task in VR. After just one demonstration, the robot can solve the same task from a random starting configuration.
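Domain randomization is simple to state in code: before every simulated episode, resample the scene’s appearance and physics so the policy never sees the same world twice and cannot overfit to any single look. The sketch below is a hedged illustration; the parameter names, ranges, and the commented-out simulator call are hypothetical, not OpenAI’s actual configuration.

```python
# A minimal sketch of domain randomization: resample visual and physical
# properties before each simulated episode so a policy trained purely in
# simulation generalizes to the messy appearance of the real world.
import random

def randomize_domain():
    # All parameter names and ranges here are invented for illustration.
    return {
        "object_color": [random.random() for _ in range(3)],  # RGB
        "background": random.choice(["plain", "cluttered", "textured"]),
        "light_intensity": random.uniform(0.3, 1.5),
        "camera_jitter_deg": random.uniform(-5.0, 5.0),
        "friction": random.uniform(0.5, 1.2),
    }

for episode in range(3):
    params = randomize_domain()
    # simulator.reset(**params)  # hypothetical simulator API
    print(f"episode {episode}: {params}")
```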

Block-Sparse GPU Kernels

In late 2017, OpenAI released a set of highly optimized GPU kernels for “an underexplored class of neural network architectures: networks with block-sparse weights”. Depending on the chosen sparsity, the kernels can run orders of magnitude faster than cuBLAS or cuSPARSE. OpenAI has used them in text sentiment analysis and in generative modeling of text and imagery.

Until recently, the development of algorithms and model architectures in deep learning has been limited by the availability of efficient GPU implementations of elementary operations, in particular efficient GPU support for sparse linear operations. Sparsity enables the training of neural networks that are far deeper and wider than would otherwise be possible with the same parameter and computational budget, for instance, LSTMs with tens of thousands of hidden units.
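The arithmetic the kernels accelerate can be emulated in a few lines: a weight matrix that is zero everywhere except inside blocks selected by a mask, multiplied against an input, where the real CUDA kernels skip entire zero blocks. The NumPy sketch below, with an invented block size and density, reproduces only the math, not the speed.

```python
# A NumPy emulation of what a block-sparse kernel computes: the weight
# matrix is zero except inside blocks chosen by a mask, so a specialized
# GPU kernel can skip whole zero blocks. Block size and density here are
# arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
block = 32
n_blocks = 8                                     # weight matrix is 256 x 256
mask = rng.random((n_blocks, n_blocks)) < 0.25   # keep roughly 25% of blocks

W = np.zeros((n_blocks * block, n_blocks * block))
for i in range(n_blocks):
    for j in range(n_blocks):
        if mask[i, j]:
            W[i*block:(i+1)*block, j*block:(j+1)*block] = \
                rng.standard_normal((block, block))

x = rng.standard_normal(n_blocks * block)
y = W @ x   # a block-sparse kernel gets this result touching only live blocks

density = mask.mean()
print(f"block density: {density:.0%} -> ~{1/density:.1f}x fewer FLOPs in theory")
```

The same compute budget can therefore be spent on a much larger but sparser weight matrix, which is what makes the very wide LSTMs mentioned above feasible.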

The Launch of OpenAI LP

Just last month, OpenAI launched OpenAI LP, a for-profit spin-off company controlled by the board of the original nonprofit. Its mission remains the same, while the goal of the “capped-profit” company is to “rapidly increase our investments in compute and talent while including checks and balances to actualize our mission”.

The new company aims to raise and invest billions of dollars in the coming years, taking advantage of the ability to raise more capital more quickly while still serving its mission.

The company is a new kind of legal entity. The starting round of investors, which includes LinkedIn co-founder Reid Hoffman and Khosla Ventures, will receive up to 100 times the amount they have invested from any profit earned by OpenAI LP; anything in excess is transferred to the nonprofit side. OpenAI describes it as a kind of “hybrid of a for-profit and nonprofit”.
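The cap is easy to see with worked numbers. The sketch below uses hypothetical figures (a $10M investment and a $2B return) to show how a 100x cap splits proceeds between the investor and the nonprofit.

```python
# Illustrative arithmetic for OpenAI LP's "capped-profit" structure: an
# investor's return is capped at 100x the investment, and anything above
# the cap flows to the nonprofit. All figures below are hypothetical.
def split_returns(invested, total_return, cap_multiple=100):
    cap = invested * cap_multiple
    to_investor = min(total_return, cap)
    to_nonprofit = max(total_return - cap, 0)
    return to_investor, to_nonprofit

investor, nonprofit = split_returns(invested=10e6, total_return=2e9)
print(f"investor receives ${investor:,.0f}, nonprofit receives ${nonprofit:,.0f}")
# -> investor receives $1,000,000,000, nonprofit receives $1,000,000,000
```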

OpenAI LP’s CEO will be Sam Altman, with Greg Brockman as CTO and Ilya Sutskever as chief scientist. Brockman will also chair the OpenAI nonprofit’s board of directors.

“We’ve experienced firsthand that the most dramatic AI systems use the most computational power in addition to algorithmic innovations, and decided to scale much faster than we’d planned when starting OpenAI,” the blog post reads. “We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.”

The nonprofit wing continues to exist (now called OpenAI Nonprofit), but it is much smaller, as most of the hundred or so employees have moved across to the commercial side.

Speculation is already growing around the unusual nature of a capped-profit company. The Register speculated, for instance, that “In order to pay back these early investors, and then some, OpenAI LP will have to therefore find ways to generate fat profits from its technologies”. Several machine-learning experts also told the paper that they were upset by OpenAI’s decision; the organization had until then stood out for its nonprofit status, its focus on advancing machine-learning knowledge outside profit and product incentives, and its sharing of open-source research.

Daniel Lowd, an associate professor in the department of computer and information science at the University of Oregon, said, “A profit incentive is a conflict of interest.” Rachel Thomas, co-founder of fast.ai and an assistant professor at the University of San Francisco’s Data Institute, agreed. “I already see OpenAI as similar to the research labs at major tech companies: they hire people from the same backgrounds; focus on publishing in the same conferences; are primarily interested in resource-intensive academic research. There is nothing wrong with any of this, but I don’t really see it as ensuring that AI is beneficial to humanity, nor as democratizing AI, which was their former mission statement. To me, forming OpenAI LP is just one more step to being indistinguishable from any other major tech company doing lots of academic, resource-intensive research.”

OpenAI has already tried to address concerns that it will stray from its primary mission by insisting that the for-profit company is controlled by the nonprofit’s board. Whatever the for-profit side wants to do, the nonprofit board will ensure it abides by the charter, which commits it to “build safe and beneficial artificial general intelligence,” and will have the final say on any major decision.

“The mission comes first even with respect to OpenAI LP’s structure,” it stated. “We may update our implementation as the world changes. Regardless of how the world evolves, we are committed — legally and personally — to our mission.”

At any rate, it is hard to dispute the breakthroughs OpenAI has already achieved on the path toward AGI, and its progress looks set to continue.
