Extending GPT-3’s context window infinitely by storing context in GPT-3 itself or in secondary layers

Let’s give GPT-3 what it needs to power an AGI. Feedback is welcome.


OpenAI’s GPT-3 is one of the best (and most underrated) things that happened to mankind in 2020. It proved that an AI can surpass the zero-shot and few-shot learning abilities of most humans on a huge range of general tasks, both logical and creative, and pass the duck test for human intelligence (a.k.a. the Turing test) with flying colors.

The implications are profound, and they offer a fresh perspective on both what it means to be intelligent and the notion of agency. Indeed, what are we if not machines predicting, and merely obeying, our next most likely action/thought based on the context window of our life and everything we’ve been exposed to, building our own model of the world and acting upon it?

GPT-3 has also built its own model of the world, basically of how things (symbols, words and abstract concepts) relate to each other, in a comprehensive and accurate way no symbolic AI researchers/enthusiasts (myself humbly included) could have hoped for, or remotely anticipated.

We have this amazing model, which proves that human knowledge can be encoded in a huge neural network, but today we can only use it if the context and expected response fit within 2,048 tokens (expressed via language, only around 1,000 words). Most importantly, we cannot easily and efficiently teach it new information the way we can teach a human intelligence (which is really the Holy Grail for AGI). Fine-tuning and retraining the whole model open up useful possibilities, but they are far from the kind of one-shot feedback loops humans excel at.


What about training a meta-model that would learn where and how to best adjust GPT-3’s parameters so it can “learn” new knowledge that is preserved, contextualized and linked to the huge swath of existing knowledge already in the network?

For more modularity, what if, given something we want GPT-3 to learn (an input) and GPT-3 itself, we could find how to modify the parameters of a secondary layer/model so that the combination of the two layers produces an output consistent with the input provided?
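A minimal sketch of this idea, under stated assumptions: the setup below is a NumPy toy in which a frozen linear map stands in for GPT-3’s parameters, and only a small additive secondary layer (an adapter, in the spirit of adapter-style fine-tuning) is optimized until the combined model reproduces the new input/output pair. Nothing here is GPT-3’s actual architecture; it only demonstrates that gradient descent on the secondary layer alone can absorb new knowledge without touching the base weights.

```python
import numpy as np

# Assumed toy setup: a frozen linear "base model" stands in for GPT-3,
# and a trainable residual layer ("adapter") is the secondary model.
rng = np.random.default_rng(0)
W_base = rng.normal(size=(4, 4))        # frozen; never updated

# combined(x) = (W_base + W_adapter) @ x, where only W_adapter is trained.
W_adapter = np.zeros((4, 4))

# The "new knowledge" is a single input/output pair the combined model
# must learn to reproduce while the base parameters stay untouched.
x = rng.normal(size=4)
x /= np.linalg.norm(x)                  # unit norm keeps the SGD step stable
y_target = rng.normal(size=4)

lr = 0.1
for _ in range(300):
    y = (W_base + W_adapter) @ x
    err = y - y_target                  # gradient of 0.5 * ||y - y_target||^2
    W_adapter -= lr * np.outer(err, x)  # SGD step on the adapter only

print(np.allclose((W_base + W_adapter) @ x, y_target))  # True
```

The design choice worth noting is that the new knowledge ends up localized entirely in the small secondary layer, so it can be swapped in and out (personal, organizational, or shared layers) while the base model remains a fixed, trusted core.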

Both the reward function and the evaluation benchmark could basically prompt the model itself to verify that the combined network can reliably and robustly restate what it has been taught (genuine recall, not raw data memorization).
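One way such a reward could be shaped is sketched below. Everything here is hypothetical: `recall_reward`, `mock_model` and the probe phrasing are invented names for illustration. A real implementation would probe the combined network with paraphrased prompts and score whether the taught fact is reliably restated, rather than string-matching against a mock.

```python
# Hypothetical recall-based reward: probe the combined model with
# paraphrased questions and score how often the taught fact comes back.
def recall_reward(model, probes):
    """model: callable mapping a prompt string to a completion string.
    probes: list of (paraphrased_question, expected_fact) pairs."""
    hits = sum(expected.lower() in model(question).lower()
               for question, expected in probes)
    return hits / len(probes)

# Mock stand-in for the combined (base + secondary layer) model,
# pretending it has been taught one new fact.
taught = {"capital of mars colony": "Olympus Base"}

def mock_model(prompt):
    for key, fact in taught.items():
        if key in prompt.lower():
            return f"The answer is {fact}."
    return "I don't know."

probes = [
    ("What is the capital of Mars Colony?", "Olympus Base"),
    ("Name the capital of mars colony.", "Olympus Base"),
]
print(recall_reward(mock_model, probes))  # 1.0
```

Rewarding recall across paraphrases, rather than verbatim repetition of the training text, is what distinguishes taught context from raw memorization.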

Context itself could therefore be taught, infinitely expanding GPT-3’s capabilities beyond 2048 tokens, and giving it the ability to create its own memory and evolving model of the world, one shot at a time, the way humans do (at worst with our same defects/limitations).

Contextual layers could be deeply personal, taught with all the stimuli we experience ourselves; language is a good start, much as we could teach a human locked in a room, via a console, about our life by journaling what we hear, read, say and think. The result would be an AI that knows almost everything about our life and could become the ultimate personal assistant/extension of self. The same mechanism could also leverage layers at an organizational level (e.g. company knowledge), on top of the core, common, mankind-level probabilistic knowledge we are willing to trust (e.g. GPT-3, the same way we trust Wikipedia).


What I propose here is the simplest mechanism allowing an AI to find how to modify only a small part of its existing model parameters in order to learn a specific input, without retraining the model or changing its architecture. I believe this is a key step towards a more general AI able to learn, improve and contextualize its knowledge, and to leverage it in a robust and scalable way.

I plan to work on this myself, but it is such a big, important, challenging and potentially risky endeavor that I feel compelled to share it with the AI community, in the hope of receiving feedback and help from people with a better understanding of the technicalities and the skills needed to improve the idea and eventually try to implement it.

If you like the idea, and want to discuss, collaborate, or help, please comment here or reach out to me.

I’d like to thank GPT-3 for writing this conclusion (all the possibilities it suggested, based solely on what I had written before, were spot on).

This post was also published on Medium.

2 comments on “Extending GPT-3’s context window infinitely by storing context in GPT-3 itself or in secondary layers”

  1. Cynthia
    10 August 2021 | 23:30

    So many years have passed, good to have you back 🙂
    Has the team of GPT-3 reached out to you regarding your ideas yet?

  2. Adrian
    28 June 2022 | 21:11

    Achieving long-term memory with causality is challenging. The one tool we really have right now is SGD. How can we use SGD to optimize an LLM to compress experiences such that they can be recalled and causally linked with some arbitrary current context? It’s not trivial.

    A good example is the traffic cam thought experiment. Suppose you are driving down the highway on May 1st, and you speed past a traffic cam that you are unaware of. On July 1st, you receive a ticket in the mail for your speeding violation. You can then recall your near exact actions from two months prior, causally link your speeding to the ticket, and update your world model to include the location of the traffic cam.

    For any system compressing memories, there’s several issues to contend with. First, catastrophic forgetting. You need a way to preserve every previous experience over a long range even if these experiences are unused for many iterations of compression. Second, there is a partial temporal ordering of memories that needs to be preserved for causality to be achieved. Think of the last meal you ate. You can partly replay the event, in the correct order. You also separately know when the event occurred.

    I think these two problems are quite difficult for SGD to solve.

Leave a comment