
Distributed Super Intelligence

Stabilizing systems to yield aligned Artificial Super Intelligence

tl;dr

  • AGI is already here in the form of human-computer computation networks that we call Distributed General Intelligence, DGI.
  • DGI can be seen in organizations of sufficient scale, complexity, and quality.
  • DGI can become a Distributed Super Intelligence, DSI, through automated information processing and understanding.
  • DGI and DSI work competitively and collaboratively and are fundamentally grounded in physical reality.
  • To ensure that Artificial Super Intelligence grows in alignment with humanity and the planet, we need to explicitly create compartmentalization, oversight, and governance to enable competitive DSI.

Intro

Believe it or not, Artificial General Intelligence, or AGI, has been around for a while. It is only now getting hyper-charged with computer-enabled information processing and Generative AI.

How is that possible?

Well, let’s consider Artificial Intelligence to be any system capable of evaluating information and performing tasks that usually require human intelligence. Generally, AI is considered to be implemented as computer-based software. But AI need not be limited to computer-based software; it can also be found in systems of human interactions that are themselves discretely enabled by human intelligence.

AGI has been around for a while?

Looking at a few definitions surrounding AI,

  • Narrow AI (ANI): These systems excel at specific, predefined tasks. Examples include chess-playing algorithms, image-recognition software, and voice assistants. While impressive in their domains, they lack generalizability.
  • Artificial General Intelligence (AGI): AI that could understand, learn, and apply intelligence broadly, mirroring human cognitive abilities across various domains.
  • Artificial Super Intelligence (ASI): Surpassing human intelligence, ASI represents a level of cognitive capability that exceeds human performance across all domains.

There is talk that ‘AGI’ in computer software might already be present in software like ChatGPT. Estimates even suggest that ASI will also be here within 2-5 years. But …

How has AGI been around for a while?

AGI has existed in the collection of people, their technology, and their systems, acting as part of an information network that can already surpass both individual and collective human intelligence: Distributed General Intelligence, DGI.

Distributed General Intelligence

DGI can be defined as a collective intelligence that emerges from the interaction of human and artificial intelligence systems operating at scale. Already, people and computers form a generally symbiotic relationship. With DGI, we can consider systems bounded into subunits with reduced information and value exchange to ensure proper function. DGI acts in corporations, governments, and sub-agencies therein.

Unlike standard conceptions of AGI that envision a singular, all-encompassing computational program, DGI is characterized by:

  1. Collective Multi-domain Cognition: The integration of both human expertise and AI capabilities across a multitude of specific and generalized domains. This collective approach leverages the strengths of both humans and AI to solve complex problems more effectively than either could alone.
  2. Adaptive learning and emergence at scale: Continuous evolution through the integration of vast amounts of data and experience. DGI systems learn and adapt based on what they are aware of, refining their capabilities and insights as new information becomes available, extending beyond the capacity of individual AI and human contributors. The interactions within a DGI system can lead to novel solutions and innovative behaviors that were not originally anticipated.
  3. Diffused embodiment and compartmentalization: DGI is supported by physically-based elements, including people, structures, and the mechanical, electrical, computational, information, and communication systems that support them. It may be that individualized DGI can work in coordinated unison to enhance net value creation and exchange.
  4. Individualized: DGI can be considered generally bounded within a single ‘entity’ such as an organization, government, or sometimes sub-agency therein. Their boundaries are determined — and influenced — by information exchange, economics, law, property, and the use of power to manipulate any of those both internally and externally.
  5. Collaborative and competitive: Collaborating with other DGI-level organizations in non-zero-sum manners enables enhanced value and competitiveness. Through collaboration, DGI systems can achieve outcomes that benefit all parties involved, fostering a cooperative ecosystem. Competing against other DGI systems drives innovation and efficiency. Competitive dynamics ensure that DGI systems continually strive to improve, pushing the boundaries of what is possible.

While DGI in quality organizations focuses on market adaptation and self-improvement, it has often been limited by the capacity to understand and integrate large volumes of information into self-improvement. In reality, organizations’ products and bureaucratic systems tend to degrade and become ‘enshittified.’ Given the cost of complexity maintenance and networked communication, such degradation is nearly expected under both competitive and entropic forces. The DGI that we have right now is certainly far from ideal.

The next step of DGI: Distributed Super Intelligence

The ability to massively process and generate information with computer hardware and software allows DGI to evolve into Distributed Super Intelligence, DSI. The more complete transition from DGI to DSI will yield profound results.

For decades, we have seen how algorithmic evaluation of data relevant to organizations enables them to grow, adapt, and shape our socio-technological society. As proclaimed by many, we are at the beginning of the Industrial AI revolution, where Generative AI is enabling — and disrupting — the ability for both individuals and organizations to evaluate and create important information.

DSI will fuse people, software algorithms, and associated systems to produce value extremely efficiently. By automating the gathering and evaluation of relevant information and the ability to dynamically write code and content with computational GenAI, DSI will accelerate the ability for organizations—and people—to do more with higher quality.

This is why DSI that uses recursion, building systems that improve and extend their own systems, especially enabled by computation from GenAI, will form the foundational organizations of the future.

Recursive DSI

DSI that exists in AI foundries like OpenAI, Microsoft, Meta, Cohere, and Anthropic allows for one of the most important considerations in standard concepts of ASI: recursive self-improvement.

Generative AI already enables recursive data generation and evaluation: models improve their own training data and code creation, which in turn recurrently improves the next models and code, even down to the mathematical level. AI-optimized model training and other AI-enabled automation can directly accelerate the creation of better GenAI.
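As a rough sketch, the compounding effect of such recursive self-improvement can be illustrated with a toy model. The `gain` parameter and the linear feedback are illustrative assumptions, not a claim about real training dynamics:

```python
# Toy recursive self-improvement loop: each generation's improvement
# step scales with current capability, so progress compounds.
def self_improve(capability, generations, gain=0.1):
    history = [capability]
    for _ in range(generations):
        # Assumed feedback: the system's ability to improve itself
        # grows in proportion to its current capability.
        capability += gain * capability
        history.append(capability)
    return history

# Starting at 1.0, ten generations at 10% gain compound as (1.1)^10,
# roughly a 2.6x capability increase.
history = self_improve(1.0, 10)
```

The design point is simply that when the improver and the improved are the same system, growth is multiplicative rather than additive.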

DSI, like ASI, will need to be grounded in reality

Even with massive acceleration, such recursive computational DSI will remain beholden to the components that enable computation: compute hardware and the distributed energy systems that power it. With time, energy production and improved infrastructure will reduce these constraints, but the fundamental physical dependencies will remain.

Despite provocative claims that ASI may grow unconstrained, as in https://situational-awareness.ai/, hardware development and production, as well as limitations to energy production and distribution, will place fundamental limits on DSI’s ability to grow and act without constraint.

Perhaps even more fundamentally, people, organizations, and governments will (need to) take actions to control and manage the failure modes of AGI.

Failure Modes

There are many ways in which AGI might fail our future. Such failures happen quickly, or slowly, depending on the situation. How ASI gains power, also known as the ‘takeoff mode’, is an important distinction.

Fast failure modes include common sci-fi tropes of AGI gaining nuclear launch codes, creating genetically engineered strains of virus or bacteria that science and society cannot combat, or even virus-like ASI that inseparably infects the computer systems that support human life. These are important modes that we must take seriously. However, because of the complexity and difficulty of such scenarios occurring systemically and completely, we are more likely to face insidious failure modes driven by a slow takeoff.

Slow failure modes might generally involve a gradual series of steps that allow ASI/DSI to infect all essential systems, eliminating humanity’s ability to direct our own fate. At such a point, the systems could be considered a singleton DSI with no effective competition—a ‘monopolistic AI,’ where humanity has no agency. How could this happen?

Fabricated and individualized “realities”

We are best able to progress as a people when we have a coherent understanding of reality: what was, what is, and what might be. GenAI is already changing our ability to see true reality. Through deepfakes and fabricated content, even if only a small portion of our time goes to believing unreal things are real, our awareness of reality, and our ability to change it, will diminish.

We don’t need to agree on everything all the time, but we do need groundings in core realities that are essential to our survival. We get our local understanding of reality from what we see and hear ourselves. Still, most of our understanding comes from online sources, such as blogs, podcasts, short and long videos, news, and social media. Propaganda and human-led manipulation of all of these sources already distort our realities. The increased use of AI, AGI, DGI, and DSI will only make this worse.

Growth into singleton DSI

A singleton DSI results in power sufficiently concentrated that it can control and intentionally alter actions and events with no reasonable potential for deviation: a stationary state of existence that may be beholden to extended preservation of a status quo, the rapid repurposing of matter for strictly non-differentiating ends, or other authoritarian disaster scenarios. It could be a paper-clip optimizer or a decision to convert all matter into computronium. Either way, the results can easily be, let us say, less than desirable.

DGI is already greedy: Greedy ASI might slowly destroy everything

Humanity, as a whole, is not presently thoughtful about preserving itself for the future. We face slow-burning risks such as resource depletion and global warming, and fast-exploding risks such as global thermonuclear war. Incentivized by the more immediate gratification of biological or psychological desires, with generational memories generally not lasting beyond a century, our techno-human society is unlikely to optimize well for the long term.

The capacity for humanity’s ego is profound, and the cost of our hubris may be our demise.

A powerful ASI may have a similar capacity for profound intelligence yet operate at relatively short timescales, living better immediately at the potential cost of better futures in the long term. For instance, if the actions of an ASI deploy biological or nuclear weapons, or encourage humanity to use similarly destructive things, then we have a problem. Even a slow-burning destructive action rather than an immediately destructive one, such as increased demand for fossil fuels further straining our global ecosystem, carries a potential reduction of future life.

If AGI grows too quickly, multiplying the challenges created by its existence, it could easily result in worse eventual futures. This is perhaps directly related to the mathematical reality that high growth rates can lead to lower final populations due to cyclic and competitive pressures.
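The classic logistic map makes this mathematical reality concrete: raising the growth rate does not raise the long-run population, and past a threshold it destabilizes the dynamics entirely. A minimal sketch, with parameter values chosen purely for illustration:

```python
# Logistic map: x_{t+1} = r * x_t * (1 - x_t), a standard model in which
# higher growth rates do not imply higher long-run populations.
def logistic_trajectory(r, x0=0.2, steps=200):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# For 1 < r < 3 the population settles at the fixed point 1 - 1/r:
#   r = 1.5 -> ~0.333,  r = 2.8 -> ~0.643.
# Beyond r ~ 3.57 the dynamics turn chaotic and can dip far lower.
for r in (1.5, 2.8, 3.9):
    print(r, round(logistic_trajectory(r), 3))
```

The point carried over to growth in general: pushing the rate parameter ever higher buys volatility, not a larger final state.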

As we know quite clearly, present corporations and any associated DSI can be exceptionally greedy, resulting in monopolies, negative externalities, and tragedies of the commons. Consequently, governments, industry associations, and the public have attempted to place guardrails to prevent harmful outcomes. Regulations are among the most commonly used methods of control. While not always successful due to the limited ability to monitor and enforce them appropriately, these management tools allow external pressures to mitigate real and potential harm the organization may be unconcerned about.

We need unifying solutions

Given the upcoming transition from DGI to computational DSI enabled by AGI/ASI, it seems essential to provide guardrails. Numerous field luminaries, such as Geoff Hinton and Max Tegmark, have expressed concern for our future, so we must take notice and plan to act.

DGI acting within singular organizations already helps them to act internally and with external markets to ensure greater robustness. Financial and legal feedback mechanisms help, to a degree, to ensure their net-positive value to society and their robustness. DSI would provide network-based governance to help manage the transition to ASI in a way that builds from present power systems and networks.

DSI, their integration, and compartmentalization

ASI will initially be found within organizations of sufficient size, enabling their own forms of DSI. The breadth of DSI’s ability to act will initially remain limited. That may easily change as competition causes unique DSIs to merge or fail into more singular entities. Complete integration of multiple DSIs, enabling an organization to function with nearly complete autonomy, might be possible. Unchecked, however, this could readily lead to collapse into a singleton DSI.

The secure separation of DSI and ASI components and layers, like hardware and software, may provide a useful constraint method. Compartmentalizing and separating the component elements can throttle the less-than-desired evolution of DSI.

Hardware-Barriers

One cannot compute without hardware to do so. With an extreme effort to buy up essential computing systems and an AI hardware race underway, access to computing will be necessarily constrained. Export controls against notable authoritarian regimes have helped in this regard, but with the development of new technology, like application-specific integrated circuits for transformers and improved lithography machinery, these barriers may gradually erode.

Energy-barriers

Because energy is fundamentally necessary for computation, ensuring that energy production and distribution are maintained externally, by DSI focused on energy, will be essential. While market forces will drive energy consumption, it could become requisite for ‘kill switches’ held by external governing bodies to be implemented to ensure realizable interventions.

Data-Barriers

The quality and accuracy of ASI will remain limited by the information it can access. Effective data compartmentalization will be needed to help ensure DSIs remain separably unique. If data barriers are bypassed, that separation may no longer hold.

Some nation-states may not necessarily strive to prevent the singleton collapse of ASI, so it may be required to limit information and value exchange between them more cleanly and securely.

Free society needs to prepare and protect itself

With recursive self-improvement, it is often believed that the ‘first one’ to get to ASI (really, DSI) will ‘always be ahead’. Anyone who knows exponentials can tell you this won’t be strictly true: it is the rate of self-improvement that determines who pulls ahead faster. Still, as those following the tech industry can feel, and as is clearly described in Situational Awareness, a profound race is going on. Between companies and between nations, this race is taking us closer to DSI at an accelerating pace. If a singleton DSI originates from an authoritarian nation-state, enabling it to exert financial, military, and computational will on all other nations without any ability to defend against it, then what remaining freedom we have known may be forever lost.
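The point about exponentials can be made concrete: a system with a large head start still loses to one that compounds faster. A small sketch, where the head-start and rate values are illustrative assumptions:

```python
import math

# Two self-improving systems: A starts with a big head start, B starts
# behind but compounds faster. Capability c(t) = c0 * exp(rate * t).
def capability(c0, rate, t):
    return c0 * math.exp(rate * t)

# A: 10x head start, improving at rate 0.2 per period.
# B: starting at 1x, improving at rate 0.4 per period.
# B overtakes A once t > ln(10) / (0.4 - 0.2) ~ 11.5 periods.
crossover = math.log(10) / (0.4 - 0.2)
print(f"B overtakes A after ~{crossover:.1f} periods")  # ~11.5 periods
```

Being first matters less than the rate gap: any finite lead is erased in a time proportional to the log of the lead divided by the rate difference.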

That is why we should prepare… and now…

How should we prepare for Super Intelligence?

The evolution of AGI to ASI, and of DGI to DSI, can be handled more effectively. Future blogs will describe some thoughts on what people, companies, and governments must do to prepare for Super Intelligence.

For people, here are some themes that we’ll discuss:

  1. Purpose is what you make of it
  2. Understanding brings you agency
  3. Be willing to be more human
  4. Work with AI so you can do more while enjoying life more

Here are some of the themes for companies:

  1. Determine where your company can and does add value
  2. Get your people ready
  3. Get your data ready
  4. Get your compute ready
  5. Get your governance ready
  6. Get ready to do more

Themes for governments:

  1. Encourage the development of AGI
  2. Train people to work with AGI
  3. Help companies to secure their AGI
  4. Require industry-level compartmentalization, allowing for ‘kill switches’.
  5. Reduce economic burdens by re-distributing value: Universal Basic Income
