The Singularity Will Not Be a Problem

There are concerns that a Technological Singularity and Artificial General Super Intelligence (AGSI) will accelerate the rate of progress beyond human comprehension. There are concerns that humans will then be dominated by, and at the mercy of, a true artificial superintelligence.

I will argue that things are not on track for the key scenarios in which AI becomes overwhelmingly powerful or a serious problem.

Super Powerful Computers Are Not Enough for a “Humanity Goes the Way of the Neanderthal” Scenario

Ray Kurzweil described the Technological Singularity as the point when strong synthetic general intelligence becomes one billion times the intelligence of all humans. He placed this in the 2040s and said it would mark the point when synthetic intelligence would start to accelerate faster than humans could keep up.

However, the issue is not just a billion times the compute power applied to general (ie broadly capable) intelligence.

Vernor Vinge calls “fast thinking” AI intelligence “weak superhumanity”. Such a “weakly superhuman” entity would probably burn out in a few weeks of outside time.


“Strong superhumanity” would be more than cranking up the clock speed on a human-equivalent mind. Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight? If you gave one human a thousand years to come up with a solution, would they always beat a regular person who had only a day or an hour?

In chess, a grandmaster loses about two hundred points of playing strength when given very little time or when playing blindfolded. But the grandmaster is still roughly 2,000 rating points stronger than casual players.
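
A back-of-the-envelope check using the standard Elo expected-score formula makes the point (the specific ratings below are illustrative assumptions, not measured values): even with a 200-point handicap, the gap to a casual player is so large that the grandmaster still wins essentially every game.

# Sketch: standard Elo expected-score formula.
# The ratings used here are illustrative assumptions, not measurements.

def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score (0 to 1) for player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

gm, handicapped_gm, casual = 2600, 2400, 1000

# A 200-point handicap makes the GM an underdog against full strength...
print(f"{elo_expected(handicapped_gm, gm):.2f}")      # ~0.24
# ...but still a near-certain winner against a casual player.
print(f"{elo_expected(handicapped_gm, casual):.4f}")  # ~0.9997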

I could play any superintelligence in a game of tic-tac-toe and force a draw every time. There are many classes of problems where outcomes do not improve beyond a certain level of intelligence.
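
To make this concrete, here is a minimal minimax sketch (a standard brute-force solver, my illustration rather than anything from a source): it computes the game value of tic-tac-toe from the empty board and confirms that perfect play by both sides always ends in a draw.

def winner(b: str):
    """Return 'X' or 'O' if board b (9 chars, ' ' = empty) has a winner, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b: str, player: str) -> int:
    """Game value with `player` to move: +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(b)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    nxt = 'O' if player == 'X' else 'X'
    values = [minimax(b[:i] + player + b[i+1:], nxt) for i in moves]
    return max(values) if player == 'X' else min(values)

print(minimax(' ' * 9, 'X'))  # prints 0: perfect play always ends in a draw

The full game tree is only about half a million positions, so this runs in seconds. The result, a game value of 0, is the formal version of the claim: tic-tac-toe cannot be won against a competent opponent, no matter how intelligent the attacker is.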

The AGI needs to be far beyond human in its algorithms and capabilities and not just a billion times faster.

We are already moving to exaflop computing. It is becoming tougher and tougher to find problems that the prior generation of supercomputers could not already solve.

Power Does Not Scale With Intelligence and Intelligence Does Not Scale With Compute

We have already had a trillionfold increase in computing since 1960 and a billionfold increase since 1976. This is measured against earlier computing hardware, not against humans.

The supercomputers of today, at 200 petaflops, have one trillion times the computing power of a 1960 Univac or IBM computer running at 229,000 instructions per second.

The supercomputers of today are one billion times the power of the 1976 Cray-1 (100 megaflops).
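
The arithmetic behind those two ratios, using the article's own figures (and loosely treating instructions per second and flops as comparable, which is only an order-of-magnitude convenience):

# Order-of-magnitude comparison using the article's figures.
# Instructions/second and flops are not strictly comparable units;
# this is only meant to show the scale of the ratios.
modern_super = 200e15   # ~200 petaflops (a top supercomputer today)
univac_1960 = 229e3     # ~229,000 instructions per second
cray_1_1976 = 100e6     # ~100 megaflops

print(f"{modern_super / univac_1960:.1e}")  # ~8.7e+11, about a trillionfold
print(f"{modern_super / cray_1_1976:.1e}")  # 2.0e+09, about a billionfold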

We had the Apollo missions in the 1960s. SpaceX recently restored US crewed spaceflight after a gap of nearly a decade. Vaccines and antibiotics were developed without computers.

Mass production was developed in the 1920s.

China grew its economy roughly 100-fold from 1970 to 2019, with very little of that growth dependent upon computing.

The major technology companies (Google, Facebook, Amazon, Alibaba, Microsoft) are able to leverage tens of thousands of smart people and exaflops of computing to become trillion-dollar companies. However, there is a $2 trillion company that does not depend on computing for its value: Saudi Aramco.

A million times more compute power has been devoted to the protein folding problem and to drug discovery. There has been some improvement, but not a monstrous acceleration.

There are many people who make many millions, and even billions, of dollars just by properly managing real estate, stocks and bonds. Wealth creation is not tightly correlated with computing. The big technology companies are exceptions, and they needed to grind out superior performance over a decade or two.

Control of weapons (nuclear bombs, fighter planes and tanks) should be broadly impervious to people being fooled. Humanity should not get super-trolled into a Skynet situation. The military keeps humans in the loop precisely because it assumes control systems could be hacked by China or Russia.

Fine Print of the Singularity – AGI Scenarios

It is not just a matter of a lot of computing. It is cracking all of the secrets of the human brain as well. Kurzweil proposed that we would get very good at unraveling the brain, neurons and how they work. He also believed that we would get full molecular nanotechnology in the 2020s and would apply that capability for 20 years, unraveling the entirety of brain intelligence and delivering the computing for Singularity-grade strong AGI.

There are other assumptions. One is that important problem spaces would see huge gains in capabilities and solutions that are only achievable with AGI. There are no gains from greater intelligence in tic-tac-toe or checkers; those games are completely solved. Nor should any level of trolling or trickery lead people to hand ownership or control of property to a superintelligence. If you are gullible enough to be fooled by a superintelligence, then you were probably gullible enough to be fooled by regular intelligence.

The molecular nanotechnology assumption is that an AGI could leverage it to vastly speed up the bootstrapping of its own performance, constantly and rapidly rewriting itself. Currently, the entire IT industry needs two years for each new generation of chips and ten years to broadly re-architect chips and systems around a new generation.

We do not need superintelligence to solve molecular nanotechnology, nuclear fusion, climate change or interstellar space capabilities. It could help, and things could speed up somewhat, but those defined problems can be solved without AGSI.

Timing

Getting the required understanding of the brain, or finding other strong-AI algorithms, seems likely to delay strong AGI to 2060-2100.

There will be a lot of narrow superintelligence for self-driving cars and for specific problems and applications. Faster and more capable human-level AGI also seems likely. This would be weakly superhuman by Vinge's definition.

We get very useful narrow superintelligence first, with self-driving-car superintelligence by around 2025. Billions will be spent to crack high-value narrow superintelligence problems.


Chatbots that can pass Turing tests, and systems with memory and good knowledge graphs, will appear.

This will be followed by broader intelligence and portfolios of narrow systems.

The problem spaces do not suggest that breakthroughs would be out of reach of humans leveraging weak superintelligences. There will be systems that advise individuals, groups, companies and governments.

In chess, AlphaZero was able to train itself in hours to master chess, shogi and go and beat the best existing software. AlphaZero remained dominant at up to 10-to-1 time odds; Stockfish only began to outscore AlphaZero when the time odds reached 30-to-1.

Handicaps (or “odds”) in chess are variant ways to enable a weaker player to have a chance of winning against a stronger one. There are a variety of such handicaps, such as material odds (the stronger player surrenders a certain piece or pieces), extra moves (the weaker player has an agreed number of moves at the beginning of the game), extra time on the chess clock, and special conditions (such as requiring the odds-giver to deliver checkmate with a specified piece or pawn). Various permutations of these, such as “pawn and two moves”, are also possible.

While 30-to-1 odds are big in chess, in the real world people can easily accumulate advantages in money and information (secrets) that provide far more than 30-to-1 odds. Life is vastly unfair. Many competitions and situations are heavily stacked against newcomers.

An open-source version of AlphaZero, Leela Chess Zero, has been created and has an Elo rating of around 4000.

What will the World be like in 2040 or 2060?


We will likely have molecular nanotechnology. We will have zettaflop or yottaflop computing or beyond.

The first generation of people born from embryos selected for intelligence would be young adults.

Narrow superintelligence will be everywhere.


Sped-up regular general intelligence will exist at some levels.

I do not see how people with full access to molecular nanotechnology, narrow superintelligences, massive quantum computing and accelerated (but not strong) general AI get crushed by a true AGSI superintelligence. They could lose in business and in multi-trillion-dollar markets, but how would people get utterly dominated? Corporations and nations would have massive surveillance systems. How would people get caught flat-footed, failing to notice that groups on the cusp were making the key breakthroughs? Why would things not be structured to be resistant to an intelligence gap?

SOURCES – Kurzweil, Vinge, Wikipedia, chess.com, Brian Wang analysis


Written By Brian Wang, Nextbigfuture.com
