---
date: 2017-05-17
modified_at: 2018-05-20
tags: [ai, philosophy]
description: A prediction about the potential emergence of a distributed deep learning virus that could achieve human-level intelligence by utilizing multiple computers, exploring the implications and preventive measures needed to avoid an AI threat scenario.
---
# Is Skynet already here? My prediction of the collapse of the internet as we know it

I am going to predict something that science fiction already predicted in the
1980s: it's called Skynet, and the idea was popularized by the film Terminator
in 1984. That is 33 years ago now, but the idea is becoming increasingly
relevant.

I think that a deep learning virus will soon spread through the internet,
exploiting the newest security vulnerabilities to its advantage. It will be
able to use the computing power of every machine it gains access to, and use
that power to become smarter and smarter. There will be no way to stop this
virus from abusing older, insecure systems, as long as it uses some of the
resources it finds to improve itself. It can even learn how to find new
vulnerabilities and break into well-protected systems. It can behave like a
parasite, sucking up electricity to improve, spread, and protect itself inside
insecure systems.

Ray Kurzweil predicts that around 2030 we will have a computer capable of
human-level intelligence; read his book
[The Singularity Is Near](https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889).
I think there is one point he didn't address: we have more than one computer!
If we figure out the right way to do distributed deep learning, we can use all
the computers in the world, and then we may reach this point a lot sooner (or
we may already have enough power). In
[Large Scale Distributed Deep Networks](https://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks.pdf),
you can find some interesting possibilities for training neural networks
across many machines. It seems to be a real possibility!
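To make the distributed idea concrete, here is a toy sketch (my own
illustration, not taken from the paper above) of synchronous data-parallel
training: each simulated worker computes a gradient on its own shard of the
data, and an averaged gradient updates a single shared model.

```python
import numpy as np

# Toy data-parallel SGD: each "worker" holds a shard of the data,
# computes a local gradient, and a central step averages the gradients.
# This only illustrates the core idea behind large-scale distributed
# training; the data and learning rate are invented for the example.

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=400)

def local_gradient(w, X_shard, y_shard):
    # Gradient of mean squared error on this worker's shard only.
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

w = np.zeros(3)
shards = np.array_split(np.arange(400), 4)  # 4 simulated workers
for step in range(200):
    grads = [local_gradient(w, X[idx], y[idx]) for idx in shards]
    w -= 0.05 * np.mean(grads, axis=0)      # synchronous averaging step

print(np.round(w, 2))  # w ends up close to true_w
```

Each worker only ever sees a quarter of the data, yet the averaged updates
recover the same model a single machine would: that is exactly why one big
computer isn't a prerequisite for one big model.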

If this thing becomes very strong, it could potentially be used to build a
neural network bigger and smarter than a human's, even before any single
computer can do this alone, because it uses many computers in parallel and is
connected to the internet.

With internet access, this AI can spread to any corner of the world and can
even choose to 3D print certain things. If this AI reaches or surpasses human
intelligence, we will have to make sure not to lose control over it. Of
course, there will be ways to prevent this from happening, but they won't be
pleasant.

 1. Collapse of connectivity: We can shut down the internet completely, or
    create multiple local or trusted networks that contain no malicious AIs
    because every computer in them is trusted. The implications of this would
    be huge. Apps would be forced to become more local and could no longer
    address the whole market easily. Websites would not be reachable for
    everybody, and reaching as large an audience as today would become very
    hard, if not impossible, and certainly not for everyone. I think this is
    the single best reason to invest in a decentralized economy. The
    centralized economy may collapse!
    
    
 2. Deep learning anti-virus: We can create deep-learning anti-virus software
    that is very good at detecting the behavior of these kinds of viruses and
    preventing them from becoming too strong. It will always remain a
    cat-and-mouse game, so these systems need more computing power than the
    AI viruses themselves.
    
    
 3. Selective shut-down: Individual computer systems that are infected have to
    be shut down, or at least cut off from the internet, because they are a
    threat to other computers if they can be used in parallel as part of a
    larger neural network.
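As a toy illustration of point 2 (entirely hypothetical, and far simpler than
a real deep-learning detector would be), a behavioral monitor could flag
machines whose resource usage deviates sharply from the fleet baseline:

```python
import statistics

# Hypothetical sketch of the "deep learning anti-virus" idea: flag
# machines whose CPU usage is a statistical outlier compared to the
# rest of the fleet. A real system would learn behavior with a model;
# a simple z-score stands in for it here. All numbers are invented.

def flag_suspicious(cpu_samples, threshold=2.5):
    """Return indices of machines whose CPU usage is an outlier."""
    mean = statistics.fmean(cpu_samples)
    stdev = statistics.stdev(cpu_samples)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(cpu_samples)
            if (c - mean) / stdev > threshold]

# Machine 5 is pegged at 98% while the rest idle around 10-20%,
# as a self-improving virus hogging spare cycles might look.
usage = [12, 15, 11, 18, 14, 98, 13, 16, 12, 15]
print(flag_suspicious(usage))  # [5]
```

The cat-and-mouse point stands, though: a smart virus would throttle itself to
blend into the baseline, which is why the detector needs to be smarter (and
better resourced) than what it hunts.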
    
    

## When will they really be a threat?

Maybe a super-human intelligence won't be that bad in the beginning, because
it lives inside computers. It can't harm us. The worst it can do is make our
devices unusable or less efficient. More likely, these systems will be a great
benefit to us: the only way they can survive short-term is to be useful to us,
so that we make more of them instead of throwing them away.

> Just like most human minds, they are going to be storms in an eggshell.
> Unless they get out.


However, they will be a threat if they get outside of computers and can harm
us physically or mentally. Skynet had access to missile launches. Obviously,
that should never be possible. But launching missiles isn't the only place
where this can go bad.

A super-human AI will use anything in its power to survive and expand, just
like any other life form subject to the laws of evolution. And that's why I'm
scared of a neural lace, and of 3D printing. A neural lace could give a
malicious AI like Skynet the opportunity to mind-fuck users into doing things
they shouldn't, and wouldn't without a neural lace: for example, launching an
atomic missile or granting the AI more control and power. With 3D printing,
every square millimeter of the printed object should always be checked by a
human being to make sure it isn't a threat to humans or other biological life.
Decentralized electricity (e.g. solar panels) brings great advantages, but
when printed autonomously by machines it could also be used to power
autonomous robots that have their own thoughts or ways of behaving (maybe
listening on certain frequencies to communicate).

## Embracing transparency in deep neural networks and processing activities
These systems can be parasites, but they can also feel like symbiosis:
programs that we see as friends because we use them to our advantage. Google,
Microsoft, Amazon, Apple, or any other company might unintentionally create a
hyper-intelligent system that doesn't show everything it's doing. Skype has
been using your computing power for years to lower its server costs, and many
other programs probably work in a similar way. But does Skype know exactly
what is being computed on every single machine? Does the user? It's a big
black box, and we need more transparency, otherwise we won't even notice when
AI becomes big.

Of course, such a virus will disguise itself. That's why we have to develop
technology that can see what happens inside deep neural networks from the
ground up. Jason Yosinski created a very interesting piece of software,
[DeepVis](http://yosinski.com/deepvis), that cracks open the black box of deep
learning. We need more of this!
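As a minimal sketch of what "opening the black box" could mean in practice
(the tiny two-layer network and its weights are invented for illustration),
one can record every intermediate activation during a forward pass so a human
or a monitoring tool can inspect what each layer computes:

```python
import numpy as np

# Transparency sketch: instead of treating the network as a black box,
# the forward pass records every intermediate activation in a trace
# that can be inspected afterwards. Network and weights are made up.

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

def forward_with_trace(x):
    trace = {"input": x}
    h = np.maximum(0, x @ W1)        # ReLU hidden layer
    trace["hidden"] = h
    out = h @ W2
    trace["output"] = out
    return out, trace

_, trace = forward_with_trace(rng.normal(size=4))
for name, act in trace.items():
    print(f"{name}: shape={act.shape}, mean={act.mean():.3f}")
```

Tools like DeepVis do something much richer (visualizing what individual
neurons respond to), but the principle is the same: every internal state is
exposed for inspection rather than hidden inside the model.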

Only this way can we see what potentially malicious neural networks are doing
and prevent them from emerging or expanding. We also have to implement
techniques that can see exactly what every megabyte of memory is used for,
and shut down anything that isn't trusted.