Why an AI-based singularity?

A friend of mine, JMM, knew that I’ve been funded in the past by SIAI to work on OpenCog, so he asked the following question:

“The Singularity Institute’s “main purpose” is meant to be to investigate whether a recursively improving intelligence can maintain “friendliness” towards humankind.”

Okay, but my standpoint is: Why does the recursively improving intelligence need to be non-human? It seems counter-intuitive to me to devolve this power to something outside of ourselves – and also a bit like we’re just trying vainly to become a kind of God, creating another type of being.

I think the main reason there is a focus on AI rather than improvement of human intelligence is that it’s so damn hard to do experiments on people’s brains. It’s ethically difficult to justify various experiments, and it only gets harder as things become more regulated (and rightfully so, for the most part). I think they’ll definitely be continuing research into this stuff though. For myself, occasionally taking Modafinil enhances my productivity significantly (so long as I maintain focus on what I’m meant to be doing; it’s easy to get enthralled by something that interests me but isn’t related to my work).

But there’s no exclusion of human intelligence amplification from the singularity concept. If we create smarter humans, then this begets even smarter humans. Again we can’t really predict what those enhanced “humans” would do, because they are a significant step smarter than us.

Human intelligence amplification has a whole raft of other ethical issues associated with it, though. When it becomes more mainstream/available, it’s going to be a major political and social issue. What happens when not everyone can afford (or wants) to enhance themselves? Will we develop two classes: one of naturals and another of post-humans? Will employers require certain professions to use performance enhancement (say, for example, brain surgeons performing long surgeries)? It’s also going to raise the question of the ownership of our bodies. There are laws against taking recreational drugs, but for some, LSD helps with certain types of thought and could be seen as a form of intelligence manipulator (or an amplifier of certain facets of intelligence).

At the moment, at least, governments and enforcement agencies seem completely uninterested in actively stopping this, as evidenced by the prevalence of various performance-enhancing drugs throughout academia and other cognitively demanding professions. Obviously such use isn’t necessary, but for some the edge or boost it gives them is sufficient to outweigh the risks of off-prescription drug use.

That was my main beef with Kurzweil: he assumed that the intelligence beyond the horizon would be non-human. This, of course, raises the deeper philosophical question: what *is* “human”? But I’m sure you’ve mulled that one over plenty. Indeed, it would be interesting to hear what you have to say on the subject.

It simply strikes me that, the human ego being what it is, we would naturally be trying to improve our own intelligence and not worrying too much about creating AI as a standalone entity. Am I wrong? Is the human ego instead more interested in giving birth to a new species?

There is one other reason there is a focus on AI, and it’s related to human ego. Once humans have an advantage over others, such as a significant step up in intelligence, there’s a good chance some will use that power over others in a negative way. It’s the old adage, “power corrupts, and absolute power corrupts absolutely” – and it’s my opinion that this could lead to the same power/intelligence imbalance between an intelligence-amplified human and ordinary humans as that between humans and dogs.

Or more succinctly, humans have plenty of evolutionary baggage that we don’t necessarily want to amplify!

Yeah, I’ve thought about that a little as well. What I wonder (and this really is just pure speculation) is whether hyper-intelligent humans would have a greater faculty for reason and logic; and, if so, whether they would see the benefits of spreading the gift and/or using their gifts in more compassionate or benevolent ways…? Because, surely, that’s evolution as well.

Again, it would seem somehow counter-intuitive to improve intelligence without, for example, attempting to improve our capacity for processing emotions – our ‘emotional intelligence’, as it were.

I tend to think that many of the world’s problems today come down to a focus on what is scientifically possible rather than what is philosophically ‘good’ or ‘true’.

Basic agreement: improving intelligence is really just a catch-all for all sorts of cognitive improvements. Depending on where we focused neuron regeneration, we could expect to see improvements in different aspects of cognition.

Science and technology will inevitably advance, and while we can regulate technology to some extent, we can’t stop it. Telling people they can’t work on AI would merely push it underground, and one of the reasons I work on OpenCog is that it’s an open-source framework. It can be inspected by other experts for flaws, to ensure there isn’t some hidden time-bomb sitting in the code, and it allows an international approach which might otherwise be over-regulated within a university-specific project (this ignores the actual issue of friendly vs. unfriendly AI, since the latter doesn’t have to be intentionally designed – I may write more about that one day, but SIAI and its fellows have already written plenty on that).

To bring it back to AI, what place is there for emotions in an artificially constructed intelligence? Is there a capacity for compassion or fear, if they serve a function for processing input and generating responses? I am guessing that cognitive scientists have thought about this one…

That’s a BIG topic, so all I’ll say is that, in my opinion, emotions are just a particular mind state. Sure, this mind state is influenced by hormones and neurotransmitters, but so is the rest of the brain’s functioning.

My opinion is that emotions are just extremely strong aspects of a human’s mental world. If we gave an AI an extremely strong desire/goal to make humans happy, and the AI’s design had some kind of reward-based system (such that the AI was trying to maximise these rewards), and achieving these rewards caused other effects in the AI’s mind, such as a propensity to use positive phrases to describe the world, then is there any reason to believe that the AI isn’t happy itself?
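To make that reward-feedback idea a little more concrete, here’s a minimal, purely illustrative sketch in Python (my own toy example, not OpenCog’s actual design – the class and method names are hypothetical): reward earned from the “make humans happy” goal feeds back into the agent’s wider state, which in turn biases how it describes the world.

    # Toy sketch of the reward-feedback idea above -- hypothetical names,
    # not OpenCog's actual architecture.
    import random

    class RewardDrivenAgent:
        def __init__(self):
            self.reward = 0.0           # accumulated reward from the "make humans happy" goal
            self.positivity_bias = 0.0  # knock-on effect on the agent's own "mental" state

        def observe_human_feedback(self, happiness_delta):
            """Reward the agent whenever its actions make humans happier."""
            self.reward += happiness_delta
            # Achieving the goal feeds back into the rest of the agent's mind:
            # more reward nudges it towards describing the world positively.
            self.positivity_bias = min(1.0, self.positivity_bias + 0.1 * happiness_delta)

        def describe_world(self):
            """Word choice is shifted by the internal, happiness-like state."""
            positive = ["wonderful", "promising", "delightful"]
            neutral = ["unremarkable", "ordinary", "uncertain"]
            pool = positive if random.random() < self.positivity_bias else neutral
            return "The world seems %s today." % random.choice(pool)

    agent = RewardDrivenAgent()
    agent.observe_human_feedback(happiness_delta=5.0)
    print(agent.describe_world())

Whether a loop like that amounts to the AI actually being happy is, of course, exactly the question.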

Disclaimer: I’m no longer in the employ of the SIAI, so my thoughts are not endorsed by them at all.



2 comments

#1   James MM on 08.02.09 at 6:18 pm

“If we gave an AI an extremely strong desire/goal to make humans happy, and the AI’s design had some kind of reward-based system (such that the AI was trying to maximise these rewards), and achieving these rewards caused other effects in the AI’s mind, such as a propensity to use positive phrases to describe the world, then is there any reason to believe that the AI isn’t happy itself?”

I find this design-plan suggestion very attractive! Promise me you’ll work on this project 😉

#2   Joel on 08.03.09 at 10:29 am

There are some subtle issues with an AI’s desire being to “make humans happy”.

The first part of http://singinst.org/upload/CEV.html gives some idea of this.

But friendliness is definitely a concern. OpenCog is a long way off requiring active friendliness engineering efforts.
