Why an AI-based singularity?

A friend of mine, JMM, knew that I’ve been funded in the past by SIAI to work on OpenCog, so he asked the following question:

“The Singularity Institute’s “main purpose” is meant to be to investigate whether a recursively improving intelligence can maintain “friendliness” towards humankind.”

Okay, but my standpoint is: Why does the recursively improving intelligence need to be non-human? It seems counter-intuitive to me to devolve this power to something outside of ourselves – and also a bit like we’re just trying vainly to become a kind of God, creating another type of being.

I think the main reason there is a focus on AI rather than on improving human intelligence is that it’s so damn hard to do experiments on people’s brains. Many experiments are ethically difficult to justify, and that only gets harder as things become more regulated (and rightly so, for the most part). I expect research into cognitive enhancement will continue regardless. For myself, occasionally taking Modafinil enhances my productivity significantly, so long as I maintain focus on what I’m meant to be doing; it’s easy to get enthralled by something that interests me but isn’t related to my work.

But human intelligence amplification isn’t excluded from the singularity concept. If we create smarter humans, they in turn can create even smarter humans. Again, we can’t really predict what those enhanced “humans” would do, because they’d be a significant step smarter than us.
