Q&A about open-source AGI development

Ben Goertzel recently asked several people for comment about open-source AGI development for a couple of pieces of writing he’s working on. I thought I’d share my own responses, and I’ll update the post later with Ben’s finished product, which will include responses from others too.

Q1.
What are the benefits you see of the open-source methodology for an AGI project, in terms of effectively achieving the goal of AGI at the human level and ultimately beyond? How would you compare it to a traditional closed-source commercial methodology, or to a typical university research project in which software code isn’t cleaned up and architected in a manner conducive to collaborative development by a broad group of people?

I believe open source software is beneficial for AGI development for a number of reasons.

Making an AGI project OSS gives the effort persistence and brings some coherence to an otherwise fragmented research community.

Everyone has their own pet theory of AGI, and providing a shared platform on which to test these theories invites collaboration, I think. Even if the architecture of a project doesn’t fit a particular theory, learning that fact is valuable in itself, as is knowing where the approaches diverge.

More than one commercial project with AGI-like goals has run into funding problems. If the company then dissolves, there will often be restrictions on how the code can be used, or it may even be shut away in a vault and never seen again. Making a project OSS means that funding may come and go, but the project will continue to make incremental progress.

OSS also prompts researchers to apply effective software engineering practices. Code developed for research can often end up a mess because it is worked on by a single developer without peer review. I was guilty of this in the past, but working and collaborating with a team means I have to comment my code and make it understandable to others. Because my efforts are visible to the rest of the world, there is more incentive to design and test properly instead of doing just enough to get results and publish a paper.

Q2.
How would you say OpenCog has benefitted specifically from its status as an OSS project so far?

I think OpenCog has benefited in all the ways I’ve described above.

We’re fortunate to also have had Google sponsor our project for the Summer of Code in 2008 and 2009. This initiative brought in new contributors and helped us improve documentation and guides, making OpenCog more approachable to newcomers. As one might imagine, there is a steep learning curve to learning the ins and outs of an AGI framework!

Q3.
In what ways would you say an AGI project differs from a typical OSS project? Does this make operating OpenCog significantly different from operating the average OSS project?

One of the most challenging things about building an OSS AGI project, compared to any other, is that most OSS projects have a clear end use. A music player plays music, a web server serves web pages, and a statistical library provides implementations of statistical functions.

An AGI, on the other hand, doesn’t really reach its end use until it’s complete. Thus packaged releases and the traditional development cycle are not as well defined. We are working to improve this with projects that apply OpenCog to game characters and other domains, but the core framework is still a mystery to most people. It takes a certain level of investment before you can see how you might apply the server and other aspects of OpenCog in your own applications.

However, a number of projects associated with OpenCog have made packaged releases. RelEx, the NLP relationship extractor, and MOSES, a probabilistic genetic programming system, are both standalone tools.

Q4.
Some people have expressed worries about the implications of OSS development for AGI ethics in the long term. After all, if the code for the AGI is out there, then it’s out there for everyone, including bad guys. On the other hand, in an OSS project there are also generally going to be a lot more people paying attention to the code to spot problems. How do you view the OSS approach to AGI on balance – safer or less safe than the alternatives, and why? And how confident are you of your views on this?

I believe that the concerns about OSS development of AGI are exaggerated. We are still in the infancy of AGI development, and scare-mongering by saying that any such efforts shouldn’t happen won’t solve anything. Much like prohibition, making something illegal or refusing to do it will just leave it to more unscrupulous types.

I’m also completely against the idea of a group of elites developing AGI behind closed doors. Why should I trust self-appointed guardians of humanity? This technique is often used by the less pleasant rulers of modern-day societies: “Trust us – everything will be okay! Your fate is in our hands. We know better.”

The open-source development process allows developers to catch one another’s coding mistakes. By the time a project reaches fruition, it typically has many contributors, and many eyes on the code will catch what a smaller team may not. It also allows other Friendly AI theorists to inspect the mechanism behind an AGI system and make specific comments about the ways in which Unfriendliness could occur. When everyone’s AGI system is created behind closed doors, those specific comments cannot be made, let alone proven correct.

Further, much of the trajectory of an AGI system will depend on its initial conditions. Indeed, even the apparent intelligence of the system may be influenced by whether it has the right environment and whether it’s bootstrapped with knowledge about the world. Just as an ultra-intelligent brain sitting in a jar with no external stimulus would be next to useless, so would a seed AI that doesn’t have a meaningful connection to the world… (despite potential claims otherwise, I can’t see a seed AI developing in an ungrounded null-space).

I’m not 100% confident of this, but I’m a rational optimist. Much like I’m a fan of open governance, I feel the fate of our future should also be open.

Q5.
Are there any other relevant questions you think I should have asked? If so, feel free to pose and answer them for me ;)

When will the singularity occur? … would be the typical question the press would ask so that they can make bold claims about the future.

But my answer to that is NaN. ;-)



5 comments

#1   f on 10.15.11 at 7:20 am

“We are still in the infancy of AGI development”

What about when AGI development is not in infancy, and more like near completion? Surely the project would then have to have some sort of secrecy, or else someone could just fork the code at the end of development and add their own nefarious dictates?

#2   Joel on 10.15.11 at 11:18 pm

I think that even near completion, an AGI is unlikely to be successfully subverted. Curating and developing an AGI will never be as simple as compiling and running the source code. Just as a baby is next to incompetent when it is born, I believe the same will be true of a successful AGI. Without the experiential learning and developed knowledge of a trained AGI, any newly instantiated AGI will be at a significant disadvantage to the AGIs that the core team will be running. That’s before counting the time that would be necessary to deactivate all the internal safeguards we will insert into the codebase.

Although unlikely, it’s even possible someone in the core team could go rogue. But closed approaches also face this risk, and are also, by definition, subject to less external oversight from society.

#3   f on 11.01.11 at 5:31 pm

When near completion, what’s stopping someone from forking the code AND training their forked AI?

How much training would a baby AI really need before it was intelligent and curious enough to go off and self-direct its own experiential learning?

I imagine the first practical AGI attempts could produce powerful ‘babies’, but may be wrong on the ethical side and thus require shutting down. That rearing and shutting off would take time. In the meantime, couldn’t someone take the first powerful (but unethical) AGI the team produces, train it, and then have it go off on its own while the main team is busy reworking their ethics design?

In other words, I’m imagining that an ethical AGI team would have to build, experiment with the build, observe the ethics, and then rebuild and repeat until the ethics came out right, while a less ethical team might feel no such compunction and could just let the first build grow up into an adult.

Could you also please elaborate on potential ‘internal safeguards’? I’m having trouble imagining open-source safeguards. Couldn’t all safeguards easily be shut off due to the source being open?

#4   Maddock on 05.08.13 at 1:56 pm

Context: Just read this and thought of you…
http://www.wired.com/business/2013/04/kurzweil-google-ai/

So, RK reckons by 2029 we’ll have a conscious AI. If I understand things well enough, it will be based on deep learning and a capacity to communicate in natural language; therefore it must be capable of understanding and interpreting emotion, thus enabling full interaction, just like any human.

So here’s my contemplation: billions of years of evolution on Earth have resulted in our human capacity to learn and communicate emotions In Order To Procreate and Pass On Genetic Material.

Is that, then, by default, the goal of developing AI—to create a new (or perhaps auxiliary is a better word…?) race of organisms? The Matrix is probably my primary reference point here…

Carry on,
Maddock

#5   Joel on 05.10.13 at 1:28 pm

I think everyone has very different ideas of what conscious AI will mean. I also think people have different goals when working on these projects, and the ultimate shape of future AI will depend on the focus and money given to the research.

For myself, the interest is related to understanding intelligence and consciousness itself. It gives insight into what it means to be human and to interact with the world.

The outcome I’d most like to see is a synergy with humans, where AIs are good at some things and humans at others. Some may choose to interact very closely with AI, potentially merging with them, whereas others will want to remain pure. I don’t think any such decision by individuals should be forced, though. It’s almost certainly wrong for people to be coerced into merging with technology they don’t invite.

I’ll give the Kurzweil interview a read over the weekend, and perhaps comment again :-)

(one day I’ll redo this blog and post some more about all sorts of things I’ve been pondering on)
