Entries Tagged 'opencog' ↓
March 30th, 2010 — ideas, mind, opencog
Note: this post does not condone racism or sexism; it is merely an explanation of how they might come about from embodied experience and probabilistic reasoning, as well as how we might protect against them.
Racism, sexism, and over-generalising about a class of people are among the more socially inappropriate things you can do. However, depending on how your logic system works, it’s not an entirely unreasonable method of thinking (the word “unreasonable” chosen purposefully) – and for any other subject, where the things being reasoned about are not humans, we wouldn’t particularly care. In fact, certain subjects like religion and spirituality are held to less strict standards of reasoning… there’s actually more of a defense in being racist/sexist than in being a practitioner of certain religions. Perhaps this is why the two occasionally go hand in hand.
So what do I actually mean by this? I’m going to use two methods of reasoning, deduction and induction, and then explain them in terms of uncertain truth. Nothing in this world is ultimately absolute, so it behooves us to include probabilistic uncertainty in any conclusion or relationship within our logic set.
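To make the idea of uncertain truth concrete, here is a minimal sketch in Python. It is not OpenCog’s actual PLN implementation – the formulas (a simple strength product for deduction, and a count-based confidence for induction) are illustrative assumptions of my own. The point it demonstrates is the safeguard hinted at above: an over-generalisation from a handful of examples can have high *strength* while still having low *confidence*, and tracking the two separately is one way to resist stereotyping.

```python
# Sketch of reasoning with uncertain truth values (strength, confidence).
# The formulas below are illustrative assumptions, not PLN's real rules.

from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # how true the relationship appears to be (0..1)
    confidence: float  # how much evidence backs it up (0..1)

def induce(positive: int, total: int, k: float = 10.0) -> TruthValue:
    """Generalise from observations: strength is the observed frequency;
    confidence grows with the amount of evidence, total / (total + k)."""
    return TruthValue(strength=positive / total,
                      confidence=total / (total + k))

def deduce(ab: TruthValue, bc: TruthValue) -> TruthValue:
    """Chain A->B and B->C into A->C, with strength and confidence
    both decaying (a crude stand-in for a proper deduction rule)."""
    return TruthValue(strength=ab.strength * bc.strength,
                      confidence=ab.confidence * bc.confidence)

# Three observations all supporting a generalisation: the strength is
# a perfect 1.0, but the confidence is only ~0.23 -- a stereotype
# formed from thin evidence, not knowledge.
seen = induce(positive=3, total=3)
print(seen)
```

A reasoner that acts only on high-strength conclusions will over-generalise; one that also demands high confidence will withhold judgement until it has seen enough evidence.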
Continue reading →
December 26th, 2009 — mind, opencog
Recently I’ve been contemplating a number of potential directions for creating a start-up based on application of OpenCog to a problem or field.
One evening, I had an interesting discussion in bed with my partner about licensing such technology. OpenCog is open source, which is my personal preference for the development of AGI, even if the jury is still out on whether that’s beneficial or needlessly reckless. As open source software, it means that if we sold expert systems based on OpenCog to end-users, we’d also have to provide the source. We could license it under different terms from SIAI, but that isn’t really necessary, since my current viewpoint is that the real value will be in the data within the system.
As an analogy, the biological design for the brain isn’t what makes us unique or what encompasses our knowledge and experience of the world. The pattern that’s formed during our childhood and education is what is really valuable, otherwise we’d make no distinction between twins and not particularly care if one twin passed away.
So, the digital mind’s pattern would be the important part that we’d license or sell. However dealing with a dynamic system makes that interesting, since the pattern that was sold/licensed would inevitably change. Learning software could well have the valuable part (the “identity” if you will) morph and change beyond the original deployment. In fact, the software could learn new things which makes the individual deployment “smarter” than the original or any other deployment.
In that case, who owns those improvements? Should we get the rights, since it was our software that altered itself, or do they belong to the license-holder, since the AI learnt the improvement in their environment?
I’m sure that with sufficiently rigorous legal work one could protect one view over the other, but I’m more interested in what seems right.
August 2nd, 2009 — ideas, mind, opencog
A friend of mine, JMM, knew that I’ve been funded in the past by SIAI to work on OpenCog, so he asked the following question:
“The Singularity Institute’s “main purpose” is meant to be to investigate whether a recursively improving intelligence can maintain “friendliness” towards human kind.”
Okay, but my standpoint is: Why does the recursively improving intelligence need to be non-human? It seems counter-intuitive to me to devolve this power to something outside of ourselves – and also a bit like we’re just trying vainly to become a kind of God, creating another type of being.
I think the main reason there is a focus on AI rather than on improvement of human intelligence is that it’s so damn hard to do experiments on people’s brains. It’s ethically difficult to justify various experiments, and it only gets harder as things become more regulated (and rightly so, for the most part). I think research into this will definitely continue, though. For myself, occasionally taking Modafinil enhances my productivity significantly (so long as I maintain focus on what I’m meant to be doing; it’s easy to get enthralled with something that interests me but isn’t related to my work).
But there’s no exclusion of human intelligence amplification from the singularity concept. If we create smarter humans, then this begets even smarter humans. Again we can’t really predict what those enhanced “humans” would do, because they are a significant step smarter than us.
Continue reading →
May 5th, 2009 — opencog
Kaj Sotala has been making his notes on PLN available on LJ as he reads through the Probabilistic Logic Networks book.
April 3rd, 2009 — mind, opencog
As a kid, and even in the first few years of University, I used to have trouble understanding why things needed to be explained in detail. Essays were difficult because I’d take the point I was trying to make and think of it like a logic problem:
This interesting fact and this analysis, thus this is the point.
Except that made for very short essays that were nowhere near the word limit.
Continue reading →
August 16th, 2008 — fun, life, meta, opencog
I’m not a particularly regular updater of this blog (too many things have been demanding my attention lately), but I thought I’d drop a note to say I’ll be off the radar for a week or so…
I’ll be attending Burning Man. I’m immensely looking forward to this, as it’s the first year in several that it’s actually been feasible for me to get there from New Zealand. I’ll be with an Australian theme camp called Straya that a friend of mine put me in contact with; he’ll also be there.
As well as Burning Man, I plan to hang out in Washington, D.C. with Ben to talk about our work on OpenCog. Then I’ll stay in San Francisco for 5-6 weeks (end of Sep till start of Nov) to attend the Singularity Summit, followed by the CogDev Workshop (an OpenCog coding jam, details to be finalised, but likely to be just after the Summit).
If you’ll be at any of these events and want to chat, drop me a line 🙂
April 24th, 2008 — mind, opencog
This is my hypothesis. The mind is not an object but a process: it takes information from the outside world and transforms it into a pattern. That pattern is not the mind; it’s just the way the mind sustains itself from moment to moment. The pattern still exists when you die, albeit temporarily until decay sets in, but we aren’t alive then because the mind is no longer receiving any new input.
Now, that doesn’t mean a consciousness can’t be revived: the pattern is still there, and if the process can be restarted then I suspect the consciousness would continue as if nothing happened. One moment about to die, the next revived. This is essentially what proponents of cryonics expect to occur.
Did I just contradict myself by saying that consciousness can be revived from the pattern, even though I claimed the pattern wasn’t the mind? I don’t believe so. The pattern is the painting; the mind is the painter. In humans, the painter is the physiological processes that generate the electrical signals shooting through our body and update the neuronal structure in our brain.
March 18th, 2008 — geek, opencog
SIAI and OpenCog are recruiting people for Google Summer of Code. GSoC is a program that offers student developers stipends to write code for various open source projects.
Want to work on AI/language-processing over the Northern Hemisphere summer? Here are some of the project ideas that have been proposed. Applications to Google open on the 24th of March.