Entries Tagged ‘mind’

Your Brain, Copyright, and Lossy Compression

Last week, the New Zealand government passed a controversial copyright law related to file sharing. Part of the outrage stemmed from the use of urgency to pass the law without due consultation. If you watch any of the videos from that debate, it shines a light on just how clueless the majority of NZ’s politicians are; the notable exceptions are Clare Curran and Gareth Hughes. However, this isn’t a post about the politics! Instead I want to talk about the philosophy behind copyright, and how, as technology becomes an intrinsic part of our intelligence, it makes less and less sense to police the personal storage or dispersal of information.

For a good introduction to the topic, read this post on the “colour of bits”. The post outlines the conflicting viewpoints on information: computer scientists can’t formally differentiate between one copy of a copyrighted piece of data and another, yet the law pressures them to invent a distinction regardless (e.g. DRM). It also discusses how, if you perform a reversible mathematical transformation of the bits, you are fundamentally changing the data, but can restore it at any moment. If you can do that, is the transformed version copyrighted too? Given that, with the right transformation, you can turn any sequence of bytes into any other, there would effectively be only one copyright holder: the universe.
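
To make that last point concrete, here’s a minimal sketch in Python (the function and the sample strings are my own, purely illustrative): XOR-ing data with a key is a reversible transformation, and for any two equal-length byte sequences there is a key that turns one into the other.

```python
# A reversible transform: XOR-ing data with a key is lossless and invertible.
# With the right key, any byte sequence maps to any other of the same length,
# which is the "colour of bits" point above.

def xor_transform(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the corresponding byte of `key`."""
    return bytes(d ^ k for d, k in zip(data, key))

copyrighted = b"some copyrighted bits"
target = b"anything else at all!"  # same length as `copyrighted`

# The key that turns one into the other is just their XOR:
key = xor_transform(copyrighted, target)

assert xor_transform(copyrighted, key) == target   # forward
assert xor_transform(target, key) == copyrighted   # and back again
```
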
Continue reading →

Free will and chaotic brains

My personal take on free will is that it’s an illusion, as is consciousness.

The impression of free will is very believable though, as the brain probably exhibits chaotic dynamics[1]. From any given state the brain is in, a slight change, however minute, could give rise to a very different outcome later on. This means that for any model external to an individual brain (e.g. a brain simulation, if such a thing is possible), it is impossible to completely predict the behaviour of the brain… eventually the brain’s state will diverge from the model. The important point is that this can happen even if the brain is completely deterministic. So even if the rules governing our cognition are unwavering instructions, which I think is unlikely, a system outside of the brain is still unable to predict its behaviour[2].
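
As a toy illustration of this kind of divergence, here’s a minimal sketch using the logistic map as a stand-in for chaotic dynamics (an arbitrary choice made purely for illustration; it is in no way a model of the brain):

```python
# Sensitivity to initial conditions in the logistic map (r = 4, fully chaotic).
# Two states differing by one part in a billion diverge within a few dozen
# steps, showing why an external model with imperfect knowledge must
# eventually disagree with a deterministic chaotic system.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

brain, model = 0.3, 0.3 + 1e-9  # the model's estimate is off by a tiny amount

for step in range(60):
    brain, model = logistic(brain), logistic(model)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: brain={brain:.6f} model={model:.6f} "
              f"error={abs(brain - model):.2e}")
```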

In addition, I believe that consciousness is due to a recursive model that represents ourselves (à la Douglas Hofstadter’s book, I Am a Strange Loop). As this is a model of the epiphenomenon of our “self”, it also has incomplete knowledge of the rest of the brain. This gives our conscious minds the illusion of free will, as they can’t completely predict what we will do next. We think we are weighing up choices based on our knowledge and then making a “decision”, but that’s because we (our conscious minds) don’t have complete knowledge of the brain’s underlying hardware, which is what ultimately leads us to that choice. This lack of knowledge in our conscious minds is what we call “free will”.

[1] Or at least I’d expect it to; I don’t have references I’ve read over, but this looks promising.

[2] That is, assuming we exclude the almost impossible ideal of having perfect knowledge of the brain’s state, which would include all neurochemistry as well as structure.

This post is taken from a comment I made on Leo Parker Dirac’s post on “Free Will and Turing-completeness of the Brain”. It turns out to be a relatively succinct description of what the concept of free will actually is, so I thought I’d repost it here…

Don’t become a closed system

Another post from the draft pile that I finally polished into something that isn’t a series of half-formed sentences… enjoy ;-)

The human body as a closed system is not sustainable, as any closed system eventually reaches an equilibrium lacking order: entropy increases as the second law of thermodynamics asserts itself. A flux of energy and matter is required to maintain and build order. This is a central part of Ludwig von Bertalanffy’s paper on “general systems theory” and his theory of open systems:

“the conventional formulation of physics are, in principle, inapplicable to the living organism being open system having steady state. We may well suspect that many characteristics of living systems which are paradoxical in view of the laws of physics are a consequence of this fact.”

I think, though, that a similar law applies to intelligent systems. Without stimulus the mind is not alive, and eventually a lack of synaptic firing would lead to the weighting between neurons deteriorating. This would result in a reversion to the initial states that most artificial neural networks start in (they are usually initialised with random weights)… but perhaps this reversion of weights on neurons that no longer fire isn’t a bad thing. It may lead to them being re-purposed…
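
As a loose sketch of what I mean, here’s a toy decay-toward-noise rule in Python (the rule and all its numbers are invented for illustration; this is not an established model of synaptic decay):

```python
import random

# Toy model: unused connection weights decay back toward random noise.
# Weights reinforced by past use relax toward a fresh random state when
# they stop firing, freeing them to store new associations later.

random.seed(0)
trained = [1.0] * 8                                   # well-reinforced weights
noise = [random.uniform(-0.5, 0.5) for _ in trained]  # a fresh random target
decay = 0.1                # fraction of the gap closed per step of disuse

weights = trained[:]
for step in range(50):
    weights = [w + decay * (n - w) for w, n in zip(weights, noise)]

# After enough disuse, the weights are indistinguishable from a random init.
print([round(w, 3) for w in weights])
print([round(n, 3) for n in noise])
```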

As one ages, it can become more difficult to pick up new information: existing synaptic channels get reinforced, so the neuronal tributaries of our brains become less used, or require more active effort to use than taking the ready associations that come easily to our consciousness. While these tributaries may get reset to random weightings through disuse, this may allow them to later be stimulated and used to store new associations.

The NY Times earlier this year posted “How to train the aging brain”:

“There’s a place for information,” Dr. Taylor says. “We need to know stuff. But we need to move beyond that and challenge our perception of the world. If you always hang around with those you agree with and read things that agree with what you already know, you’re not going to wrestle with your established brain connections.”

Such stretching is exactly what scientists say best keeps a brain in tune: get out of the comfort zone to push and nourish your brain. Do anything from learning a foreign language to taking a different route to work.

These new scenarios make the brain utilise alternative neuronal branches:

“As adults we have these well-trodden paths in our synapses,” Dr. Taylor says. “We have to crack the cognitive egg and scramble it up. And if you learn something this way, when you think of it again you’ll have an overlay of complexity you didn’t have before — and help your brain keep developing as well.”

Not only that, but if you encourage more interesting events in your life, especially those that push and challenge you and your preconceptions, your perception of time expands. While in the moment it may seem like time flies, retrospectively it will seem like the past took longer. The brain collapses intervals of time where nothing much happens.

So if you don’t push your brain to learn new things, you’re cutting it off from having anything new to work with. It will also be easier for it to efficiently and compactly store your experiences in terms of what you already know. This shrinks your temporal impression of memory: retrospectively, it will seem as though the last 5 or 10 years were but a blink. If you keep using the same arguments and facing the same challenges, you will become optimised and specialised at those tasks, but at the cost of generality and breadth of understanding.
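
As a loose analogy, here’s what a general-purpose compressor does with a repetitive record versus a novel one (zlib standing in for memory, purely illustratively):

```python
import os
import zlib

# Loose analogy: familiar, repetitive experience compresses down to almost
# nothing, while novel experience resists compression. If retrospective time
# tracks the size of the stored record, repetitive years "shrink".

routine_year = b"wake, commute, work, commute, sleep. " * 365
novel_year = os.urandom(len(routine_year))  # incompressible stand-in for novelty

print(len(zlib.compress(routine_year)))  # tiny: the routine stores compactly
print(len(zlib.compress(novel_year)))    # near full size: novelty resists compression
```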

Sexism, Racism and the Ism of Reasoning

Note: this post is not to condone racism or sexism; it’s merely an explanation of how they might come about from embodied experience and probabilistic reasoning, as well as how we might protect against them.

Racism, sexism, and over-generalising about a class of people are among the more socially inappropriate things you can do. However, depending on how your logic system works, it’s not an entirely unreasonable method of thinking (the word “unreasonable” chosen purposefully); for any other subject, where the things being reasoned about are not humans, we wouldn’t particularly care. In fact, certain subjects like religion and spirituality are held to less strict standards of reasoning… there’s actually more defence in being racist/sexist than in being a practitioner of certain religions. Perhaps this is why these occasionally go hand in hand[1].

So what do I actually mean by this? I’m going to take two methods of reasoning, deduction and induction, and then explain them in terms of uncertain truth. Nothing in this world is ultimately absolute[2], so it behooves us to include probabilistic uncertainty in any conclusion or relationship within our logic set.
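
As a minimal sketch of induction with the uncertainty kept attached, here’s a Beta-Bernoulli update in Python (my choice of formalism, purely for illustration):

```python
# Induction with explicit uncertainty: a Beta-Bernoulli update.
# Observing a few members of a class and generalising is "reasonable"
# probabilistic induction; the guard against the -isms is keeping the
# uncertainty attached to the conclusion instead of rounding it up
# to certainty.

def beta_update(alpha: float, beta: float, successes: int, failures: int):
    """Posterior Beta parameters after new Bernoulli observations."""
    return alpha + successes, beta + failures

alpha, beta = 1.0, 1.0  # uniform prior: no opinion about the class yet
alpha, beta = beta_update(alpha, beta, successes=3, failures=0)  # 3 observations

mean = alpha / (alpha + beta)  # point estimate: 0.8
n = alpha + beta               # effective sample size: still tiny
print(f"estimate={mean:.2f} from an effective sample of {n:.0f}")
# A small sample gives a confident-looking mean but a wide posterior;
# acting on the mean alone is where over-generalisation creeps in.
```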

Continue reading →

Empathy in the machine

A draft post/idea from the archives that I thought it was about time I released. Funnily, this was written entirely before I started working on NetEmpathy; maybe it’s not as disconnected from AGI as I thought after all!

It is my belief that empathy is a prerequisite to consciousness.

I recently read Hofstadter’s I Am a Strange Loop, whose central theme is that recursive representations of self lead to our perception of consciousness. For some, the idea that our consciousness is somewhat of an illusion might be hard to swallow; but then, quite likely, so are all the other qualia. They seem real to us because our minds make them real. To me, it’s not a huge hurdle to believe. I find the idea that our minds are infinitely representing themselves via self-reflection beautiful in its simplicity. You can get some very strange things happening when things start self-reflecting.

For example, Gödel’s incompleteness theorem originally broke Principia Mathematica, and can do the same for any sufficiently expressive formal system when you force that system to reason about itself. One day I’ll commit to explaining this in a post, but people write entire books to make Gödel’s theorem and its consequences easy to understand!

And as an example of self-reflection and recursion being beautiful, I merely have to point to fractals, which exhibit self-similarity at arbitrary levels of recursion. Or perhaps the recursive and repeating hallucinations induced by psychedelics give us some clue about the recursive structures within the brain.

Hofstadter, later in the book, delves into slightly murky mystical waters, which I find quite entertaining and not without merit. He says that, through our modelling of the behaviour of others, we also start representing their consciousness. The eventual conclusion, explained in much greater philosophical detail in his book, is that our “consciousness” isn’t just the sum of what’s in our head, but a holistic total of ourselves and everyone’s representation of us in their heads.

I don’t think the Turing test will really be complete until a machine can model humans as individuals and make insightful comments on their motivations. OK, so that wouldn’t formally be the Turing test any more, but I think that, as a judgement of conscious intelligence, the artificial agent needs to at least be able to reflect the motivations of others and understand the representation of itself within others. Lots of recursive representations!

The development of consciousness within AI via empathy is what, in my opinion, will allow us to create friendly AI. Formal proofs won’t work, due to the computational irreducibility of complex systems. In an admittedly strained analogy, this is similar to trying to formally prove where a toy sailboat will end up after dropping it in a river upstream: trying to prove that it won’t get caught in an eddy before it reaches the ocean of friendliness (or perhaps, if you’re pessimistic, you view the eddy as the small space of possibilities for friendly AI). Sure, computers and silicon act deterministically (for the most part), but any useful intelligence will interact with an uncertain universe. It will also have to model humans out of necessity, as humans are among the primary agents on Earth that it will need to interact with… perhaps not if it becomes all-powerful, but certainly initially. By modelling humans, it’s effectively empathising with our motivations and causing parts of our consciousness to be represented inside it[1].

Given that a machine could increase its computational capacity exponentially via Moore’s law (not to mention via potentially large investment and subsequently rapid datacenter expansion), it could eventually model many more individuals than any one human does. So if the AI had a large number of simulated human minds, which would, if accurately modelled, probably baulk at killing the original, then any actions the AI performed would likely benefit the largest number of individuals.

Or perhaps the AI would become neurotic trying to satisfy the desires and wants of conflicting opinions.

In some ways this is similar to Eliezer’s Coherent Extrapolated Volition (as I remember it, at least… it was a long time ago that I read it; I should do so again to see how/if it fits with what I’ve said here).

[1] People might claim that this won’t be an issue because digital minds designed from scratch will be able to box up individual representations to prevent a bleed-through of beliefs. Unfortunately, I don’t think this is a tractable design for AI, even if it were desirable. AI is about efficiency of computation and representation, so these concepts and beliefs will blend. Besides, conceptual blending is quite likely a strong source of new ideas and hypotheses in the human brain.

Licensing dynamic systems and AI

Recently I’ve been contemplating a number of potential directions for creating a start-up based on applying OpenCog to a particular problem or field.

One evening, I had an interesting discussion in bed with my partner about licensing such technology. OpenCog is open source, which is my personal preference for the development of AGI, even if the jury is still out on whether that’s beneficial or needlessly reckless. As open source software, it means that if we sold expert systems based on OpenCog to end-users, we’d have to also provide the source. We could license it under different terms from SIAI, but that isn’t really necessary, since my current viewpoint is that the real value will be in the data within the system.

As an analogy, the biological design of the brain isn’t what makes us unique, or what encompasses our knowledge and experience of the world. The pattern that forms during our childhood and education is what is really valuable; otherwise we’d make no distinction between twins, and wouldn’t particularly care if one twin passed away.

So the digital mind’s pattern would be the important part that we’d license or sell. However, dealing with a dynamic system makes that interesting, since the pattern that was sold/licensed would inevitably change. Learning software could well have its valuable part (its “identity”, if you will) morph and change beyond the original deployment. In fact, the software could learn new things, making that individual deployment “smarter” than the original or any other deployment.
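
As a toy illustration of the problem (all the data and names here are invented), two deployments that start from the same shipped pattern diverge as each learns from its own environment:

```python
import copy

# Toy illustration: two deployments start from the same shipped "pattern"
# (here, a dict of learned associations) and diverge as each learns from
# its own environment. The shipped artifact and the deployed artifact are
# no longer the same thing, which is the licensing puzzle below.

shipped = {"greet": 0.9, "invoice": 0.1}  # the pattern we licensed out

deployment_a = copy.deepcopy(shipped)
deployment_b = copy.deepcopy(shipped)

deployment_a["invoice"] = 0.8     # learned on customer A's data
deployment_b["complaints"] = 0.7  # learned on customer B's data

# Value created *after* the sale, present in neither the original nor in B:
improvements_a = {k: v for k, v in deployment_a.items() if shipped.get(k) != v}
print(improvements_a)  # {'invoice': 0.8}
```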

In that case, who owns those improvements? Should we get the rights, since it was our software that altered itself, or do they belong to the license-holder, since the AI learnt the improvement in their environment?

I’m sure that with sufficiently rigorous legal work one could protect one view over the other, but I’m more interested in what seems right.

Connectedness and gift giving

It’s Christmas time, and I enjoy getting gifts for people even though I’m not religious. I’ve also been enjoying getting rid of lots of stuff I don’t use/need. This not only makes me feel like I’m clearing out mental space (I have Tyler Durden’s words echoing in my head “The things you own, end up owning you”) but also makes me feel good that other people are getting something that they want/need. Especially since I’m either giving the stuff away or selling it cheaply on TradeMe.

I googled “It’s better to give than receive”, since that’s the quote that’s been automatically ingrained into my psyche. Turns out it’s from the Bible, Acts 20:35 (King James Version):

“I have showed you all things, how that so labouring ye ought to support the weak, and to remember the words of the Lord Jesus, how he said: ‘It is more blessed to give than to receive.’”

(I guess “more blessed” translates to “better” these days.)

Never mind that giving makes us psychologically happier than spending money on ourselves. It also affects us physiologically, by releasing not only the good old reward molecule dopamine, but also the love neurotransmitter oxytocin (unfortunately the mention of oxytocin isn’t in the abstract, but it’s discussed here).

There is another aspect of gift giving I want to mention, which I haven’t got any references for, but which is based on my intuition about the mechanics of intelligence. When we give someone a gift, we usually have a reason for it, and when we choose a gift for them we tend to think “Will this person like it?”. That act means we have to emulate, model, and predict what they want, and by activation it reinforces their pattern within our mind. Does this inadvertently get us thinking of other aspects of their personality, and of what other people might like too? I’ve discussed how part of love is the strong bonding of patterns: one’s self in another mind, their mind emulated in the self. This twinning makes us feel connected to the other person. To me, it makes sense that going through this process while selecting gifts for other people will inevitably make one feel more connected in general. And, as mentioned above, the neurotransmitter associated with love is also released during giving.

Maybe this is why the gifting economy of Kiwiburn (and the American equivalent) is such a central part of the festivals and contributes to them being such enjoyable experiences.

“We make a living by what we get, but we make a life by what we give.”
– Winston Churchill

Things I have learned

I’m not old, a mere 27 years in fact, but there are a few things I’ve come to discover. Things that it’d be nice to have been taught in school, but that instead I’ve discovered haphazardly:

  1. The first step to doing anything is believing you can – One thing I’ve noticed is that some people sabotage themselves before they even try. They just believe that they can’t do something, or that it’s too hard. Some people have told me I’m smart, whereas mostly I think I’m pretty average. What I do have, however, is an absence of self-imposed restriction. If I want to do something, the only restriction is time. This is important when you’re working on something like a thinking machine.
  2. You can’t do everything – You’ll notice the caveat above about time being the only restriction. When I was a kid, I wanted to read the entirety of Encyclopaedia Britannica… I got to about “Aardvark” before I realised it was mostly dull (no offence to the long-nosed beasts!). I’m still struggling with this one; I have so many things I’d like to do that I frequently wonder if I’m overcommitted, and whether the more optimal path would be to obsessively focus on one thing and one thing only… but then I realise that if I tried that I’d get bored. I’m too curious, and have grown up in the age of variegated knowledge at our fingertips.
  3. Emotions are cues – They give you an indication of something going on internally, something that might not be immediately expressible verbally; if it’s a negative emotion, it probably indicates something isn’t right. And by “isn’t right” I don’t mean it’s necessarily to do with the external world; it could be an indication that there’s something inside that hasn’t been resolved. However, don’t make them the focus… since everyone likes analogies, and I’m particularly good at straining mine: think of emotions like the gauges on your car for temperature, fuel, etc. They are important, so that the engine doesn’t explode or you don’t run out of fuel, but if you spend the whole time focusing on the gauges, you’ll miss the scenery. Anger specifically, I feel, can be boiled down to “when something or someone doesn’t act the way you expect/want them to” – every time I’ve been angry, it’s because my expectations don’t match reality… so mostly it’s about having a world view that doesn’t quite match reality (or the consensus of reality, as described below).
  4. Nothing is objective – You can argue whatever view you like, but most of us reach consensus about a specific interpretation of physical reality because of shared modalities and the wet-ware for interpreting them. That doesn’t mean you’re right if you subscribe to the current scientific consensus: humanity collectively believed silly things like the world being flat, or the Earth being the centre of the universe. Knowledge and truth are dynamic, and they’ll continue to be so. Keep an open mind. And because I like loops, this links back to point 1 about believing you can… since nothing is objective, you can believe you can do anything you like[1].
  5. I’m sure there are others, but those are the ones that came to me just now. What do you wish they’d mentioned to you when you were in school?

    [1] If you believe you can fly, you can (buy a plane ticket, or go sky-diving). But jumping off a building is just dumb, so don’t do that, OK?

Crime and punishment: existential style

Following on from others’ recent discussions of crime and punishment, I offer these completely unhelpful transhumanist thoughts:

  • A mind can change until it is completely different from the one that committed a crime in the past. So is it fair to punish someone in the present, when their current mind state bears as much similarity to the mind that committed the crime as it does to a completely separate person?
  • A body replaces most of its cells over the course of many years. So it’s not really someone’s body we convict, but their structure. What happens when people can upload? Supposing we can represent that structure digitally or otherwise (in a form of easily copyable data), what happens to the replicates of that individual? Are they convicted as well? Does it become illegal for other people to harbour that sequence of data, even if it’s in stasis and getting no processor time? (Which is essentially the same as being dead, but with the difference of being revivable at a moment’s notice.)
  • Continuing from the assumption that it’s the structure of a criminal we want to punish/remove from society: since a baby is essentially derived from a fair proportion of each parent’s structure, if the parent commits a crime, shouldn’t the child also be considered a criminal? Even though the child takes some of its structure from the other, hopefully non-criminal, parent, the first point seems to imply that exact similarity isn’t required.

(Note: most of these thoughts are me just musing on a theoretical level that is not at all pragmatic. I don’t actually believe children of criminals are also guilty.)

Why an AI-based singularity?

A friend of mine, JMM, knew that I’ve been funded in the past by SIAI to work on OpenCog, so he asked the following question:

“The Singularity Institute’s ‘main purpose’ is meant to be to investigate whether a recursively improving intelligence can maintain ‘friendliness’ towards humankind.”

Okay, but my standpoint is: Why does the recursively improving intelligence need to be non-human? It seems counter-intuitive to me to devolve this power to something outside of ourselves – and also a bit like we’re just trying vainly to become a kind of God, creating another type of being.

I think the main reason there is a focus on AI, rather than on improvement of human intelligence, is that it’s so damn hard to do experiments on people’s brains. It’s ethically difficult to justify various experiments, and it only gets harder as things become more regulated (and rightfully so, for the most part). I think there will definitely be continuing research into this area though. For myself, occasionally taking Modafinil enhances my productivity significantly (so long as I maintain focus on what I’m meant to be doing; it’s easy to get enthralled by something that interests me but isn’t related to my work).

But there’s no exclusion of human intelligence amplification from the singularity concept. If we create smarter humans, then this begets even smarter humans. Again we can’t really predict what those enhanced “humans” would do, because they are a significant step smarter than us.

Continue reading →