Note: this post does not condone racism or sexism. It is merely an explanation of how they might come about from embodied experience and probabilistic reasoning, and of how we might protect against them.
Racism, sexism, and over-generalising about a class of people are among the more socially inappropriate things you can do. However, depending on how your logic system works, it’s not an entirely unreasonable method of thinking (the word “unreasonable” chosen purposefully) – and for any other subject, where the things being reasoned about are not humans, we wouldn’t particularly care. In fact, certain subjects like religion and spirituality are held to less strict standards of reasoning; there’s actually more defense in being racist/sexist than in being a practitioner of certain religions. Perhaps this is why the two occasionally go hand in hand.
So what do I actually mean by this? I’m going to use two methods of reasoning, deduction and induction, and then explain them in terms of uncertain truth. Nothing in this world is ultimately absolute, so it behooves us to include probabilistic uncertainty in any conclusion or relationship within our logic set.
Deduction can be summarised as inferring the specific from the general, e.g.:
All cats have four legs. (1)
Aristotle is a cat.
Aristotle has four legs. (2)
Note that there are obviously exceptions to the starting generalisation (1) that cats have four legs. Poor Aristotle could have been in a car accident and had a leg amputated, making (2) a false conclusion. That’s where probability and uncertain truth come in…
Here we’ll use two parts to truth values (TVs): strength and confidence. The first, strength, is how often the relationship holds true; e.g. with a strength of 0.9 for generalisation (1), we’d expect approximately 90% of the cats we see to have four legs. Confidence, however, indicates how sure we are of the strength. The exact semantics of confidence are not important for the current discussion, but basically if the confidence is low then we’re not very sure about the number of legs cats have – perhaps because we haven’t seen many.
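As a rough illustration, here’s a minimal count-based sketch of such a two-part truth value. The class name, the evidence-count model, and the constant `K` are my own illustrative choices, not OpenCog’s actual API (its simple truth value uses a similar count-based confidence, but the details differ):

```python
from dataclasses import dataclass

# Illustrative "evidence horizon": how much evidence counts as "a lot".
K = 20.0

@dataclass
class TruthValue:
    positive: int  # observations supporting the relationship
    total: int     # all relevant observations

    @property
    def strength(self) -> float:
        # How often the relationship held within our experience.
        return self.positive / self.total if self.total else 0.0

    @property
    def confidence(self) -> float:
        # Grows toward 1 as evidence accumulates; stays low for small samples.
        return self.total / (self.total + K)

# 9 of the 10 cats we've seen had four legs:
tv = TruthValue(positive=9, total=10)
print(f"strength={tv.strength:.2f} confidence={tv.confidence:.2f}")
```

The point of the two parts is that strength answers “how often?” while confidence answers “how much evidence backs that number?”.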
Induction, on the other hand, is used to infer the general from the specific, e.g.:
Muffin is a cat that has four legs.
Mr Percival is a cat that has four legs.
Aristotle is a cat that has four legs.
All cats have four legs.
In other words, if, within all our experience, every cat we’ve seen has four legs, it’s sensible to assume all cats do. With such a small sample size of only 3 cats, and without any other background knowledge (like the fact that animals of a particular species generally all have the same number of legs), we’d not afford much confidence to the result of this induction.
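Under the same illustrative count-based model (the function and the constant `K` are hypothetical, not from any particular library), the effect of sample size on induction looks like this:

```python
K = 20.0  # illustrative "evidence horizon": how much data earns high confidence

def induce(observations):
    """Induce 'all X have four legs' from a list of booleans."""
    n = len(observations)
    strength = sum(observations) / n if n else 0.0
    confidence = n / (n + K)
    return strength, confidence

# Three cats, all four-legged: strength is 1.0 but confidence is tiny.
print(induce([True, True, True]))  # strength 1.0, confidence ≈ 0.13
# Three hundred cats: same strength, far higher confidence.
print(induce([True] * 300))        # strength 1.0, confidence ≈ 0.94
```

The strength of the conclusion is identical in both cases; only the weight of evidence behind it differs.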
So where do the sex/race/other-isms come in? Well, if you haven’t worked it out yet, it’s when you use too small a sample set to do induction, or when you adopt generalisations from others without using your own evidence. Or perhaps you do know lots of members of one sex who all fit your model; in that case the problem is using a single generalisation to deduce information about a new member of that sex whom you know nothing about. Essentially, sexism/racism comes from giving too large a bias to sex/race in determining an overall conclusion or understanding of the other person.
One thing should be clarified: the relationship between generalisation and sexism/racism. When does a generalisation turn into one of these categories? I’m not sure I really know. After all, if you made generalisations about the basic anatomy of males and females, or about something superficial like the colour of someone’s skin, there wouldn’t be a whole lot of controversy; I guess it’s when you use that information to fully define everyone who falls into a category. The thing I’m grappling with is that, in the absence of any extra information about someone, it’s sensible to use what you have to start trying to represent and understand someone new. In fact, if you didn’t, you’d be demonstrating that you don’t care enough about the other person to think about them at all!
And then we come to insurance brokers. They make huge generalisations. Law usually prevents them from using race as a factor, but for some reason they are still allowed to use your sex to determine the premium you’ll pay. They are also allowed to generalise on your age, among many other things. In fact, pretty much the entire insurance industry works through generalisations.
So what makes it okay for businesses to be blatantly sexist and ageist, among many other categorical assumptions? I don’t know.
Ahem… got a little side-tracked there. Sorry…
So when is this generalisation okay? If every interaction you have with people who fall into a category supports the conclusion, then your confidence will increase – and you’ll have no examples of any alternative. Few people actually have such a life history. I suspect many people who display racist or sexist tendencies get this strong bias from another human or information source. If it’s someone they highly respect, such as a parent, or someone who’s the social leader of a group, then they may adopt conclusions with high confidence without having the experience to reach those conclusions themselves.
How can we protect against broad conclusions about people?
- By making it exceedingly difficult to increase our confidence in very broad generalisations. Realise that, when making any large-scale conclusion, confidence must be limited – especially when reasoning about dynamic entities like humans.
- By favouring more specific categories when reasoning. This can be hard: when you don’t know someone, and all you have is their apparent race and sex, any generalisations you’ve made will be your sole source of information. You can argue that this shouldn’t be the case, but any pre-consolidated knowledge you have will come to the fore until you learn more about the person and develop a more complete representation of them.
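One way to sketch that fallback behaviour is to blend whatever categories apply, weighted by how confident we are in each, so a broad generalisation dominates only until direct experience of the person accumulates. This is a toy model of my own, not a claim about how brains or any particular AI system actually combine evidence:

```python
def blended_strength(evidence):
    """Confidence-weighted average of strengths from several categories.

    evidence: list of (strength, confidence) pairs, e.g. a broad
    generalisation plus whatever we know about the individual.
    """
    total_conf = sum(c for _, c in evidence)
    if total_conf == 0:
        return 0.5  # no information at all: maximum uncertainty
    return sum(s * c for s, c in evidence) / total_conf

# Only a broad, low-confidence generalisation available: it dominates.
print(blended_strength([(0.8, 0.2)]))             # ≈ 0.8
# Direct experience of the individual arrives and outweighs it.
print(blended_strength([(0.8, 0.2), (0.1, 0.7)]))  # ≈ 0.26
```

The broad category never disappears; it just gets diluted as more specific, higher-confidence evidence arrives.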
Why is it hard to change the behavior or views of people who are already displaying ism-ness?
- I guess that confidence in a belief generally erodes slowly unless one is presented with directly conflicting evidence. And sometimes conflicting evidence just makes people stop listening to you, because it radically challenges their world view.
- Perhaps it’s physically more effort/energy to update generalisations about large groups? If you have to adjust your confidence in a conclusion about a large group, there’ll be many, many connections between the neurons that potentially represent that group and the instances and associations within it. Since the brain is massively connected, larger groups probably require more effort to shift. Think of all those tens of thousands of synapses having to reconfigure themselves or die.
- Generalisations may be self-reinforcing. When someone doesn’t fit your generalisation, that may slightly lower your confidence in it, but not enough to completely remove it from your mind.
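The self-reinforcing effect also falls out of the illustrative count-based model from earlier: once a generalisation is backed by a large evidence count, a single counterexample barely moves it (the numbers and the constant `K` are illustrative only):

```python
K = 20.0  # illustrative "evidence horizon" constant, as before

def update(positive, total, observation):
    """Fold one new observation into count-based evidence."""
    return positive + (1 if observation else 0), total + 1

def tv(positive, total):
    """(strength, confidence) computed from evidence counts."""
    return positive / total, total / (total + K)

# A generalisation backed by 1000 consistent experiences...
pos, tot = 1000, 1000
print(tv(pos, tot))  # strength 1.0, confidence ≈ 0.98
# ...meets a single counterexample:
pos, tot = update(pos, tot, False)
print(tv(pos, tot))  # strength ≈ 0.999: the belief barely moves
```

On this model, dislodging an entrenched generalisation takes a long run of conflicting evidence, which matches the slow erosion described above.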
I hope I’ve raised some questions in the reader’s head. I don’t like sexism or racism, but at the same time I can (kind of) understand the logical reasoning that might lead to them, and I haven’t completely resolved how this would sit in a fully functional artificial intelligence. After all, I really don’t want OpenCog to turn into a racist bigot!
Footnotes:

1. I was going to link to an appropriate example here, but couldn’t easily think of one I could back up; feel free to suggest one.
2. If people think otherwise, I suggest they study enough quantum physics or Gödel’s incompleteness theorem.
3. This is based on the simple truth value type of OpenCog.
4. Not that accepting generalisations from others is a bad thing. Everyone does it, because we cannot experience everything directly.
5. Although if your sum experience of the universe were merely these three cats, then you’d be very confident – and not only that, you’d induct that everything has four legs (and is a cat).
6. Yeah yeah, I know, some people are transgendered. I’m keeping this simple, okay?