Telling stories and priming the mind

As a kid, and even in the first few years of University, I used to have trouble understanding why things needed to be explained in detail. Essays were difficult because I’d take the point I was trying to make and think of it like a logic problem:

This interesting fact and this analysis, thus this is the point.

Except that made for very short essays that were nowhere near the word limit.


Part of that was because I was used to maths, physics, and computer science problems. These fields don’t tend to push students toward writing speculation about theory, at least not at the University I went to, and the material is usually pretty concrete at the level taught to undergraduates. Then midway through my study I switched tracks into Biology and flailed around for a while. They expected you to write more than a paragraph at a time, and to form coherent arguments in exams without the benefit of cut-and-paste! The horror. After a while I got over my initial shock, and actually got reasonably good at writing (my recent resurgence of posts on this blog is an attempt to practice those skills, which are getting rusty from too much code and analysis of experimental results). But even though I got pretty good at it, I still didn’t quite understand the point: it seemed a waste of time when I could construct the argument for my 2000-word essay in a single paragraph.

I got over that though. I saw that you also have to explain your reasoning, because any conclusion about the real world based on science usually rests on assumptions. It’s important not only to be explicit about these assumptions, but also to explain why they were the ones chosen.

Even more recently, I’ve worked on economic attention allocation (ECAN) in OpenCog, which controls the flow of attention within an artificial mind. Why is this necessary? Because the mind’s resources are finite – bounded by the computer’s available memory and processing power – and we obviously have a finite capacity for focussed thought too (well, yes, it’s finite because we only have so many neurons, but I’m referring more to the limited number of concepts and relations we can consciously work with at any one time). Much of how attention allocation works in OpenCog is based on importance diffusing between closely related concepts, but the concepts also need to be primed with importance in some way. If diffusion were the only process that moved attentional importance around, you’d eventually end up with a homogeneous soup where everything was equally important!
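
To make that concrete, here’s a minimal toy sketch of diffusion-only dynamics in Python. To be clear, this is not OpenCog’s actual ECAN code – the concept graph, the diffusion rate, and the importance values are all invented for illustration:

```python
# Toy model of importance diffusion -- NOT OpenCog's actual ECAN code.
# Each concept carries a short-term importance (STI) value, and every
# step some importance flows from more-important concepts to their
# less-important neighbours (a simple consensus-style update).

# Hypothetical concept graph: edges link related concepts.
graph = {
    "cat":     ["feline", "fur", "walking"],
    "feline":  ["cat", "fur"],
    "fur":     ["cat", "feline"],
    "walking": ["cat", "gravity"],
    "gravity": ["walking"],
}

def diffuse(sti, rate=0.1):
    """One diffusion step; total importance is conserved."""
    new = dict(sti)
    for node, neighbours in graph.items():
        for n in neighbours:
            new[node] += rate * (sti[n] - sti[node])
    return new

# Prime a single concept, then let diffusion run with no further input.
sti = {node: 0.0 for node in graph}
sti["cat"] = 100.0

for _ in range(200):
    sti = diffuse(sti)

# Every concept ends up with (almost) the same STI -- homogeneous soup.
print({node: round(value, 1) for node, value in sti.items()})
```

Run it and every node settles near the same value – which is exactly the problem that priming with external stimulus is there to solve.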

The way this happens in OpenCog is somewhat complicated, and not immediately relevant to the point, but one thing to note is that external stimulus excites contextually relevant information. What that means is: if OpenCog sees that a cat walked past… then knowledge about cats (they are furry, four-legged felines) and perhaps locomotion/kinematics is stimulated (if the cat keeps walking in that direction it will fall off that balcony due to gravity… okay, so maybe that’s more about prediction, but general knowledge about walking would need to be available in order to make that prediction). By stimulating this knowledge, it’s made more important in the system’s mind.

Beyond the cat example[1], the same goes for reading someone’s ideas. If someone just tells you “the sky is green, purple and sparkly” and leaves it at that, you’d probably just think they were taking some particularly effective hallucinogens… especially if you were inside, at a conference about refrigerators (if it were a meteorological conference, you might be willing to concede that they were right, because they’re experts in the field and have reason to believe as much). But if the refrigerator conference attendee prefaced their statement with “Y’know, I was just holidaying in Alaska, and I saw the Northern lights up there…”, then, assuming you vaguely knew about auroras changing the colour of the sky, you’d be able to believe the refrigeration expert. Or at least believe they weren’t in the habit of attending refrigeration events under the influence of mind-altering substances.
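
In the same toy-model spirit as the sketch above (same invented graph and diffusion rule, with an assumed stimulus magnitude), repeatedly stimulating whatever concept a percept matches keeps its surrounding context primed instead of letting everything flatten out:

```python
# Toy model of stimulation -- NOT OpenCog's actual ECAN code.
# Same invented graph and diffusion rule as the sketch above.
graph = {
    "cat":     ["feline", "fur", "walking"],
    "feline":  ["cat", "fur"],
    "fur":     ["cat", "feline"],
    "walking": ["cat", "gravity"],
    "gravity": ["walking"],
}

def diffuse(sti, rate=0.1):
    new = dict(sti)
    for node, neighbours in graph.items():
        for n in neighbours:
            new[node] += rate * (sti[n] - sti[node])
    return new

sti = {node: 1.0 for node in graph}

# "A cat walked past": each tick the percept stimulates 'cat', and
# diffusion spreads part of that boost to linked knowledge, priming
# feline/fur/walking above unrelated concepts.
for _ in range(5):
    sti["cat"] += 10.0  # external stimulus; magnitude is an assumption
    sti = diffuse(sti)

print(sorted(sti.items(), key=lambda kv: -kv[1]))
```

The prefix about Alaska works the same way: it stimulates your aurora-related knowledge first, so the strange claim about the sky arrives in an already-primed context.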

[1] It’s okay, the cat didn’t actually fall off the balcony.


