On our next PNI call we are going to talk about the use of PNI for weak signal detection. I was thinking about this, and I realized that every time I hear those three words, a movie scene plays out in my mind. It goes like this.
The astute detective and the comic-relief sidekick are walking through a busy, noisy place. The detective stops and says:
“Do you hear that noise?”
“What, the generator/fan/radio/whatever?”
“No, not that. The faint clanking/ringing/pinging noise. It seems to be coming from over there.”
And then they find the thing that solves the mystery.
In that scene, what sets the detective apart from the sidekick is not that they hear a sound the sidekick doesn’t. Both people hear the same sounds. The difference is that the detective connects the sound to a purpose in a way the sidekick doesn’t. The detective has a deep sense of situational awareness, one that includes not only where they are and what is going on but why they are there and what they are trying to do.
If you think of all the famous detectives you know of, fictional or real, they are always distinguished by their ability to home in on signals — that is, to choose signals to pay attention to — based on their deep understanding of what they are listening for and why. That’s also why we use the symbol of a magnifying glass for a detective: it draws our gaze to some things by excluding other things. Knowing where to point the glass, and where not to point it, is the mark of a good detective.
In other words, a signal does not arise out of noise because it is louder than the noise. A signal arises out of noise because it matters. And we can only decide what matters if we understand our purpose.
In the same way, I think that the reason PNI helps people with weak signal detection is only partly because it spreads a broad net and collects information other methods miss. That is part of the picture, but it’s not enough. PNI helps people understand why they are listening. This helps them figure out what matters to them, and that helps them to choose signals out of the noise they hear.
This process doesn’t always happen at the start of PNI projects. Sometimes it only happens at the end of the sensemaking phase. Sometimes the body of stories people have already collected looks so different at the end of sensemaking that people want to plunge into it all over again with a new purpose in mind. The second time through the stories, they hear things they never heard before — all kinds of clanks and pings — because they understand so much better what matters to them and what they want to choose to pay attention to.
Sometimes I think people fail at weak signal detection because they think it’s all about finding out what is “out there,” when it’s actually more about finding out what is “in here,” in the reasons you’re looking in the first place. That’s one of the reasons I like to start PNI projects with story collection among the project planning group, because you can find out what you want to achieve by looking into the stories you tell yourself. In fact, if a project has as its goal weak signal detection, I’d say that some up-front story collection from the people driving the project is a must. It will help you make the choices that will distinguish signals from noise later on.
That’s just a simple thought to start with. I hope we can build on and improve the idea in our conversation next week.
Thanks Cynthia. This is getting interesting. Questions such as “what does it mean to be aware” and “what is the range of situations the machine should be aware of” pop up.
A memory popped up when I read your comment. A couple of years back we moved houses, and we now have a back entrance that cannot be seen from the house. So we installed a camera. I hooked up the camera to the NAS (network-attached storage), which comes with software that automatically starts recording on any movement or change. So when the back door opens (the camera is in a garage), lighting conditions change (visible and/or IR) and the recording starts.
When browsing the options I noticed the system can even watch over objects. A warning goes off when a registered object moves.
The question now is: is there any awareness in this system? Or is it just algorithms beavering away on data?
In my definition of awareness, I would say that I become aware when the system sends me a warning. The system itself doesn’t know it is a warning or why it sends that signal. That is all in the logic we humans put in. The machine doesn’t understand the logic. It just executes rules.
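To make the “it just executes rules” point concrete, here is a minimal sketch (hypothetical, not the actual NAS software) of the kind of logic described above: the system compares frames and emits a “warning” when a watched region changes, with no notion of what the change means. The thresholds and function names are my own illustrative assumptions.

```python
def frame_diff(prev, curr):
    """Mean absolute pixel difference between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(prev)

def check_rules(prev, curr, watched_region,
                motion_threshold=10, object_threshold=25):
    """Return the warnings a human-written rule set would emit.

    Frames are flat lists of pixel intensities; watched_region is a
    list of pixel indices covering the registered object.
    """
    warnings = []
    # Rule 1: any change (motion, lighting, IR) starts the recording.
    if frame_diff(prev, curr) > motion_threshold:
        warnings.append("start recording")
    # Rule 2: change inside the watched region means the object moved.
    region_prev = [prev[i] for i in watched_region]
    region_curr = [curr[i] for i in watched_region]
    if frame_diff(region_prev, region_curr) > object_threshold:
        warnings.append("registered object moved")
    return warnings
```

All the meaning — what counts as “movement”, which object matters, what a warning is for — lives in the thresholds and the region a human chose; the code itself only compares numbers.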
The next question is, of course, what would happen if we did this with A”I” (notice the “” around the I). I did my fair share of work on genetic algorithms and learning systems in the past, and what I remember is that (a) one always needed to provide a learning goal, and (b) after a while the learning stopped (or, worse, deteriorated: the “knowledge” started to behave chaotically because unlearning was not handled well enough).
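Point (a) can be seen directly in even a toy genetic algorithm (an illustrative sketch of my own, not the systems referred to above): nothing evolves until a human supplies an explicit fitness function. Point (b), the plateau, shows up once the population converges on that goal and improvement stops. All names and parameters here are assumptions for illustration.

```python
import random

def evolve(fitness, genome_length=10, pop_size=20, generations=50, seed=0):
    """Evolve bit-string genomes toward a human-supplied fitness goal."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # selection: keep the fittest half
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(genome_length)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_length)    # point mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# The learning goal is entirely ours: here, maximize the number of 1-bits.
best = evolve(fitness=sum)
```

If the environment changes (say, the goal flips to minimizing 1-bits), the evolved population carries no mechanism for noticing that on its own; a human has to swap in a new fitness function, which is exactly the rigidity described above.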
In short, I’m a fan of a definition of intelligence that includes “must be able to cope with changes in the environment/context”. That is where such algorithms still seem to break down. When I replace my bike, the NAS system will send an alarm forever. It will never learn that I have a new bike, while my kids learn that right away.
According to people smarter than me, this learning/change-adoption capability has to do with embodiment. Only systems with physical bodies can learn: i.e. biological systems. There is a very good book on what happens when people disembody: How We Became Posthuman https://www.amazon.de/How-Became-Posthuman-Cybernetics-Informatics/dp/0226321460. Spoiler alert: we didn’t :-).
My take on it is that a situation, like a signal, is a choice. If you ask ten people on a busy street to describe the situation they are in, they will choose different things to leave in and leave out of their descriptions of the situation.
We define situations partly based on what we want, but also partly based on what we can handle. Children (and machines) are not yet capable of choosing for themselves, so we choose for them. That’s why parents are always pointing things out to children; they are helping them choose what to include in (and what to exclude from) the situation they see.
So yes, I think it is possible to train a machine to be situationally aware, as long as we are willing to choose the situation we will train the machine to be aware of. By the time children can choose situations for themselves, the choice is out of our hands. At that point, the word “train” becomes problematic.
Well, personally I haven’t seen any example of that, so today my answer would be no.
One of the questions, though, is what kind of input the machine would get. Just a written narrative? Or lots of additional information on the context/situation? That might help a bit, but I still think the chance of machines ever learning that is close to zero, as the machine cannot make sense of it without contextual information it understands. The question is: can machines understand at all? I don’t think so. Artificial intelligence simply doesn’t exist under my definition of intelligence.
Is it possible to train a machine to be situationally aware in your opinion?