Trust me!

I’m on the boat from Helsinki to Stockholm, eating breakfast. Just returned from a weekend in Helsinki where I, among other things, attended the Aula Exposure book release brunch. The book, which I wrote a piece for, has come out both interesting and beautiful. Shoutouts to Alex, Jyri, Marko and the book designers for such great work!

I spent last night in the cheapest possible cabin with a drunk trucker from Russia and two gay guys from France, also drunk. Scary — the minute these strangers entered the cabin, I knew I would sort of have to watch out for them all night. I tried to be rational, shut my eyes and sleep, but my gut feeling kept saying “You don’t trust these guys”. And so it happened that my relentless thoughts once more returned to the subject of trust and computers.

Kids lost in a car game on the boat

To me, it’s obvious that one of the trickiest issues in interface design today is an issue of trust. The average user suffers from a lack of trust in computers and the software running on them — and this, in a sense, keeps us from designing smarter programs and interfaces.
By “smarter” I mean programs that can drastically reduce human workload by conducting some kind of behind-the-scenes reasoning — I’m talking about ways of improving the way the computers work with us rather than the way we work with computers (but clearly the former imposes radical changes to the latter).

Of course, this widespread lack of trust has many reasons. A big one is that computers still are dumb and buggy. Another is that we’ve seen so many “smart” UIs and agents that turned out to be stupid and annoying. Remember the “smart menus” from Windows ME? You found yourself desperately fumbling for the disable button after watching this numskulled thing constantly decide for you which menu items were important and which were not.

Yet, we are inevitably moving into a future where we will be forced to put a lot of trust in computers in order to get by. We are seeing evidence of this already today.
For instance, I trust my mail client to take care of junk mail for me. I have only a vague idea of how it does it — probably some combination of boolean logic and neural nets — but I’ve learned to trust it nevertheless.
In essence, what I trust is something — call it an agent or entity — capable of some kind of reasoning. Yes, I know that real people programmed its behavior at some point, and that it’s actually them I trust and them I ought to blame if something goes wrong.
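To make the kind of behind-the-scenes reasoning I mean a bit more concrete: here is a toy sketch of one plausible technique for junk-mail filtering, a naive Bayes word-frequency classifier. This is just an illustration of the general idea, not the actual algorithm any particular mail client uses, and the training messages are made up.

```python
# Toy naive Bayes junk-mail filter -- a sketch of one plausible
# technique, not what any real mail client necessarily does.
from collections import Counter
import math

# Tiny, made-up training sets.
spam_msgs = ["cheap pills buy now", "buy cheap watches now"]
ham_msgs = ["boat breakfast in helsinki", "book release brunch in helsinki"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam_msgs), word_counts(ham_msgs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    s = log_likelihood(msg, spam_counts, sum(spam_counts.values()))
    h = log_likelihood(msg, ham_counts, sum(ham_counts.values()))
    return s > h

print(is_spam("buy cheap pills"))     # -> True
print(is_spam("brunch in helsinki"))  # -> False
```

The point of the sketch is exactly the trust problem: the filter commits to a verdict based on statistics the user never sees, and when it misfiles something there is no one inside it to blame.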

But, in everyday life that’s not what happens — in everyday life you will blame the damn program when it filters out the wrong mail.
I mean, imagine calling up your secretary’s mom to complain just because your secretary happens to throw important stuff in the bin! And even if you do complain, that won’t get rid of your secretary.
Besides, it probably turns out the programmers stole the AI code from some obscure open source project anyway. And the guy who originally wrote that code died.

Silicon Slaves

I, and millions of Hotmail users, have, without even being fully conscious of the implications, engaged in a trust relationship with a computer program, at least on the level of everyday life.
In fact, the moment we make the computer do work of this kind for us — reasoning if you will — we simply have to trust it, or at least relate to it as if we trusted it.
Annoyingly enough, since it’s just a computer, we can’t blame it when it does something wrong, and this gives rise to a lot of frustration. Actually, it makes people furious.
On the one hand we have this entity which happily takes over hours of our work without a complaint; on the other hand, this work is done exclusively on the premise that we’re not permitted to complain if things go wrong. And we won’t get an apology, nor a promise that the same thing won’t happen again. Actually, we can be quite sure it will.

And yet, we humans seem to be able to establish a sort of pseudo-trust in all kinds of devices and processes – and the need for an ability like this seems to grow by the day. Just think about everything from games to search engines to smart book recommendations to the Segway to electronic pets (for your kids) to modern jet fighters to autonomous trains. Everywhere, things are starting to use all kinds of different models for autonomous reasoning, and we just have to sort of go with the flow and trust these devices, even though we don’t know how they think and even though we can’t expect the devices themselves to explain this to us.

Where will this take us? I’d still say someplace good. I think this development is necessary and inevitable, and that it will eventually bring harmony, possibly virtuosity, or even a divine feeling of greater symbiosis to computer use.
But we have a long way to go, and before we get there — much, much more frustration.