Posts tagged ‘software’

Aperture

October 19th, 2005

Have you seen it? One of the best interfaces I’ve ever seen. Check out the QuickTour movies for some true Apple design excellence. That said, some social software aspects (no, photo albums don’t count) would have been cool — and dragging keywords to pictures in order to tag them (as in iPhoto) is still a hassle…but wow, what an interface! I’m getting 1995 SGI vibes here (in a good way, that is! You know, that everyone-else-is-still-in-the-stone-age kind of feeling).

No more drill-downs!

April 1st, 2005

Hierarchical filesystems are dying a slow death. While we wait for Apple to get its act together, we can use Quicksilver (my machine’s own little Google) to avoid endless clicking and drill-downs in open/save dialog boxes. If you use this script, you can get rid of the clicking altogether!

Annoyingly hard to aggregate!

March 23rd, 2005

Just looking at how I could integrate my last.fm recently played tracks feed on my site, and it turns out I’ll have to parse RSS (with some RDF thrown in), transform it to HTML, cache it on my server and then include that snippet on my page. I fully understand why few bloggers do this — it’s simply too complicated!
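For the record, the parse-and-transform step is less mysterious than it sounds. Here is a minimal sketch in Python; the sample feed and all names are made up for illustration, and a real last.fm feed would be fetched over HTTP (and may mix in RDF namespaces, as noted above):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for a last.fm-style "recently played" feed.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Recently played</title>
    <item><title>Artist - Track One</title><link>http://example.com/1</link></item>
    <item><title>Artist - Track Two</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def rss_to_html(rss_text):
    """Parse an RSS 2.0 snippet and render the items as an HTML list."""
    root = ET.fromstring(rss_text)
    lines = ["<ul>"]
    for item in root.findall("./channel/item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="#")
        lines.append('  <li><a href="%s">%s</a></li>' % (link, title))
    lines.append("</ul>")
    return "\n".join(lines)

html = rss_to_html(SAMPLE_RSS)
print(html)
```

The server-side cron job would run something like this periodically and write the result to a file that the page then includes.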

The easiest way to include stuff on a page is probably by using JavaScript, but few sites provide such feeds, and even if they do, it’s a far from optimal technique, as your page will build up gradually while content loads (and it won’t be very accessible either…). And now rumor has it that IE7 will prohibit all kinds of cross-domain scripting, which would effectively kill much of the really interesting content syndication taking place on the web right now…

So what we need, as my friend Adam argues, is a simple standard for seamlessly including stuff on a web page. I propose XML Inclusions, with standard HTTP headers for smart content caching (e.g. “304 Not Modified”). Some smart, transparent proxy system would still be needed for high-traffic sites — that, of course, is the harder problem.

With this, I could include whichever feed I wanted with one line of code. Just think of how many millions of people would start aggregating stuff…
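To make the caching idea concrete, here is a toy model of the “304 Not Modified” handshake in Python (the “server” and “client” below are illustrative stand-ins, not a real HTTP stack):

```python
# A toy model of conditional GET: the "server" honours If-Modified-Since,
# and the "client" keeps a local cache keyed by URL.
RESOURCE = {"body": "<ul><li>included feed</li></ul>", "mtime": 100}

def server(url, if_modified_since=None):
    """Return (status, body). 304 means: your cached copy is still good."""
    if if_modified_since is not None and RESOURCE["mtime"] <= if_modified_since:
        return 304, None
    return 200, RESOURCE["body"]

cache = {}  # url -> (mtime_seen, body)

def fetch(url):
    """Fetch url, reusing the cache when the server answers 304 Not Modified."""
    cached = cache.get(url)
    since = cached[0] if cached else None
    status, body = server(url, if_modified_since=since)
    if status == 304:
        return cached[1]  # served from cache, nothing re-downloaded
    cache[url] = (RESOURCE["mtime"], body)
    return body

first = fetch("http://example.com/feed")   # 200: full download, fills the cache
second = fetch("http://example.com/feed")  # 304: served straight from the cache
```

The point is that after the first request, the including page costs the feed’s server almost nothing, which is exactly what makes one-line aggregation plausible at scale.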

Trust me!

October 22nd, 2003

I’m on the boat from Helsinki to Stockholm, eating breakfast. Just returned from a weekend in Helsinki where I, among other things, attended the Aula Exposure book release brunch. The book, which I wrote a piece for, has come out both interesting and beautiful. Shoutouts to Alex, Jyri, Marko and the book designers for such great work!

I spent last night in the cheapest possible cabin with a drunk trucker from Russia and two guys from France, also drunk. Scary — the minute these strangers entered the cabin, I knew I would sort of have to watch out for them all night. I tried being rational and just shutting my eyes to sleep, but my gut feeling kept saying “You don’t trust these guys”. And so it happened that my relentless thoughts once more returned to the subject of trust and computers.


Kids playing
Kids lost in car game on the boat

To me, it’s obvious that one of the trickiest issues in interface design today is trust. The average user suffers from a lack of trust in computers and the software running on them — and this, in a sense, keeps us from designing smarter programs and interfaces.
By “smarter” I mean programs that can drastically reduce human workload by doing some kind of behind-the-scenes reasoning — I’m talking about improving the way computers work with us rather than the way we work with computers (though clearly the former imposes radical changes on the latter).

Of course, this widespread lack of trust has many causes. A big one is that computers are still dumb and buggy. Another is that we’ve seen so many “smart” UIs and agents that turned out to be stupid and annoying. Remember the “smart menus” from Windows ME? You found yourself desperately fumbling for the disable button after experiencing this numskulled something constantly deciding for you which menu items were important and which were not.

Yet, we are inevitably moving into a future where we will be forced to put a lot of trust in computers in order to get by. We are seeing evidence of this already today.
For instance, I trust my mail client to take care of junk mail for me. I have just a vague idea of how it does it — probably a combination of boolean logic and neural nets — but I’ve learned to trust it nevertheless.
In essence, what I trust is something — call it an agent or entity — capable of some kind of reasoning. Yes, I know that real people programmed its behavior at some point, that it’s actually them I trust and them I ought to blame if something goes wrong.

But, in everyday life that’s not what happens — in everyday life you will blame the damn program when it filters out the wrong mail.
I mean, imagine calling up your secretary’s mom to complain just because your secretary throws important stuff in the bin! And even if you do complain, that won’t get rid of your secretary.
Besides, it probably turns out the programmers stole the AI code from some obscure open source project anyway. And the guy who originally wrote that code died.

Silicon Slaves

Millions of Hotmail users and I have, without even being fully conscious of the implications, engaged in a trust relationship with a computer program, at least on the level of everyday life.
In fact, the moment we make the computer do work of this kind for us — reasoning, if you will — we simply have to trust it, or at least relate to it as if we trusted it.
Annoyingly enough, since it’s just a computer, we can’t blame it when it gets things wrong, and this gives rise to a lot of frustration. Actually, it makes people furious.
On the one hand we have this entity that happily takes over hours of our work without complaint; on the other hand, this work is done exclusively on the premise that we’re not permitted to complain if things go wrong. And we won’t get an apology, nor a promise that the same thing won’t happen again. Actually, we can be quite sure it will.

And yet, we humans seem able to establish a sort of pseudo-trust in all kinds of devices and processes – and the need for such an ability seems to grow by the day. Just think of everything from games to search engines to smart book recommendations to the Segway to electronic pets (for your kids) to modern jet fighters to autonomous trains. Everywhere, things are starting to use all kinds of different models for autonomous reasoning, and we just have to go with the flow and trust these devices, even though we don’t know how they think and can’t expect the devices themselves to explain it to us.

Where will this take us? I’d still say someplace good. I think this development is necessary, inevitable and that it eventually will bring harmony, possibly virtuosity or even a divine feeling of greater symbiosis, to computer use.
But we have a long way to go, and before we get there — much much more frustration.

Back to the future

April 17th, 2003

Finally back in Stockholm after two years, writing an essay on future GUIs at the Faculty of Philosophy, Södertörn University College.
The album is done. Promo copies are going out.
Have been working like a dog the last month–and I’ve lived like a dog too for that matter. A nomadic dog.

A couple of weeks ago, I was lucky enough to meet Mark, who convinced me to mix every track on the album from scratch in his studio.
It has been a lot of work–but it has really saved the sound of the album, so I’m nothing but happy about it.


My nomadic home
Doglife.

Mark has a wonderful combination of digital and analog equipment. We have beefed up numerous beats by running them through his API console with wild EQ settings. You just can’t do that with digital. If you try, things very quickly start sounding like representations of the real thing.
“Oh, that sounds like a tape delay”, people who listen to music will say, because they often know how a tape delay works. They know that there is a vintage, slightly unreliable mechanism involved in the production of the delayed sound. They can tell the difference between a real tape and a mimicked one–just like most people can tell the difference between fake marble and the real thing.
That’s the whole dilemma of digital–it tries to, and it has to, mimic analog counterparts that have a long tradition behind them.

Say you want to achieve distortion. In the analog world you can do it in many ways; basically it means sending an over-driven signal into something, so that what comes out the other end is distorted. The distortion will take on a different color and body depending on what you send the signal through.
With analog, there are no bandwidth or dithering problems. You can use really extreme settings and it will sound extreme–but still “good”, in the sense that you won’t end up with a “reduced” or “flattened” sound that “sounds processed”.
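For comparison, the crudest digital cousin of overdriving a signal is soft clipping: boost the signal, then round off the peaks instead of slicing them hard. A purely illustrative sketch in Python (real plugins are far more elaborate than this):

```python
import math

def soft_clip(samples, drive=4.0):
    """Overdrive the signal, then squash the peaks with tanh instead of
    hard-clipping them; a crude digital analogue of analog saturation."""
    return [math.tanh(drive * s) for s in samples]

# A quiet signal passes through almost untouched...
quiet = soft_clip([0.01, -0.02], drive=1.0)
# ...while a hot one is squashed smoothly toward +/-1.
hot = soft_clip([0.9, -0.9], drive=8.0)
```

The catch, as described above, is exactly that this curve is fixed and characterless compared with what a particular piece of overdriven hardware does to the signal.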

With digital you never know exactly what you get–professionals talk about a “black box” effect.
They refer to the fact that there are undocumented algorithms in most plugins that might alter the fundamental character of your sounds in a very dull, predictable way.
An example: the highly regarded Waves Renaissance plugins. Because they are all based on the same internal architecture, they all narrow and distort the stereo image of your sound in a very characteristic way as soon as they are applied–even when they aren’t doing anything!

Another example: Logic (before version 5) dropped 1 bit of resolution in the global mix engine for every extra bus you added to your mixer, which would gradually reduce sound clarity. It wasn’t documented, so people grew more and more disappointed with their mixes–without knowing why!
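The arithmetic behind that complaint: each bit of resolution corresponds to roughly 6 dB of dynamic range, so a few lost bits add up fast. A quick back-of-the-envelope in Python (the per-bus behavior itself is as described above; the numbers here just illustrate what it costs):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: each bit buys roughly
    6.02 dB (20 * log10(2) per bit)."""
    return 20 * math.log10(2) * bits

# Losing one bit per extra bus on a 24-bit mix engine:
full = dynamic_range_db(24)              # ~144.5 dB
after_four_buses = dynamic_range_db(20)  # ~120.4 dB: over 24 dB of clarity gone
```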

My music already lacks context. It suffers from a kind of post-modern sickness, with sounds coming from many disparate sources. Some people can’t listen to it because they start to focus too much on the individual sounds–they desperately try to identify and classify them. I’m not saying this is a bad thing–I think that is an interesting part of the music. That’s what got me hooked on sampling in the first place.
But I’m not sure I could have coped with the contextlessness of my old mixes. So thanks again for the hard work, Mark!

Anyway, I’m looking forward to spring in Stockholm–and live gigs in summer. Will do a serious update of the Forss Official Site when the album is released in June.

On June 21st, 2003, my blog will participate in the “Blog Ta Musicque” event organized by these guys!
More than 60 bloggers will participate. Some of the contributions sound interesting: Kill Me Again will create a song for the day and post it on his blog, Philippe Allard will cover the Music Day in Brussels by moblogging, and Christophe Ducamp will create a collaborative page about Joe Strummer on a wiki.