2010-02-22

Automounting - SSH Host or Loopback Devices

Modern Linux systems have an extensive set of automounters for all sorts of devices that are inserted into your computer. Last I checked, when I plugged in my helmet camera, Linux was merrily talking with it despite the fact it was a recent model (the miracles of UMS implementations).

I used to be really dependent on autofs for this kind of thing. Back in the day, if you wanted your floppy disk to magically appear, you had to mount it either explicitly (ick), implicitly (fstab), or magically. You'd type "ls /plug/floppy" and some magic would go out and fish out the correct mount options.

The magic is still handy for all the cases when you have a file somewhere, but your computer doesn't know it. In my case, that's mostly when a file is on a remote server or on an image file.

To solve the problem, there is autofs. That's a tool that magically knows where to put things because you (or someone else for you) told it to.

The autofs paradigm has two major items:
  • mount points - which are directories that autofs watches carefully for access
  • maps - the files (or programs) that tell autofs what to mount and how
Let's try an example: Assume you have SSH access to the server marco.example.org. If you want to copy a file from the location /home/user/example.txt to your machine in the current directory, you'd type:
scp user@marco.example.org:/home/user/example.txt .
Wouldn't it be much better for a hundred different reasons if you could just copy the file, say like this:
cp /net/marco.example.org/home/user/example.txt .
Why is that better? Well, so far it isn't much better. But if you could access the file like that, you could convince your editor to edit the file in place, saving it right where it is, without having to copy it locally. You could, for instance, just use a file path on your build server, and suddenly you are running software remotely.
Another, more immediately useful example is the mounting of .iso files. When you download a Linux CD/DVD image (Ubuntu, Debian, or whatever), it comes as a single file you have to burn to a physical medium. Now, it would be great if you could just mount that file. Wait, you can! Not only can you do that, you can automagically do that.
So, this is what this article is all about. We will see how to use a remote server and an ISO file just like regular directories - and nothing is standing in the way of more and more interesting additions to our file systems.
[Setup]
autofs is well integrated into Ubuntu: just type the trusty
sudo apt-get install autofs
and you are on your way. The dependencies are downloaded, the files installed, the autofs daemon started, and you are ready to go.
So, what happened? Absolutely nothing. By default, the autofs installation does nothing at all. But it has a few tricks in store.
All the action you care about happens in the /etc directory. There, you'll find a messy set of files called auto.[something]. In the current package, there are the following:
  • auto.master - the main file
  • auto.misc - an example with different hard-coded devices
  • auto.smb - another example for SMB (Windows) shared drives
auto.master is the most important one for now. If you look at it, it consists of a series of lines that look like this:
/some/path /etc/auto.something --options
/some/path is a random directory you choose that henceforth is where autofs is going to put your shares. Whenever you try to access a subdirectory of /some/path that doesn't already exist, autofs will try to mount it, looking up the file /etc/auto.something for a hint at what to do. The --options are, well, optional and help determine the behavior of the automounter. --timeout, for instance, is very common: it tells the automounter to unmount a directory if it hasn't been used for a set amount of time.
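For instance, a typical entry might look like this - it puts the auto.misc map under /misc and unmounts anything that has been idle for a minute (the timeout value is just an illustration):
/misc /etc/auto.misc --timeout=60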
Of the examples, auto.misc is the simpler one, so we'll look at it first. It is composed of lines made up of three sections, just like auto.master. The sections come in a different order (don't ask), but they are logically similar:
key options location
The "key" argument here is not a path but a name - it's the name of the subdirectory you are trying to access. Say you went with the default in auto.master and assigned the mount /misc to auto.misc, and there is a key "floppy" in there. Then, whenever you try to access the directory /misc/floppy, the automounter would look at the line starting with "floppy" to decide what to mount (the location) and how (the options).
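The stock auto.misc contains entries along these lines (the device paths are the classic defaults and may not match your hardware):
cd      -fstype=iso9660,ro,nosuid,nodev   :/dev/cdrom
floppy  -fstype=auto                      :/dev/fd0
Note the leading colon on the location: that's how you tell autofs the location is a local device rather than an NFS server.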
Here you see that autofs was born to mount network shares, since anything that isn't a straight NFS share (you don't care what that is) requires special handling.
The other file, auto.smb, is a lot more complicated. First of all, it doesn't have any map lines like auto.misc. Instead, it is a script you can run on the command line. What it does (if you feel inclined to read the source, follow along) is use the command "smbclient" to query the network and find shares to mount. Once it finds one that matches your desired key, it "mounts" it.
It actually doesn't do the mounting itself - it simply returns a line like the ones in auto.misc (minus the key, since that's a given), thus telling the automounter what to do.
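You can watch it work by running it by hand with a server name as the argument (fileserver here is a made-up name; the exact output shape depends on your autofs version):
/etc/auto.smb fileserver
It prints the options-and-location line it would hand to the automounter - or nothing at all, if it can't find any shares.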
[SSH Mount]
So, now that we looked at the general setup, let's look more closely at mounting SSH servers. From now on, I'll use the convention that the user dude on the machine kde is trying to access a server named marco.example.org using the account guy.
The autofs setup is really simple. We need a file, say auto.ssh, that contains the automounter lines:
marco.example.org -fstype=fuse,allow_other,reconnect,uid=1000,gid=1000 sshfs\#guy@marco.example.org:/
Wait, wait, wait!!! What does that all mean? First, you have the key, which we already knew about. Then we have options, which we need to explain and change. Finally, we have the weirdest thing as the location.
OK, there is such a thing as a filesystem over SSH. We need to install it if we want to access the functionality. That's done easily:
sudo apt-get install sshfs
Now we have the fstype installed, and the weird syntax for the location is explained. The backslash ('\') before the pound sign took me hours to figure out, so I am expecting friendly comments and PayPal donations for the time I saved you.
What about the options? The fstype is fuse, which is short for Filesystem in Userspace (you don't need to know more right now - but it's a wonderful project). reconnect and allow_other are for ease of use. The first one reconnects if the network goes down; the second one allows other users with access to the mount point to access your ssh share (you may want to rethink it on multi-user systems - but if you want to make sure that your mount is accessible by daemons behind your software, you'd better set it).
The user and group ID are those of the connecting user, dude. You can find both very easily by typing id on the command line (they are the first two numbers in the result). The first user and group created on a Kubuntu machine have the IDs 1000, so that's fairly common.
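On my machine, for instance, it looks something like this (names and group list trimmed; yours will differ):
id
uid=1000(dude) gid=1000(dude) groups=1000(dude),...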
[Problems]
Yeah. It won't work like that. You have to do some extra work for the setup. (Again, donations appreciated...). First, we need to be able to connect to the server using SSH:
ssh guy@marco.example.org
The first time we do that, we are asked whether we'd like to accept the host key. That's very important (insert security blabla), but in our case, it's mostly a nuisance. You have to connect manually the first time, because otherwise autofs is confused by the reply. So do that.
The next problem is a very immediate security problem. When automount is running, it is using the root account. So it's the root account that needs to be able to access the remote server. But the root account doesn't have the credentials; dude has them. What to do?
Well, the easiest thing to do is to give the credentials to root. I mean, if you are root, you have access to the credentials anyway, so it's quite pointless to hide them. The easiest way to do that is by creating a link between dude's credentials and root's. On a single-user system you could link the entire SSH directory:
sudo ln -s ~dude/.ssh ~root
(Assuming root doesn't have a .ssh directory yet.)
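On a multi-user system, you might prefer to copy only the files autofs actually needs instead of linking the whole directory (the key file name id_rsa is an assumption; yours may differ):
sudo mkdir -m 700 ~root/.ssh
sudo cp ~dude/.ssh/known_hosts ~dude/.ssh/id_rsa ~root/.ssh/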
The next thing is that autofs can't ask for a password or passphrase directly. We can either use an SSH agent (which I won't cover because it's complicated) or use a private key without a passphrase (which is ugly because it's totally insecure).
Now, I can't stress this enough: using a private key without a passphrase is seriously dangerous: it allows anyone with access to the machine holding the private key to access all the machines that allow access with it. This setup is mostly for people that, like me, prefer SSH over SMB at home and whose main laptop contains all the valuable information. The idea here is that if someone compromises my laptop, the worst thing already happened. That they can then also access the backup server from there is not a big deal.
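If you go the passphrase-less route and did not link the directories as above, generating and installing such a key for root looks roughly like this (OpenSSH assumed; the empty -N argument is what makes the key passphrase-less):
sudo ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
sudo ssh-copy-id -i /root/.ssh/id_rsa.pub guy@marco.example.org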
OK, now that you have given root a passphrase-less key and made it a default key, or loaded an ssh-agent with which the automounter communicates, we are ready to go. Well, first we have to tell the automounter where to put the SSH servers. For that purpose, we add a single line to the file auto.master:
/share/ssh /etc/auto.ssh
That means that from now on, our servers are going to be mounted under /share/ssh (make sure the directory exists, is owned by root, and has the permissions 755). Restart the autofs daemon with sudo /etc/init.d/autofs restart, and there you go! Now try:
ls /share/ssh/marco.example.org
and you will get the directory listing for the root directory of that server. You will be able to see, modify, and create files exactly like the user guy on that server could.
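By the way, if you don't want to add a line to auto.ssh for every server, autofs wildcards can save you the trouble: the key * matches any directory name, and & expands to whatever matched (this assumes you use the same account guy everywhere):
* -fstype=fuse,allow_other,reconnect,uid=1000,gid=1000 sshfs\#guy@&:/
With that single line, ls /share/ssh/anyhost.example.org works for any host you can reach over SSH.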
[Loop]
If you are a real geek, you probably have a ton of files that can be mounted as drives. You might have the .iso files you burnt to install Linux, you might have the virtual hard drive of a virtual machine (like VirtualBox or VMWare, or User-Mode-Linux for the courageous).
Mounting those drives is easy, but a real nuisance. Essentially, you type:
mount -o loop filename mountpoint
and then you can access the contents of the file at the mount point. Say you have the Karmic CD stored as /home/dude/Downloads/kubuntu-9.10-desktop-i386.iso. To put it onto the directory /share/iso/kubuntu-9.10 you would type:
mount -o loop /home/dude/Downloads/kubuntu-9.10-desktop-i386.iso /share/iso/kubuntu-9.10
Of course, that directory usually doesn't exist, so you just mount the image on /mnt, and you manage until you need a different version and have to unmount the first one. It gets messy after a short while.
[The Solution]
What if you got rid of the haphazardness and used a script instead? Why a script, you ask? Well, the problem is that while you may know where the file is, unless you are particularly orderly, your computer won't know where to look. So, instead, we use the utility locate to find the file we need and then mount it at a well-known point.
Assume you want to mount the kubuntu ISO file. First, use locate to find it:
locate kubuntu
You get back a list of files that all match the name kubuntu. If you add the option -b, you get only files whose actual file name matches kubuntu (not files merely contained in directories named kubuntu). Still, we might get a ton of files.
What do we do? We look them all up and determine whether we can mount them. To do that, we use the tool file, which gives us a guess as to the content of the file. If its output contains the words "filesystem data", we know we can mount it and just need to pass the correct filesystem type to mount.
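For the Karmic image from before, for instance, the output looks something like this (the exact description and volume label vary with the file and with your version of file):
file /home/dude/Downloads/kubuntu-9.10-desktop-i386.iso
/home/dude/Downloads/kubuntu-9.10-desktop-i386.iso: ISO 9660 CD-ROM filesystem data 'Kubuntu 9.10 i386'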
What we will do is prioritize the file names by length first: the shorter the matching file name, the better the match. If the lengths are the same, we look at numbers in the names and select the one with the lower number. Actually, you can decide to resolve conflicts whichever way you want. You can also decide not to resolve a conflict and to return an error, in which case the mount fails (I don't like that behavior, even though it might be better, because I am doing this just out of convenience, after all!)
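Here is a rough shell sketch of what such an executable map can look like (details like conflict resolution and spaces in paths are glossed over, and ISO images are assumed):
#!/bin/sh
# Executable autofs map for loop mounts: autofs calls this with the
# key as $1 and expects an "options location" line on stdout.
key="$1"
best=""
for f in $(locate -b "$key"); do
    # only consider files that file recognizes as filesystem data
    file "$f" | grep -q 'filesystem data' || continue
    # shorter path wins (a simplification of the name-length rule above)
    if [ -z "$best" ] || [ "${#f}" -lt "${#best}" ]; then
        best="$f"
    fi
done
[ -n "$best" ] || exit 1
echo "-fstype=iso9660,ro,loop :$best"
Hook it into auto.master with a line like /loop /etc/auto.loop, make the script executable, and accessing /loop/kubuntu does the locate-check-mount dance for you.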
I wrote a Tcl script to do the job for me. I also loaded all the ISO images of CD-Rs I own onto my trusted server, backup, which I access (you guessed it) using SSH. Autofs is configured for loop on backup, and for ssh on this computer. So when I need to find a particular picture from one of the CDs I burnt, I simply type:
gwenview /share/ssh/backup/loop/pics-2003
and off I go. I don't need to get to the server backup and do the mount manually, and I don't have to care what the name of the ISO file is and where I stored it (in the backup tree or in the live image tree). I don't have to invoke ssh, don't have to worry about scp, and don't have to figure out what to cache on my machine before I fire up my image manipulation program.
[More Fun]
Autofs is best friends with fuse, the file system in user space. Since fuse development is tons easier and less dangerous than development of file systems for the kernel, there are tons of available options.
If you don't believe me, just type apt-cache search fuse into a terminal window and look at how many of the hits are file systems. There are exciting things like flickrfs (which mounts your flickr photos), unionfs (which joins two directories together, for instance if you have no room left in one...), glusterfs (for clustered servers), and for everybody annoyed enough with UpPeRcase on UNIX, ciopfs (case insensitive on purpose).

[Update: if you want the Tcl script or a Python version of the same, please let me know.]

2010-02-16

MacOS X and Linux

We all know a "control freak." That's someone who knows better and therefore automatically needs to have control over a specific process, so that it's done "right." It doesn't matter that other ways might be just as good or even better: the control freak relishes the control, and achieving the controlled outcome is a goal in itself, without real connection to the problem at hand.

When Apple announced that OS X would be a complete rewrite based on BSD, I felt thrown back to the days of Yahoo!. For those not in the know, Yahoo!'s servers have been running on BSD for over a decade now, and the high engineering brass (at that time almost entirely composed of the veterans in the company) swore by the superiority of that OS. Of course, this was by virtue of things that had been true when it was chosen, and that had been largely obsoleted by the time the New Millennium rolled in, but why check reality when fiction is so much more pleasant?

Apple made its choice of OS, I assume, based on Steve Jobs's choice back in the day when he created NeXT. I assume that, just like the high brass at Yahoo!, he felt that BSD was the technologically superior solution. And it didn't hurt that "nobody else" was using BSD, so he would pretty much keep control of the kernel for the time he was going to use it.

Now, MacOS X has been a huge success, mostly because of the graphical user interface it comes with. This writer, though, is still unhappy that there are now two major UNIX variants left, MacOS X and Linux, that don't really compete and hence could join forces and become stronger on each other's backs.

Apple, to be fair, is a good citizen in the open source community. The universal printing interface used in most Linux distributions, CUPS, is theirs, as the administration screen is fond of reminding me. On the other hand, package management was leveraged from Linux: if you want to install open source software on a MacOS X computer (through the Fink project), you use the same dpkg that I use on my Kubuntu box.

So, what's the big deal? I think there are two major problems with Apple's approach. The first one is to force all development through the channels of their choice of libraries and even compilers. To develop for an Apple computer, you pretty much have to go through Objective-C, an oddity preferred by NeXT.

The other big issue is that most of the code that Apple develops is proprietary, which means if you develop for an Apple, you won't be able to re-use your code on a different computer. That's bad, because it means that a world of developers that are focused on the Mac platform could develop for Linux, too - but they don't, and they can't.

Why would Apple care if anyone develops Linux apps? Because it is in Apple's best interest to have software written for Apple computers run on the vast variety of equipment that runs Linux. Sure, some of the functions wouldn't work, or work suboptimally. Sure, some of the apps would require bizarre configuration instructions, but that's the price you pay when you use Linux. It would be a win-win: Apple would gain a footprint where it cannot compete (because it offers no devices) and Linux would get a set of for-pay apps to complement the for-free open source applications.

I know, I know, Richard Stallman would turn in his swivel chair... But fact is that a lot of the innovation in open source applications comes from for-pay software. Having, say, Microsoft Word made something like OpenOffice possible - nobody would have developed the open source alternative without a for-pay original first.

If there is one thing the GNU effort taught us, it's that the monolithic stack doesn't work. Did you hear about the GNU operating system? No? That's because it never actually got off the ground, mostly because development was so slow - it tried to do everything from scratch.

If you believe in the bazaar model of software, then you need to understand that evolution occurs in increments, not revolutions. That's the message to send to Apple, to GNU, and to the KDE developers. Try to re-use as much as possible, don't force everything to be renewed, and focus on pain, not on desire.

2010-02-13

SUX: Staples Returns Policy on Electronics

I don't know if it's just me, but it's been a couple of years or so that electronics have been breaking much faster than they used to. I still remember how shocked I was when my (otherwise wonderful) iRiver Clix 2 died one day for no discernible reason. It was the first time ever that a piece of electronic equipment I bought (that was not a computer component) died on me.

After that first shock, it's been massive. And lately, things have been getting completely out of hand. I've had to return so many defective items, I should always buy 1.5 of them to cover my bee-hind. Like right now: I bought a bluetooth keyboard and mouse, from different manufacturers, and both of them were defective. The mouse stopped working entirely after working perfectly for a while. The keyboard simply started degrading - keys stopped working. First F5, then the Alt key.

The bad thing about this development is that the gadgets themselves were excellent. The keyboard (a Microsoft Wireless Bluetooth Keyboard) is outstanding: it is flat, it has excellent tactile feedback, it is light-weight, and it uses standard AAA batteries. It's a charm! The mouse, too, pleases me much. A Kensington Bluetooth mouse with trackball - easy to use, very reliable, and accurate.

I bought the mouse over at Amazon. When it broke, I filled out a form online, they gave me a return code, and I was done dealing with the problem by UPS-ing them the broken thing. They even sent me the new mouse before the old one arrived, which I found both very trusting and very helpful.

Unfortunately, I have to compare that with the place where I bought the keyboard. Staples. I went into the store with the original receipt and told a very friendly checker what was wrong and what I wanted: the keyboard was broken, and I needed a replacement. She asked whether I had a replacement plan, and I said that I didn't. She then proceeded to tell me that I had had the keyboard for too long and that I couldn't return it any more. I said that I didn't want to return it, that I wanted it fixed.

Next thing you know, she told me I should get in touch with the manufacturer and see if they offer a warranty. I said that I didn't really want to deal with the manufacturer about a repair on an item I had bought a month before, and she got the manager in.

The manager, another extremely friendly and helpful lady, explained that this was Staples policy: I could get a replacement plan, or if I declined it, then I could return the item within 14 days. I explained that I didn't want to return the item, but that I wanted a repair on a defective product, and the manager said that Staples doesn't do that any more.

She did though offer to give me the replacement plan as if I had bought it originally. For $9.99 I would get a new keyboard. I liked the keyboard, the price was certainly right, so I agreed.

On a lark, I asked whether, if this keyboard (I am typing on it right now) broke, it would be replaced now that I had a replacement plan. No, she explained, the plan covers only one replacement. If the item breaks, I would have to buy a new one, or get a new replacement policy.

I am sitting there and realizing I am dealing with a chain that offers a two-week warranty on the electronics it sells. Basically, this chain has a huge incentive to sell crap: the more crap it sells that breaks, the more it makes in replacement plans, etc.

Sure, you could say that doesn't make sense: after all, $9.99 is an outstanding price on equipment that works. Well, that's true - but I am unlikely to go to the store and just get the replacement. I will probably go and buy all the other office equipment I need at that point. Besides, this is contrasted to other places (like Amazon) that offer much, much longer warranty periods.

Hmmm... If I were Staples, I would certainly rethink that policy. I am certainly not going to buy electronics from them any more.

Nokia N900 - First Impressions

So, yesterday UPS brought my new N900. For those of you who've never heard of it, it's Nokia's "alternative" smartphone. So much alternative, in fact, that the manufacturer refuses to call it that: it's a mobile computer with phone functionality. Which is exactly what I was looking for.

You see, I've been using smartphones for years now, and I always found them severely wanting, to the point of being barely usable and in general not worth the hassle. That, though, is my particular use case, and others, with different needs, will reach different conclusions.

Since the assumptions of my use case are a huge part of my desire to have the N900 and tip the balance on this set of impressions, let's go over them first.

I am a geek. I own about three dozen computers in all shapes and sizes. The largest one is a rack full of equipment in a data center, the smallest a BASIC Stamp. I write a lot, communicate a lot, listen to music, watch videos, take pictures, and constantly surf the web for research.

I am no different than most geeks in that the voice functionality of my phone is my least favorite feature on it. Just like the generation after me, actually, two or three after me, I much prefer sending text messages to spending time on the phone. I used to say it was because I have an accent, which makes it hard to understand me. But, really, it's just because I hate talking to a little box while I can't see any of the emotional expressions in someone's face.

[For reference, I have the same reaction in a movie theater, when the guy next to me plops himself down in the seat with chili fries while the movie is in the middle of a love scene.]

I am dismayed at my current phone service provider. I am charged $69 a month for 900 minutes of talk time I never use, have a mandatory $34 charge for smartphone data use, another $15 for limited SMS texting, and had to choose between a phone that works outside the U.S. or a camera.

Worst of all, while I pay a ton of money for mandatory Internet data, when I actually try to connect my computer to the Internet via the phone, my provider wants another $30 to provide me with exactly the same thing I have already paid for. All in all, I am paying about $150 for a useless brick that I use occasionally. And to make things worse, all the power that thing uses makes it drain its batteries within 24h.

OK, that's untenable. Let's see what's wrong with the picture:
  • A phone needs to be able to put itself into a mode where, when unused, it'll last for a week. Get a clue, smartphone manufacturers! It's not that hard! Have you ever seen a Kindle? The long battery life alone is a killer feature!
  • The Internet access on the phone is useless, because half the sites don't work in the browser. Using Opera instead of the BlackBerry browser makes life a teeny little better, but nowhere near good. iPhone users: explain to Steve that you abso-&!%@#-lutely need Flash.
  • Really, making people pay for SMS is plain stupid. The low bandwidth requirements and zero requirement for synchronicity mean that you can run as many SMSs as you like over any wireless network without feeling the pain. Fleecing the customer (because that's what it is) is in the long term an outstanding way to get them upset at you.
  • In the long term, voice will just be a feature running over a common carrier Internet. There is no reason to assume that there is anything special about voice communication. As a result, VoIP applications will rule the world. In the short term, please don't disrupt Skype traffic!
  • In the same vein, a phone that doesn't switch automatically to a cheaper wireless network is just plain dumb. I mean, seriously, if the phone knows how to connect to the Internet, it should also know how to route your voice packets over it.
  • Contracts. They are evil. They are stupid. They are anti-competitive. AT&T was broken up over shenanigans similar to 2-year contracts. I've had my share of problems with contracts - including (*) an automatic 2-year renewal after my phone was stolen, (*) an early termination fee for a contract whose term had already expired. In general, contracts are bad, but the atrocious customer service that comes with your phone contract makes them much worse. Of course, the reason they don't fix the customer service is that you are tied to them by your contract.
The list is incomplete. But here is what I think my use case will look like:
  • A cheap phone on a prepaid plan for emergency communication. By "emergency," I don't mean 911, but being out of reach of a wireless 802.11 network, which is where I spend most of my working time and where I am always when I want to perform real work. The emergency phone is just there so that people can tell me they need to get in touch with me. Since it's an emergency contact, it would come on a prepaid minute plan, not on subscription plan. It would include SMS. The phone would be the kind you dump in your backpack and don't worry about for a week. Charge it when you do your laundry kinda thing.
  • A laptop. The deciding factors on the laptop are (*) weight (under 3 lbs), (*) keyboard (big enough, responsive enough, sturdy enough), (*) screen size and resolution (>10", >1024x768).
  • A server configured to be accessible from the Internet in a secure fashion. I use a fast HP desktop at home. It runs Kubuntu, is configured for ssh access, and I use it whenever I need something compiled or done fast.
  • A device that allows me full access to the Internet and everything that comes from there when I am on the road. It needs to be fully competent, fully connected, quick-booting, and very, very small.
Devices I'd rather have separate are an eBook reader (because I read a lot) and music players. The plural is because players are so cheap, I'd rather have one for each use case (= activity).

So, how do I address the last device? There are several options:
  • iPad: great form factor, although a little big. With 3G connectivity, it satisfies the need to be always on. The problem is that it's not only NOT a fully fledged device, it is lacking in some core ways, most notably the finicky browser support and the lack of multitasking.
  • iPhone: seriously, I have lots of friends on iPhones. They are totally excited about their iPhones, and I can see why. I hate that thing. Just looking at them. Looking at the super-crappy pictures it shoots, so out of focus and low-res that you can tell they were shot with an iPhone from a mile away. Looking at the way every browsing session becomes an exercise in pinching-poking-slanting-twisting. Looking at the way my friends have memorized what sites work and what sites don't. Looking at the way they tell you this, that, or the other thing doesn't work because Apple hasn't allowed it to happen on an iPhone. Seriously, people? You like a nanny? Get a nanny. For the money you spent on your iPhone (and especially on the special cables, chargers, etc. that you need), you can easily afford a naughty nanny.
  • A teeny netbook. You could go for an EEE in the original 7" form factor. Nice try; unfortunately the screen resolution stinks, the thing is way too heavy, and the booting is as fast or slow as the OS you install.
  • A smart smartphone, like the N900. A-ha! That's the one. It's small, barely bigger than an iPhone, but it runs a blazing fast OS that makes life really easy. It carries GSM connectivity built-in that makes it universal. It has a decent screen resolution (800x480, for a total of 384,000 pixels; compare that with the iPhone's 153,600).
So, after I started up my N900, I already knew that the idea of the device was what I wanted. But was the implementation going to stand up to the idea?

I read a bunch of blogs before buying. I looked into all comments available, trying to make sure the software would stand up to my requirements for usability. There were plenty of people griping. I held back. After a while, the pressure started getting really bad: I was still spending $150 a month on a device I barely use and that I fundamentally hate.

So I went out and bought one. I got a decent price ($500), and it really felt like one. But I was still quite concerned about usability.

I turned it on. I haven't looked back. I love the N900. It's easy to use, easy to understand, and most of the reviews that were negative on usability were complaining from the point of view of someone who had spent no more than 30 minutes with the device. Sure, at first you are totally lost as to how to do things - but until you were told that's the way to do it, would you have pinched the screen of an iPhone?

The browser that lives on the N900 is Firefox. It's not some mobile version of Firefox that doesn't work with this, that, and the other site. It's the real thing - so much so that some extensions are available for it already (AdBlock comes to mind). I've tried all the sites I could get my hands on, and all the ones I need work. Even better, you (a) don't have the same need for resolution changes, thanks to the high resolution of the device, and (b) get used to twirling instead of pinching in about 2 seconds. When you want to zoom in, you turn your finger clockwise. If you want to zoom out, you turn counterclockwise.

The interface is quite intuitive in a playful way. When a dialog is modal (which means it demands attention), the background becomes fuzzy. You get out of a dialog by clicking on the fuzzy area. It's as simple as that. No need for a dedicated cancel button.

The underlying Maemo OS is a variant of Debian Linux, just like the Kubuntu I use on an everyday basis. Sure, the apps that run on it are mostly not the same I'd use on Kubuntu, but that's a matter of time and patience. What this means for the user is that software installation, configuration, and selection are not moderated by Nokia or Apple - they are up to you.

For whatever reason of theirs, Nokia decided to add a keyboard to the device. I think that's probably the worst decision they made, because it adds visible bulk to the device (which is much thicker than it needs to be) while adding very little to the functionality. Even with the keyboard, I much prefer the huge on-screen keyboard. The keys are bigger, the display clearer. Now I just have to figure out how to confirm predictive input.

Another gripe that I have with the iPad shows that Nokia understands the market a lot better than Apple: the iPad comes without any of the gizmos that make the iPhone so popular. Essentially, what you want in a "mobile computer" is all the possible sensors imaginable: video camera, photo camera, GPS, bluetooth, touch sensitivity, accelerometer, microphone, etc. The thing is connected to the Internet, so all the processing it couldn't do on its own, you can just offload to a server.

There are a lot of iPhone applications I love because of that: barcode readers that use the built-in cam; tuning forks that tell you when your guitar has the perfect pitch; location browsers that tell me that my soul mate is within reach. You know, just the gamut. The iPad is really dumbed down that way, but the N900 has pretty much everything you'd like. (The webcam functionality seems to be limited at this point.)

I am pretty much done with my first impressions. There are still two points I need to address - one positive and one negative. Let's start with the negative one: when an application decides to freeze, it's incredibly hard to figure out how to get out. GMail IMAP, for instance, tends to freeze up the email client (IMAP and GMail with its enormous folders are a bad match anyway). You unfortunately have no way to tell (a) whether the app is frozen or just busy, and (b) what to do next. The only solution I've found so far is to click the menu bar on top, where only the status applets (for battery, bluetooth, etc.) seem to keep working.

Closing on a positive note, and agreeing with pretty much all online reviews, the Skype support is simply amazing. Using Skype on a computer is nice, but using it on a phone is out of this world. You get the clarity of the calls that you are used to, but the convenience of a phone.

I set up my Skype account to allow calling and being called from landlines and mobile phones. The cost is a fraction of what you'd pay my current wireless provider, and it works just wonderfully. Maybe it's that I spent half the Opening Ceremony of the Olympics calling myself from the Blackberry to the N900 to check on quality, but I am just in love with the functionality, the cost, everything about it.

I say that as soon as the N900 has a few more, and more useful, apps in (the) store, it's going to be the thing to have, no questions asked.

2010-02-12

YHIHF: The Information Revolution's Very Own Losers

I think we pretty much survived the first wave of the information revolution largely unscathed. We learned that giving money to startups without a clear business plan is a bad idea, we learned that astronomical salaries and fortunes made of stock options are not for real, and we learned that the Internet, as a whole, is here to stay. Why, haven't you heard the first youngster ask you yet how life was before the Internet?

Now it's time for a deeper assessment, one with long-ranging views, a structural look at things: who are the real winners and losers of the Internet Revolution? What did, indeed, the Internet bring that is such an epochal change?

Let's start with a quick note: Al Gore was right. The whole point of the Internet is the Information Revolution. We have more information available to us, it is much higher in quality, and it's much easier to find.

That's wonderful, isn't it? But you know, the opposite was the case for most of mankind's history. For millennia, there was very little information available, it was extremely low in quality, and it was really hard to find. Just imagine you are a peasant in Medieval Europe: what should you plant? First, there are only very few crops available to you. Then, the reasons why you should plant one instead of the other are only marginally understood, and you mostly rely on astrology. Finally, the information on good crops is pretty unattainable: how would you know that there are potatoes in the New World?

Nowadays, things are quite the opposite. I recently had a snowboarding accident. A separated shoulder. I had no idea what that was, except for the fact my left collar bone was detached from its resting place and freely floating. Not so good.

The next thing you know, I am on the Internet, and within a few minutes I knew everything I needed to know about the injury I sustained. I could validate what the doctor and the physical therapist had told me, could add information to that, could set up a recovery plan, and could decide where the medical staff had lied to cover for liability. (Knowing exaggeration certainly counts as a lie in my book.)

So, we are winners in this information game. Who are the losers? Well, that's everybody that has been making money with information. Who's that, you ask, and you are probably thinking of bookies and newspapers.

Well, you see, the business of information is actually enormous, one of the largest in the world. It's the entire financial industry, including banking, brokerage, and insurance.

How so? Well, you see, a bank doesn't actually produce anything. It just finds sources of money and connects them with people that need it. You think the "products" that a bank offers are loans and savings accounts, but that's not true. A bank sells information, that's all. The products are fronts to convince you to give the bank money: the interest rate on a loan, for instance, is displayed as a cost to you for the money the bank gave you. The interest rate on your savings account is your profit.

The bank detaches your savings account from the loan that pays for it in the same way the Federal Government takes its taxes and spends its money in an unrelated way. You cannot say: I do not want to pay for this particular project - primarily because it would be impossible for the government to tell which money goes in what direction.

How is that important, though? Well, you see, since connecting money sources with money sinks (in computer lingo) is the only real function of a bank, it deals simply in information. And since you can find that information more cheaply than through a bank, you will at some point simply bypass the bank. The difference between loan interest rates and investment interest rates (the "spread") is enormous, somewhere between 5% and 15% of the amount on a constant basis. The only thing the bank takes on in exchange is risk, which it mitigates in the same way it has been doing for thousands of years (collateral). It is the fat cat that the Internet will kill.

Another example? A very egregious one: insurance companies. What they sell is information. Actually, they don't sell you their information, but useless products based on that information. You see, the point of insurance is to take a particular risk and give it a dollar value. What is the likelihood that you will have a car accident? How much will it cost? The product of cost and probability is the likely payout, and ideally an insurance company would charge you precisely that amount.
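To put numbers on that (invented purely for illustration): if the chance of a claim in a given year is 5% and the average claim costs $4,000, then
likely payout = probability × cost = 0.05 × $4,000 = $200 a year
That, plus real administrative costs, is what the policy is worth; everything above it is what you pay for not knowing the numbers.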

Now, why do insurance companies make obscene profits? Because they know something that you don't know, namely what the likely payout is going to be. Sure, there are costs associated with insurance - administrative, regulatory, marketing, fraud detection. But ultimately, the business of insurance is to sell you something above the value of risk.

Sometimes the insurance cost is absurd, more than simply obscene. You have probably rented a car before and gotten annoyed at the agent painting in graphic detail the horrors of having to deal with an accident in your rental car. I sure have, and the pressure tactics are amazing: I've had rental companies refuse a rental if I didn't carry proof of insurance with me, put a hold on my credit card for the value of the car, preach for ten minutes. Then I looked at the insurance cost versus the rental cost and made a simple realization:

It is completely impossible that insuring a car could cost more than renting it. If I can rent a car for $20 a day, and the insurance on the car itself (that is, for damages to the car) is the same amount, then logic dictates that every other car returned is damaged. How do I figure? Well, if I made a business renting cars to rental car companies, then I would ask for $20 a day, just like they do. If a car is returned damaged, I just give them a different car, for $20.

The logic has flaws, but it works. I've had an insurance company try to charge me more for motorcycle insurance than the motorcycle is worth. How does that make sense? Let's see: I can either give you the money each year, or buy a new (used) motorcycle every year. Unless I wreck the motorcycle every year, not such a good deal.

Think about it. You don't need to go to an insurance company, you really just need a place that is willing to pool your bets.

2010-02-11

KDE 4 - The Big Letdown

Have you ever used KDE? Do you even know what it is? Well, I suggest you go to http://kde.org and check it out: it's by far the best-looking desktop environment for Linux users, with a look and feel that tries to mimic much of the latest Windows and MacOS eye-candy. It's really spectacular, if I say so myself.

Unlike its biggest competitor, GNOME, KDE is written in C++. By itself that's not an advantage, but it is possibly a hint at the main features of the language: inheritance chains and re-use. KDE applications look and feel much more similar to each other than GNOME apps and they seem to share a lot more functionality.

KDE 3.5, by now several years old, was a marvel in functionality, stability, and ease of use. It was far and away the best suite of desktop applications for Linux, light years ahead of the corresponding GNOME applications (with a few notable exceptions).

Then, something happened. It was triggered by a major release of the library underlying KDE, Qt. Qt is written by a company in Norway, Trolltech (acquired by Nokia a short while ago). In fact, the GNOME project originally came into being because some folks objected to Qt, more specifically to the licensing restrictions that came with it.

Qt4 came into existence and required a rewrite of KDE. From release 3.5, there had to be a major jump to 4.0. Unfortunately, here the KDE developers and leadership completely failed us. They decided not only to rewrite KDE 3.5 to use the Qt4 libraries, but to completely change everything about KDE.

I should have known that catastrophe was near when I installed the first release of the KDE 4 series, KDE 4.0. Contrary to standard Linux convention, the 4.0 did not indicate a stable release, but a hodge-podge of half-baked applications and pieces of software that more often than not didn't do what I wanted them to do. It was so bad that it was quite impossible to figure out why anyone would want to switch to KDE 4 at all.

Now, years later, we are slowly moving to KDE 4.4. Sadly, functionality is still not restored, and even flagship applications are marred by bugs that make life irritating. Worst of all, it feels like what got added to the mix is mostly eye candy, while the fine-tuned functionality that was the hallmark of KDE applications in the past is gone completely.

I'll give you an example: KDE's media player, Amarok, is far and away the best media player available for Linux. It has a pluggable architecture in which you can pretty much extend anything, and it's really flexible in recognizing media that has been added and removed. Best of all, it allows for extensions - tools, scripts, new media sources, etc. that enhance the functionality in ways that no single development team could ever envision.

Then came Amarok 2. It took forever to get used to the new interface, one that was pluggable where I didn't need plugging, but that horrifyingly decided to completely change the way extensions were handled. As a result, the hundreds of them written for Amarok 1 (actually, amaroK) were completely useless with the new application. Not only that, developers of amaroK scripts were not even given a guide on how to port their extensions to the new platform. We were all lost, users and developers alike.

KDE 4, I am sorry to say, was a complete nightmare, especially compared to the parallel developments in the GNOME world. There is a GTK application you are familiar with, Firefox. It is pluggable and extensible, too. Whenever there is a new Firefox release, at least some of the old extensions still work. It's really not that hard. Besides, the Firefox team does explain how the browser changed and what an extension developer needs to do when porting to the new release.

To give you an idea of how irritating KDE has become, I will give you a simple example: the panel at the bottom. It's an old idea - both to desktop interfaces (you probably saw it first in the Windows 95 taskbar) and to KDE. So it's not this revolutionary new idea that needs to be figured out first, right?

There are widgets on this panel. Ya know, like the clock and the desktop icons. The former shows you the time, the latter shows you the desktop. Easy enough, no? You'd think.

Gripe #1: Right-click on either of them. You get a context menu. One of the items is something of the form, "Remove this widget." Remove this widget (actually, don't). Now try to get it back. Fun, isn't it? It used to be the case in prior releases that you had to find an empty space on the panel and right-click on it. In most environments, though, there was no empty space, so you were left on your own, trying to figure out how to get your clock back.

Gripe #2: OK, this is really a usability issue more than a technical problem. No matter how I click the clock widget - right, left, middle, or double - I can't change the clock settings. Maybe I just got used to being able to do that on Windows, but it seems to be the logical place for me to set the clock. I see a clock, I see that the clock displays the wrong time, I set the clock.

Not in KDE 4. There I have to go to the system settings, where I find "Time and Date" in the Computer Administration area (or some such).

Now, you might ask, why would I spend my first post on this blog kvetching about KDE? Because I really love a lot about KDE. It's a wonderful environment with high degrees of consistency and functionality, and in its 3.5 incarnation it was by far the best environment available not only in Linux, but in any major operating system.

Then comes KDE 4, and it's all gone. We are slowly moving towards more usability after the KDE developers have been hammered for years with complaints from users. They have even lost the crown of best desktop environment to GNOME in some surveys and polls.

I hope the KDE developers, especially the leadership, have learned from this major debacle and work on addressing both the issues with the applications and the general approach they have taken.

I mean, seriously: just this morning I tried to get the contents of my playlist from Amarok onto my media player. There is no way to do that. None whatsoever. Researching the web, I found the Amarok forum, where someone complained about the same thing. After the usual vapor about things being fixed in the latest release (they weren't) came the helpful suggestion to file an enhancement request.

An enhancement request? You don't consider the inability to get your playlist onto your iPod a core function of a media management application? Imagine Apple telling you that you should consider filing an enhancement request when you find out that iTunes doesn't allow you to copy your playlist to your device. Grrrr....

And all of this just because I care.