2010-12-26

Creating Panoramas with Hugin

Like it? This is what the cliffs and the ocean look like at Black's Beach, California. Christmas Eve 2010. See that steep canyon gouged into the cliff face? That's how I got to the beach, on a trail some call the Goat Trail.

I stood at the bottom of this thing and took a picture of where I had come down. Then I took a picture of La Jolla to the South. Then one of the nudie beach to the North. Then one of the waves and the low sun. Then I decided I was just going to cover the whole world around me, and figure out a way to stitch the pictures together.

Back home, I remembered I had used this tool, Hugin, to create panoramic images a while back. It was a pain: you had to pair up the images and click on a series of points that matched. Image 1, image 2, click, click, click, click, click... Then Hugin would figure out just exactly how badly you had screwed up your picture by tilting, turning, panning, etc., and would create a panorama. It took about four hours of tedious work.

I wasn't looking forward to that, so I looked on my computer (apt-cache search panorama) for alternatives. I downloaded all of them. There was a GIMP plugin called pandora that didn't seem to be doing much, but it complained initially about not having an application from hugin installed (I just had to restart GIMP to make it work). There was an "easy to use" application called fotoxx that seemed more primitive than easy to use. There was a CLI app called enblend. Other stuff seemed to be mostly supporting software.

I wept, I gnashed my teeth, and I started Hugin. Lo and behold! It started with a tabbed interface and (miracles!) a WIZARD!!! It promised to ask for whatever it needed, and began by asking for the pictures you wanted to use as a base. Then it started working, actually giving me feedback so that I didn't have to wonder whether it had died on me. Then it came out with a crude map of the panorama. Perfect! It had figured it all out on its own!

Next came a series of optimization steps, and that's where Hugin needs to be a little more proactive. For instance, it won't tell you that it will adjust exposure, so you worry about how weird things will look with this particular section overexposed. It also shows you how it tilted and modified the original image to make it fit into the panorama - a real miracle.

Once I figured out the optimization step (about ten minutes), I saved the project for later re-use and exported the image. That was fun!
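
Incidentally, Hugin's machinery also comes as separate command-line tools, so the whole pipeline can be scripted - enblend, the CLI app I dismissed above, does the final blending step. This is a sketch from memory, so treat the exact tool names and options as assumptions and check the man pages:

pto_gen -o project.pto IMG_*.JPG
cpfind -o project.pto project.pto
autooptimiser -a -m -l -s -o project.pto project.pto
nona -m TIFF_m -o remapped project.pto
enblend -o panorama.tif remapped*.tif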

I should mention at this point that Hugin sensibly leaves blank those parts of the panorama that no source image covers. That meant a section at the top and one at the bottom were missing and needed to be filled in. To get that done, I used the outstandingly marvelous "Smart Remove Selection" plugin for the GIMP.

Use OpenOffice.org to Create an Image Gallery

I went on this hike on Christmas Eve - a steep trail down a ravine, with the most unusual people and the most unusual features. At the end, a gorgeous beach with surfers and pretty ladies with parasols waiting for them. Huge mansions staring down from lofty cliffs, and an amazing sunset that colored the cliffs a golden hue of honey.

I looked at the pictures, and they were pretty. Unfortunately, due to the unusual nature of the features and the number of different things you were looking at, it wasn't quite clear how (or even if) they belonged together, and what the story behind them was. The pictures needed an explanation. A story had to be told.

I did what I do these days when I record a hike: I posted the pictures on Facebook. Add the pictures, add captions, and wait for comments. That sure works, in a crude way. More generally, though, this approach has several issues:
  • In Facebook, the caption is an afterthought. You read it only if you don't know what the picture is about, or if you happen to stumble upon it
  • Facebook has no way to annotate an image. You cannot highlight portions, add notes, or expand parts for better viewing. You have to edit the image outside Facebook and upload it with annotations
  • Facebook doesn't provide a way to visualize a story as a video, with slide auto-transition
I realized that what I wanted was a video. Something like the DVD authoring software that takes an image folder and makes it into a DVD. You put it into the DVD player and you get a slide show.

Wait! I said the words! Slide show! My gallery was a dreaded Powerpoint™ presentation!!!

Now, every geek hates Powerpoint presentations. That's because we have sat through one too many of them. The one that usually ends it for us is the one where the marketing drone discovered transitions and animations, and now we have to sit through an hour of flashing text and banners that swirl through the air into place. Meanwhile, we are thinking about this bug that needs to be fixed and about the pickup basketball game we are missing.

But slide shows are perfect for the purpose I had in mind. Put a bunch of pictures together, add the text that explains them, and use the presentation software's awesome annotation powers to enhance the image. Stay away from gimmicks as much as possible and you got yourself a video.

I looked at different solutions online, and none of them satisfied me. The problem with cloud software is that every change is painful, as it needs to wait for round trip times. You post a minuscule change, it takes a couple of seconds, then you get a chance to see the results. Will this color show on the image? Is the annotation the right size?

Downloadable software works, but on Linux the quality is so-so. I tried a bunch of different gallery applications, but none of them did what I wanted. Then I caved in and used OpenOffice Impress, the suite's Powerpoint replacement. Impress is amazing in functionality, by the way; my reluctance comes from the fact that it gets really slow when editing large presentations.

When I started creating my gallery, I realized there were really only three types of slides I wanted:
  • Accessory slides - text only, like the title page and the thank you page at the end
  • Vertical layouts for pictures shot in portrait mode
  • Horizontal layouts, for landscape
If I had shot videos, there would have been two more slide types for those, but the layouts would presumably have been the same as for the pictures, only with video objects.

I created the title page, then started with a vertical image. I laid it out in the nice presentation layout: title, image on the left, text box on the right. Seemed reasonable enough. Since the next page was the same format, I just copied the one I had, replaced image and text, and got going.

Things got a little tough with landscape images, since they don't leave a lot of room for text. I ended up placing them in the top left corner, with text floating around them. That turns out to be a problem, since Impress (and, as I recall, Powerpoint) doesn't float text around images like desktop publishing programs do.

That all went well, though. I had the three slide types, and whenever I added a new picture, I just selected the appropriate slide type, pasted it at the end, replaced the image and the text, and on to the next one.

Now, I told you Impress can be a hog. So after about five slides, the image selection dialog would take 20 seconds to load. Paging through it would take another 10 seconds per page. With over 100 images, that seemed completely impossible to deal with. Fortunately, if you shut down and restarted Impress, things were quicker until you added another five pages.

That was tedious. Oddly, though, paging through the slides was fast, as was adding annotations and editing text. So if I had a script or wizard that generated the slides for me, that would have been marvelous.

I went online and checked how you create wizards for Impress. After all, what I wanted was not much different from a mail merge: get a data source (here, images) and plug it into an output format (here, slides). Not an ounce of information.

Fortunately, I remembered that OpenOffice standardized on XML as its persistence format. I looked at the saved file and realized it was a PKZIP archive. That was to be expected, since PKZIP is also the persistence format for Java archives (JAR files).

I unzipped the archive and quickly discerned the various components. Kudos to the OO.o developers for creating a very sensible layout. I looked at the file content.xml, which had the slides in it (as a quick grep with the text on one of the slides revealed). Unlike some XML formats that are binary CDATA stuck into an XML wrapper, this one was very much in the spirit of eXtensible Markup - you could easily see how it worked.

So I wrote a little script that takes a template file and a set of images, and creates page after page of content. Then it saves the presentation, and you get to load it and modify it: adding annotations, removing images you don't like, and finally saving the whole thing again. Then you can take the presentation and show it on its own, or share it using the Impress exporters. There is one for Flash, one for PDF, and one for HTML slideshows - amazing stuff!
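
For the curious, here is a minimal sketch of the approach in Python - not my actual script, and the slide name, marker strings, and file layout are illustrative assumptions about your template:

#!/usr/bin/env python
# Sketch: clone one template slide per image in an Impress document.
# Assumes the template .odp contains a slide named "TEMPLATE" whose
# picture frame points at Pictures/TEMPLATE.jpg, and that the images
# sit in the current directory.
import sys, zipfile

template, out, images = sys.argv[1], sys.argv[2], sys.argv[3:]

with zipfile.ZipFile(template) as zin:
    content = zin.read("content.xml").decode("utf-8")

# Locate the <draw:page> element of the slide we want to clone.
start = content.index('<draw:page draw:name="TEMPLATE"')
end = content.index("</draw:page>", start) + len("</draw:page>")
slide = content[start:end]

# One copy of the slide per image, each pointing at its own picture.
pages = "".join(
    slide.replace("Pictures/TEMPLATE.jpg", "Pictures/" + name)
         .replace('draw:name="TEMPLATE"', 'draw:name="img%d"' % i)
    for i, name in enumerate(images))

# Rewrite the archive: everything except content.xml is copied
# verbatim; then we add the patched content.xml and the images.
with zipfile.ZipFile(template) as zin, zipfile.ZipFile(out, "w") as zout:
    for item in zin.infolist():
        if item.filename != "content.xml":
            zout.writestr(item, zin.read(item.filename))
    zout.writestr("content.xml", content[:start] + pages + content[end:])
    for name in images:
        zout.write(name, "Pictures/" + name)

Open the result in Impress and the tedious part - a hundred slides - is already done; annotations and text tweaks are quick to add by hand.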

The Live CD Web Server Project (Ongoing Series - Part I)

Imagine burning a CD with your latest and greatest in web sites, putting it into the computer's CD drive, turning the machine on, and magically having your web server running. No installation, no configuration, no login - no viruses, no persistent hacking, no version conflicts.

The idea came to me when running a large computer cluster in a three-tier application. The front end web servers were always the worst problem: they handled all the load, they handled the majority of attacks, they were the entry point for vulnerabilities and denial-of-service abuses.

I had tons of problems to deal with, so I never got to realize this idea: take the software we run on the web servers, make it into a CD, tear out the hard drive from the servers, and make the whole thing run from read-only drives. Stuff it and forget it.

I recently started thinking about the project again. I wanted to take an old machine I had sitting at home and turn it into a minimal web server. It had to run LAMP and Joomla, but not allow persistent updates to the database. The project I had in mind sounded like a perfect match, so I started looking into the available distributions.

Turns out nobody seems to have started a project in this direction, despite repeated calls for it. There are tons of Linux distributions on Live CD, but they are all fixed: someone puts content together, and the CD showcases that creator's efforts.

2010-10-20

Creating an Encrypted Subversion Repository on Linux

Why?

I have my source code on a server in the cloud. That makes perfect sense - I want to have my code accessible from everywhere, even if the only person accessing the repository is my own self. Access is secured using SSH with PKI - only whoever has the private key can access the system, no passwords allowed.

While I feel pretty secure about access, it bugs me that the source code is not encrypted at rest. Whoever gains access to a copy of the repository (for instance, from a backup) has the code in cleartext. That's absolutely not good. On the other hand, setting up an encrypted repository is too much of a hassle, and I couldn't find anything online about how to do it.

One rainy day (yes, we have those in Southern California, and we look at them like people in Hawaii look at snow) I decided I had enough of it. I wasn't going to take it anymore. I had to do it.

What Not?

When setting up my encrypted repository, I wanted to avoid the most common mistake: a repository that can be decrypted from the machine itself. You see, the problem with most drive-encryption software is that it stores the key with the hardware. If you do it that way, the encryption is pretty pointless.

You could set up encryption so that only people with login access to the machine (who also know the password) can decrypt the repository. This approach works well for encrypted home directories, but my source code access involves no password.

So, whatever I did, I needed to pass the credentials (or the path to them) with the request itself. The request would provide location and password, and that would be sufficient to unlock the encrypted file.

How?

My ideal scenario was simple: a Truecrypt container on the server with SVN (Subversion) access. I base the whole description on this combination, and the peculiarities of both come into play at several points.

I chose Truecrypt over, say, CryptFS because the container is a single file. It is completely opaque to the intruder, and I can even set it up so that it's not obvious the file is a Truecrypt container at all. (For instance, I could call it "STARWARS.AVI" and make people think it's a bootleg copy of a movie.) With most crypto filesystems, encryption is per file, which means the file names and the existence of individual files (and directories) are visible.

I chose Subversion over, say, git because... well, because my repo is already in Subversion, and because SVN has this neat remote "protocol," which consists of creating a remote, secure connection to the server and executing the commands locally, without a special network protocol involved.

Tricks

The first part is very, very simple: after installing truecrypt and subversion (as well as ssh, which you should already have), you need to create a Truecrypt container. Choose a file container, and make it big enough to fit the largest size your repository will ever grow to; we'll put a Linux (ext3) filesystem on it in a moment.

To create the container, simply type truecrypt -t -c on the server. That will start the interactive dialog that creates the encrypted file. Give it any name (I assume here you called it STARWARS.AVI) and location (doesn't really matter). The defaults are fine for everything; you'll provide the file name and size, and choose none for the filesystem. When it comes to the password, choose something really, really good.

[Note: volume creation on the client has the advantage of being able to use the graphical interface, which helps a ton.]

Since we didn't select a filesystem, we have to create one. To do that, we need to learn a little about truecrypt internals - but we'll use it in a moment for the actual subversion trick, so it's not too bad. Here goes: truecrypt creates a series of "devices" that the system uses to talk (indirectly) to the encryption core. That's done because truecrypt lives entirely in user space, and hence encryption is not available on a kernel level.

The devices are called mappers and reside in /dev/mapper/truecrypt*. To Linux, the devices behave like regular block devices (read: like a hard drive). Once you have a mapping done, you can do anything with it that you would normally do with a drive, including formatting.

To map, you invoke truecrypt with the container name and the no filesystem option:

truecrypt --text --password= STARWARS.AVI --filesystem=none --keyfiles= --protect-hidden=no

(Long options this time to save the pain of explaining.)

Now you should have a mapper mounted - if you type mount, you should see one line that starts with truecrypt. Remember the number after aux_mnt (typically 1 on your first try).

Now we create the filesystem:

mkfs.ext3 /dev/mapper/truecrypt1

(You may have to be root to do that - in which case add "sudo" at the beginning of the line.)
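
To convince yourself the filesystem is really there, you can mount it by hand once (and unmount right away - autofs will take over that job in a moment):

sudo mount /dev/mapper/truecrypt1 /mnt
ls /mnt
sudo umount /mnt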

AutoFS

The "evil" trick that we are going to use next is a dynamic filesystem mounted automatically. AutoFS is a package that allows you to declare that certain directories on your system are special and access to them requires special handling. For instance, I use autoFS to connect to my SSH servers. The directory /ssh on this machine is configured to open an sshfs connection to whichever server I name. So, if I write:

ls /ssh/secure.gawd.at

I get the listing of the home directory on the server secure.gawd.at. (The server doesn't exist, but that's beside the point.)

In this case, we will use what is called an executable map: autoFS invokes a script that you name and takes its output as the configuration options it needs to mount the filesystem. In our case, the script will first open the truecrypt container and then pass the options for the underlying filesystem to autoFS.

Once more: ls /svn/magic-name -> autoFS -> mapper script -> truecrypt mapper -> mount options -> mount

I wrote the script in Tcl, which is still my favorite shell to use. It requires nothing but Tcl and Tcllib, both packages available in pretty much all Linux distributions (although the Debian people, noted bigots, require you to specify a version number). You can download it here.
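
In case the download link ever dies on you, here is roughly what such an executable map does, sketched in Python instead of Tcl. The truecrypt option names and the hard-coded mapper number are assumptions - the real script figures out the right /dev/mapper entry instead of assuming it's the first one:

#!/usr/bin/env python
# Executable autofs map (sketch). autofs invokes this script with the
# requested key - e.g. "repo@secure" - as its first argument.
import subprocess, sys

name, password = sys.argv[1].split("@", 1)   # map name @ password
container = "/etc/auto.enc.d/" + name        # symlink to the container

# Open the truecrypt container without mounting a filesystem; this
# creates the /dev/mapper/truecryptN block device.
subprocess.check_call(
    ["truecrypt", "--text", "--non-interactive",
     "--password=" + password, "--filesystem=none",
     "--keyfiles=", "--protect-hidden=no", container])

# autofs reads the mount options and location from the script's stdout.
print("-fstype=ext3 :/dev/mapper/truecrypt1")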

Copy the script into the /etc directory. If you didn't already, install autofs on your machine. Now edit the /etc/auto.master file and add a line for this new file. Let's call it auto.enc and link it to the /svn directory.
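
The line in auto.master would look something like this (the timeout parameter is explained in the notes at the end):

/svn /etc/auto.enc --timeout=10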

A little magic setup: you have to create the directory (sudo mkdir /svn); then you have to make the map executable (sudo chmod +x /etc/auto.enc); finally, restart autoFS so that it looks at your map.

Now we have to connect containers to map names. To make my life easier, I chose to use a .d directory in /etc. If you create a directory /etc/auto.enc.d (with sudo mkdir /etc/auto.enc.d), then all the symlinks inside it are considered maps. The name of the link is the name of the map, and the location it points to is the file container.

If you want to use the container /enc/data/STARWARS.AVI under the symbolic name repo, then you would do this:

ln -s /enc/data/STARWARS.AVI /etc/auto.enc.d/repo

Grand Finale: Credentials

Now for the big question from the beginning: how does the mapper find the password? The way I solved it was to add the password to the name of the directory. Crazy guy!

If you followed the instructions above, then whenever you access /svn/anything, the map is consulted. The map script looks at the data passed in and looks for an "@" sign, which is considered the separator between map and password. So, if you wanted to access the repository repo with the password "secure," you would type in

ls /svn/repo@secure

The script would mount the mapper and tell autoFS to mount the directory as an ext3 file system.

But, but, but!!! You are passing the credentials in cleartext! That's bound to be terribly bad!!!

Well, yes and no. The transmission between server and client is SSH, so nobody can see the password in the clear. On the server, the password is in the clear, but it is not logged anywhere (unless you tell SVN to log everything). On the other hand, someone who happens to be on the server when a request comes in can also look at the decrypted data, since the volume is mounted for that period of time. So if an attacker can grab the password, the attacker can just as well grab the files it protects.

Epilogue

Let's just assume you got everything working - you should now be able to create a repository. One catch: svnadmin works only on local paths, so you run it on the server over SSH; accessing /svn/repo@secure is what triggers the automount (yourserver is a placeholder, of course):

ssh yourserver svnadmin create /svn/repo@secure

Now you should notice that the file STARWARS.AVI has been modified. If you mount it using truecrypt, you will see a series of files in there - files that SVN will continue using from now on whenever you access the encrypted repository. Hooray!
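
From then on, day-to-day use is plain Subversion, with the password riding along in the path (again, yourserver is a placeholder):

svn checkout svn+ssh://yourserver/svn/repo@secure myproject

The svn+ssh scheme simply runs svnserve over your SSH connection, so there is no extra daemon to set up on the server.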

Notes and Addenda

1. I set the expiration timeout for the directories low, but not incredibly low - at ten seconds. You do that by specifying the "timeout" parameter in the auto.master file. That way, I can do an svn update; svn commit cycle without requiring a second mount. You can play with the parameters yourself.

2. The encryption scheme could easily be improved by using keyfiles instead of passwords. To do so, you would place a keyfile in a remote location (a web server, maybe) and have the script fetch that resource, decrypt it using the password provided, and use the result as the keyfile. The advantage is that an attacker then needs three pieces of information - the truecrypt container, the encrypted keyfile, and the password to the keyfile - to do any damage.

3. Disclaimer: this setup works for me, but that's because I munged around for dozens of hours until I figured out all the options and configuration items necessary. If it doesn't work for you, don't sue me. If it works, but it stops working after a while and your brilliant source code is lost forever, don't sue me. Proceed with caution, always make backups, and never rely on the advice of strangers. Mahalo.

2010-09-25

The YouTube Conspiracy in Abby/CClive

There is a command line utility available on all Ubuntu derivatives called cclive. If you install it (sudo apt-get install cclive), you can give it a YouTube URL and it will download the video it points to. I love using it for backup purposes - I upload a video from the camera, perform all my changes on YouTube, and then cclive the outcome for posterity. Just in case YouTube "forgets" about my latest cam ride.

There is also a GUI for cclive, called Abby. Abby is more than just a frontend for cclive, though; it also helps with pages that have multiple videos on them - playlists or RSS feeds. Abby is hosted on googlecode and written in Qt/C++.

I started surfing, so I decided to scour YouTube for surf instruction videos. There is a set of 12 introductory videos by two Australian pros on there, so I decided to download them to take them to the beach. My N900 gladly displays YouTube videos on its gorgeous screen. Unfortunately, though, the T-Mobile coverage is fairly bad, so a downloaded video was the only real option.

BPM Detection in Linux

If you do a lot of cardio workouts, it is really good to have music that beats to your rhythm. The pulsating sound gives you energy and pace, both excellent ways to make a good workout great and to make time pass faster. When I get new music on my player (a Sansa Clip+ - the almost perfect player for a Sporty Spice), life is good. An hour of running is gone before I even know it, and when I look at the calorie count, I feel like Michael Phelps freshly crowned with Olympic gold.

At first I would stumble across music that matched the pace. I do lots of different kinds of workouts, so certain songs would work with different segments. I have a running pace, a hiking pace, a mountain climbing pace, a cycling pace, a weight lifting pace, a spinning pace, etc. I would get used to certain songs in certain parts, but that would get old, fast.

Then I got used to counting beats. I would look at the big clock in the gym and count beats for 15 seconds. That would give me a general idea of what I could use a song for. My favorite spinning song, for instance, was "Hazel Eyes," so anything that had the same beat count would be a good replacement.

Then I started getting bored with this random approach and realized I had a library of hundreds of CDs ripped onto my computers. I just had to detect the beat automatically and I would be able to simply do a lookup search for a specific BPM count and get all possible results.
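
If you want to try the same thing, one low-tech route is the soundstretch utility from the SoundTouch package - assuming your distribution packages it. It only eats WAV files, so MP3s need decoding first:

mpg123 -w song.wav song.mp3
soundstretch song.wav -bpm

The second command prints a detected BPM for the track; run it in a loop over the library, save the output, and the lookup becomes a simple grep.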

2010-09-24

The Rise and Fall of Internet Browsers

It amazes me how, since their very inception, Internet browsers have been subject to periodic meteoric rises and subsequent falls. They do so a lot more than other pieces of software, like operating systems or word processors. It seems people are much more willing to throw out their browsers than virtually any other kind of software.

It all started with the venerable grandfather of them all, Netscape Navigator. Marc Andreessen, the ur-type of the "smart kid with an idea brighter than even he thinks it is, who goes on to believe he's the smartest person on the planet because he got lucky with his idea," and his team created the software and threw it out into the world. Instant success, huge company, enormous IPO. But a piece of software that was horrible, and got more and more horrible as time wore on.

Netscape was mired in the conflict of the dot-com days: how do you monetize a piece of software without charging the user? It would take almost ten years for Google to show us how; back in the day, it meant shareware. Netscape was selling servers, and the browser was a loss-leader. It got all the attention that a loss-leader gets - it grew more and more bloated, supporting more and more reasons for people to upgrade their servers (and not buy them from anyone else), but in the process it got slower and slower.

Finally, in one of his last acts of Imperial Fiat, Bill Gates decreed that the Internet was not a fad and that Microsoft needed to get in on the action. A few years and a ton of lawsuits later, Netscape was dead (or bought by AOL, which is pretty much the same thing) and Internet Explorer was the only dominant figure in the landscape.

Then IE started showing problems. Not bloat and slowness, although those became more apparent. No, it was security that became the big issue. IE's security model was cooperative and not designed for the abusive exploits of Internet Mafia conglomerates. As a result, surfing certain types of "shady" sites would invariably land your machine in zombie territory, or at least get you a virus infection or two.

When the Mozilla Foundation announced it was working on a brand new browser named Firebird, even the most hopeful were not easily convinced. Navigator was a monster, written by people who needed to get things done no matter how unmanageable the result, and Firebird would have to be a rewrite from scratch to compete.

But it did. Renamed Firefox (sadly), it began a march of conquest that carried it to the top spot in the browser stats. Nowadays, almost half of all Internet users choose Firefox, while IE has only a little more than a third of the market.

Firefox was helped by a series of advantages: it was much faster than IE; it was far more secure than IE; it had an extensive extension system with loads of useful things - useful for users, whom IE had traditionally ignored in favor of usefulness for companies. Only lately has Firefox started to show weakness, and from the most unlikely of sources.

I invested much time in my Firefox setup. I have the extensions I want, synchronized across my two dozen machines (don't ask) using a sync extension. I have Firefox customizations for nearly everything, and I write my own Greasemonkey scripts. Yet, I started using Google's Chrome browser (Chromium on this laptop). Why? Because Chromium uses "one process per tab".

Why does it matter? Why is it so important to me that each tab have its own process? The answer is Flash. You see, Flash is a giant memory leak. Whenever I land on a page that has Flash on it, memory gets allocated (by the Flash plugin) and never released. After a few hours of heavy browsing, my browser slows down to a crawl. Another few hours, and it's completely unusable. After a day, I have to restart it, and the process of freeing up memory may take upwards of 10 minutes.

Flash on Linux, of course, is an afterthought. The way Adobe treats its Linux users, though, shows all the weaknesses of the technology in a merciless way. First, there is the closed nature of Flash: Linux users cannot suggest modifications or fix bugs, as they do with other software, because the plugin is closed.

Then, there is the "one-size-fits-all" approach of the plugin. I find Flash used for controls on web pages (especially ones that require notifications), for e-cards, especially of the inspirational or funny kind, and for online videos. Those three use cases are totally different, and using the same software for each of them is only in the interest of the maker of the software, Adobe, not in the interest of the user.

So, for now, I am forced to leave Firefox through no fault of its own and adopt a different (and very capable) browser, simply because I can't get Flash to work well on FF.

2010-09-14

Solar-friendly Gadgets

Living in San Diego, you spend a lot of time at the beach. There, it's a tragedy to have no outlets to recharge things, while you have a lot of sun available. Ideally, you would use a solar charger to recharge all your devices while they are being used.

There are lots of chargers on the market. Solio makes very popular ones, but they are by far not the only vendor. Basically, all of them in one way or the other allow you to connect a variety of devices to a solar panel that outputs the correct power (voltage).

When buying a charger, you should look mainly for one thing: that the output is USB. Dedicated plugs are nice, but ultimately you'll end up connecting most devices via USB, so something that requires an adapter to get to USB means carrying something extra. (Solio, for instance, mysteriously has output that looks like USB, but isn't: you need an adapter to translate to USB, which is really stupid. I assume it's done to give you something else to lose and buy again.)

The devices, though, are a little more challenging, since there is more than one thing you have to look at.

2010-08-22

A Tale of Broken Packages

Now, you are probably going to think this is about mishaps with a shipping company. Boohoo! My UPS package arrived damaged... Sad Face... None of that: UPS is reliable as ever, as are the other shipping companies (FedEx, DHL, USPS, what have you). No, the packages I am talking about are Linux/Ubuntu packages.

One of the major advantages of using a distro like Ubuntu is that someone else figures out what works with what and makes packages available for you to download and install semi-automatically. It's really easy and a lot more fun than the super-crappy way you install software on Windows. Instead of downloading an installer that tells you to shut down all applications before starting and then goes through a hundred screens of questions that you really don't care about (Where do you want the software installed? Do you want the software to report usage statistics?), in Ubuntu you just say, "I want Amarok," and there it is.

Sure, the whole system could use improvements. For instance, it would be more than just nice to add user ratings to the packages, so that you can see which one of the zillion alternatives is rated best. Also, it would be nice to know the size of the download before you get started. Finally, it would be great if there was a meta-server that lists major available repositories, and you'd just check the box next to the ones you'd like.

What I am going to be complaining about here, though, is more basic. I expect the download of a piece of software to give me a package that (a) doesn't break my system, (b) doesn't cause security problems, and (c) does something useful. While (a) and (b) have not been violated yet, I just got a package that clearly violates (c) big time.

2010-07-16

Rethinking Traditions

A confession: I don't like signing emails. I find it stupid. You know the message is from me; after all, your email client tells you that before you even open it. What's the point of the salutation and signature? What does a Sincerely at the bottom tell you that you didn't already know?

Turns out there was a good reason for the signing. That's from the days of snail mail and before there was such a thing as a typewriter: the presence of a signature was the only certain way to know who wrote you a letter. I remember the days when you'd get one, and you'd turn it over to read who sent it. If I had known I would feel old just for admitting I had ever read a hand-written letter, I would have believed everything science fiction told me.

But now I write emails, and I have gotten used to doing a lot of things that you couldn't really do with paper. For instance, I reply inline - breaking up long messages and replying to a question right beneath it. Or I make creative use of the Subject: header. Or BCC: myself to have a record of sending the message.

More often than not, I won't sign an email. It's not that I forget, and it's not that I am too lazy. It's that I find it a truly pointless activity. I will typically close my message with a friendly greeting to the family or coworkers, or a wish for something fun, but rarely sign. It's a tradition we keep up just because we don't think about it.

2010-07-14

Is USB the New Outlet?

I just got a refurbished Garmin Edge 705 (review to follow) along with a bunch of craptastic free sample gadgets from China. The thing they had in common? Instead of having dedicated chargers, they all came with an AC/DC converter to USB.

It used to be that you were flooded with dozens of different chargers. After a while, you'd forget which charger belonged to which gadget and you had to label them. Every time you'd move, then, you'd have this monstrous mess of chargers that you couldn't get rid of, because you had long forgotten what gadgets they belonged to.

There were craptastic adaptable chargers with dip switches for current and voltage and different tips. You constantly risked destroying your gadget by choosing the wrong voltage, so they were really just an item of last resort. It was a nightmare.

2010-07-09

YHIHF: Do Internet Shopping Companies Ship by Access?

For a few months now, I have noticed something odd going on. Whenever I buy something on certain online shopping sites, nothing happens for a while. Then, when I log on to check what's happening with my order, it mysteriously ships on the same day.

It all started with this deal site. I ordered a refurbished computer, and while the site stated they shipped within two days, after four I had no email confirmation, no tracking number, nothing. When I went to the site, it said it was under construction, and that sent me into Scam Prevention Mode. I sent them an email demanding an update and alerted my credit card company.

Next morning, UPS faithfully delivered my computer. Overnight shipping. At that point, I thought they had just somehow messed up and wanted to make up for it.

Then I started noticing it with other sites. At first, it was an argument with a big online vendor: I checked their Super-Saver Shipping box for my new Acer laptop, and they sat on the order for a week. When I pointed out that their policy entitles them to slow shipping, not to a slow fulfillment process, they gave me drone talk for a while, but then made it happen. (Coincidentally, the laptop ended up arriving the day before I unexpectedly had to fly to Germany for a funeral.)

2010-07-03

Comparing eBook Readers

The local Best Buy has a display with different eBook readers, so I got a chance to hold them all in hand and compare them. Nice way to entice customers, by the way!

Price wars are all the rage right now. The new Nook reader came out, and the price dropped to $149 (no 3G). Amazon followed suit and dropped the price of the Kindle to $189 (with 3G, compared with $199 for the Nook 3G). Sony, interestingly, still sells ebook readers, and those were on display, too.

Admittedly, considering that the device itself is not the main source of revenue for either Barnes & Noble or Amazon, the price of the device is still way too high. At the very least, the companies should offer discounts to buyers of their respective devices to cover the cost, since you are buying something that (still) ties you to a particular vendor.

Of the devices on display, the Nook clearly had the upper hand. The Sony readers were quite nice, but the decision to put the touchscreen on top of the e-ink dims the latter considerably, giving the whole reading area a washed out, grey-ish look. The price is right, though, and Sony e-readers are not affected by the usual Sony price inflation.

2010-06-25

MeeGo to Become Default OS for Nokia Smartphones?

To my complete surprise, I found an article on Slashdot this morning, in which Reuters was quoted as saying that Nokia is going to ditch Symbian on the N series of phones. Instead of the trusted in-house OS, Nokia is moving their flagship product to MeeGo.

For those of you who don't know, MeeGo is the merger of Nokia's very own Maemo project and Intel's Moblin. All three projects have the same aim: to provide a full Linux distribution for mobile devices. Nokia's main aim is the smartphone market, while Intel started out thinking about MIDs and netbooks.

The move on Nokia's part is quite unexpected, since they just recently reiterated that Symbian would always be at the core of their smartphone offering. Symbian is great, don't get me wrong, but I wouldn't want to develop for it. It's getting long in the tooth and requires a lot of work to get used to.

Now, what does that mean in the mobile landscape? Why would Nokia move in this direction? What does that all mean in context?

2010-06-24

Kindle and Calibre

I've been using Calibre on and off to get content to my Kindle, and I have to admit the software is gaining a lot in functionality as it matures. The version label (currently 0.6.42) doesn't do its functionality, stability, and ease of use justice at all, and I highly recommend it to anyone with a Kindle, regardless of use.

What do I do with it? I mainly add all those files I couldn't otherwise read on Kindle. Calibre allows me to easily do the following:
  • Display web pages
  • Get RSS feeds
  • Convert text-based PDF files
  • Show ePub files
  • Access Project Gutenberg ebooks
The flow I have to follow for each task is slightly different, but all in all they are fairly easy to get used to and in some cases downright simple.

To get a web page onto your Kindle, for instance, you just save the page as HTML and convert it by simply adding it as a new ebook to Calibre. You automatically get the best feature of Calibre: once you tell it what the target device is (Kindle, in this case), it will automatically convert to a format (.mobi) that works on that device, and it will format the ebook for the device's properties.
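
The same conversion also works from the command line, by the way: Calibre ships an ebook-convert tool. This is from memory, so double-check the option names:

ebook-convert page.html page.mobi --output-profile kindle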

RSS feeds are handled differently (and rightly so), in that they are scheduled in a separate module. You can add your own RSS feeds, and there are some pre-defined ones. The adding process is not as straightforward as it could be, mostly because RSS feeds are so different - some come with full articles, others with a brief preview, others still with just a link. Once you choose the feeds you are interested in, you decide how often you want them updated, and Calibre will download, convert, and push as often as you'd like.

For PDF and ePub files, the process is the same as for web pages. The main difference here is that some PDF and ePub files are protected. Calibre doesn't know how to deal with that (which is quite to be expected). Otherwise, it does a pretty good job, handling only headers and footers poorly.

Gutenberg is finally smarting up to the Kindle and is starting to publish some of its ebooks in .mobi format. That means you can download them straight to your computer and either add them using Calibre or simply drop them into your Kindle documents folder. Really dead simple.

I would be perfectly happy with Calibre if it added just two features:
  1. Sharing of "recipes." Things like RSS feeds and header/footer detection need advanced settings that are recipes of some form (XPath for header/footer, for instance). It makes very little sense to write a recipe if you are the only one who uses it, and there is no easy way to share them. I know it's a pain to set up a repository for automatic sharing, but it would make life so much easier. The current recipe for PDF header/footer, for instance, detects the headers and footers that web browsers put on printed pages. Admittedly a good default - but what am I going to do with the PDF of a book, with the standard chapter heading and page number as h/f?
  2. Direct download of URLs. It annoys me that I have to download a file from the web - be it HTML or .mobi - to add it to my list of books. I guess direct integration with ebook providers (for free or for pay) would be really welcome, as would an option to add a book from URL.
I'll keep you posted. You should certainly try out the program, though, if you like your Kindle for more than reading Amazon offerings.

2010-06-18

Quick Greasemonkey Script for Kelley Blue Book

One of the most frustrating user interfaces on the planet is that of Kelley Blue Book. It just seems to be designed around the idea of making you click as many times as possible to get to the information you need, instead of around the notion of quick access. Is that because KBB wants to maximize ad impressions?

Well, it really doesn't matter. There is only a certain number of clicks I will endure before I start typing URLs manually into the browser, and KBB got me there. In frustration.

How does the navigation work? Well, it's essentially tree navigation. You choose first whether you want trade-in or retail values. Then you select the model year, then the manufacturer, and finally the model (and later on, options). That's great if you want to look up the values for a particular car, as for instance if you intend to sell yours.

Now, imagine you are looking to buy a particular type of car - say an AWD sedan. You settled on either an Audi A4 or A6, or a Subaru (any of many models). Your budget is constrained (yeah, well) but you are less interested in the car's age. I would think that's the more "normal" case.

So, now you are armed with a list of cars (from Craigslist, cars.com, etc.) and you need to look up their values. Say you found two Foresters, one from 2004, one from 2005. You found an A4 from 2008 and an A6 from 2003. You got an Outback from 2001. They are all nominally in your price range.

So, off you go to KBB. Logically, you'd want to focus on the two manufacturers or six models you are considering, but in KBB, you first have to hand in the year. After you choose the year, you can drill down to the car you are considering. When you look up the next Forester, no help: you have to go all the way back to the top (the year) and drill down again.

Of course, if you look at the URL, you notice that it specifies year, then manufacturer, then model. If you just change the year in the URL, then you find what you want. Huh? You don't have to go all the way to the top?

Now, why are they navigating this way? It makes no sense from a user's perspective: when I say I want to look up the value of a 2006 Subaru Outback, the next logical thing to look up is a 2005 Subaru Outback, not a 2006 Lamborghini Countach. But the deal is that the navigation is organized around availability: the year constrains the possible manufacturers (not much), and then the models available.

That means that the programmer found out that each model is available only in certain years, and hence built the navigation around that fact.

The rest of the world thinks that's stupid.

Hence, the need for a very simple Greasemonkey script that adds the missing navigation. You get to the page you requested (e.g. for the 2006 Subaru Outback), and the links to year, manufacturer, and model (and trim, where available) are augmented by drop-down boxes that allow you to choose alternatives. You can click on the year drop-down and select a different year for the model listed. Or you could click on the model and see an alternative - all other things equal.

Let me know if you are interested in the script. It's not packaged for public consumption yet, but if you want it, you can have it!

2010-06-11

Compact Flash - Long Obsolete?

A few years back, I thought I'd revive my days of taking pictures and bought myself a digital SLR. Back then, they were not all the rage, mostly because they were so incredibly expensive and the pictures they shot were not much better than the point-and-shooters'. Seriously, if it hadn't been that I was already used to SLRs and that I knew how powerful the lenses are, I would have done without.

I bought a Canon EOS Digital Rebel XT. I am not going to bore you with the specs, since every camera you get for free with a two-pack of ink cartridges is better these days, but it shot passable pictures (not particularly good ones, though) and I thought getting the tele lens out for the AIDS ride might be a good idea. You know, shoot the incoming finishers when they are not two inches away, that kind of thing.

Well, turns out the camera had a stinky 2G card. Back in the day, I am sure, it cost a fortune. I may have chosen it because it was the best bargain, but I could have probably bought two ink cartridges for the price of that one... (Sorry if you didn't follow the sarcasm.) I thought, well, you can get 8G cards for $20, I'll just head out to my fave store, The Shack (or whatever they call themselves today), and get myself a Terabyte of them.

I got there and looked at the "options." There was one card available, a 4G card for $30. It seemed like a seriously bad deal. I actually halfway thought CF cards had gone out of style and that an SD-to-CF adapter might be the way to go. They didn't have any of that.

Instead, I decided to go to my least fave store, Best Buy. They opened a new store just across the street, and between the Shack a few blocks East and this new Best Buy, I am not sure how long that little store will survive. In any case, I get into the BB and manage to dodge the first wave of drones that come to "help" me (i.e. upsell me to whatever they need to throw out today).

When I stupidly think that the memory cards are going to be with the portable hard drives, a drone attacks. He's actually kinda nice and not as salesly as I am used to from the store on Harrison in San Francisco. He doesn't know of CF cards, though, hasn't seen them. He is certain, though, they don't carry an adapter. He runs back to the computer and just in time for me to ponder whether I should leave, he comes back with news: they are in store. The cards, not the adapter.

We walk to the other wall of the digital camera section and see the CF cards. There are three capacities: outrageously expensive 4GB, outrageously expensive 8GB, and outrageously expensive 16GB. I mean, the prices were laughable - the 16GB ran over $250! For that price, I can easily buy a 2TB portable drive!

Of course, they were supposed to be ultra-fast. But my camera is ultra-slow, so it doesn't really matter. I ended up picking the 4GB model, praying that everything would work out fine. Then I walked to the digital SLR section and looked. All of them have switched to SD now.

How quickly whole standards become obsolete!

2010-06-08

My Ideal Phone

I am OK with my N900, but not really happy. The software is still buggy, the thing is too heavy, and the slider keyboard is useless. I do like certain things about it, though, and that got me thinking: what would a perfect phone look like?

Form Factor

I find that the bigger the screen, the more I like a phone. Actually, it's mostly the resolution that I like - I couldn't live with the crappy resolution of the 3G iPhone, and the resolution of current Android devices (and of the N900) is OK. Even better, though, would be a phone with a screen that folds in the middle. A little like the current crop of eReaders, only that those tend to have two different screens (one eInk, the other LCD-type). You either flip the phone open (screens protected inside) or the screens are on the outside (one turned off until you flip the phone flat). That way you get twice the screen real estate at half the carrying size. Gotta love that!

Battery Life

Clearly the worst gripe about smart phones is their outlandishly crappy battery life. The N900 fits right in with the rest and doesn't quite manage a full work day without a recharge, but that's absolutely unacceptable in a device you use for mobile communication. Clearly, we need phones that can last a full 24 hours at average use, which includes constant texting and at least two hours of talk time.

Or....

At the very least we need an easier way to charge our phones. I am not talking wireless charging, necessarily, but at the very very least an end to the endless number of plugs and cables. Stop the madness, standardize on one single power input, and make it possible for us to have power outlets into which you can directly plug a phone.

Software

The current crop of phone operating systems isn't quite there, yet. The problems vary, but it's mostly a combination of three factors:
  1. the OS vendor tries a lock-in strategy
  2. the relationship between phone and desktop OS is not clearly outlined
  3. the limitations of the tool chains make it hard to develop
iPhone suffers from all three. Windows Mobile is too much desktop, not enough phone. Android chose to go the Java route, which is good for portability but bad for developers, who have to buy into the paradigm.

The N900, on the other hand, would have the winning combination. You can simply take software written for the desktop and recompile it for the phone - make your changes to the user interface, and you are done. That gives you enormous leverage: you can write the backend code once and you know it will work (as is the case with the standard DB format on it, SQLite), you can reuse text mode utilities if you just recompile them, you can stick with whatever expertise you have already.

The problem with the N900 is one of implementation. It's the winning idea, but it isn't well done. For instance, it's a serious bitch to get the development environment going - you have to wade through a ton of web pages that are partially outdated, and even the virtual image offered is not kept current. Things that should be easy (e.g., getting a current re-build of a text mode app for Maemo) are hard. Why not have a current list of Linux source packages in Debian and build them on Maemo by default?

I think Maemo has the same problem that KDE has run into of late: great idea, but not enough people working on it, and those who are are too ambitious about their own agendas.

Connectivity

People of the world, unite! Demand that your smart phone be what you expect it to be: your computer when you don't want to carry a computer. How come so few smart phones have USB hosts? I want to be able to plug a printer, a scanner, a card reader into my phone. I want to be able to attach my phone to a TV via HDMI. I want my phone to have a standard interface for extensions, so that I can easily plug in a GPS module, or a better camera, or a credit card scanner, or a heart rate monitor.

Manufacturers are making a huge mistake by ignoring connectivity options. Of course, I understand where that comes from: any new connector means thousands of support calls, every open connectivity channel means compatibility issues left and right. Whoever starts the movement is going to have to deal with a host of issues.

But you know what: it's absolutely worth it. Remember when USB came out? It just didn't work. You needed drivers for everything, half the USB devices had drivers so poorly written they'd crash your computer, and they were mutually incompatible half the time. It was a nightmare. But now, is there anything easier than USB plug & play?

That's probably the reason the first connectivity option I mentioned is USB. Just add USB host, and we'll figure out a way to get modules for our phone that can connect, and software that will work with it.

End App-centricity

One of my worst gripes about the N900 is that the software isn't standardized. There are some apps that use Berkeley DB, most others use SQLite, and a third set uses incompatible formats. Some things are done well - for instance, the Conversations widget handles both SMS and IM, and the Phone widget does both Skype and GSM well. But there is no overriding architecture for that.

The idea should be that all data are stored in a central repository, and the apps should have access to it. Apps shouldn't even be allowed to store anything in a format that is not the central repository's, especially if it is user-dependent and fast-changing (like messages of any kind).

Examples? The email app stinks big time. That's mostly because it has no unified inbox, so that every GMail account I have requires a bunch of clicks to get to. It also does late binding, which means it will tell you there is a new message, but when you want to see it, you have to first load the current server view. Most importantly, though, you cannot combine things that have the same frequency as email, like RSS feeds, into your email view.

The idea is the life stream - a single point of entry that combines all the data you want into a single view. There you can filter the types as you wish (if you wish). Things are arranged chronologically and updated automatically. They scroll on their own. There are sources of data. There is a view. You can interact with the data the way you see fit.

But don't force me to switch apps just because I want to see more data. Even in a phone that does multitasking well, it's a pain.
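
To make the central-repository idea a bit more concrete, here is a toy sketch in Python on top of SQLite (the standard DB format mentioned above). Everything about it - table name, columns, the whole schema - is hypothetical:

import sqlite3

db = sqlite3.connect("lifestream.db")
db.execute("""CREATE TABLE IF NOT EXISTS stream (
    id        INTEGER PRIMARY KEY,
    source    TEXT NOT NULL,     -- 'sms', 'email', 'im', 'rss', ...
    sender    TEXT,
    timestamp INTEGER NOT NULL,  -- unix time, for chronological views
    payload   TEXT               -- the message body itself
)""")

# Every app writes into the same table...
db.execute("INSERT INTO stream (source, sender, timestamp, payload) "
           "VALUES (?, ?, strftime('%s','now'), ?)",
           ("rss", "example.com/feed", "New post: ..."))
db.commit()

# ...and any view is just a query - here, the last 24 hours, newest first.
for row in db.execute("SELECT source, sender, payload FROM stream "
                      "WHERE timestamp > strftime('%s','now') - 86400 "
                      "ORDER BY timestamp DESC"):
    print(row)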

2010-06-07

MMS on Nokia N900/T-Mobile

OK, this is the latest on N900 and T-Mobile MMS. It works, but you have to figure out a few things. To save you the time and aggravation, here is my recipe:
  1. Download and install (via Application Manager) fMMS.
  2. In fMMS, go to Settings -> Internet Connection Settings
  3. Get the current settings from this page (they worked for me)
  4. You are ready to go
The settings (in the format used by fMMS), in case you can't reach the page or whatever, are summarized below:
  • Access point name: wap.voicestream.com
  • MMSC: http://mms.msg.eng.t-mobile.com/mms/wapenc
  • User name: 
  • Password:
  • HTTP proxy: 216.155.165.50
User name and password are blank. Let me know if it doesn't work for you!

[Oh, and by the way - the settings will work only if you are on the T-Mobile Internet connection. WiFi won't work.]

2010-06-03

Mashups, Lady GaGa, and the Rebirth of Pop

A slightly unusual post today. I'll talk for a change about music, even if with a firm rooting in technology.

As someone who lived through the 80s, I can say that whatever you may think of politics, economics, fashion, or architecture, you'll have to admit that music in that decade was exciting. Sure, the 60s had the Beatles, the Rolling Stones, etc. - but the 80s had by far the best pop music. In addition, they started an explosion of musical movements and genres (rap, punk, hip hop, etc.) that came to fruition (and popularity) a lot later.

There were, by my reckoning, two main reasons for this explosion. For one, the CD came on the market and with it a completely new ball game for music distribution. CDs were incredibly cheap to mass-produce, incredibly accurate in their rendition, and the players were all essentially equivalent. Everybody could listen to high quality music from a cheap player, and CDs could be sold at extremely low cost.

The other reason is the advent of PC technology. Married with the MIDI interface for musical instruments, it put music in everyone's reach. A two-man band could perform music that until then had required a whole band, and professional recording was increasingly affordable. The democratization of music had begun, and the fruits were clearly visible.

The 90s brought a double shock. For one, there was enormous consolidation in the music industry, with smaller labels being gobbled up by larger ones, and these in turn by even larger ones. The end result was a choking off of the creativity and inventiveness of small labels in favor of big names. Even those big names, though, were choked off, as the story of Prince shows quite clearly.

The second component, in a wonderful symmetry to the perfect duo of the 80s, was technological. In the 90s, technology to copy CDs became more and more widely available, and the long decline in sales of music began.

The new Millennium brought an intensification of the trend. For one, there was the Internet: the perfect medium for the transmission of digital files. Napster started something that became clearly unstoppable, while the music labels continued their consolidation, preferring the predictable over a search for quality.

You see, music has had a trend toward simplification. Pop music in particular had started a long progression towards less and less content in music. You had an idea - a new melody, beat, or harmony - and you'd extend it for four minutes. That was fine and in the trend of the time - the mixing shops would take your song and extend it from four to eight or even twelve minutes.

Going dancing soon became an exercise in boredom, at least as far as the music was concerned. You'd listen to something that would go on forever, pulsating its way into your brain until you zoned out. Music became the background, not the reason for dancing any more.

In doing so, music bucked a trend towards increasing complexity and information density that the Internet has brought on. Consider the new trend in editing: take a scene with low information content (such as a person moving on a straight trajectory) and increase the speed in editing, acting as if the viewer had used the fast forward button. That's the trend: make information more compact, give me a constant stream of information, don't bore me. We have become very efficient information processors, and the Internet has become the main source for our constant desire for information - and fuels increased ability to process information.

Music couldn't skip the trend forever. The information density in music had to increase again, making music richer than at any time in the past. We wouldn't accept a single musical idea spread over four minutes much longer; things had to change.

The first ones to do something about it were DJs. They took matters into their own hands and did something innovative: combine different songs into one. For it to work, the songs had to have similar beats and compatible chord progressions - and lo and behold, the lack of imagination in music made that all possible!

It so came to pass that mashups became all the rage. You can hear all sorts of them, for instance by typing "mashup" into the YouTube search bar. Some are combinations of current pop chart songs; some combine those with rock and roll classics. An astonishing number of them mix Eurythmics' Sweet Dreams (Are Made of This) with whatever flavor of the day is available.

A second degree of mashup is the multi-mash. DJ Earworm is an outstanding example of this kind of work - since 2007 he has produced a mashup of the 25 most popular songs of each year (amongst other work). These mashups are staggering in quality and complexity, and they succeed in creating a new genre of music, one in which other people's songs are the instruments used to create a new song. (Oddly, that's basically the idea Edgard Varèse had 50 years ago. His music, though, sounds really alien and frightening.)

Enter Lady GaGa. Those who know her point to her whole persona as the reason for her success: the outfits, the political opinions, the collaborations with other artists. That's all very true, but without innovative music, Lady GaGa would not succeed.

Listening to her most popular songs, one notices that they are more complex than typical pop songs. Instead of the simple ABA or ABABA structure, in which chorus and verse alternate, her songs have three, four, five key musical ideas that are followed around and alternated. Bad Romance, the crazed sinfonietta whose video became the most watched in YouTube history, is particularly rich in unrelated themes. Lady GaGa is also perfectly able to have those themes play off each other, adding a mashup element to the dramatic complexity of her music.

You may like the Lady or not, but you'll find out that soon everybody is going to write songs like hers - or even more complex. And we can get back to the musical explosion of the 80s.

2010-05-28

Roku Player

I was getting more and more frustrated with my Netflix account. The company was going more and more in the direction of online streaming, but you had to use Internet Explorer to watch online movies and shows.

I had known for a long time that Netflix was going to push online streaming. After all, the whole business model of mailing DVDs had to be hugely expensive and incredibly inefficient: you'd spend a vast portion of your revenue simply on postage, no matter what wonderful deal they had with the USPS.

But Internet Explorer only? That surely couldn't be, especially considering how poorly IE fares these days on the Internet. Turns out the reason is that Netflix chose to use whatever DRM Microsoft is pushing right now, which already tells you two things:
  1. Netflix is not serious about browser streaming
  2. We have to expect the usual fall-out of digital restrictions
On the browser streaming front, the obvious is happening: Netflix is pushing direct streaming to devices that are connected to TV sets anyway. That's mostly your game consoles, and if you go to the Netflix site, you see all the major platforms merrily represented: the PS3, the Wii, the Xbox. Soon, you'll be able to stream your Netflix onto your Nintendo DS! (Just kidding - for the record.)

An interesting set of devices, though, are those that are specifically there just for streaming of video content. Of all the ones available, the Roku player looked best, so I got one. I went for the medium version, since the cheap one doesn't do HD and the only advantage of the more expensive one is the latest wireless standard. I suggest you stick with wired connections, though, since wireless is not as smooth in my experience. (The prices are eminently reasonable for all three, ranging from $79 to $129 retail.)

The guys at Roku did an amazing job. The unit is dead simple to set up, it's noiseless, and fairly small - about the size of a fat CD case. You plug it into the network, the power, and the TV set (cables provided, except for the network) and it starts configuring itself. You can choose from a series of channels - paid and free - and start linking all sorts of accounts.

You can actually see how much thought went into usability when you start looking at linking. As soon as Roku leaves off and a partner steps in, usability drops a few notches. In essence, you go to a specific URL, fetch a code, and type that code either into your computer (if you started the linking from the Roku) or into the Roku unit using an on-screen keyboard.
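
For the curious, the generic pattern looks something like the sketch below - a Python illustration of the flow described above, not Roku's actual API (all names here are hypothetical):

    # Sketch of the device-linking flow (hypothetical names, not Roku's
    # actual API): the box shows a short code, you enter it on a website
    # from your computer, and the box polls until the account is linked.
    import secrets
    import time

    def generate_link_code(length=5):
        # Unambiguous characters only, so the code is easy to read off a TV
        alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def wait_for_link(is_linked, code, interval=5.0, timeout=300.0):
        """Poll the partner service until the user has entered the code."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if is_linked(code):  # one HTTP round-trip in a real system
                return True
            time.sleep(interval)
        return False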

Frankly, since there aren't that many interesting channels, the complexity of linking is not a big deal, currently. Surely, though, if you had a lot of channels to choose from (and hence to try out), a little more standardization would be necessary. In particular, having to switch from Roku to computer and back all the time is a real pain.

Positives 

Let's start with the good things, since there are a lot more of them and, on balance, they carry more weight.

The picture quality is amazing. HD shows on the Roku download reliably and without artifacts (most of the time, see below) on a cable Internet connection. The images are vivid, and switching from SD to HD on the Roku is painless and free (as in beer).

The choice of movies and shows on Netflix is great. Not everything is available, but a lot is, and more is added all the time. If you have a Netflix account anyway, you save a lot of money: you can pretty much do without the multi-disc subscription and switch to the cheapest plan they have.

The software is very stable, reliable, predictable, and easy to use. There is one way to get to the menu, there is one way to switch things around, there is one way to navigate. Dead simple and consistent.

Despite the youth of the platform, the selection of interesting and innovative channels is amazing. You can view your Facebook photos (though not videos, for some reason), and you can watch selected videos from sharing sites.

Negatives

I admit I am entirely hooked on the idea and implementation of the Roku player, and most of my negative observations are either comments on the industry or suggestions for improvement.

On the former, DRM is clearly an issue for the service. It starts with the limited selection of movies and shows - something all Netflix users have gotten used to and which we blame on the industry, not the provider. There have been reports of Netflix not getting the latest movies: released to DVD, but not provided to Netflix.

Of the movies available, you will get only one version. The extras on the DVDs are missing, presumably for the same reason. There are also no subtitles, which I find extremely annoying.

Something that is a combination of poor implementation and DRM is the fast forward and rewind feature. It's just not useful. What it does is present you with a series of snapshots taken at regular intervals in the stream, and you skip to the one you want. Sounds acceptable, but it means you never enter the stream at a precise location. Additionally, it takes a while to rebuild the stream once you skip - in the end, in most cases you are better off not skipping and watching through, which for me takes away half the fun of watching a movie.
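
To make the imprecision concrete, here is what snapshot-based seeking amounts to, in a few lines of Python (the interval is my assumption, not a documented Roku value):

    # Why snapshot seeking is imprecise: you can only land on fixed
    # snapshot boundaries, never on the exact second you asked for.
    SNAPSHOT_INTERVAL = 10  # seconds between preview snapshots (assumed)

    def entry_point(requested_seconds):
        """Snap a requested position to the nearest snapshot boundary."""
        return round(requested_seconds / SNAPSHOT_INTERVAL) * SNAPSHOT_INTERVAL

    # Asking for 2:37 into the movie actually drops you at 2:40 - and then
    # the player still has to rebuild the stream from there.
    assert entry_point(157) == 160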

Seriously - who doesn't want to skip through some of the scenes in most movies? You know, the endless car chase, the bar brawl, the endless kiss? The scene where the protagonist walks through the hulking bowels of a spaceship to build tension, boring you to tears because you've seen it a million times before? I think the most important invention of the 20th century is the fast forward button, and there is not an old movie I like to watch without it by my side. Seeing scenes at 2x and 4x speed is hugely beneficial.

Rant aside, you can't do that with the Roku. First of all, there is no 2x and 4x - just skipping through frames. There are no scene points, either, as we have gotten used to on DVDs: the player skips right into the middle of a scene, and to rewind to the beginning you have to move back, which means rebuilding the stream.

Now, rebuilding the stream takes a while - somewhere between 10 and 60 seconds. So it's nothing like flipping between web pages.

On the user interface front, there are several aspects of the player that could use some love. I mentioned the inconsistent and cumbersome way linking is handled. But that's understandable at this point.

Some things could be improved on the player itself, though. For instance, you cannot use the forward/back buttons to move to a different episode in a series; you have to go through a menu item ("Select different episode" or some such). That's annoying, because you end up doing it every time you want to skip the end credits and move on to the next episode.

The lack of subtitles, something I already mentioned, is particularly painful for people who are hard of hearing or who aren't familiar with the show's language. Since subtitles really make no difference at all to Netflix or Roku, I hope the feature is going to be added soon.

Recap

If you have a Netflix subscription, no current game console, and no immediate need for subtitles, run and get yourself a Roku player! It's an amazing little gadget, and the savings in subscription cost alone will make it pay for itself within a year. If I had three thumbs, the little Roku would get all three of them up.
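
For the skeptics, a back-of-the-envelope payback calculation in Python. The plan prices are from memory and merely illustrative:

    # Payback estimate: drop from a multi-disc plan to the cheapest plan
    # and let the monthly savings amortize the player. Prices are assumed.
    multi_disc_plan = 16.99  # assumed monthly cost of a 3-discs-out plan
    cheapest_plan = 8.99     # assumed monthly cost of the 1-disc plan
    roku_price = 99.99       # the mid-range player

    months_to_payoff = roku_price / (multi_disc_plan - cheapest_plan)
    print(round(months_to_payoff, 1))  # about 12.5 months - roughly a year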

2010-05-19

Rockbox on the SanDisk Sansa Clip v1

I've been fascinated by the Rockbox project for a while now. It is an alternative firmware (an OS lite, if you will) for music players, with a bunch of extra features. It is being ported to a range of different devices, and it is becoming the Linux, of sorts, of MP3 players. (Soon to be replaced by actual Linux, one presumes...)

The first device for which I had a Rockbox port was an old iPod, the kind we would call "classic" these days. I loved it, since it gave me the freedom to play games, play OGG Vorbis files, and fiddle with the interface - all things that the original iPod stubbornly refused to do. The installation process was painful, but it was worth it. It was a tinkerer's dream.

I installed it on a bunch of other players after that, but I never quite bothered following through on any of them. There was the Nano that disappeared in a motorcycle mishap (the zipper of the backpack flew open). There was the e280r that refused to play nice because of DRM.

Now it's time for the Sansa Clip. In case you don't know, the Clip is AnythingButIpod's best music player of the year, and I have to admit, it's pretty sweet. The sound is amazing, the resilience outstanding, the cost ridiculously low, it connects to standard mini-USB, and best of all it plays OGG files.

Now, you are probably tilting your head at the mention of OGG, but there are big advantages to OGG files - mostly that, since the reference implementation is open, players read all sorts of OGG files well. Most of my players, on the other hand, have problems with at least some MP3 files. They'll read the metadata fine, but then die on playback. Or they will play for a while, but then die at a particular file location. Or they don't like a particular bitrate/sample rate/whatever combination. It's all very frustrating.
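
If you want to see exactly which combination a player chokes on, a few lines of Python with the mutagen tagging library (my choice here; any tag reader would do) print the relevant parameters:

    # Print the parameters that tend to trip up picky player firmware.
    # Requires the mutagen library (python-mutagen on Debian/Ubuntu).
    from mutagen import File  # autodetects MP3, OGG Vorbis, FLAC, etc.

    def describe(path):
        info = File(path).info
        print(path,
              getattr(info, "bitrate", "?"), "bps,",
              getattr(info, "sample_rate", "?"), "Hz")

    describe("song.mp3")  # hypothetical file name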

So I decided to give Rockbox a try on the Clip. The first thing you notice: it's just like on the big iPod, except you can't play videos. (Well, it would have been a little weird if that had worked - like getting a banjo to play a Beethoven symphony.) There are some strange key mappings (so far, OFF seems to be the equivalent of Back; huh?), but all in all the functionality is great. Best thing: you can switch between the original and the new firmware by simply rebooting!

R.I.P. Kindle

The disturbing trend of electronic gadgets dying ever earlier in their life span continues. My Kindle gave up yesterday. I turned it on, and the e-ink display was broken, showing weird streaks in the top right portion. It looks like blunt-force damage, but the reader was never exposed to any force.

Well, I was kinda happy with my Kindle - loving the wireless connectivity, long battery life, and clarity of the display. I didn't like a great many things, either: the paucity of supported file formats (no native PDF?), the one-shop-only policy, the DRM (including remote deletion), the lack of apps, and the unusually unfortunate hardware choices (no touchscreen, terrible keyboard).

Time for a reality check. Let's see how Amazon handles a customer whose beloved Kindle is dead. I bought it on September 4th, 2009 and it was shipped on the 7th. According to their customer support, the device is good for warranty repair until October 2010. There are companies (like iRiver) from which I won't buy products anymore because of the terrible customer service when I needed a repair. Let's see how Amazon handles this.

Update #1: Got a call from Amazon, they are going to ship a replacement within the day. PERFECT customer service! So far, 10/10 stars.

Update #2: Replacement shipped, sent the defective unit with the shipping label provided, and everything is good. Loading books on the Kindle is a snap, and I am absolutely, perfectly happy with Amazon customer service this time around!

2010-04-22

Developing for Maemo - Fremantle VirtualBox Image

So I've been playing around with the Fremantle VirtualBox image that Maemo.org provides on their web site. The idea is great: you get the image, run it in a virtualization environment, and you don't have to set up anything.

Great idea, poor implementation. First of all, the image doesn't come pre-loaded with instructions. I would have expected a big icon to show on the desktop after launch, telling me what to do next. Instead, there were icons for the VM native software, as well as for the development environment.

Next, I start the environment. It's the standard Eclipse, optimized and configured for Maemo. Nice. I decide to create a test project and, to keep things simple, make it a Python project. Easy enough.

Give it some default values and you are ready to go. We are going to create a PyQt4 project that displays a Hello, World! app. My goodness, could it be any simpler?
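
For reference, the entire app the wizard is supposed to generate is on the order of these few lines - my own minimal sketch, not necessarily the wizard's exact output:

    # A minimal PyQt4 "Hello, World!" - roughly what the project wizard
    # should produce for a Fremantle Python project.
    import sys
    from PyQt4 import QtGui

    app = QtGui.QApplication(sys.argv)
    label = QtGui.QLabel("Hello, World!")
    label.show()
    sys.exit(app.exec_())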

You get a wizard that asks you a bunch of questions. Nicely done. Then it tells you it is going to create the project. It merrily starts doing stuff, a progress bar bouncing left and right. Half an hour later, the bouncy bar is still merrily swinging, and I start getting the impression something is wrong. I try to cancel the wizard, but it won't die. I open a terminal and kill the thing.

I start looking at the reason for the issue. I try to isolate the problem, but I can't seem to make anything work. All other project types have the same issue. The wizards just hang. WTF?

I read and read and read, and finally, amongst the myriad pages on the subject, I find one that says the scratchbox environment isn't initialized properly and that resolv.conf needs to be updated before starting. Now, maybe you know what scratchbox is, maybe you don't. Maybe you know what resolv.conf does, maybe you don't. I knew both things, and I also knew they would cause exactly the behavior I saw, so I manually ran the update, and lo and behold, the project was created.
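
For anyone hitting the same wall: the fix boils down to copying the host's DNS configuration into the scratchbox tree before launching the IDE. A minimal sketch, assuming the default /scratchbox install path:

    # Copy the host's DNS configuration into the scratchbox chroot so the
    # wizard's network operations can resolve names. Default install path
    # assumed; run as root inside the VM before starting Eclipse.
    import shutil

    shutil.copy("/etc/resolv.conf", "/scratchbox/etc/resolv.conf")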

The next thing I notice is that the image is ancient. It is running Intrepid, two Ubuntu releases behind. That wouldn't be much of a problem. It also has targets for the Diablo release, the old version of Maemo that runs on the N810s. That's worse. Finally, the Qt libraries are not installed on the Fremantle targets, and the repository keys are not set up in apt.

Now, I know how to fix all of that after working with Linux for almost two decades, and I'll address it in time. But the real question is: why isn't that already in the image? Why would anyone have to spend a day configuring something that really should run out of the box, and that would have taken only hours to configure correctly? Why do I have to download 1.5G of image just to get something that isn't configured properly?

Methinks this project is not going to go anywhere if nobody starts paying more attention.

2010-04-20

The Suicide of the Kindle

Who remembers the time when Amazon deleted a bunch of books from Kindles? It was the dark ages of e-ink: Amazon was the undisputed master of ebooks, and its marketplace was teeming with "publishers" that sold books in the public domain. One of those publishers offered a book whose copyright had not expired. Understandable, given the absurd length of copyright extensions.

In any case, what Amazon did next was completely horrifying to any book lover. They used the Whispernet connection through which the Kindle communicates with the "mothership" and instructed Kindles holding this particular copy of the book to delete it. There you are, reading your purchased copy of Orwell's 1984, and the next morning, it's gone.

Now, electronic gadgets usually start as analogies to analog ones. The Kindle is the analog of the book. The idea is to replace hundreds of books on your shelves with a single volume in your lap. Advantages: it's lighter than many books, and when you are bored, you can get a new book without going to the book store. Disadvantages: you need to charge it, it is much more expensive, and - apparently - Amazon can delete books as it sees fit.

To a non-book lover, the problem will seem minor. To a book lover, though, the idea of someone reaching into my bookcase and stealing one of my books is traumatic. So Amazon got off to a really, really bad start with this event. It will have to convince me to buy its books despite the risk that it might decide to delete them.


2010-04-03

Imagine Roads Like WiFi Networks

I finally made the switch from Verizon to T-Mobile. I had been tempted for a long time, had gotten the final push last year, and finally I found the phone and opportunity that I wanted. I had heard all sorts of horror stories about T-Mobile, but with the amount of talking on the phone I actually do, it's perfectly fine if they work only 30% of the time. [Note: they have been extremely reliable so far, and none of the horror scenarios I heard about actually came to pass.]

That got me thinking about the Verizon ads and how hard their marketing department pushes the fact that they have the best network. I have to concede that point: Verizon has always been good to me from a coverage perspective, and only in Hawaii would calls get dropped frequently.

Then I wondered: why isn't building the wireless infrastructure a task for society to undertake? Why do we have every company that wants to provide wireless service start from scratch, building a whole network that covers the entire nation?

The situation is similar to that in railroads: the government didn't build them, it left the task to the Stanfords and J.P. Morgans. It gave each railroad company a monopoly over a specific route and let the company build the railroad and set prices for transportation.

When paved roads for cars came along, the government chose a different route: roads were built by government entities and were free. Of course, this difference over time destroyed railroads. In return, it gave us splendid roads, highways, and freeways.

In wireless networks, the situation is slightly different again: the government monopoly is not for a location or route, but for a band of the wireless spectrum. This means that we have many networks in the same location. To make things completely annoying, most networks are perfectly incompatible with each other. There are two major technologies (GSM and CDMA), different bands of the spectrum, and even when the technologies are compatible, the providers do their best to prevent interoperability.

Let's see how that translates to the road vs. railroad analogy.

If roads were like the wireless infrastructure, we'd have:
  • Several parallel roads built side by side where most cars drive; where few cars want to drive, there would be no roads
  • Subscriptions to a particular type of road. If your type of road does not exist in a particular location, you wouldn't be allowed (or able) to use a different carrier's road
  • Flashy cars that work only on one particular kind of road. They could work on a different type, but the manufacturer has a deal with the road carrier to prevent that from happening
  • As long as you drive your car or leave it parked, everything is fine. As soon as you want to use it to haul a trailer, you have to pay whatever price your company demands, regardless of the fact it didn't tell you how much it would cost ahead of time (SMS and Internet pricing)
  • You'd only be able to buy a car the road carrier approves of. Any road carrier can declare a car unfit for its road network
Of course, roads don't work that way. You can take any vehicle on a public road, with certain limitations, because every vehicle produced meets the requirements of the road. You don't see parallel roads going from one place to another - if the traffic is bad enough, you see one wider road.

So, why did we switch from a private enterprise system to a public infrastructure system, only to switch back to the private enterprise system? Why private railroads (a failure), then public roads (a success), then private wireless infrastructure?

Imagine a different setup: a public organization is responsible for building the wireless infrastructure. Instead of dealing directly with 300 million customers, it wholesales the infrastructure to private companies that offer whatever plans they like. You can get the "traditional" 2-year deal with a free phone from one company, and an a-la-carte system from another.

How is that better? For one, the total cost of setting up wireless infrastructure is much higher in the current system: we have the same place covered by multiple networks, and that's invariably more expensive than a single network.

Then, since there are more resources available in the first place, coverage could be much better for everybody. There would be fewer dead spots, because it would be cheaper to cover them.

Additionally, since voters would ultimately run the infrastructure carrier, there would be much more direct involvement in investment. Right now, if your carrier doesn't feel the desire to invest in a 4G network, your only option is to walk to a different carrier, which means you have to get a new phone, a new subscription, etc.

But isn't this socialist? No, in multiple ways:
  1. The common infrastructure wouldn't prevent a private party from building its own network. It just wouldn't happen because the capital investment (and risk) are the biggest part of the problem
  2. Allowing companies to make a profit by offering the best packages with the wholesale infrastructure is very much capitalism at work. It's like shipping companies on the road, or restaurants at the road side
  3. The whole point of free markets is freedom. That's why they are called free markets. If you need several hundred million dollars to enter the stage, there is no freedom. That's evidenced quite visibly by the current state of wireless in this Nation

2010-04-01

Life with my Nokia N900

I've been running around with my N900 for a good while now, and I have a better idea of its pluses and minuses. Time to share!

The Good:
Sound Quality: It's a Nokia, and it shows. Calls are crystal clear, the sound is perfect, and when the ring tones start blasting, it feels like the ghetto is on tour.
Screen Resolution: Seriously, I've used a BlackBerry and an iPhone, and I pity the poor fellas that continue using them. In particular, using the pinch-zoom on an iPhone is just about the most annoying thing in the world, because you constantly have to do it. Nah, get yourself an N900 and see the world with new eyes.
Extensibility: Yep, it doesn't have as many apps as the iPhone or even Android, but it's much easier to write new ones and to make old ones work on it. I tried it myself - after the frustrating experience of setting up cross-compilation, I actually got tcpdump compiled and installed and had a ton of fun seeing it work. There is just something about having a full environment to work with that is special.
Speed: It's zippy, alright? It's zippy enough that you immediately know when you've been dropped from WiFi to GPRS, because all of a sudden the browser can't keep up with you. As long as you are running on the device itself, things are FAST.
Browser: Hello, Firefox! I got so frustrated with all the sites that don't work properly in Opera Mini, WebKit, or BlackBerry. On the N900, all sites WORK. It's just like sitting in front of your desktop and using your browser there. (Notable exception recorded later).
Skype Integration: A wonderful idea! You register your Skype account (whether online only or offline, too) and you get all your Skype calls into the regular phone mode. You can't really even tell the difference between Skype and regular calls, both inbound and outbound.

The Bad:
Battery Life: Sucks. As bad as the iPhone. If I stay out of the house for more than 8 hours, it dies on me. Grrrr...
OVI Software: One of the few things I absolutely regret is that Google Maps doesn't work in the browser - the map navigation conflicts with the browser's own screen gestures. I wouldn't mind if the Maps from Nokia were anywhere near Google's, but they are pretty yet useless. I got tons more mileage out of my BlackBerry without GPS than out of the N900 with it. OVI Maps feels like it was written in the last Millennium: cumbersome, devoid of features, and quite clueless as to what you want from mapping software.
Buggy: The bugs are right where you'd least expect them, and they are such that the experience is unnecessarily hindered. For instance, for some mysterious reason, the phone refuses to show me who's calling half the time. I found a workaround - popping the keyboard open forces the phone app into landscape mode, which shows me all the info - but what do you do with a phone that displays precisely nothing when a call comes in?
Inconsistent: A phone that is open has one immense possibility: that of becoming a hub of information. Incoming streams can be displayed alongside each other, filtered by source, by type, by urgency, whichever way you want. Not so on the N900. For instance, I use Pidgin for instant messaging - and despite the fact that IM is available through Skype in the main phone app, the messages I get on Pidgin live somewhere completely different from those that come through Skype. This inconsistency runs throughout the phone: the contact entries are stored in a Berkeley DB, while the browser properties are stored in SQLite databases. That's all unnecessary: create a consistent architecture and have all apps support it.

The Ugly:
Keyboard: Sorry, people, an Internet tablet that has a slider keyboard but doesn't support Bluetooth or USB keyboards is plain dumb. It's not that it's bad, it's just stupid. Once I learned to use the on-screen keyboard, I never used the slider again (except for those keys that are not available on the O-SK). The keys are too tiny, don't give enough feedback, and are in general too icky. If I could shave 1/3 off the height of the device by removing the keyboard, man, would I be ecstatic! (Why Bluetooth keyboards are declared "not a priority" is a mystery to me, especially because they worked perfectly well on the pre-N900 tablets.)
Gizmos: The camera is nice and well-integrated, so kudos for that. Everything else, though, seems to be half-assed. There is a webcam, for instance, but the picture quality is abysmal under low-light conditions, and there is no support in any meaningful app. To mock all of us, the only one that actually seems to work in bright light is "Mirror," which shows you to yourself. Cool? Not so much.
I mentioned the lack of USB host support. That means that, while you have a fully competent Linux machine, the USB connector is crippled. You could print over the network, for instance, but not over the USB port. You could use an external webcam, but you can't connect it. I am not sure whether the problem is in the hardware (no USB host support in the controller) or the software, but in either case, the missing functionality is maddening.
USB Port: Nokia, is it that hard to use standard connectors? Why on earth do we need a non-standard port to connect a USB device? Just give us a phone with a standard USB mini port, like the BlackBerries. It's not just that I don't want to carry one more cable with me - it's that if I am somewhere without the cable, there is no way I will find a Nokia connector at the nearest 7/11 - but I will almost certainly find a USB mini cable.
MeeGo: Did you hear the latest? Nokia merged its Maemo efforts with Intel's Moblin, and now they are MeeGo. Everything changes, nothing stays the same. Oh, and since Nokia bought Trolltech, you'll have to switch from Gtk to Qt for development. I hate saying this, but every time you force a change of infrastructure on people, you lose more developers willing to develop for you. It was true with Windows CE, which frustrated me to no end with its incessant API changes, and it's true for these phones, too.
Cost: Seriously, Nokia. There is ONE phone that runs Maemo. It's the N900. It's freaky expensive. Give us something that a college kid could use on a shoestring budget. It's perfectly pointless to give us a Rolls Royce and expect the open source community to work with it. Know Thy Audience!

2010-03-18

Kindle or iPad - the Battle of the Small(-ish) Giants

Obviously, at the time of this writing, I don't own an iPad. They are about to ship in a few weeks, though, and I think it's a good moment to reflect on eReaders from a reader's perspective.

I read lots of books, on the order of three to four each month. I bought a Kindle a year back, when it still cost $349, and was all excited when I got it. The idea was intriguing: you get a device that replaces a whole library. You don't have to lug around books, and you can even download new books at a discount directly from the store.

The implementation, on the other hand, leaves much to be desired. The opportunity was definitely there for Apple to march into the field, and they certainly did. The low-end iPad is certainly comparable price-wise with the Kindle (the equivalent in functionality, with 3G module, clearly isn't), and the deals Apple has been pushing with content providers are working in its favor.

Let's review what's good and bad about the Kindle first:

Good About Kindle
  • e-ink. The Kindle is amazingly easy to read, especially in conditions that are punishing for a normal computer display. I can take my Kindle to the pool, to the beach, or on a train, and never have to worry about bright conditions spoiling my enjoyment - which matters, because I particularly enjoy reading in the sun.
  • Battery life. I can leave my Kindle unplugged for weeks before it complains it needs a charge. It's wonderful! Of all my gadgets, the Kindle is the only one that survives for the amount of time I expect it to, and that translates to reliability. A device that needs to be plugged in all the time, especially one (like the iPods or iPhones) that comes with a weird non-standard plug, is totally useless to me.
  • Online connectivity. The ability to download a new book or magazine once I am done or bored with the current reading is incredibly important and alone worth buying a Kindle. Imagine you are at the beach and suddenly in the mood for a tome on Non-standard Analysis. What to do? You are not going to run out and get one; you are just not going to read.
Unfortunately, that's pretty much it. A few features could have been used strategically but weren't - for instance, the fact that it runs Linux could have been used to make it expandable, but that isn't happening. Passing on that was a really stupid choice: expandability would have given Amazon a leg up, especially considering what was going on in parallel on the iPhone, but Amazon slept through it.

On the negative side, we have a bunch of problems. Some of them are strategically addressed by the iPad, some others aren't.

Bad About Kindle
  • Screen size vs. device size. An eReader is a device to read. All I want from it is a screen. All interaction with the eReader should be done via the screen. Not so with the Kindle: all interaction occurs with a keyboard at the bottom, buttons on the side, and a joystick menu. That's plain horrible, and I wonder what Amazon was thinking when they came up with that concept. For starters, the keyboard is virtually unusable: the keys are too small, the tactile feedback too weak, the keys spaced too far apart. Using that keyboard is so bad, I find even an on-screen keyboard a better solution. Then there is the joystick, which is unusable as soon as you use a protective cover - which is almost mandatory. You end up with a whole lot of useless buttons when all you want is screen. 
  • Poor annotations. It goes without saying that the terrible keyboard does its part in this, but right now the Kindle is quite bad at taking annotations. First there is the dreadful joystick you have to use to get to a particular section of the text, where all you want to do is point at it. Then you have to type the annotation on the crappy keyboard. Finally, the annotation is not saved in any format you could find useful unless you keep it on the Kindle. Even there, the annotation isn't something whose visibility you can turn on and off, like revisions in a Word document; instead, it appears as a footnote. Bleh. If Amazon realized how important annotations are - how wonderful it is for a reader to be able to scribble notes in the margin and turn them on or off at will (something you cannot do in a real book) - they might put a little more work into them.
  • Big Brother is Watching You. I don't think Amazon could have done more damage to the Kindle than what it did when it deleted legally obtained books from Kindles. It was a strange case: someone sold a book on the public domain market not knowing the book was still under copyright (understandable, considering the bizarre length of copyright claims). The people who bought the book didn't know this was against the law, but the books were still deleted from their Kindles without warning. One morning they were there, the next they weren't, and the owners could do absolutely nothing about it. For a reader, the problem is not technical or legal, it is emotional: I love books. When I buy one, it's mine. I treat them well: the text books I had in college 20 years ago still look like new, through something like 15 moves in three countries and across 12 time zones. I don't want to buy a "book" only to find out that it's gone. Even just the possibility of that occurring is horrible enough that it makes me extremely reluctant to ever buy an eBook on the Amazon store. Amazon should make it clear, in writing, that it is never, under any circumstances, going to remove a book from my device or my online storage without first obtaining my consent. It can generously charge me if there is an issue, but removing the book without my consent is absolutely not an option.
  • A single store. When you go to Amazon, they do a good job of offering you choices. You see the book at a given price, but then there is a series of "New and Used" copies you can potentially get at a (substantial) discount. That's all gone when you shop for eBooks. Amazon is the only seller, and you are not given any options. You buy the book from Amazon, or you have to go offline, download the book, and manually install it onto your Kindle. You used to not even be able to read text files and PDFs - fortunately, I believe the latest software update does away with that oddity.
The iPad, of course, is not really an eReader. The short battery life makes it quite impossible to rely on (unlike a cell phone, an eReader is something you pay far less constant attention to, so you forget to charge it much more often). The price is too high, and the screen makes it useless outdoors. To be sure, there are going to be lots of Apple freaks who will buy one, even stand in line to get one on the first day it comes out, but for a person who likes reading, the iPad is a bust.

What about apps? What about color?

I don't know about apps, mostly because I have trillions of other gadgets that offer a better app experience than my eReader. I have my trusted N900 for everything gamey, my netbook for everything that requires a real OS. If I had to choose between using the Kindle for games and having two weeks of battery time, you know where my heart is.

On the other hand, apps that expand my reading experience and make it more enjoyable would be extremely welcome, especially if they do not significantly alter my overall experience. I could envision a forum app that functions as a virtual reading community, where I could discuss a particular chapter or phrase in a book. I could envision an Atlantic app that would tell me which articles match keywords in the book I am reading. That kind of thing. And, of course, crossword puzzles, sudoku, and scrabble. Online. Low bandwidth, low computing power.

Color is definitely a no-no. Yeah, sure, I have heard a lot about text books and the need for color, and there is definitely a market for it. Just not for me. I find color beautiful, very important even - but if I had to sacrifice the battery life for it, forget it. Once there are color e-ink displays with the same characteristics as black/white ones, I'll switch, but going from e-ink to LCD: no way!