Tuesday, August 26, 2008

Blogging with ScribeFire

A very interesting way to blog, especially if you are looking at and commenting on a page. With ScribeFire you can split the Firefox window into separate frames.

This is just a test. I'm going to add an image and see what happens...


That was very cool. The picture was uploaded via Blogger's API!

I'll write more about this tool later, once I'm comfortable with it.

Wednesday, August 20, 2008

Those Amazing Jamaicans!!!!

Unless you've been fast asleep during the month of August, you're probably already well aware of the two Olympians trying to outshine each other in Beijing: Michael Phelps and Usain Bolt. The first week belonged to Phelps. Eight gold medals, which included seven world records and an Olympic record. An incredible achievement. The second week belonged to Bolt. As I write this, he's already broken the 100 meters world record with a run of 9.69 seconds and taken the 200 meters in 19.30 seconds. No wonder they call him Lightning Bolt.

The American media is calling Phelps the Greatest Olympian Ever. Of course, this has led to an amazing number of debates everywhere. What does the Greatest Olympian Ever mean? Can you even begin to compare a track athlete to a swimmer?

Tuesday, May 27, 2008

Linux versus Windows

I've always been a Linux advocate, but I've never really made any effort to try and convince anyone to use it. Even though I have a passion for it and enjoy the freedom and flexibility it provides, I know that others don't share my joy.

But I recently convinced a friend to try it out. In talking to my friend and trying to get him interested, I actually convinced myself that there are good reasons to see things my way. From being a passive advocate, I'm now joining the ranks of the evangelists.


Here's my story.


I started using Linux a long way back. I actually fell into technology by accident even though I've always had a passion for it. By education I'm an Architect. However, I've always loved engineering and physics and most definitely computers. I bought my first computer, a Tandy 1000 SX, around 1986 or 1987. I had come into sudden wealth and parted with over $2,500 for this purchase. A Radio Shack computer equipped with a twelve-inch monitor (green character display) and a single 360K floppy disk drive.


The sudden wealth (a whopping $4,000) was an astronomical amount of money to a student on a $550 monthly allowance.


In any case, the Tandy had a disk with the operating system. DOS version 2, I believe. After booting the computer, I could put in my other Tandy disk which had a game and a few small programs - a text editor, a calculator, and a copy of Quick Basic - a programming language. I spent countless hours in front of that screen, slowly gaining proficiency in writing Basic programs that could do meaningless things -- like reading files and counting all the vowels.


Tinkering with my Tandy wasn't an issue of "getting things done." It was a matter of playing.


It became a serious school tool when I got a copy of AutoCAD and WordPerfect. Now I could get some real work done. Papers were finished in the leisure of my roach-infested apartment in the McGill University ghetto, and technical drawings were automated using Lisp - another programming language.

Towards the end of my degree I retired the Tandy to the simpler life of less taxing work, for example the odd Basic program or WordPerfect job. The Tandy had also taught me the importance of being able to type, and for that alone I'll be eternally grateful.

My new computer was a clone purchased through the university on a student loan. It was pre-installed with the latest, and then hottest, operating system -- Windows 3.0. I enjoyed using Windows, but I was constantly opening DOS so that I could do strange things with Basic or look for stuff at the command line. I found that Windows actually limited the way I did things, which is very strange. One would have thought the opposite would be true. Having buttons to click on and the ability to open multiple windows at once so that you could "multi-task" should have boosted productivity. Then, as now, the opportunity to multitask created the sense that work was being done. Whether or not actual work was accomplished remains highly debatable. Those were also the big heady days of bulletin board systems. Computers together with the Internet made it easy to collaborate and share ideas and work. The Internet was in its infancy in the early 90's.


But even back then, in the early to mid-90's, there was a sense in the air that something else was brewing out there. Somewhere around late 1994, having left Architecture as a profession and now focussing on software development, I got hold of a copy of Slackware Linux.


Installing Linux back in the mid-90's was a masochistic experience. Trying it once was enough to turn you away and lead you right back into the arms of Windows. Linux was terribly difficult to install if you didn't have a university degree in computing. What was even worse was trying to make it useful. Even with a graphical interface (which I mainly used to support opening more than one xterm terminal), applications of "business" functionality were few. Nearly everything had to be compiled from source code, much of which wouldn't compile cleanly on the first try unless you were really lucky.


Inevitably the weak dropped back and the brave slogged onwards. With each new Slackware update, more experience was gained, confidence was bolstered, and with renewed courage I actually began to enjoy the challenges of working with the system. There was no telling when something would break and an application would come crashing to its knees while vomiting an incoherent error. It was Linux that taught me the term "core dump." When an application dropped from the skies, Linux users would often ask if it "dumped core." This was amusingly reminiscent of futuristic space rockets ejecting waste plutonium as they streaked across space and time at speeds greater than light.


Slackware toughened up and its creator, Patrick Volkerding, continued to simplify the installation process. To install Slackware you still needed an intimate knowledge of your computer. You needed to know the model of your network device or your video hardware. Back then you couldn't simply stick the CD in the drive, click the "setup" button and walk away for a latte. The X Window System graphical interface was crude and grainy. Not polished and smooth like Apple's or even Windows. Life was really tough, but things were improving slowly.


By the late 90's I was still using Windows as my primary computer both at home and at work. I had a spare computer for "playing" with Linux. Slackware was my primary Linux distribution even though I dabbled with Red Hat's distribution occasionally. Red Hat's package management feature was extremely attractive. Adding and removing software was easy using Red Hat Linux. All you needed to do was find the right packages on the Internet. In Slackware Linux nearly everything had to be compiled, but that was not a real issue. The compilation process was simple, and in many cases the software built from source ran faster and smoother than the pre-compiled versions that you could download.


In addition to the fact that you could download software as source code, customise it and then build it, with Linux you could also modify the actual operating system by compiling the kernel. The kernel is the central component of the operating system: the go-between for application programs and system resources like memory, the CPU and other devices. Most modern operating systems provide a generic kernel with plug-and-play modules for the different components inside a computer. The generic Linux kernel supports every common hardware device under the sun, but because you have access to the source code you have the additional power of building a customised kernel.


Building a customised kernel is easier than it sounds. In fact it's a simple process of selecting options from a menu and then issuing the build command -- make! The trick lies in being able to identify accurately the hardware and devices inside your computer. It's not enough to know that you have an Ethernet network card. You need to know that it's a 3Com card or an Intel card with a Broadcom chipset. Why would anyone build a custom kernel? The simple answer is so that you can optimise performance. A smaller kernel with support for only the devices you need should run faster while consuming fewer system resources.
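For the curious, on the 2.4-era kernels I was building the sequence looked roughly like this, run as root from the kernel source tree (the exact make targets and paths varied a little between kernel versions, so treat this as a sketch rather than a recipe):

cd /usr/src/linux
make menuconfig                 # pick only the drivers your hardware actually needs
make bzImage                    # build the kernel image itself
make modules                    # build whatever you left as loadable modules
make modules_install            # install those modules under /lib/modules
cp arch/i386/boot/bzImage /boot/vmlinuz-custom   # copy the new kernel into place

After that you'd point the boot loader (LILO back then, GRUB today) at the new image and reboot, keeping the old kernel around in case the new one refused to start.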


By the late 90's I was building customised kernels for my Slackware systems out of boredom and curiosity. Back then it was also necessary. A smaller kernel did make a difference on those slower processors, and memory was as precious as gold. If your computer had only 64 MB of memory, then using every single megabyte efficiently was the difference between productivity and irritation. The smaller the kernel, the more memory available for application programs. The more memory available for games. The more efficient the kernel, the faster the computer started. If the kernel had the device drivers built into it, then the relationship between the devices and the software applications was much more intimate. But times have changed. Nowadays the hassle of building a kernel is a last resort. Only an esoteric problem involving exotic hardware would force anyone down that path. Computer resources are not as precious as they used to be. Today's computers have so much memory and storage that the effort spent tweaking and fine tuning is truly wasted. The adage is true: if your software is slow and sluggish, buy a better computer.


Hardware got faster in leaps and bounds. Smaller processors doing much more work. Multiple cores on a single chip. More memory at cheaper prices. Huge disk drives making storage a commodity. Graphics cards that could handle three-dimensional algorithms with smooth output. And programmers got lazy. Bloated code became the main feature of the late 90's and 2000's. Since the hardware could do so much, why try to optimise the software? Make the software huge and bloated and let the hardware worry about the speed. It's almost like aeronautical engineers saying, "let's not worry about the weight of the plane, we'll just make bigger engines and use more jet fuel!"


I've gone from 8 MHz processors to over 3 GHz today. That's a clock nearly four hundred times faster. From 640 K of memory to 4 GB, another astounding leap of more than six thousand times. But I don't think that software speed has improved anywhere near that much. Programmers back then were frugal and nitpicky. They used system resources the way a desert traveller hoards water. Every drop counted. Making things smaller, tighter, faster was the mantra of the day. The hardware people definitely understood this and it's visible in hardware designs today. Things are smaller and more compact. My iPod shuffle is an amazing piece of technology.


So how did Linux grow from a hobbyist's project into the successful operating system that it is today?


Linux was invented by Linus Torvalds. Sometime in 1991, Linus posted a message on the newsgroup comp.os.minix. The message read:


"Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones."

He was obviously wrong. This free operating system has grown to become a huge success.


Strangely enough, there are some comparisons between the success that Linux is having and the success Microsoft found through its sale of DOS. Microsoft's original partnership with IBM was to provide the Basic programming language for their personal computers. In fact, in 1980 Microsoft didn't have an operating system, and they'd suggested to IBM executives that they take a look at CP/M (Control Program for Microcomputers), written by Gary Kildall. The story is that IBM couldn't arrange a meeting with Gary Kildall, so they went back to Bill Gates and gave him the contract to provide the new operating system. The rest, as they say, is history. Bill Gates purchased the rights to another operating system (QDOS - Quick and Dirty Operating System) from Seattle Computer Products for $50,000. Microsoft kept the deal with IBM a secret from Tim Paterson, the author of QDOS at Seattle Computer Products, and went on to develop MS-DOS. Microsoft was further able to convince IBM to let them retain the rights to sell MS-DOS separately from IBM Personal Computers.


So you can make the loose analogy that Microsoft, starting from humble roots with a hobbyist-grade operating system, has developed and matured a product that is the market leader today. In much the same way, Linus started an operating system which has matured and is no longer a hobbyist's toy.


There's a huge difference in approach between Microsoft's success and Linux's. On the one hand, Bill Gates would not have built MS-DOS if IBM wasn't willing to pay for it. Linus, however, built Linux for free in order to learn. Linus was curious, wanted to improve on minix, and wasn't interested in getting paid for it. Bill Gates has continued to improve DOS and its successor, Windows, by closely guarding the source code. The intellectual property of Windows belongs to Microsoft Corporation, and its development is done by a handful (though a rather large and competent one) of people. The Linux source code is open for anyone in the world to look at and modify. It took Windows a long time to mature. It's taken Linux a relatively short amount of time. Without financial backing and the support of paid engineering, the success of this operating system is simply amazing.


The Linux of today is a competent, stable and functional piece of work. It encompasses the best in software development, authored by some of the best brains in the industry. It's certainly true that Microsoft has a depth of engineering talent that is enviable, but rumour has it that a lot of those people are closet Linux users. There's something about Linux that draws the computer geek out from their hiding place. Like a strong Havarti to a mouse. Many global companies that resisted the call to support Linux now advertise that support on their web sites. IBM contributes greatly to Linux development, not only financially but also with engineers and technical support. Novell, once a powerhouse in the network operating system space, has acquired a German Linux distribution, SuSE, and turned it into their primary operating system offering ahead of NetWare, their flagship product. They even have a roadmap encouraging their existing NetWare clients to switch over to SuSE. Sun Microsystems, Oracle Corporation, Nokia and so on. The list is a who's who of the technology business.


The playing field is no longer about the favourite team at home in front of a capacity crowd that's mainly composed of their relatives. Some of their uncles and cousins are now suiting up for the opponents. Linux is showing that it can play forward just as well as it can handle defence.


But there are real differences between the two operating systems, beyond the sort of reasons why someone would buy a Honda over a Toyota. The differences are obvious to those, like me, who have been in both the Microsoft Windows space and the Linux space for a long time. I continue to work with Microsoft Windows at the company where I'm employed, but my primary computer there is a Linux computer. There would be little need to use Windows if I wasn't employed. It's not because I don't want to pay for software; on the contrary, I do pay for some of the applications that I use, even though I don't have to. This is perhaps part of the philosophical difference between Linux and Windows. Windows was started as a commercial venture from the beginning. Its roots are based on cash flow and balance sheets. The momentum of development is driven by the opportunity to increase the bank balance at the end of the day. Linux was started as an idea. An idea to see if something could be improved for free through worldwide collaboration. Linux people were curious about whether or not we could do this, or if we could do that. It's no wonder that even today the "free" software market is so large. Linus Torvalds is not the only one who had the desire to play and invent "for the fun of it," but he has inspired so many to take up his challenge and jump into the sandbox with him.


It took a long time to get the commercial world to see advantages in free and open source software. Nowadays that debate is largely over. We've proved that complex systems can be built through large distributed teamwork. The debate against it only echoes in the checkered blue halls of a certain company based in Redmond, Washington. The fact that they're taking notice is a good thing. They've noticed it enough to even have a Microsoft Linux division. I kid you not. The publicised purpose of the Microsoft Linux division is to allow for better interoperability with Linux systems. They claim that they just want to play fair but we all know that they want to be let into the sandbox. For more information check out: http://port25.technet.com/


Microsoft Windows is closed source code. Microsoft Windows has built-in complexities, mainly due to how it's been historically developed. It's a behemoth of a product that often suffers from strange ailments. Software programs take on the personalities of inmates in an insane asylum deprived of food. One minute you're productively working: like a 40-piece orchestra, your word processor wraps text around images provided by your camera while the web browser dashes back and forth from the Internet bringing you news, entertainment and other subliminal messages. Suddenly the violin section begins to bash each other with their bows. The clarinets also get upset and stop working. The conductor (also known as the operating system) throws his wand in the air and walks off the stage to get a coffee. Days like these are expected when using Windows, and when they happen you calmly wait for the stage to clear, tell everybody to "take five" and come back when they're feeling better. You included.


It would be untrue to claim that Linux is immune to these ailments. I've brought Linux to its knees more than once. But there's a major difference. When I'm using Linux, I'm rarely scared of pressing buttons as though they might explode. With Windows, each action is tantamount to walking across a minefield. You never know when you might get blown up. For the most part, when a Linux application starts misbehaving, you shut it down and send it to the corner. That application won't be allowed to play with the others till it learns how to behave.


There used to be an overwhelmingly larger number of Windows applications than Linux applications. The argument against Linux used to be that it can't do "this" or it can't do "that." For the most part, that argument is now old and tired. Most businesses can run their financial applications, document processing and graphics applications, payroll systems, messaging and web programs in Linux. In fact, a great many of these applications were built for Linux and are more mature in Linux. Since applications proliferate for both Windows and Linux, it's now easier to evaluate the operating system based on the merits of what an operating system is supposed to do. That is, to provide a fair playing ground, to manage resources effectively, and to allow the user to interact easily and as intuitively as possible with the machine.

A lot of the interaction is governed by the devices that are provided. A keyboard is a keyboard and it doesn't matter which operating system you choose. So is a mouse. However, the user interface semantics can either ease or hinder your computing experience. For the most part, Linux graphical applications have taken their cue from their Windows counterparts. Partly in an effort to make the transition from Windows to Linux transparent, but also because the user interface components are universal. Just as cars provide steering wheels, seat belts, speedometers, trunks and gas (or petrol) tanks, the graphical components of software have developed across operating systems based on the objective of usability. It's no wonder graphical interfaces look alike.


For myself, I've discovered that I can do anything in Linux that I can do in Windows. More importantly, I have a lot more fun in Linux than I do in Windows. I enjoy the fact that the system bares its guts to me, allowing me to see what it's doing and in turn experiment without fear. Unlike Windows, when you remove things you don't like, they truly leave and don't hang around in ethereal space waiting for the tunnel with the light at the end to appear.


Linux has bent over backwards to make itself integrate into networks of all types. If your business uses a Windows network, Linux computers can authenticate with Windows, browse Windows shares and provide Windows-type services. Only lately has Windows started to behave more Unix/Linux-like. For exchanging documents with my Microsoft Office colleagues I use Open Office (http://www.openoffice.org/). It can save documents in Microsoft Office's native formats. In Linux, creating PDF files is free, so there's no problem exchanging PDF documents with Microsoft Windows users. For art and graphics work my two favourite applications are The Gimp (http://www.gimp.org/) and Inkscape (http://www.inkscape.org/). All of these applications are available for Windows and work well in that environment too. The Linux advantage is one of cost and robustness.
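To give a flavour of that Windows integration, here's a rough sketch using the Samba tools. The server name, share name and username below are made up for illustration, and the mount point is assumed to exist already:

# list the shares that a Windows server called "officeserver" offers
smbclient -L officeserver -U myuser
# mount one of those shares on the local filesystem
sudo mount -t cifs //officeserver/documents /mnt/docs -o username=myuser

The point is that the Linux machine joins the conversation on Windows' own terms; nobody on the Windows side has to install a thing.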


In conclusion, Linux is a viable system for use at home and for business. I think that it's a far superior system to Microsoft Windows for ease of use, ease of configuration, networkability, robustness, immunity to attack and technical support.


Sunday, May 04, 2008

Ubuntu Issues

Ubuntu has a bug with dates set before 1970. The system will boot fine and you'll get to the gdm login screen, but when you type in your username and password, then press enter, you'll be left with a brown screen.

The fix is simple: set the correct date.

Once you get to the brown screen, press CTRL+ALT+F1. This will take you to a terminal. Once at the terminal, log in normally with your username and password. Just as you did in gdm.

Type the command: date to check the system date.

To set the date, you'll need root privileges, so type in something like the following:

sudo date -s "Mon May 12 13:12:00 EST 2008"

This sets the date to Monday, May 12th at 1:12 PM in the correct time zone.


Sunday, April 27, 2008

Ubuntu Linux


The latest release of the Linux distribution Ubuntu came out officially on April 24th, 2008. This version has been long anticipated -- in my estimation, possibly since version 6.06, the previous version that promised long-term support, a.k.a. LTS. LTS, according to Canonical, the company that markets and supports Ubuntu, simply means that you'll get support for at least five years on the server side and three on the desktop.

Ubuntu release cycles are every six months. The numbering (or versioning) method is quite interesting. The version is based on the year and month that the release officially ships. The very first Ubuntu version was released in October 2004. Therefore, the very first Ubuntu is versioned 4.10. The single-digit year comes first, followed by a two-digit month. The second one came on time, six months later in April 2005. Its version number was 5.04. And so on.

Another interesting tidbit is the product name, or code name, assigned to each new version. The first Ubuntu (4.10) was code named Warty Warthog. The next one was Hoary Hedgehog. I just love it. Ubuntu users even use these names in discussing the product. Take a brief visit to the support pages of Ubuntu and you'll read sentences like, "...I'm trying to upgrade from Gutsy to Hardy and...," or, "... is it possible to use the same NIC drivers in Hardy that are supported in Feisty?"

Most vendors normally have code names, mainly used by the developers, for their products. Even stiff-upper-lipped Microsoft. Windows 95 was code-named Chicago. Windows 2000 didn't have a code name, but service pack 1 was called Asteroid. Windows XP - Whistler. Windows Vista - Longhorn.

I definitely prefer the creativity of Ubuntu names.

What makes Ubuntu really exciting for me is the fact that it's trying hard to bridge the gap between commercial (and that might not be the right word) products and the Open Source world. When you pay for something, you expect it to work. The vendors supposedly put a lot of time into polishing the product. Making it easy to install and use. Full of features, robust and supported when things ultimately go wrong. In the early days, Linux distributions required a lot of knowledge about how computers work. Installing Linux was an introduction to a computer science course. Over time, the user-friendliness of Linux has matured. Linux, especially Ubuntu, is comparable to Microsoft Windows in ease of installation. Ubuntu's not alone. Red Hat Enterprise, Fedora, SuSE Linux and Linspire are a few that come to mind immediately. When installing these products, it's no longer necessary to have an intimate knowledge of your hardware as was the case before.

But Linux distributions go further than Microsoft Windows when it comes to putting together a computer system. When you install Windows, all you get is a multimedia player (Windows Media Player), a couple of featureless text editors (Notepad and WordPad), a rudimentary painting program (Microsoft Paint) and a few distractions such as games and system tools. From a user's perspective, there's not much you can do yet. You need to spend a little bit more money. Purchase a full-featured graphics program for editing those photos. A comprehensive office suite that includes spreadsheets (for those accountants), presentation packages and perhaps a better word processor that includes spell checking and complex formatting.

When you install a Linux distribution, you get a wealth of applications that make you immediately productive. The Gimp, for image manipulation, Open Office for business documents, sophisticated DVD tools, Internet applications and much more. With Ubuntu, you also get a system, second to none, for adding new software and keeping the system up to date. Even the task of moving from one version of the operating system to the next one is a simple upgrade using the same tool. That would be like updating Microsoft Windows XP to Vista by a simple visit to the Microsoft Update site. Unlikely.
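To make that concrete, the whole update-and-upgrade story on Ubuntu boils down to a couple of commands. The graphical Update Manager does the same work behind friendlier buttons, so treat this as a rough sketch of what happens underneath:

# refresh the package lists from the repositories
sudo apt-get update
# install any pending updates for the software already on the system
sudo apt-get upgrade
# step the whole system up to the next Ubuntu release, when one is available
sudo do-release-upgrade

One tool chain, whether you're updating a single application or the entire operating system.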

I'm writing this on a Dell Latitude D620 notebook computer. It has 4GB of RAM and a 100GB disk. The wireless card has a Broadcom chipset. The video card is an nVidia GeForce 7300.

The laptop was purchased in March 2007, and as soon as Feisty Fawn (Ubuntu 7.04) was released it was relieved of the burden of Windows XP. I was never able to get the wireless networking part stabilised. Even though I used the ndiswrapper package, almost foolproof when it comes to devices with Windows-only drivers, the card was detected and would work, but it was very unstable. Every other part of the system was excellent. I used Envy to install the nVidia video driver.
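For anyone who hasn't met it, ndiswrapper wraps a card's Windows driver files (the .inf and .sys) so the Linux kernel can load them. The driver file name below is only a placeholder for whatever Dell shipped with the card, and the package name has varied slightly between releases, so take this as the general shape of the incantation:

# install the ndiswrapper user-space tools
sudo apt-get install ndiswrapper-utils
# register the Windows driver (placeholder file name)
sudo ndiswrapper -i bcmwl5.inf
# confirm that both the driver and the hardware were recognised
ndiswrapper -l
# load the kernel module so the card shows up as a normal network interface
sudo modprobe ndiswrapper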

Six months later when Gutsy Gibbon (Ubuntu 7.10) was released I upgraded the system online. Keeping the same Windows XP drivers and the ndiswrapper package, wireless networking was absolutely stable. I could connect seamlessly from network to network with the excellent Network Manager application.

Upgrading Ubuntu has always been easy. Other than wireless connectivity, and sometimes problems with video and some USB devices, everything seems to work without additional effort. This was not the case in the early days of Linux, since most hardware vendors really concentrated on providing support for Microsoft Windows. In any case, this last upgrade (Gutsy 7.10 to Hardy Heron 8.04) wasn't very pleasant. Here's how it went.

As usual, my system notified me that there was an upgrade available. As in the past, I clicked the "Upgrade" button to start the process. A dialog window jumped to the front of the screen indicating that things were happening. Messages indicating that files were being fetched from the Internet, local settings were being altered and so on flooded the screen. But after a minute or so, everything stopped. The dialog window turned an unhealthy shade of gray and nothing seemed to be happening.

I tried to close it by clicking on the little button in the top-right corner with an "X" on it, but it wouldn't respond. I even launched the handy "xkill" program, which normally destroys any Linux window application when you click on it, but this didn't work. Eventually, I had to open a terminal window, find the program by using the "ps" command and then issue another command to get rid of it.
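That rescue boils down to finding the process ID and signalling it. The process name below is a guess from memory and the PID is just an example, not a record of what I actually typed:

# find the process ID (PID) of the frozen upgrade window
ps aux | grep update-manager
# ask it politely to exit, using the PID from the previous command
kill 12345
# if it ignores the polite request, force it
kill -9 12345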

Oh, another nasty side-effect of this partial upgrade, and of the subsequent murder of the frozen dialog window, was that my system was left in a state of confusion. For some reason, it now thought it was running the new version, yet there were still five hundred programs that needed upgrading. I discovered the reason for this, but I won't get into it here. It had to do with the way Ubuntu records the locations where it fetches updates to programs. They're called "repositories" in Ubuntu-speak.
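For context, those repository locations live in a plain text file, /etc/apt/sources.list, and a release upgrade rewrites each entry to point at the new version's code name. A typical Hardy-era entry looks something like this (the mirrors are the standard Ubuntu ones; the exact component list is illustrative):

deb http://archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu/ hardy-security main restricted universe multiverse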

After fixing the side-effect and attempting the upgrade a zillion times, I realised that I might have to, for the very first time, actually upgrade by installing from scratch. I've been so used to upgrading a system "intact," including Microsoft Windows, that the possibility of backing everything up, installing a fresh copy of an operating system, and then having to put everything back, was quite daunting. I had everything installed and configured exactly the way I wanted it, and I would have to find all my software and try to remember every single tweak.

So I soldiered on. Jumped onto the Ubuntu forums and asked, no, begged, for help. I finally found something that sounded promising. By killing the failed attempt at an upgrade, it was then possible to open a terminal and complete it manually. I know what you're thinking. This sounds very much like hacking, but as a true Linux aficionado, I knew that I wasn't doing anything that the operating system couldn't take. Microsoft Windows people would never dream of doing this. If an upgrade of Windows jammed in the middle, and by the grace of God the system was still usable, the only solution would be to salvage any work on the disk and reformat it.
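The manual completion amounts to letting apt pick up where the graphical tool left off. I'm reconstructing the forum advice from memory, so treat these commands as a sketch of the approach rather than a transcript of what I ran:

# finish configuring any packages left half-installed by the crash
sudo dpkg --configure -a
# repair any broken dependencies
sudo apt-get -f install
# re-read the repositories, which by now point at the new release
sudo apt-get update
# download and install everything that still needs upgrading
sudo apt-get dist-upgrade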

I proceeded to hijack the installation, and it completed six hours later. Fetching all the updated files from the Internet took a very long time. The video resolution was completely off. My screen can handle 1440 pixels across by 900 pixels down; the maximum resolution I was getting was 800 by 600. This was because the video driver installed by the upgrade was the wrong one. Getting a copy of the latest version of Envy (EnvyNG) seemed to install the driver, but I still could not fix the resolution problem. I thought I'd tried everything I could, but I've since learned that there was an additional solution I could have tried. A solution that seems to be helping others with the same problem.
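I never tried that solution myself, so I won't pretend to document it here, but for completeness the usual first diagnostic steps on Ubuntu at the time were generic enough to be worth noting:

# list the resolutions the X server currently believes the card supports
xrandr
# regenerate /etc/X11/xorg.conf and re-select the video driver
sudo dpkg-reconfigure xserver-xorg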

The wireless card was detected, and miraculously I didn't need to use the ndiswrapper package. I knew this would be the case since I had used the live CD version of the release to test my hardware. What was confusing is that on the live CD, everything was perfect.

I started upgrading on Friday April 25th at about 10pm. I fought with the upgrade till I discovered the hijack work-around at about 3am at which time I went to sleep and allowed the upgrade to complete. When I woke up to go for my morning run at 7am, the upgrade still wasn't finished. It seemed to have completed sometime between 7am and when I got back at 8am. Between 8am and 10:30am I fought arduously with the video card.

At 10:30am EDT on Saturday April 26th, 2008, the date that will be remembered as the date the Toronto Transit Commission stranded commuters in downtown Toronto, I finally gave up and started backing up the documents on my system in preparation for a clean installation. By 12pm, my backups completed and intact, I bid goodbye to my (crippled) system, inserted the fresh CD of Ubuntu and began to wipe the system.

The entire installation took between forty and fifty minutes to complete. When the CD ejected from the drive and I rebooted into the brand new system, everything was good. The video even looked sharper, clearer than it did in the previous version. A few mouse clicks later, which were necessary to download the proprietary drivers for the wireless card, and I had full wireless connectivity. I pulled the Ethernet cable from the jack behind the laptop and enjoyed an unfettered surfing experience. Oh the joys of being able to move freely about and not worry about cables and other hindrances.

Hardy Heron is a must-have upgrade. I also needed to download proprietary drivers for the nVidia card, but only because the desktop I wanted needed the full hardware acceleration capabilities of the video card, which the Ubuntu-provided driver couldn't deliver.

Hardy Heron installs a beta version of the Firefox web browser. At first I was sceptical about using beta software and wondered why a released version of an operating system would tolerate it. But after using it for the entire weekend, Firefox 3 feels close to being a production release and is more stable than its version 2 parent. Firefox 2 was very unstable and notorious for freezing and destroying every Internet session you were using.

The fact that Ubuntu provides a live CD is an excellent method of testing your hardware for compatibility. Hardware compatibility used to be taken for granted in the Microsoft Windows world; however, the new version, Vista, is extremely demanding on hardware, and even though it might install on a computer that was perfectly happy with Windows XP, there's no guarantee it will actually run well.