Friday, October 8, 2010

Web Developer Testing Server Setup: Ubuntu LAMP Step-by-Step

As web developers, we need a server on our LAN to test our work before we put it online. Testing on a standalone machine is both more realistic and less intrusive than running a web server on one of our work computers. Until recently our testing server was running OpenSolaris 2009.06, but Oracle has pulled the plug on OpenSolaris. I decided to take a look at Ubuntu Linux and found setup to be straightforward. With Ubuntu 10.04.1 Lucid Lynx, a lot of the work has been done for you, with just a couple of gotchas along the way.
You will be required to choose passwords at various points during this procedure. Since our server will live behind a firewall on our LAN and is only for testing purposes, I used the same password throughout for simplicity. This is not recommended for a server that will see live deployment.
Click on any of the images in this article to zoom in.
  1. Download the 32-bit or 64-bit disk image, depending on your hardware, and burn it to a CD. Our server is an old Dell Dimension 3000 so we're running the 32-bit release.
  2. Boot your server from the CD. Select your language, then choose the first menu option, "Install Ubuntu Server".
  3. Proceed through the installation until you reach this screen:
  4. Select "LAMP server" to install the Linux Apache/MySQL/PHP combo. I also selected "OpenSSH server" so I could administer the server from anywhere on our LAN, without having to walk up to the machine.
  5. When the installation is complete the server will reboot. Log in and make a note of the IP address assigned to the machine. This is the address you'll use for transferring files using FTP, testing websites, and remote admin (if you installed OpenSSH).
  6. Execute the following command to install phpMyAdmin, for easy management of your MySQL databases:
    sudo apt-get install phpmyadmin
  7. At this screen select apache2:
  8. At this screen select "Yes":
  9. When the phpMyAdmin installation is done, complete the following four steps to stop phpMyAdmin from complaining that "The additional features for working with linked tables have been deactivated..."
  10. Execute the following command:
    sudo nano /etc/phpmyadmin/
  11. Scroll down until you see this:
  12. Add the following line:
    $cfg['Servers'][$i]['tracking'] = 'pma_tracking';
  13. Press Control-o to save the file, Enter to confirm the filename, then Control-x to quit the editor.
  14. Complete the following three steps to stop apache2 from complaining that it "Could not reliably determine the server's fully qualified domain name, using for ServerName".
  15. Execute the following command:
    sudo nano /etc/apache2/conf.d/fqdn
  16. Enter the following text:
    ServerName localhost
  17. Press Control-o to save the file, Enter to confirm the filename, then Control-x to quit the editor.
  18. To install an FTP server for file transfer using DreamWeaver or similar, execute the following command:
    sudo apt-get install vsftpd
  19. Enter the following command to edit the vsftpd configuration file:
    sudo nano /etc/vsftpd.conf
  20. Scroll down until you see this:
  21. Uncomment the write_enable and local_umask lines so they look like this (these are the stock values in Ubuntu's vsftpd.conf):
    write_enable=YES
    local_umask=022
  22. Press Control-o to save the file, Enter to confirm the filename, then Control-x to quit the editor.
  23. Take ownership of the web hosting directory by entering the following command (replacing <your user name> with the user name you created during the Ubuntu installation):
    sudo chown <your user name> /var/www
  24. Restart your server by entering the following command:
    sudo reboot
  25. Test your web server by entering the IP address you noted earlier into your browser. If it's working you will see a message like this:
  26. Test phpMyAdmin by appending "/phpmyadmin" to the IP address.
  27. If you're in an environment that supports Bonjour (aka Zeroconf), e.g. if you're running Mac OS, execute the following command:
    sudo apt-get install avahi-daemon
  28. Now you will be able to access your testing server by name instead of IP address. Just append ".local" to the name that you gave your server during installation ("ubuntu" by default).
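For example, with the default hostname, the addresses from the earlier test steps become the following (substitute whatever name you gave your server during installation):

```
http://ubuntu.local/
http://ubuntu.local/phpmyadmin
```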
If you find this guide useful I'd love to hear from you.

Friday, July 2, 2010

Div Content Rotation Using JavaScript

Our designer needed a way to rotate the contents of a specific div on a web page. The content to be rotated would include text and images in varying layouts. There are plenty of examples online showing how to change part of a web page using JavaScript, but most of the examples I found use JavaScript to generate the HTML. This means either the designer has to be comfortable programming in JavaScript, or they have to call on a programmer to make changes.

I wanted our designer (my wife) to be able to use their preferred design tool (DreamWeaver) to create the contents of the rotating divs, without having to call on a programmer (me) to make changes. I also didn't want to place any restrictions on the HTML contents of the rotating divs. The solution I came up with is described below. You can see it in action here. The div that rotates is the Business Profile.

The divs
<div id="before">
    <p>this is the div before</p>
</div>
<div id="destination">
    <p>the contents of this div will be replaced</p>
</div>
<div id="source1" style="display:none">
    <p>contents of <span style="font-weight: bold">first</span> rotating div</p>
</div>
<div id="source2" style="display:none">
    <p>contents of <span style="font-style: italic">second</span> rotating div</p>
</div>
<div id="source3" style="display:none">
    <p>contents of <span style="font-family: sans-serif">third</span> rotating div</p>
</div>
<div id="after">
    <p>this is the div after</p>
</div>

What we have here is some arbitrary page content for illustration (div id="before"), followed by the div we want to update with the rotating content (div id="destination"). Next comes a series of divs (div id="source1" etc.), the contents of which will be rotated into the destination div one at a time. Finally we have some more arbitrary page content for illustration (div id="after").

The thing to notice about the divs to be rotated is the style="display:none" property. This prevents them from being rendered or having any impact on the layout of the page. Without this property, the code above would render like this:

this is the div before
the contents of this div will be replaced
contents of first rotating div
contents of second rotating div
contents of third rotating div
this is the div after

Instead, what we see (without the JavaScript) is this:

this is the div before
the contents of this div will be replaced
this is the div after

The designer can create more divs to be rotated, using their favorite design tool, then paste them into the page source (as div id="source4" etc.), adding the display:none property to keep them from rendering or affecting the page layout.
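For example, a fourth rotating div pasted in by the designer might look like this (the id "source4" and its contents are just placeholders):

```html
<div id="source4" style="display:none">
    <p>contents of the fourth rotating div</p>
</div>
```

For the new div to enter the rotation, its id also has to be listed in the sourceName array described below.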

The JavaScript
The following code is added to the body section of the HTML, below the set of divs, causing it to run when the page is rendered:
<script type="text/javascript">
    var sourceName = ["source1", "source2", "source3"];
    rotateDivContent(sourceName, "destination");
</script>

When the designer adds more divs to be rotated, they include the corresponding names in the var sourceName array. This array determines which divs will be rotated into the destination, and in what order. This also allows the designer to bring specific divs into and out of rotation, simply by including them in or excluding them from the array.

The following code is added to the head section of the HTML:
<script type="text/javascript">
// select a source div and copy its contents into the destination div
function rotateDivContent(sourceName, destName) {
    var divIndex = getCookie("divIndex");
    var index;
    if (null == divIndex) {
        // initialize to a pseudo-random value
        var now = new Date();
        index = now.getMilliseconds();
    } else {
        index = parseInt(divIndex, 10);
    }
    // adjust index to enable selection from the array
    index %= sourceName.length;
    // increment so the next div will be selected next time
    setSessionCookie("divIndex", index + 1);
    // copy HTML from source div to destination div
    var sourceDiv = document.getElementById(sourceName[index]);
    var destDiv = document.getElementById(destName);
    destDiv.innerHTML = sourceDiv.innerHTML;
}

// search for named cookie and return its value, or null
function getCookie(name) {
    var cookieRE = new RegExp("(^|; )" + name + "=([^;]*)(;|$)");
    var found = document.cookie.match(cookieRE);
    if (found) {
        return found[2];
    } else {
        return null;
    }
}

// create or update a session cookie
function setSessionCookie(name, value) {
    document.cookie = name + "=" + value;
}

// prevent caching of this page
window.onbeforeunload = function () {};
</script>

The function rotateDivContent does most of the work. In this example, a session cookie is used to keep track of the index of the next div to be displayed. My theory is that even users who are super-sensitive about privacy are reasonably likely to have session cookies enabled. The first time the user visits the page, the cookie doesn't exist, so the index is generated pseudo-randomly by grabbing the current time in milliseconds. On subsequent visits to the page within the same session, the value is incremented and the destination div content is rotated.
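Stripped of the DOM and cookie plumbing, the core logic can be sketched on its own (the function names nextIndex and getCookieFrom are illustrative, not part of the page script):

```javascript
// Pick the next rotation index, mimicking rotateDivContent's logic.
// storedValue is the cookie value (a string) or null on a first visit.
function nextIndex(storedValue, length) {
    var index;
    if (storedValue == null) {
        // first visit: pseudo-random seed from the clock (0-999)
        index = new Date().getMilliseconds();
    } else {
        index = parseInt(storedValue, 10);
    }
    // wrap into the range 0 .. length-1
    return index % length;
}

// Parse one cookie out of a cookie string, the way getCookie
// does with document.cookie.
function getCookieFrom(cookieString, name) {
    var re = new RegExp("(^|; )" + name + "=([^;]*)(;|$)");
    var found = cookieString.match(re);
    return found ? found[2] : null;
}
```

So nextIndex("5", 3) returns 2, and getCookieFrom("a=1; divIndex=7; b=2", "divIndex") returns "7".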

Updating the contents of the destination div is as simple as setting its innerHTML property to that of the selected source div. Instead of writing elaborate JavaScript to generate a limited set of HTML, the code remains simple, and the designer can create arbitrarily complex HTML in the source divs using their preferred design tool.

An anonymous empty function is assigned to window.onbeforeunload to prevent caching of the page. This forces the script to run every time the user visits the page, ensuring that the div contents will rotate. (Unfortunately Safari 5.0 fails to re-render the page when the user clicks their back or forward button. Firefox, Camino and IE work as expected.)

You're welcome to use or modify this code as you see fit. Please let me know if you find it useful.

Wednesday, June 23, 2010

CrashPlan

CrashPlan’s unique feature is the “you show me yours and I’ll show you mine” of backup. What I’m talking about is the ability to backup to a friend’s computer, by mutual consent, which probably means letting your friend backup to your computer. This feature is part of CrashPlan’s emphasis on targeting multiple backup destinations from a single application. CrashPlan offers four classes of backup destination.

If you and a friend each have a CrashPlan account, you can exchange friend codes. When you enter a friend code into your copy of CrashPlan, you gain the ability to backup to your friend’s computer over the internet. This gives you offsite backup without the need to pay for online storage. You have to rely on your friend’s computer being online during the times when you want to perform backups, but as long as there’s enough overlap in your typical online times, this shouldn’t be an issue. More importantly, your friend’s computer will have to be available, online or in person, for you to restore any data.

Any computer you install CrashPlan on using your own account becomes available to you as a backup destination. Destination computers can be on your local network or on the other side of the country. As with friends, the only requirement is that a destination computer be online when you need to backup or restore. In one scenario, you have a home server with ample free disk space and you use it as a destination for onsite backup. In another scenario, your kid goes off to college and uses a computer that stayed home for offsite backup.

A destination folder can be on your main drive or on an external drive. For example, you can let CrashPlan automatically backup to an external drive as an alternative to something like Mac OS Time Machine.

CrashPlan Central is the name of CrashPlan’s online storage destination. The compelling feature of CrashPlan Central is the pricing. There are essentially two levels. The Individual Unlimited Plan lets you backup an unlimited amount of data from a single computer. The Family Unlimited Plan lets you backup an unlimited amount of data from any number of computers, provided they are all owned by you or by a family member. Both plans compare favorably with all the other online storage options that I considered.

CrashPlan works on Mac OS, Windows, Linux and, uniquely among the solutions I researched, OpenSolaris. The inclusion of OpenSolaris may not seem like a big deal, but that happens to be the OS that we run on the in-house web server that we use for testing. As a result, I’ve learned to appreciate ZFS, the OpenSolaris filesystem. In short, ZFS is one of the most reliable filesystems available, and you can have it for free with OpenSolaris. Put two or more identical hard drives into a mirror or RAID configuration (a snap with ZFS), set up your OpenSolaris box as a CrashPlan backup destination, and you have a very solid onsite backup solution. ZFS can automatically detect and correct physical hard drive errors that would result in silent corruption on most filesystems. With CrashPlan’s automatic archive maintenance running on top of ZFS, you should be protected from everything but physical destruction or theft of your onsite backup.

For now I’m sold on CrashPlan as our main offsite backup solution and seriously considering it for onsite backup too.

SpiderOak

SpiderOak is a highly capable online backup solution, with competitive pricing in 100GB increments for an unlimited number of computers. Your first 2GB of online storage is free, just like with Dropbox. The application gives you extensive control of your ‘SpiderOak network’, which consists of all the computers that you’re backing up, and all the files you’re backing up on those computers. SpiderOak works fluidly across Mac OS, Windows and Linux, automatically uploading changes to any file or folder that’s marked for backup. Selecting the files you want to backup, restoring backed up files and previous versions are all easy tasks with the SpiderOak application. Beyond efficient online backup, SpiderOak has a couple of tricks up its sleeve.

First, let’s say you like the idea of synchronizing specific data across two or more computers (a la Dropbox), but you don’t want to have to move files and folders around to achieve this. With SpiderOak, you can set up a ‘Sync’ between two or more folders in your SpiderOak network. Those folders can be on different computers or on the same computer, SpiderOak doesn’t care. Once a Sync is set up, SpiderOak will keep those folders synchronized automatically. Want to do the same with another folder? Just set up another Sync. It’s like having multiple Dropbox folders, all of which are independent, so you don’t have to synchronize everything on all your computers.

SpiderOak’s second clever trick is a feature called ShareRooms or Shares. This feature lets you make a subset of your data available to others online. You set up a named Share that includes one or more of the folders in your SpiderOak network. These folders don’t have to live on the same computer. When you want to give someone access to a share, you give them either the login credentials or a unique URL. Your friend or colleague can then browse and download anything contained in that share. If you make changes to files or folders included in a Share, those changes are reflected in the online ShareRoom. Users can even be notified of changes via an RSS feed.

So why would anyone use Dropbox when you can have SpiderOak? Although SpiderOak is easy to use, the process of setting up a Sync or a Share is not as simple as just installing Dropbox and throwing files at it. Contrast Dropbox’s no-UI approach with the 5 main tabs, 11 sub-tabs and maybe 50+ buttons, checkboxes, combo-boxes, text boxes and menus of SpiderOak. The absolute simplicity of Dropbox is a big win if what it does so well is all that you need.

We’ll continue using SpiderOak for its Sync feature. This will make it easy for us to keep files synchronized when we’re both working on the same project, without having to move project folders around. The only reason we won’t be using it for all our online backup is that I found a solution offering unlimited storage for the same price as SpiderOak’s 100GB package. If you have less than 100GB of data to backup, SpiderOak’s unique combination of features is compelling. I also find the company’s philosophy and openness to be very refreshing.


Dropbox

When you install the Dropbox application, it creates a special folder under your user folder - this is your ‘Dropbox’. Any files you put in this folder are automatically uploaded to your online storage area. These files are also accessible through the Dropbox website on any computer that has an internet connection. All you have to do is login with your username and password.

The point of Dropbox is that when you install it on a second computer, the contents of your Dropbox are synchronized on both computers. Make a change to a file in your Dropbox on one computer and those changes appear on the other computer. Install it on a third and the same files are synchronized on all three. Install it on your iPhone and ... well, you get the idea.

Dropbox is clean, simple and easy to understand. In fact, it’s so simple it doesn’t even have a user interface, beyond a small set of preferences. It works automatically and seamlessly, on Mac OS, Windows and Linux.

I’ve installed Dropbox for three clients and they are all very satisfied. Two have both a desktop computer and a laptop, with files they want to maintain and backup on both. Instead of going nuts trying to keep both machines in sync by copying or emailing files back and forth, they just work inside their Dropbox and their files remain synchronized and backed up offsite. They get the bonus of a local backup by having their important files stored on both computers. Both clients have less than 2GB of data that needs to be backed up, which means they get their offsite backup free.

The third client uses Dropbox to keep specific files synchronized between two desktop PCs. One PC runs his CAD program, the other controls his CNC milling machine. He told me having Dropbox is saving him an hour of file sharing chores every day, while also preventing multiple version confusion.

Here’s why Dropbox isn’t a solution to our offsite backup needs:
  1. We have a couple of hundred GB of data to backup and Dropbox gets expensive at that level.
  2. We don’t want the hundreds of GB of data we’re backing up to be duplicated across all our computers.
  3. We don’t want to move the hundreds of thousands of files we’re backing up into the Dropbox folder.
Nevertheless, I will continue to use Dropbox for what it does so well and to recommend it whenever it makes sense.

For a free trial of Dropbox, click here and we’ll both get an extra 250MB of online storage for free!

Backup Strategies and Solutions

If you’re serious about protecting your data, you need both onsite and offsite backups. Onsite backups provide immediate access to previous versions of files and copies of accidentally deleted files. Onsite backups can also get you up and running again within minutes of a hard drive failure. Offsite backups protect you against less likely but more catastrophic events that result in physical loss or destruction of your data and your onsite backups.

Our onsite backups are implemented primarily using the Time Machine application that’s built into Mac OS 10.5 (Leopard) and later. This gives us instant access to hourly backups for the last 24 hours, daily backups for the last month, and weekly backups limited only by the capacity of our backup drives. We also use SuperDuper to duplicate the system drive of our Mac Pro to enable immediate restart in the event of drive failure.

None of the above protects our data against physical disaster. If the house burns down, our backups are toast. When I was commuting to an office every day, I took a duplicate of our Time Machine backup to my office each Monday on a portable hard drive, rotating weekly between two drives so there was always a fairly recent backup stored offsite. Now that I work at home, we need a different solution to offsite backup.

We could rent a safe deposit box at our local bank and continue with the portable drives, but we’d need several more drives (two for each system being backed up) and one of us would have to remember to make the copies and switch them at the bank. This would introduce the elements most likely to break any backup strategy that has a manual component, namely human error and inertia.

I decided to take a closer look at online backup solutions for the offsite portion of our backup strategy. I didn’t consider anything that doesn’t run on Mac OS. My search included Arq, Carbonite, CrashPlan, Dropbox, JungleDisk, Memopal, Mozy, and SpiderOak. The following three posts describe the solutions that I plan to use and what I like about each of them.

Tuesday, March 23, 2010

Joys and Woes of BSD on Sharp PC-MC24

Sometimes the only thing preventing me from selling a used PC as a complete system, instead of a collection of parts, is the lack of a transferable Windows license. With that in mind, I decided to see what the world of free operating systems has to offer in 2010.

Most people have heard of Linux, but that wasn’t what I had in mind. I had a bad time with Solaris in the past, between unsupported network cards and getting stuck in a software update dependency cycle, so that was out. I considered OpenSolaris, but like Solaris, it’s really aimed at the server end of things.

Many people have heard of FreeBSD, more so since it became the core of Mac OS in 2001. What I hadn’t heard of before last week was PC-BSD. It’s a flavor of FreeBSD aimed specifically at the PC desktop. I downloaded the DVD image for PC-BSD 8.0, which took all day on our 1.5Mbit DSL connection. After checking the MD5 signature, burning and verifying the image, I slid it into the Sharp’s DVD drive.

Wouldn’t boot. Somehow when they designed the PC-MC24, Sharp contrived to enable booting from a CD but not from a DVD[1]. OK, download the ‘boot only’ CD image, burn and verify, and start a network install. Of course, it took all day again because it’s downloading the entire install again. Finally I had it installed, but no matter what I tried, the machine would always hang after the display configuration screen.

I figured maybe the install was bad, so I ran it again (overnight this time). Same result. I booted in single-user mode after enabling verbose logging. Unfortunately the logs were nowhere to be found. I tried everything I could think of but nothing worked.

As a last-ditch effort I downloaded the previous version (7.1.1). I almost cried when it worked right out of the box. It even detected the video card and wireless network card correctly on the first attempt, which is more than I can say for Windows XP. Within minutes I was downloading Firefox and OpenOffice. I still don’t know what the issue is with version 8, but I’m having fun exploring version 7.

[1] Correction: I found out later that the DVD was bad, even though it had verified as good during the burn process. The Sharp will boot just fine from DVD.