Coding with Jesse

Detecting and Debugging Timeouts and Intervals

May 28th, 2007

When you start to cram a lot of JavaScript animations and Ajax onto a web page, it can become tricky to know just what code is running in the background. And when performance problems show up, it's equally tricky to track down which code is being executed.

Luckily, the only way to get code to run in the background with JavaScript is through the functions setTimeout and setInterval. Even more luckily, we can overwrite these functions so that we know whenever they are called:

window.setInterval_old = window.setInterval;
window.setInterval = function(fn, time){
    console.log('interval', fn.toString(), time);

    return window.setInterval_old(function(){
        console.log('interval executed', fn.toString());
        fn();
    }, time);
};

window.setTimeout_old = window.setTimeout;
window.setTimeout = function(fn, time){
    console.log('timeout', fn.toString(), time);

    return window.setTimeout_old(function(){
        console.log('timeout executed', fn.toString());
        fn();
    }, time);
};

This will send output to Firebug whenever a timeout or interval is first initiated, and again when the function is actually called.
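
For example, a simple timeout produces output along these lines (the exact formatting of the function source depends on the browser's toString()):

setTimeout(function(){ alert('hi'); }, 500);

// logged immediately:
//   timeout function () { alert('hi'); } 500
// logged about 500ms later:
//   timeout executed function () { alert('hi'); }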

This solution is rather nice because it also lets you know what 3rd party JavaScript widgets are doing in the background, without needing to add debugging messages to them. You could overwrite nearly any method like this to get similar debugging output (document.getElementById, Array.prototype.push, or practically anything).
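
For example, wrapping document.getElementById only takes a couple of lines (this is just a sketch; it works in Firefox, but some browsers won't let you reassign built-in document methods):

document.getElementById_old = document.getElementById;
document.getElementById = function(id){
    console.log('getElementById', id);
    return document.getElementById_old(id);
};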

Wikipedia Discussions

May 23rd, 2007

I'm sure you all use Wikipedia on a regular basis. But what you may not be aware of are the millions of hours of hilarity, entertainment, and often revealing information waiting for you on every page of Wikipedia. At the top of the page is a link to the "Discussion" of that page, where the authors talk about what they want to add or remove, and often get into great, long debates (read: flamewars) about the contents of that page.

Some notable examples I've uncovered:

Just a small preview of the 1,797,673 "Talk" pages waiting to be discovered. If you find any other gems, please let us all know in the comments!

Redirecting after POST

May 22nd, 2007

When working with forms, we have to think about what will happen when someone clicks back or forward or refresh. For example, if you submit a form and right afterwards refresh the page, the browser will ask if you want to resend the data (usually in a pretty long alert box talking about making purchases).

People don't always read alert boxes, and often get used to clicking OK all the time (I know I fall into this category), so sometimes comments and other things get submitted more than once.

To solve this, you can simply do an HTTP redirect after processing the POST data. This is possible with any server-side language, but in PHP it would look something like this:

if (count($_POST)) {
    // process the POST data
    add_comment($_POST);

    // redirect to the same page without the POST data
    header("Location: ".$_SERVER['PHP_SELF']);
    die;
}

This example assumes that you process the form data on the same page that you actually want to go to after submitting. You could just as easily redirect to a second landing page.

On this site, on each blog post page, I have a form that submits to the same blog post page. After processing the comment, I send a redirect again to the same page. If you add a comment and then refresh, or click back and then forward, the comment won't be submitted twice. (However, if you click back and then click Add Comment again, it will. I really should filter out duplicates, but that's another topic.)

This works because you essentially replace a POST request with a GET request. Your browser knows that POST requests are not supposed to be cached, and that you should be warned before repeating a POST request. After the redirect, the page is the result of a simple GET request. Refreshing the page simply reloads the GET request, leaving the POST request lost between the pages in your browser history.

Helping visitors with .htaccess

May 21st, 2007

When I changed all my URLs, I put in place something to email me whenever there was a 404 (page not found). This way, if I screwed up something with my forwarding, I'd know.

It turned out that people were getting 404s mostly because of one of two mistakes. Either there were spaces in the URL (a copy-and-paste error, perhaps?), or there was a trailing period at the end of the URL (probably because the URL was part of a sentence, and the period became part of the link when it was auto-linked).

The two broken URLs look like this:

http://www.thefutureoftheweb.com/blog/helping-visitors-with-htac cess

http://www.thefutureoftheweb.com/blog/helping-visitors-with-htaccess.

I thought I'd make things easier for people by auto-correcting these two mistakes. I added a few lines to my .htaccess file like so:

RewriteEngine on

# remove spaces in URL as a favour to visitors
RewriteCond %{REQUEST_URI} .+\s+.+
RewriteRule ^(.+)\s+(.+)$ /$1$2 [L,R=301]

# remove trailing periods on URL as a favour to visitors
RewriteCond %{REQUEST_URI} \.+$
RewriteRule ^(.*)\.+$ /$1 [L,R=301]

Unlike my major changes the other day, I'm not obliged to maintain these rewrites forever — after all, they are still mistakes in the URL, and I'm not giving out URLs with spaces and dots in them on purpose. However, I'd rather bring people to the correct page if I can.

Resizing a web layout based on browser size

May 19th, 2007

Some people thought that my new layout was too thin, and I had to agree. Originally, I wanted the text on the page to sit in a narrower, more readable column. I also tried to stick to a layout that could fit within a browser at 800x600 resolution. The result was a column of text that was less readable because it was too narrow.

Today, I added a bit of JavaScript to the page to resize the layout for anyone with a browser wider than 930px. The JavaScript looks like this:

var body_check = setInterval(function(){
    if (document.body) {
        // stop polling as soon as the body exists
        clearInterval(body_check);

        if (document.body.clientWidth > 930)
            document.body.className += ' wide';
    }
}, 10);

Every 10ms, this script checks if the body is available yet. As soon as it is, the checking is cancelled, and the 'wide' class is added to the body if the browser is wider than 930px.

I opted for a polling technique instead of window.onload, or even addDOMLoadEvent, so the design wouldn't noticeably jump when the class was added.
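
The more obvious approach, using window.onload, would have looked something like this, but onload doesn't fire until everything on the page (images included) has finished loading, so the narrow layout would already be visible and the change would show up as a jump:

window.onload = function(){
    // by this point the page has already rendered at the narrow width,
    // so adding the class here causes a visible jump
    if (document.body.clientWidth > 930)
        document.body.className += ' wide';
};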

To go along with this JavaScript, I added the following in the CSS:

#body { width: 760px; }
#main h1 { width: 560px; }
#main .section { width: 444px; }

body.wide #body { width: 910px; }
body.wide #main h1 { width: 710px; }
body.wide #main .section { width: 594px; }

I isolated the three fixed widths that need to change, and simply increase each of them by 150px whenever the 'wide' class is added to the body.

I hope this wider design is a bit more readable for the 98% of you with a higher resolution.

A URL is (maybe not) forever

May 17th, 2007

Last year, I wrote that A URL is forever. Well, like any good hypocrite, I went and changed my URLs yesterday.

I used to have URLs like:

/blog/2007/5/a-url-is-maybe-not-forever

Originally I thought having the date in there would make my site more scalable, so in 100 years (ha!) I wouldn't have a problem finding a unique URL for my blog posts. Yesterday, I decided I'd rather have shorter URLs and just make myself come up with unique URLs for my blog posts (a matter of taste, really). So now my URLs look something like this:

/blog/a-url-is-maybe-not-forever

So yes, my URLs weren't forever. But I didn't just change them all and break all the old URLs. No, the original URLs all still work. To do this, I added a 301 (permanent) redirect to my .htaccess file, like this:

RewriteEngine on

# need this forever
RewriteCond %{REQUEST_URI} ^/blog/\d{4}/\d+/.+
RewriteRule ^blog/\d{4}/\d+/(.*)$ http://www.thefutureoftheweb.com/blog/$1 [L,R=301]

Now, for the life of this site, I have to support both styles of URLs (at least for all blog posts posted before today). That's a sacrifice I'll have to make to have shorter URLs. And that's really what's important: once a URL is released into the wild, it should always bring someone to the page it originally referenced, even if the preferred URL for that page changes.

Detecting focus of a browser window

May 16th, 2007

If you have some constantly running Ajax or JavaScript updates on a page, it might be nice to pause these when the browser is minimized or in the background, or when the user switches to another tab. After all, there's no sense in using the user's CPU and network if they aren't even watching what you're doing.

To achieve this, we can use the window.onfocus and window.onblur events like this:

function onBlur() {
	document.body.className = 'blurred';
}
function onFocus() {
	document.body.className = 'focused';
}

if (/*@cc_on!@*/false) { // check for Internet Explorer
	document.onfocusin = onFocus;
	document.onfocusout = onBlur;
} else {
	window.onfocus = onFocus;
	window.onblur = onBlur;
}

These events work in every major browser (Firefox, Internet Explorer 6/7, Safari & Opera).

Unfortunately, there's no way to tell with JavaScript if the browser is visible to the user. For example, they might be chatting on IM in a small window in front of the browser. You'll also find that the page is 'blurred' when you click into the location bar of the browser (except in Safari). You might want to display a message like "PAUSED" (think Super Mario Brothers) so people know why everything has stopped moving.
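
As a rough sketch of that idea (the 'paused-message' element and the five-second Ajax interval here are just made up for illustration), the handlers could also set a flag that pauses a polling loop and toggles the message:

var updates_paused = false;

function onBlur() {
	document.body.className = 'blurred';
	updates_paused = true;
	// show the hypothetical "PAUSED" element
	document.getElementById('paused-message').style.display = 'block';
}
function onFocus() {
	document.body.className = 'focused';
	updates_paused = false;
	document.getElementById('paused-message').style.display = 'none';
}

setInterval(function(){
	if (updates_paused) return; // skip the update while the window is in the background
	// ...do the regular Ajax update here...
}, 5000);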

I've set up a demo page where you can try this out.

[Update October 14, 2008 - It seems the blur event handler would fire in Internet Explorer when focus moved from the body to an input or link. I've changed the code to use document.onfocusin and document.onfocusout instead, which seems to work better. Now, moving focus from the body into an input causes both handlers to fire, one right after the other (onfocusout, then onfocusin), resulting in a final "focused" state.]

PHP vs. Ruby on Rails: Update

May 15th, 2007

It's now been six months since I announced I had switched from PHP to Ruby on Rails. Reader Richard Wayne Garganta wrote me to ask:

So, now that you have worked with rails a while - are you still in love with it? I tried it a while back and found out my biggest problem was deployment. I have considered returning to it. Your opinion?

Originally, I decided to start using Rails because I was getting bored doing server-side development. I asked myself: why was it so much fun to program in JavaScript, but so boring to program in PHP? I figured it might be the programming language itself that was boring me, so I took a stab at learning Ruby on Rails.

Ruby is a really great dynamic language that is fun to work with (though somewhat tricky to get used to). Rails is a great framework, especially when it comes to using ActiveRecord to simplify working with complex data models. I think that Ruby on Rails is a great way to build a complex web site.

Rails also makes it much easier to set up tests (which I was pretty lazy about) and to have separate development and live environments. There are a lot of great solutions to common web development problems that make Rails a lot more fun to work with, especially on big projects.

Eventually I realised that server-side programming in Rails, while a bit easier and more fun, was still server-side programming. Even though I didn't have to worry about writing SQL and building forms, the types of problems and challenges were basically the same as in any server-side language. I realised that even a new language and framework wouldn't change server-side programming altogether. At the end of the day, I still have a lot more fun coding with JavaScript, Ajax, CSS and HTML.

Lately, though, I've been having a lot more fun coding simple templates using PHP. There's something kind of simple and sweet about making a page dynamic just by putting a few lines of code in a standalone template. MVC is the only way to build a large site or application that is easy to manage, but if you're doing something simple, it's definitely overkill. It's no surprise that the 37signals and Ruby on Rails web sites run on PHP.

So the moral of the story? Use Ruby on Rails for applications, use PHP for simple web sites, and don't use either of them if your passion is client-side development. :)