JavaScript Triggers

PPK has written an article on alistapart about JavaScript triggers. It’s interesting, if unlikely to be new to many, but then that’s alistapart generally. The general idea is that you should include custom attributes, or use IDs, in your HTML to let you attach scripted behaviour to the document.

However, there are a number of important problems the article doesn’t mention. If you attach behaviour to your document from outside the HTML, there’s a period during which the user can interact with the HTML elements without the proper behaviour in place. Either they’ll trigger the no-script fallback - which can be annoying: they get a much worse user experience, and you’ve wasted your time writing the script - or, as is more likely for the majority of script developers these days, nothing will happen at all.

The delay in attaching behaviour loses one of the key elements of UI: consistency.

You can of course author your script in such a way that this isn’t a problem, but to do that you can’t abstract your scripts out: they have to be inline, immediately following the HTML elements they modify - the onload event is much too late. There is another alternative, hiding your content until everything has been rendered and the behaviour attached, but this is far from trivial - how can you be sure you can show it again in a degradable manner? - and of course it makes your site seem much less responsive, so I think that is worse.
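
As a minimal sketch of the inline approach (the form, its id and the emptiness check are invented for illustration), the script block immediately follows the element it modifies, so the behaviour is attached before the user can reach it:

    <form id="search" action="/search">
    <input type="text" name="q">
    </form>
    <script type="text/javascript">
    // Attach the behaviour immediately, before the rest of the page
    // (images, adverts, etc.) has finished downloading.
    if (document.getElementById) {
      document.getElementById("search").onsubmit = function () {
        // Hypothetical check: refuse an empty query.
        return this.q.value != "";
      };
    }
    </script>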

Separating script and HTML simply isn’t practical in today’s mark-up languages, even if it’s desirable. Simple onsubmit form validation might just about cut it, because the user is unlikely to interact with the form much before the behaviour has been attached, but more complicated validation that provides real-time feedback runs into the same problem as “onload=’someformelement.focus()’”, where most users are well into the field before being viciously snapped back to the first.
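
To see the focus problem in miniature, consider a hypothetical page (not Google’s actual mark-up): if images or adverts are still downloading when the user starts typing in the second field, onload fires late and yanks the cursor back to the first.

    <body onload="document.forms[0].q.focus()">
    <form action="/search">
    <input type="text" name="q">
    <input type="text" name="location">
    </form>
    </body>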

Techniques like XBL or HTCs are practical future ideas for how proper separation of behaviour can be achieved, but I don’t think a retrofit onto HTML is a good idea yet.

Comments

  1. Jimmy Cerra Says:

Well, if you only use small JavaScript sources (1-3k), then these idioms usually work well. The required downloads would usually be pretty quick, even on a 33.6kbps modem.

  2. Jim Ley Says:

The size of the JavaScript is irrelevant: if you’re waiting for onload, what matters is the time taken for your entire page to download, including all linked media, the 150k of images, the adverts which need extra DNS look-ups, etc. onload is simply too late. I use a 1MB cable modem and I still see the onload problem - the focus on Google’s homepage, which doesn’t have huge amounts of JavaScript; it’s all inline and there are only tiny images. Limiting the size of your page doesn’t solve the problem, it just makes it rarer for the user to see it. Making it rarer is silly when you can eliminate it entirely.

  3. Seth Says:

I notice no such delay on Google; the cursor is in the search box by the time I’m first able to glimpse it. In fact, I’ve never noticed such a JavaScript problem. Could you provide other examples?

  4. Jonathan Broome Says:

Since the idea of losing a wifi connection mid-page-load is pretty plausible, what if you used a dummy handler inline and replaced it later via triggers?

In a compromise (triggers vs reliability), suppose you did something like this:

<head>
<script type="text/javascript">
// Dummy handler: blocks submission until the real validation arrives.
function validateForm(form) {
  alert("Please wait");
  return false;
}
</script>
<script type="text/javascript" src="validation.js"></script>
</head>
<body onload="applyTriggers()">
<form onsubmit="return validateForm(this)">...</form>
</body>

“validation.js” would contain the applyTriggers() function for event-based validation, and another function, also named “validateForm”, containing the full logic applied on form submit.

Before validation.js has arrived, the embedded dummy version forces the user to wait and try again; once the .js file arrives, the later-defined function of the same name replaces the former and lets the user continue with full validation in place.
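
For instance, validation.js might look something like the following - the field name and the checks are made up, just to show the shape:

    // validation.js
    // Redeclaring validateForm replaces the inline dummy as soon as
    // this file has downloaded and been parsed.
    function validateForm(form) {
      if (form.email.value.indexOf("@") == -1) {
        alert("Please enter a valid email address.");
        return false;
      }
      return true;
    }

    // Called from body onload to attach further event-based behaviour.
    function applyTriggers() {
      document.forms[0].email.onblur = function () {
        // Hypothetical real-time feedback.
        if (this.value.indexOf("@") == -1) {
          this.style.borderColor = "red";
        }
      };
    }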

One problem here is making the user wait. On Google, for instance, the wait makes no sense. On an intranet web app, the wait for validation might be more acceptable.

Another problem: if you’re bringing JS into the structural document anyway, why not just do it all there? Using a server-side #include is nearly as separational - your code could still all be in different files, it’s just not independently cacheable on the client side.
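
The #include version might look something like this (assuming the server actually processes SSI directives in the page), so validation.js still lives in its own file on the server but is delivered inline with the page:

    <script type="text/javascript">
    <!--#include virtual="validation.js" -->
    </script>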

    Is this a fair compromise, or are the problems more than I thought, or should I just go back to my crack pipe?