GFX::Monk Home

Posts tagged: javascript

preventDefault on a checkbox's click event is inaccurately named

This drove me mad for a few days: calling event.preventDefault() on an <input type="checkbox">'s click event has a surprising result. It will not prevent the default action from occurring, but will cause the input element's checked (and indeterminate) properties to be reset to the values they had before the event occurred.

So consider this code:

var checkbox = document.querySelector('input[type=checkbox]'); // some checkbox element
var shouldBeChecked = false;
checkbox.addEventListener('click', function(e) {
	e.preventDefault();
	shouldBeChecked = !shouldBeChecked;
	checkbox.checked = shouldBeChecked;
});

It seems a bit unnecessary to call preventDefault() and then do the same thing the default action would have done, but bear with me. The important steps here are:

  1. a click occurs
  2. the browser remembers the current checked value, and eagerly toggles it to the opposite
  3. the event handler fires
  4. preventDefault() is called (it doesn’t immediately change the state of the checkbox)
  5. the event handler sets checkbox.checked to some value (it doesn’t matter)
  6. the event handler completes, and the event cancellation behaviour occurs
  7. if the event was preventDefault()ed, the original value of checked is restored

Notably it’s not really “cancelling” the default behaviour, it’s opting into a mode which will rollback the default behaviour under most circumstances. That means regardless of what your event handler sets the property to, it will be ignored.
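You can watch the rollback happen with a snippet like this (my own illustration; the element lookup is hypothetical):

var checkbox = document.querySelector('input[type=checkbox]');
checkbox.addEventListener('click', function(e) {
	e.preventDefault();
	checkbox.checked = true;
	console.log(checkbox.checked); // true - our assignment is visible for now
	setTimeout(function() {
		// the event has completed, and the browser has rolled checked back
		// to its pre-click value, discarding our assignment
		console.log(checkbox.checked);
	}, 0);
});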

Interestingly, this is all in the spec, and it’s actually pretty obvious once you find the relevant section:

The canceled activation steps consist of setting the checkedness and the element’s indeterminate IDL attribute back to the values they had before the pre-click activation steps were run.

Unfortunately I didn’t know where to look. I googled, and found some stack overflow questions and multiple react bugs (this was my biggest hint that it wasn’t my fault). Ultimately I took a literally-two-minute dive into the servo code, found htmlinputelement.rs and searched for “checkbox”, which brought me not only to code confirming the behaviour I’d seen, but also to direct links to the relevant spec. So thanks, servo!

But why were you doing something so dumb?

Oh yeah, I said “bear with me” above. The fact that this code is unnecessary here is not really the point - it’s just a trivial example. Imagine that the logic to determine the next value of shouldBeChecked was complex, or abstracted away and not the direct responsibility of this particular event handler. In general, it seems reasonable to tell the browser “I am in control. Don’t do the things you normally do, I’m going to deal with it all”.

This is exactly what many recent UI frameworks do, e.g. react. React doesn’t want to deal with stateful input elements and their transitions, but instead provides a virtual DOM where (conceptually) the entire state of the world is specified declaratively without regard for the previous state. Suddenly it becomes entirely reasonable to tell the browser “step aside, I’ll handle this” even if the end result in many cases is indistinguishable from just letting the browser do its thing.

Indeed, react has a problem with this very behaviour - there are a handful of issues on github, but all they can do is document it.

..and how can I fix it?

Well, there’s a couple of options:

  1. use ugly hacks

If you call preventDefault() and then:

setTimeout(function() {
	// deferred until after the event (and its cancellation behaviour)
	// has run, so this assignment isn't rolled back
	checkbox.checked = shouldBeChecked;
}, 0);

… it will work. But that’s terrible.

  2. just don’t call preventDefault()

This is almost always doable, but might require some special-casing.

If you don’t use preventDefault and you always assign to checked during your event handler, your value wins. In this case there’s no need to prevent the default, because the default already happened (and then you overwrote it). So you can rely on that.
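Concretely, option 2 for the earlier example looks something like this (same setup as before, just without the preventDefault()):

checkbox.addEventListener('click', function(e) {
	// no preventDefault(): the browser eagerly toggles checked,
	// and then we unconditionally overwrite it with our own value
	shouldBeChecked = !shouldBeChecked;
	checkbox.checked = shouldBeChecked;
});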

Where it gets tricky is if you don’t always assign to checked in your event handler. e.g. much of the point of a virtual DOM is only performing the minimal modifications required to update an element’s state from <previous> to <current>. If your <previous> state was unchecked, an efficient virtual DOM should not set checkbox.checked = false, because that’s not necessary. So if you don’t use preventDefault(), your virtual DOM can get out of sync with the real DOM, which is bad[1].
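To make that concrete, here’s a contrived sketch of minimal-diff update logic (hypothetical, not taken from any particular library):

// only write properties whose desired value changed since the last render
var updateCheckbox = function(el, prev, next) {
	if (prev.checked !== next.checked) {
		el.checked = next.checked;
	}
	// if prev.checked === next.checked, nothing is written - but the
	// browser's default click handling may have toggled the real element
	// behind our back, leaving it out of sync with our virtual DOM
};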

In practice, most checkbox clicks are going to result in an immediate toggle - any cancellation will probably occur in response to e.g. a network error, which will be fine (since it’s asynchronous). It’s rare that you’d need to both:

  • sometimes prevent a checkbox’s default action synchronously, and yet
  • not be able to use a more appropriate mechanism like the disabled attribute

If you ever do have that, you’ll just have to add some potentially-ugly code to the event handler to figure out whether to call preventDefault, rather than leaving it to a generic virtual DOM library.

[1] I can’t trigger this buggy behaviour in react though; it likely has additional smarts around dealing with input elements changing behind its back.

Figuring out what transducers are good for (by trying to use them for a bunch of problems in JavaScript)

I’ve been aware of transducers for a little while, but haven’t actually used them, or even really felt like I fully grokked what they were good for. They come from the clojure community, but are making their way into plenty of other languages and libraries too. I’ve seen claims that they are a game-changing, breathtaking new concept, which didn’t really square with what they looked like.

So I thought I’d learn more about them by just attempting some plausible but detailed examples with them in JavaScript. If you’ve heard about transducers but aren’t really sure what they’re good for, perhaps this’ll help clarify. And if you’ve never heard of transducers, feel free to take a detour via the clojure documentation.
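As a rough sketch of the core idea (mine, not the official definition): a transducer takes a reducing function and returns a new reducing function, which lets you compose transformations that all run in a single pass, without intermediate arrays:

// map and filter as transducers: each wraps a downstream reducer
var map = function(f) {
	return function(next) {
		return function(acc, x) { return next(acc, f(x)); };
	};
};
var filter = function(pred) {
	return function(next) {
		return function(acc, x) { return pred(x) ? next(acc, x) : acc; };
	};
};
var append = function(acc, x) { acc.push(x); return acc; };

// double everything, then keep values greater than 2 - in one pass
var xform = map(function(x) { return x * 2; })(
	filter(function(x) { return x > 2; })(append));
[1, 2, 3].reduce(xform, []); // => [4, 6]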

Ending a stream in nodejs

stream.end();

Nope. Nodejs is single-threaded, and almost nothing useful happens synchronously. At this point, you have registered your interest in ending the stream, but some I/O (and possibly even further logic) needs to happen before that’s actually done. So your data isn’t necessarily written yet, and the write can still fail.

Ok then, lets use a callback to get notified of when the stream is actually done!

stream.end(null, null, function() {
	console.log("OK, now it's done!");
});

Well, yes and no.

This is part of the new nodejs stream API, introduced in version 0.10. But many third-party “streams” don’t implement it, and will silently ignore any arguments you pass to end(). Which means your “after end” code may never run, depending on how the stream is implemented.

Ok, so let’s use events! Events are The NodeJS Way(tm)!

stream.on('end', function() {
	console.log("OK, now it's done!");
});
stream.end();

We’re getting close. And in fact, I believe this is correct for streams that are part of the core API in nodejs < 0.10. But it’s wrong if you’re on nodejs 0.10 or later, and it’s also wrong if you’re on nodejs 0.8 but using a third-party stream which supports nodejs 0.10.

In nodejs 0.10 and later, streams emit end when they’re ending, but not necessarily done with whatever I/O needs to happen after that. So you need to wait for the finish event, instead:

stream.on('finish', function() {
	console.log("OK, now it's done!");
});
stream.end();

Great!

Oh, except that (just like the callback version of end()), some APIs don’t implement this. So again, your code just sits there waiting forever, and you don’t know why it’s stuck.

But it turns out some (most? I have no idea) of those streams implement close instead[1]. So no worries, you can just wait for both close and finish.

var done = function() {
	console.log("OK, now it's done!");
};
stream.on('close', done);
stream.on('finish', done);
stream.end();

..and now you’re running done (a.k.a “the rest of your program”) twice in some circumstances, because a lot of streams emit both close and finish.

var alreadyDone = false;
var done = function() {
	if(alreadyDone) return;
	else alreadyDone = true;
	console.log("OK, now it's done!");
};
stream.on('close', done);
stream.on('finish', done);
stream.end();

Ok, so now we’re listening for close or finish, and only running the rest of our code once.

..but of course that won’t work on nodejs 0.8.

So you have to make a choice: do you wait for end? For some streams, this leaves you with incomplete data. For other streams, it’s the only correct thing to do. On the other hand you can wait for both finish and close, and make sure you don’t accidentally run the rest of your program multiple times. That’ll work, as long as you’re not dealing with a stream that only emits end, in which case your code will never continue.
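If you go with the finish-or-close option, you can at least package the pattern above into a helper (a sketch, not battle-tested; it will still hang on streams that only emit end):

var onDone = function(stream, callback) {
	var called = false;
	var done = function() {
		if (called) return;
		called = true;
		callback();
	};
	stream.on('finish', done);
	stream.on('close', done);
};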

Are we having fun yet?

What happens if you do it wrong?

Well, if you’re incredibly lucky, your code will stop doing anything and you’ll have a clue why.

If you’re less lucky, your code will stop doing anything and you’ll have no idea why.

..and if you’re even less lucky, everything will look like it’s working. Except that data will be lost because you’re not waiting for the right event, and your code will prematurely continue. One fun example of this is unpacking a tar file: if you wait for the wrong end event, files that should have been extracted are simply missing from your disk. Which is always a good feature of archive extraction. And since it depends on timing, it may even work properly 90% of the time. Even better, right?

Oni Conductance

This past week, we (Oni Labs) announced Conductance, the next-generation web app server built on the StratifiedJS language (which we also built, and which has seen a number of steadily improving public releases over the past couple of years).

For a long time, I’ve been convinced that plain JavaScript is simply inappropriate for building large-scale, reliable applications. That doesn’t mean it’s impossible, but the effort required to correctly write a heavily-asynchronous application in javascript involves a frankly exhausting amount of careful error checking and orchestration, and there are whole classes of confusing bugs you can get into with callback-based code which should not be possible in a well-structured language.

So I was extremely happy to join the Oni Labs team to help work on StratifiedJS, because it’s a much more extensive (and impressive) attempt to solve the same problems with asynchronous JavaScript that I was already trying to solve.

Conductance is a logical progression of this work: now that we have StratifiedJS, we’ve used its features to build a new kind of app server: one which maintains all the performance benefits of asynchronous JavaScript (it’s built on nodejs, after all), but which makes full use of the structured concurrency provided by StratifiedJS for both server and client-side code. And not just for nice, modular code with straightforward error handling - but actually new functionality, which would be impossible or extremely ungainly to achieve with normal JavaScript.

If you’re interested in building web apps (whether you already do, or would like to start), please do check out conductance.io for more details, and plenty of resources to help you get started building Conductance applications.

StratifiedJS 0.14 released

Today we (Oni Labs) released StratifiedJS 0.14. This is the first release since I started working here full-time, and it’s a big one: loads of useful new syntax, as well as a thoroughly kitted-out standard library.

StratifiedJS is a Javascript-like language that compiles to Javascript, but which supports advanced syntax and semantics, like:

  • blocking-style code for asynchronous operations (no callbacks!)
  • try/catch error handling works even for asynchronous code
  • a structured way of managing concurrent code (waitfor/or, waitfor/and, waitforAll, waitforFirst, etc.; see the sketch after this list)
  • ruby-style blocks
  • lambda expressions (arrow functions)
  • quasi-quote expressions
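As a taste of what waitfor/or looks like, here’s a rough sketch (the http module and its API here are assumptions on my part; see the documentation for authoritative examples):

// race an HTTP request against a timeout: whichever branch finishes
// first wins, and the other branch is automatically cancelled
waitfor {
	var response = require('sjs:http').get(url);
} or {
	hold(5000); // suspend this branch for 5 seconds
	throw new Error("request timed out");
}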

Check it out at onilabs.com/stratifiedjs.

Module resolution with npm / nodejs

NodeJS’ require() method is special. npm is special. Some of that is good - its efforts to dissuade people from installing anything globally are commendable, for a start. But some of it is bad. It’s probably better to be aware of the bad parts than to learn them when they bite you.

Let’s run through a quick example of what happens when I install a package. For example, installing the bower package will:

  • install bower’s code under node_modules/bower
  • under node_modules/bower, install each of bower’s direct dependencies.

Of course, this is recursive - for each of bower’s direct dependencies, it also installs all of its dependencies. But it does so individually, so you end up with paths like (this is a real example):

node_modules/
  bower/
    node_modules/
      update-notifier/
        node_modules/
          configstore/
            node_modules/
              yamljs/
                node_modules/
                  argparse/
                    node_modules/
                      underscore

Unlike pretty much every package manager I’ve encountered, npm makes no attempt to get just one copy of a given library. After installing bower, npm has unpacked the graceful-fs package into 4 different locations under bower. I’ve also installed the karma test runner recently, which indirectly carries with it another 10 copies of graceful-fs. My filesystem must be exceedingly graceful by now.
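This layout works because of how require() resolves names: node walks up from the requiring file, checking each node_modules directory along the way. Roughly (with hypothetical paths):

// when /app/node_modules/bower/lib/index.js calls require('graceful-fs'),
// node checks each of these in order, using the first that exists:
//   /app/node_modules/bower/lib/node_modules/graceful-fs
//   /app/node_modules/bower/node_modules/graceful-fs
//   /app/node_modules/graceful-fs
//   /node_modules/graceful-fs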
