
AMD, Can we have our cake and eat it too?

Thanks to Node.js and NPM (that’s Node Package Manager for those not in the know), there’s a ton of talk and effort centered on the concept of JavaScript modules, modular code deployment and streamlined dependency management.

Here’s one particularly interesting exchange I came across regarding the pros and cons of AMD that occurred a couple years back.

It began with a post by James Burke, the creator of the RequireJS library, called Simplicity and JavaScript modules. It’s a long but good read where James expounds on the benefits of modular JavaScript design and gives his take on the efforts and leanings of the CommonJS project.

Tom Dale of Ember JS responded with a piece highlighting what he feels are AMD’s deficiencies called AMD is Not the Answer.

Dave Geddes followed up with a counter argument that AMD is the Answer.

And James Burke also followed up with a direct reply to Tom on Tangento, Reply to Tom on AMD.

There are many, many more threads out there regarding AMD but this particular discussion caught my eye. Even though it’s a bit out of date (seeing how 2 web years is like 10 normal years), it’s still an interesting read.

Each side makes some very convincing points for and against AMD:

Pros

  • AMD helps promote a clean namespace by limiting your dependencies to a local variable scope. This is nice because sometimes you have no choice but to mix two (or more) disparate libraries on a page. While most libraries now feature a noConflict() function that returns the automatic placeholder variable to the global namespace in favor of a user-selected custom one, that’s not, in my opinion, the best method for mixing and matching libraries, and noConflict() opens the floor for individual developers to tailor variables to their own preferences.
  • AMD allows you to locally manage and streamline dependencies without having to rely on a build process or build management workflow. Any developer who’s had to stop and wait for an excruciatingly long build to finish between even the simplest of changes (like a one-character fix) knows how simply saving and hitting refresh in the browser can keep the productivity flowing when you’re plugged in deep to the code. This one is hard to argue against from a standpoint of sheer productivity.
  • Reduces clutter in your initial HTML page download. The RequireJS library allows you to gather and name all your dependencies within one blanket object. The nice thing is that this define statement can be included with your app bootstrap code or defined as a separate standalone file; the way you want to bootstrap your application is up to you. This wins big points for me. I’m highly opinionated in the way I design my code and structure my architecture, so a softly opinionated solution always strikes the right chord for me.
  • Supports the CommonJS define() method. James highlights that RequireJS implements the CommonJS define method and follows a similar philosophy. I particularly like this approach and give extra marks to developers who reuse conventions across environments. CommonJS has thus far mainly been applied to Node, but this is a good example of an additional application of the recommendation.
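To make the namespace point concrete, here’s a minimal sketch of the AMD style of module definition. The module names are hypothetical, and a tiny stand-in for the loader’s define() is included so the snippet runs standalone; in a real app RequireJS would supply define() and resolve the paths itself.

```javascript
// Minimal stand-in for an AMD loader's define(), included only so this
// sketch runs standalone; RequireJS provides the real one.
const registry = {};
function define(name, deps, factory) {
  registry[name] = factory(...deps.map((d) => registry[d]));
}

// A leaf module with no dependencies.
define('greeter', [], () => ({
  greet: (who) => `Hello, ${who}!`,
}));

// A module that receives its dependency as a local variable --
// nothing leaks into the global namespace.
define('app', ['greeter'], (greeter) => ({
  run: () => greeter.greet('AMD'),
}));

console.log(registry.app.run()); // "Hello, AMD!"
```

The key detail is that `greeter` only exists as a function parameter inside the `app` factory, which is what keeps the global scope clean.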

Cons

  • The AMD definition syntax is an unnecessarily verbose method for defining dependencies. As someone who’s coded Java for a good part of his career, used Dojo for JS app development and dipped his toes into the waters of JS AMD, there is something to be said for being able to include dependencies simply by adding one line with either the package path or the path to the file. Since things do change over the lifecycle of an application, it is nice to abstract those dependencies into custom abstractions as AMD does, so personally this one does not bother me much. But even those require statements can add up depending on the module being developed, which gets into splitting that module into sub-modules and so forth, so this one becomes somewhat negligible to me. Some will argue with me on this, but that’s my stance.
  • A good workflow for a large enterprise app should already have a build process, but shouldn’t necessarily need one to compile the JS. Tom makes a valid point here. Any enterprise shop, whether it’s Starwood, ESPN, Adobe or Microsoft, is going to have some process for taking all their server-side programming code and building it for release to production. In the Java world, this is needed to convert Java source code to compiled byte code and to remove any development-only dependencies that you don’t want cluttering up your production environment. It’s true that even a good-sized JavaScript web application doesn’t need a build process to be pushed to production, but when you’re beginning to add layers like pre-compiled languages and templates, front-end unit testing and minification/uglification steps, a build process does become a necessary step to deliver the smallest, cleanest download footprint that you can muster. I don’t disagree with Tom on this point, but I’m not personally sold on this particular argument either.
  • Many HTTP requests. Tom expounds on one of the biggest knocks I’ve heard against RequireJS, which is the litany of HTTP requests it facilitates. Breaking a JS app into a modular structure does create a larger footprint of individual files. The benefits of this approach are loose coupling, module interoperability, not loading JavaScript that isn’t needed, and the ability to separate tasks amongst development teams without having to worry about overwriting changes within the same JS files. If you incorporate r.js or the RequireJS plugin for Grunt into your development process to compile your JS resources into download bundles, this argument goes away. But again, Tom is not a proponent of builds for front-end resources; he only favors them for server-side back-end resources, so he’s against this part of the RequireJS philosophy.
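For reference, the r.js bundling step that answers the HTTP-request objection is driven by a small build config. This is a sketch with hypothetical paths and entry-module name; running it through the optimizer (`node r.js -o build.js`) would trace the entry module’s dependency tree and concatenate and minify it into a single download.

```javascript
// build.js -- a sketch of an r.js optimizer config (paths and module
// names here are hypothetical).
({
  baseUrl: 'src/js',        // where the individual module files live
  name: 'main',             // entry module whose dependencies get traced
  out: 'dist/main.min.js',  // the single bundled file for production
  optimize: 'uglify2'       // the minification step discussed above
})
```

The many small files remain the development-time experience; production serves one request.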

From a solution standpoint, Tom recommended using the old Gmail trick of downloading all JS resources in one collection, with the code that doesn’t need to be parsed and evaluated at download time commented out and eval()’d when needed. This is a pretty cool idea and I think it’s worth more evaluation and consideration, though it comes with its own additional workflow steps, such as mapping out your application to decide what gets loaded up front and what should be commented out. Then you need to create some way to manage what needs to get loaded and eval()’d, and when. This almost falls into the same philosophy of breaking your app into modular pieces and loading what you need when you need it. So in some ways, it feels like six of one, half a dozen of the other to me.

I’m personally curious about the memory and CPU benchmarks of having the JS interpreter parse the code when it’s first loaded versus eval()’ing it on demand. Having come from the old school where improper use of the eval() function often led to confusing and resource-consuming code, I’d love to see if this is as beneficial a solution as it sounds. It certainly is a great idea in terms of getting your app up and running on initial download, so I do see the merit in actually building out a test solution to at least measure that benefit if the size of the app is big enough.

In terms of where I net out overall on the debate, I’m personally not sold on Tom’s arguments against having a JS build process or about the loss of elegance when it comes to code style. I personally like the CommonJS approach to defining dependencies, and having used a production build process at a previous development shop that delivered an enterprise-grade, JS-driven web application, integrating a Grunt build script to compile, minify and uglify JS code to make it production-ready is an acceptable trade-off for having a well-designed and well-built web application. I am also a proponent of unit testing and integrating it into the workflow, so consolidating and automating all those tasks via a build tool such as Grunt makes perfect sense to me.
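The kind of consolidated Grunt workflow I mean can be sketched in a few lines. The task targets and file paths below are hypothetical; grunt-contrib-qunit and grunt-contrib-uglify are real plugins, and the default task runs the unit tests before producing the minified bundle.

```javascript
// Gruntfile.js -- a sketch of the build described above (file paths and
// task targets are hypothetical).
module.exports = function (grunt) {
  grunt.initConfig({
    qunit: { all: ['test/**/*.html'] },   // front-end unit tests
    uglify: {                             // minification/uglification
      dist: { files: { 'dist/app.min.js': ['src/js/**/*.js'] } }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  // `grunt` with no arguments runs tests, then builds the bundle.
  grunt.registerTask('default', ['qunit', 'uglify']);
};
```
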

Personally, I wonder if there isn’t a way to combine the best of both worlds and make r.js smart enough to either support the kind of compile-and-runtime-evaluation workflow discussed by Tom, or at least recognize hooks in the code and build a more efficient way to support it into the developer’s workbench of tools. That might be the sweet spot the AMD world needs to finally convince those on the other side of the fence that modular code design and delivery CAN work after all.

What is your take?

jfox015

Jeff Fox is a web developer and digital user experience technology leader with over twenty years of experience. Jeff cut his teeth in the Web's early days and is mainly self-taught in his professional skills. Working for a broad range of companies has helped him build skills in development, organization and public speaking. In addition to being a passionate developer and technical speaker, Jeff is a dedicated tech and sci-fi geek gladly indulging in Doctor Who and Star Wars marathons. He is also a talented musician, writer and proud father of three little Foxies. And don't get him started about his San Francisco Giants.

2 Comments

  1. I tend to favor a Grunt workflow and using Require.js to handle module dependencies, even with the added need for a build process. I like having all of my JavaScript (app or test) under a watch command and not having to refresh every time I make a change. The speed gained during development makes the extra overhead worth it for me.

  2. I tend to agree. Having modular components with dependency management included is a big gain in terms of the overall site’s code design. RequireJS is not perfect, but there’s certainly some opportunity to make it really useful in large-scale production environments, such as including the Gmail latency-buster hack.
