Building Ember Apps with GraphQL

GraphQL has become increasingly popular over the past few years. Unlike much of the hype coming from Facebook, this technology isn’t specific to any one language; it can be used to connect any number of servers and clients. What made it particularly interesting to me is that it’s aimed at uprooting the REST API conventions we’ve been using for many years.

Luckily, at work I was able to experiment with integrating Ember and GraphQL, and I have a few takeaways to share. I would recommend checking out the conference talk my friend Rocky and I gave on this subject at EmberFest 2018. I also prepared an accompanying Ember sample repo showing how to integrate a few different clients, testing techniques, and other GraphQL goodies within an Ember project.

What is GraphQL?

This topic has been covered quite well by the GraphQL website, but it’s basically a new way of doing APIs that isn’t based on REST or on specifics defined by the HTTP spec. It’s typed, and it’s a query language, so your request is infinitely flexible in what you can ask for. Well, flexible in terms of what your schema defines.
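For example, against a hypothetical schema, a query for movies and their directors:

    query {
      movies {
        title
        director {
          name
        }
      }
    }

returns a response that mirrors the shape of the request, and nothing more:

    {
      "data": {
        "movies": [
          { "title": "The Matrix", "director": { "name": "Lana Wachowski" } }
        ]
      }
    }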

Why is GraphQL popular?

There are so many new ideas out there that it’s important to at least try to understand the appeal. It could be the re-packaging of an existing idea, marketing, or something that truly redefines the approach to a problem. In my opinion GraphQL’s popularity is justified. While not a radically new idea, the execution is done well. The team at Facebook has put great effort into standardizing the query language and providing tooling that makes implementation a relative breeze. It’s not easy to try to replace something as spec’d as HTTP and REST, or to convince developers that it’s worth switching, but I think Facebook has made a good case.

I think GraphQL is also really popular because of types and introspection (you always know what you’re working with and what you’ll get), because it’s adaptive (add fields to a type as you go), and because requests mimic the response so you only get what you’ve asked for (it looks so much like an extension of JSON that the language itself feels very natural).

Lastly, I think part of GraphQL’s appeal is that it’s strict in the right ways. All of your queries in GraphQL are a POST to the same endpoint, which means you don’t need to worry about HTTP status codes and URL structures. Most of the RESTful APIs I’ve used, while still very valuable, don’t match the ideal of the perfect RESTful API. Most APIs evolve organically, and GraphQL describes these connections more naturally. In GraphQL your types, the connections between types, and the data expected in requests are more strictly defined, which ends up giving you flexibility in the request itself.

To sum it up, GraphQL feels like a more natural extension of what we as developers are looking for, wrapped in definition where it matters. There is a tradeoff, though: because it is a query language with this flexibility, a lot of the complexity is pushed upstream into how the data is resolved on the backend.

Ember with GraphQL

So let’s get to the meat and potatoes of Ember and GraphQL. GraphQL is just a way of requesting data, and there are a few client options for talking to your GraphQL API.

Apollo Client

In terms of the Ember ecosystem the most opinionated integration is ember-apollo-client. If you are interested in using the Apollo client for GraphQL this is the way to go. The Apollo client is great because it abstracts away things like request caching and middleware “links” in a way that lets you rely on a community effort. One downside is that any upstream changes in Apollo have to be integrated and managed in an Ember Way™️ through this addon and its community.
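As a rough sketch (based on my reading of the addon’s README, so details may differ), querying from a route looks something like:

    import Route from '@ember/routing/route';
    import { inject as service } from '@ember/service';
    import gql from 'graphql-tag';

    const query = gql`
      query {
        movies {
          title
        }
      }
    `;

    export default Route.extend({
      apollo: service(),

      model() {
        // resolves with the contents of the query's "movies" field
        return this.get('apollo').query({ query }, 'movies');
      }
    });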

Lightweight 3rd party clients

There are a number of clients that are lightweight and easy to integrate thanks to ember-auto-import (https://github.com/ef4/ember-auto-import). A few of the popular ones I’ve tried are Lokka and graphql-request. You can import them directly, but if your API is protected you’ll probably want to wrap them in a service so that you’re always supplied a fresh client with the current access token. These are very simple and offer fewer features, but are easier to integrate and probably require less effort upfront. Because your queries don’t change you can always move from one of these clients to Apollo down the road, and easily migrate your queries along.
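As a minimal sketch, here’s graphql-request wrapped in a service (assuming a session service that exposes the current access token):

    // app/services/graphql.js
    import Service, { inject as service } from '@ember/service';
    import { GraphQLClient } from 'graphql-request';

    export default Service.extend({
      session: service(),

      request(query, variables) {
        // build a fresh client per request so the token is always current
        const client = new GraphQLClient('/graphql', {
          headers: { Authorization: `Bearer ${this.get('session.accessToken')}` }
        });

        return client.request(query, variables);
      }
    });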

Ember Data and the Elephant in the Room

Ember Data is so closely matched to the JSON API spec, and to adapters that map a resource onto a URL structure à la REST, that mapping Ember Data onto GraphQL is much more difficult. The closest implementation of this idea is ember-graphql-adapter, which works best if you are also using Ruby and the corresponding GraphQL gem, as mentioned in the docs. Anything outside that doesn’t map too nicely. Behind the scenes it’s doing a custom parse and compilation step to produce a GraphQL query, without using some of the great tooling that exists for dealing with the GraphQL language. Because resources are “typed” and Ember Data model properties are well defined, I think a mapper could be created using graphql-tools, although this kind of breaks the paradigm GraphQL creates.

Testing

The other difficult part of using GraphQL within Ember apps is creating acceptance tests. Ember’s acceptance tests are done so well that they’re actually closer to the end-to-end testing you’d do with Selenium or Puppeteer, but without the heavy cost. To handle these acceptance tests, though, there is often a need to “stub” the network layer.

With GraphQL all requests are a POST to the same endpoint. This means that stubbing requires understanding the payload of the request itself, and its embedded query. You can’t rely on just checking the HTTP method and stubbing a resource endpoint.

I think the easiest way of doing this with GraphQL and Ember is to use graphql-tools to understand the queries and variables within the request, and something like Pretender to intercept the network requests themselves, or to use PollyJS.

Using graphql-tools gives you total control over understanding the query and how it maps onto your schema. You could even map these handlers to an in-memory database so that mutations actually persist across multiple queries.
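A minimal sketch of that idea, with a made-up schema and data:

    import Pretender from 'pretender';
    import { makeExecutableSchema } from 'graphql-tools';
    import { graphqlSync } from 'graphql';

    const schema = makeExecutableSchema({
      typeDefs: `
        type Movie {
          title: String!
        }

        type Query {
          movies: [Movie!]!
        }
      `,
      resolvers: {
        Query: {
          // synchronous resolvers so graphqlSync can execute them
          movies: () => [{ title: 'The Matrix' }]
        }
      }
    });

    const server = new Pretender(function () {
      this.post('/graphql', (request) => {
        // dig the query and variables out of the POST body
        const { query, variables } = JSON.parse(request.requestBody);
        const result = graphqlSync(schema, query, null, null, variables);

        return [200, { 'Content-Type': 'application/json' }, JSON.stringify(result)];
      });
    });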

PollyJS, on the other hand, is a library from Netflix that records the requests and responses between your frontend and the backend. The next time the test request is made it can dig up the recorded response and replay it. These saved recordings are committed via git so that they’re available within your CI environment, too. It’s very similar to the VCR Ruby gem, and it’s not specific to GraphQL either. I think it’s a fantastic tool and something that fits the complexity of GraphQL very well. The only downside is that your test cases have to be recorded against a real API; otherwise you’re left to modify the request/response (thanks to PollyJS’s great hook system) to meet your use case, or to use the graphql-tools technique mentioned above.

Tooling

In terms of tooling I would recommend using graphql-cli along with a matching .graphqlconfig. It’s becoming more or less a standard and it will make fetching your GraphQL schema easier. Even better, this configuration can hook into a GraphQL ESLint plugin that will make sure all your queries are validated against a valid schema and checked during your Ember project’s ESLint tests.
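A minimal .graphqlconfig might look like this (the endpoint URL is a placeholder):

    {
      "schemaPath": "schema.graphql",
      "extensions": {
        "endpoints": {
          "default": "https://api.example.com/graphql"
        }
      }
    }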

Give it a try

If this sounds interesting I would recommend giving it a try. I think there is enough maturity in the ecosystem, both with Ember and GraphQL, that it works quite well. If you have any questions reach out to me on Twitter @chadian, and check out my Ember sample app, which has documented examples of the different clients and testing techniques.

Scaling CSS with mixin-backed class names

I started my career working for digital agencies, where sharing styles across bigger content systems was a big part of my day-to-day. Starting with SMACSS and BEM we were able to create logical systems to tame our cascading styles. These efforts were made easier with the rise of style preprocessors like less, sass and stylus. Now the introduction of various tooling has kept our CSS even leaner; we can check for unused styles, statically determine if styles are being used, and run transforms against a CSS AST with any number of plugins.

One idea that has changed how CSS is introduced is the family of CSS-in-JS flavours popularized by the React community. The functional approach of folding reusable CSS together and applying it to a target, usually a component, has an elegance packed with a huge productivity win.

This post aims to combine a few of these ideas that, when used together, can help keep CSS manageable when you’re not using a scoped solution (CSS Modules, CSS-in-JS).

What’s the main issue with CSS?

I believe the main issue with CSS is the cascade. In theory the following should work well and scale:
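    /* a stand-in example: one class, one set of definitions */
    .button {
      padding: 8px 16px;
      border-radius: 4px;
      background-color: blue;
      color: white;
    }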

As soon as one class from another definition gets introduced, though (search-field-button, for example), we increase the chances of the cascade fighting back. Its definition may not play nicely with the previous button definitions.

The Solution

The solution I propose relies on a few conventions and can be implemented in an existing project as you go. The idea requires little overhead and imposes basically no opinion on your class naming convention.

The solution aims to:

  1. Make style definitions simpler to track
  2. Allow styles to be easily composed

The idea does rely on a few assumptions like:

  1. You are using a preprocessor, or PostCSS, with support for something similar to a sass @mixin
  2. You are using, or can introduce, something like PostCSS to perform some clean-up, see Handling conflicts

One Class – Easy to track definitions

Simply, use one class. This ensures a simple definition of your style rules.

This removes the ambiguity and confusion of how the cascade is applied (specificity, position of css in the file, css file load order, yikes, etc…). The class name is simply the only hook to your list of style definitions, nothing more.

Reusability via the mixin – Easy to compose

Using the mixin for reusability isn’t a new concept, it’s what it was designed for. I am, however, proposing a convention and workflow that should make it easier to follow the One Class guideline.

Consider the contrived example:
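    // _mixins/_typography.scss (names here are illustrative)
    @mixin branded-font {
      font-family: 'Brand Sans', sans-serif;
      font-weight: 600;
    }

    // _mixins/_buttons.scss
    @mixin button-base {
      padding: 8px 16px;
      border: none;
      border-radius: 4px;
    }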

Now within the _button-primary.scss I can import and use my mixins as needed:
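    // _button-primary.scss
    @import 'mixins/typography';
    @import 'mixins/buttons';

    .branded-button {
      @include button-base;
      @include branded-font;
    }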

These mixins are expanded and extend the definition of a single class that would be used in HTML as <button class="branded-button"></button>

It’s nice if the class name corresponds with the file since there is a one-to-one relationship. This makes updating the class and tracking any mishaps pretty easy. Removing code also becomes easy because it’s a simple find/replace for the class name and file name.

This file structure keeps the mixins separate from their implementation hooks à la class names.

Extending to variants

Let’s imagine now that we want a variant of this “branded button” with a large font size. I’m sorry for the contrived example, but maybe you can see the extension in some real world scenario.

Since we knew we wanted to extend branded-button it was easy to track down its definition and copy it. If we wanted to, we could make a mixin that includes other mixins. That said, we should try to keep mixin-in-mixin nesting as flat as possible until it’s really necessary to group common mixin definitions together, but it would look something like:
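    @mixin branded-button {
      @include button-base;
      @include branded-font;
    }

    .branded-button-large {
      @include branded-button;
      font-size: 2rem; // the "large" tweak for this variant
    }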

My example doesn’t leverage the fact that mixins can be defined with arguments, which lets you lean on variables and configuration passed in to give your variant some nicely tweaked variability.

Handling conflicts

Let’s say you have two mixins that both try to claim the same key/value space.
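For example (the mixin names and values are made up):

    @mixin main-button {
      padding: 8px 16px;
      background-color: blue;
      color: white;
    }

    @mixin special-button {
      border-radius: 4px;
      background-color: green;
      color: yellow;
    }

    .landing-page-button {
      @include main-button;
      @include special-button;
    }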

and this generates a few conflicting properties:
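    .landing-page-button {
      padding: 8px 16px;
      background-color: blue;
      color: white;
      border-radius: 4px;
      background-color: green;
      color: yellow;
    }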

We are left with a mash of these two mixins, which for the most part is exactly what we want, except for the conflicting background-color and color. Let’s say we want the background-color from main-button and the color from special-button. Instead of relying on some hacky cascading overrides we get the opportunity to resolve the dispute ourselves.
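Re-declaring the contested properties after the includes settles it:

    .landing-page-button {
      @include main-button;
      @include special-button;

      // explicit resolution of the conflicts
      background-color: blue; // from main-button
      color: yellow;          // from special-button
    }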

Now you’ve explicitly resolved how you want the landing-page-button to look. Maybe you see the follow-up issue, though: the resulting compilation looks like:
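    .landing-page-button {
      padding: 8px 16px;
      background-color: blue;
      color: white;
      border-radius: 4px;
      background-color: green;
      color: yellow;
      background-color: blue;
      color: yellow;
    }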

We’re left with three declarations for each of the conflicting properties. So while it’s clear what we end up with (and it’s even clearer in Chrome DevTools with the strikethroughs of overridden properties), we don’t want to ship all these extra declarations.

Luckily, if you allow me to introduce postcss-combine-duplicated-selectors, a PostCSS plugin, it’s all covered. This plugin has a removeDuplicatedProperties option that, when set to true, will squash these extra declarations in our final CSS. In a development build I would leave them in so that it’s easier to see the layering of the definitions, and then clean everything up for production.
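A sketch of the wiring, assuming a standard postcss.config.js setup:

    // postcss.config.js
    module.exports = {
      plugins: [
        require('postcss-combine-duplicated-selectors')({
          // leave duplicates visible in development, squash them for production
          removeDuplicatedProperties: process.env.NODE_ENV === 'production'
        })
      ]
    };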

States

Your style definition might have a :hover pseudo-class or a data attribute scoped within your main definition, or it might swap the class out entirely for another class that includes some of the same style definitions as the default state.
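For example, keeping the state inside the same definition:

    .branded-button {
      @include button-base;
      @include branded-font;

      &:hover,
      &[data-state='active'] {
        background-color: darkblue; // state tweaks live inside the one definition
      }
    }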

Sharability

If you want to share your definitions, all you need to do is share your mixins. You don’t have to worry about the cascade, or about fighting shared classes from your “corporate shared stylesheet” or Bootstrap. You can import, include, and manage conflicts. If you went so far as publishing these mixins in an npm package you could share them across all your front-ends in a versioned manner. At the end of the day you’re just hooking the key/values together and applying them to the single class name you’ve chosen as the hook.

The exception to the “one class” rule

If you have a series of utility classes, each composed of a single property, to prototype a style, then you might have multiple classes. Once you find yourself repeating these classes you can move them into a single style definition, backed by their individual properties or their individual mixin definitions, ultimately backed by just one class. For example, a prototype using made-up utility classes:
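    <button class="padded rounded bg-brand text-light">Sign up</button>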

might become:
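    <button class="signup-button">Sign up</button>

    // assuming each utility class was backed by a matching mixin
    .signup-button {
      @include padded;
      @include rounded;
      @include bg-brand;
      @include text-light;
    }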

This idea mirrors the design pattern that is being used with Tailwind CSS.

Testing, an aside

This squashing of key/value pairs is much like the equivalent in JavaScript:
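    const mainButton = { color: 'white', backgroundColor: 'blue' };
    const specialButton = { color: 'yellow', backgroundColor: 'green' };

    // later keys win, just like later declarations within a CSS rule
    const squashed = Object.assign({}, mainButton, specialButton, {
      color: 'red',
      backgroundColor: 'blue'
    });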

If we ended up with a final result that should contain a certain style, we could use a JavaScript test suite to assert that squashed.color === 'red' and squashed.backgroundColor === 'blue'. If we had a style guide with our styles in practical usage we could ensure that conflicts and critical styles were asserted on with window.getComputedStyle.

This is something that should probably be explored in another blog post or side project though…

So that’s about it

So my proposal for beating the cascade is to not fight it. Use singular classes and mixins and make your life easier. Maybe we will have ways of raising exceptions when classes fight so that we can catch them at runtime in a dev environment. Who knows? Front-end tooling, and our awareness of how styles are being used, will only continue to get better. The CSS Object Model is being opened up through a browser API, and Houdini takes it a step further into what a few years ago would have been a pipe dream. The frontend is a fun place to be right now! Thanks for reading, and if you have any comments/questions/suggestions reach out to me on Twitter. 👋🏽

Deploying Ember apps on the cheap with Dokku, Docker and Digital Ocean

The rough idea

With Ember we are spoiled by the excellent ember-cli-deploy tool. Need to deploy somewhere? You can go shopping among the many supported deploy plugins. One company that has made deployment dead simple is Heroku. When I was looking to show off some local Ember apps I wanted something cheap and easy to set up. Heroku would be nice, but I think we can go cheaper.

Enter Dokku. It’s a project aimed at providing Heroku-style deploys by wrapping a Heroku-friendly Docker project called Herokuish. Dokku gives you a PaaS by wrapping containers with an nginx proxy router. It has great settings and plugins that help you extend it for a number of use cases. Because Dokku can detect buildpacks and leverage Herokuish, we can deploy via a git push using Heroku buildpacks and get a deployed container. With buildpacks you don’t actually need to know Docker or set up the container.

The last piece of the puzzle is Digital Ocean. It provides affordable virtual machine hosting with an easy to understand interface and luckily for us a one-click install of Dokku on a droplet.

With this rough outline let’s get started.

Create your Ember project

Feel free to skip this step if you’ve already got an ember project.

We’ll use a stock ember project.
1. Go into a fresh folder, and run ember init
2. Let’s make sure we’re tracking this in a git repo, run git init
3. Let’s commit the empty ember project:
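    git add .
    git commit -m "initial ember project"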

Setup your digital ocean droplet

Now let’s get your Dokku digital ocean droplet going.

  1. Login to Digital Ocean.
  2. Click ‘create droplet’.
  3. Click the “One-click apps” tab.
  4. Choose Dokku 0.8.0 on 16.04
  5. Choose a size at $5/mo (let’s keep this cheap!)
  6. Pick your preferred region
  7. Add your ssh keys if you have them; it’ll make ssh’ing in easier.
  8. Pick 1 droplet, and pick a hostname if you like.
  9. That’s it! Click “Create”

Under Droplets, check that your droplet is being created.


You should get an IP address for your droplet, in my case it gave me 162.243.242.65. Go ahead and ssh into your newly created droplet.
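    ssh root@162.243.242.65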

Let’s make sure dokku is installed alright by running:
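    dokku help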

which should print usage information and the available commands.

Setup Dokku

In your browser go to http://your.ip.address with “your.ip.address” being the IP address of your digital ocean droplet above.

You should see a screen similar to:

(screenshot of the Dokku setup screen)

Paste in your public key, which may be the same public key you use for things like GitHub, unless you’ve generated a different one. It may already be filled in if you supplied Digital Ocean with a public key for the droplet. Make sure something is pasted into the public key field; this page is only available once, and goes away after clicking “Finish Setup”. If you are trying to keep this cheap and plan on only using an IP address, make sure you leave “virtualhost naming” unchecked.

If you ever need to change any of this configuration you can do so while ssh’ed into your droplet, via the dokku command and by poring over the decently written dokku documentation. Or ask me on Twitter, I might be able to help, too.

Click “Finish Setup” when everything is configured.

Gotchas

Before we continue let’s take care of a few gotchas.

Firewall

Your security concerns may differ, but in order to not worry about the ports picked by dokku for the running applications I’m going to go ahead and disable it.
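    ufw disable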

It’s not hard to manage the ports with ufw; if you’re interested you can read up on managing Ubuntu’s firewall.

Memory swap

During your build you may run into memory issues which prevent it from finishing. Since we’re going the cheap route I’m going to add some swap, but if you wanted to you could use a droplet with more memory. Roughly, that looks like:
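    # create and enable a 1GB swapfile
    dd if=/dev/zero of=/swapfile bs=1M count=1024
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # persist it across reboots
    echo '/swapfile none swap sw 0 0' >> /etc/fstab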

(Instructions via the dokku advanced installation and Digital Ocean guides.)

Creating your app on dokku

While we are still ssh’ed into our Digital Ocean box, let’s go ahead and set up the application on dokku:
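    dokku apps:create ember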

Configure your Ember project for dokku

This is kind of the cool part. Because dokku can be treated like Heroku we can use the wonderful work the people at Heroku have done.

  1. First, let’s install ember-cli-deploy by running ember install ember-cli-deploy.
  2. Now install ember-cli-deploy-build by running ember install ember-cli-deploy-build. This is the basic build plugin that takes care of the build process upon deployment.
  3. package.json will have been modified and config/deploy.js added. Let’s commit these files.
  4. Dokku does its best to automatically determine the Heroku buildpack for a given application, but since ours is Ember it needs a bit more setup than a regular node app. There are many different ways to specify the buildpack for an app with Dokku, but I prefer setting the .buildpacks file, because then it’s checked into git. In your project root run
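    # assuming the popular ember-cli buildpack; use whichever buildpack you prefer
    echo 'https://github.com/tonycoco/heroku-buildpack-ember-cli' >> .buildpacks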

    which should create a .buildpacks file with the buildpack URL inside. If the file already existed the buildpack URL should be added to the bottom.
  5. Commit your .buildpacks file
  6. Next we need to tell our project where to deploy. Dokku follows the Heroku-easy model of just git push. So we will add our dokku Digital Ocean droplet to our git remotes by running
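    git remote add dokku dokku@your.ip.address:ember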

    With “your.ip.address” being your digital ocean droplet’s IP address. Note: The user for the push is dokku, not root. After the IP address is a “:project-name”, in our case it is “ember”. So if you’re curious the breakdown is:
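    dokku           # the user dokku expects for git pushes
    your.ip.address # your droplet's IP address
    ember           # the dokku application name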
  7. The last step is to deploy. Run
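    git push dokku master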

    You should see lots of scrolling text, and after 3-4 minutes one of the last lines should say something like:
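    =====> Application deployed:
           http://your.ip.address:<port>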

    Again, with “your.ip.address” being your droplet’s IP address.
    And there it is: we can see in the markup that we have an Ember application with our production build’s fingerprinted .js files.

Bonus steps

Who wants to remember a random port number? Not me. So let’s go ahead and swap that for something we can choose. Log in to your droplet via ssh.

If we wanted to access it on port 80 we would do:
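    # remove the randomly assigned mapping, then map port 80
    # (herokuish apps listen on container port 5000 by default)
    dokku proxy:ports-remove ember http:<random-port>:5000
    dokku proxy:ports-add ember http:80:5000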

Each command will reconfigure nginx, and after the second command you should be able to access the application on the given port.

That’s all folks

And that’s it. Hopefully you were able to get your Ember application deployed. There are probably some easier solutions, like just using Heroku itself, but it’s nice to know that there are options if you’re on a budget. Also, this can scale with you for other projects across other platforms and help introduce you to the world of Docker. You can access any of the running containers that Dokku sets up for you, which is pretty neat and great if you absolutely need to tail some logs or access the environment directly for debugging. For example:
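    dokku logs ember -t    # tail the application's logs
    dokku enter ember web  # open a shell inside the running container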

Thanks

Thanks to the developers at dokku, herokuish, heroku, and ember-cli-deploy. This was made pretty easy thanks to the work done by people from these projects. ✌🏽❤️

EmberConf 2017, being realistically optimistic

Update: while I was drafting this post over the last couple of weeks, EmberConf 2017: State of the Union was posted on the Ember Blog. It provides better context, so go read that first.

I discovered Ember early on, in the fabled times before ember-cli. Those were the days of globals, views, and lots of googling. As I’ve grown in my career I’ve carefully watched as Ember has risen, quite successfully, to a slew of new challenges we’ve seen on the web.

I was lucky enough to attend the first EmberConf, even before I moved to Portland. From that first EmberConf it seemed as if Ember was always on target, hitting home runs. ember-cli was announced at the first EmberConf and it was obvious this ecosystem was going to be something special. In no particular order, the following years unveiled things like ES6 modules early on, component-driven design, the addon ecosystem, data down actions up, a powerful first-class router, htmlbars, glimmer, glimmer 2, and a slew of other wonderful magic. Ember even managed to provide a realistic upgrade path from 1.x to 2.0. They said it, and then it happened.

It wasn’t until watching meet-up talks and seeing releases unfold that it became apparent that this was the year that maybe there were a few misses. Ember had scaled to a point where it had to slow down and consider the landscape as it continued to build the bark.

To nobody’s surprise, things like pods, routable components, and angle-bracket components didn’t make it. Glimmer turned out to include a regression in its initial render performance, and Glimmer 2 ended up being a total (amazing) rewrite. That’s not to say there weren’t huge wins this year. We saw Engines released into core, we have a quite-stable FastBoot nearing 1.0, and we have tons of evidence of a growing ecosystem and community. We saw new design patterns like ember-concurrency emerge, which was mentioned in almost every EmberConf 2017 talk.

Yehuda made an interesting point about Ember’s future, learning from this year’s shortcomings. The way I interpreted it, Ember has matured to a point where instead of promising new features it has to be more realistic in its optimism. And instead of new features, by continuing to expose stable internals and primitives, the community is able to grow Ember’s bark itself. The most obvious example of this was the announcement of glimmer (as a library). This would not have been possible without the rewrite of glimmer 2 as a standalone, drop-in replacement rendering engine. They built, within the existing walls of their Ember APIs, an entirely new “unopinionated” rendering engine that could stand alone. Godfrey is continuing this work with a new RFC that aims at exposing lower-level primitives of the glimmer engine itself.

Every year EmberConf has given me confidence in the project’s ability to tackle new problems. Despite a few missed deadlines, Ember continues to re-evaluate the landscape and adapt to be better suited for the challenges to come. Sometimes it’s important to take a step back before you can continue forward, something I think Ember has been exceptional at.

I’ll have a follow-up post with some of my favourite talks and specific takeaways from this year’s conference.

Ember PSA: modules and scopes

I could have saved an hour of head scratching had I kept in mind a few basic principles. Hopefully this lesson of modules and shared scope will save someone else in the future.

Within Ember we get used to creating modules and exporting these object definitions.

We’re lucky because these automatically get picked up and registered the way we need them. Components and models, for example, end up getting registered as factories so that we can easily create many instances on the fly.
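Here’s a sketch of the kind of definitions I mean (reconstructed, so names and details are illustrative):

    // components/super-component.js
    import Ember from 'ember';

    const containerInScope = []; // module scope: an explicitly shared reference

    export default Ember.Component.extend({
      label: 'untitled',
      container: [], // defined once on the prototype, shared by every instance!

      actions: {
        addItem(item) {
          this.get('container').pushObject(item); // every instance sees this
        }
      }
    });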

We can see in super-component that we have a few properties, label and container. For each instance of the component we can do whatever we want to the label, and only that instance’s label will be modified. However, when it comes to doing a pushObject (Ember’s version of push, so that it can track changes) into the array, all instances receive the pushed value since they are all pointing to the same array reference. This would also apply if we were modifying properties on an object created in the module’s object definition.

Another way to look at this is that we aren’t maintaining changes to a string, since changes to a string produce a new string in JavaScript; strings are immutable. However, we do maintain the reference to an object or array and can change the things they point to, i.e. adding another object into the array while maintaining the reference to the array.

We can combat this by doing a few things. When we do need a shared reference, be explicit and put it outside the definition so that it’s obvious, like containerInScope in the example.

When we don’t want a shared reference, either pass it in on the component in the template or set it with .set. When it’s the component’s responsibility to have a fresh instance available, set it explicitly on init like:
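    export default Ember.Component.extend({
      init() {
        this._super(...arguments);
        this.set('container', []); // each instance now gets its own array
      }
    });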

While these lessons aren’t specific to Ember, and essentially anybody exporting modules and relying on scoping could run into the same issue, I do feel Ember provides enough magic that it’s easy to forget there are still basic JavaScript pitfalls.

Lastly, I can’t blame Ember for this. It’s documented:

Arrays and objects defined directly on any Ember.Object are shared across all instances of that object.

It’s a silly mistake but one that got the best of me, and it was a good chance to review what is actually retained across module boundaries and how these files are (re)used.

Why I made Vouch, my own A+ promise library

JavaScript promises have been around for a bit (especially in terms of “internet years”). On both the frontend and the backend we use them for control flow, handling asynchronous craziness, and taming the once-dreaded callback hell. They’ve been so great that we are starting to explore many other ideas around asynchronicity and complex UIs.

But let’s hold up for a second. We’ve all used promises, but how well do we really understand them? Myself, I thought I had a pretty solid understanding. After seeing a number of design patterns and using them as a first-class citizen in Ember for a while, what secrets could remain?

With that question in mind I went off to make my own promise library. I was lucky to find that Domenic had written a test suite that I could use to TDD my way through my little experiment. Now, if you look a bit closer at the test suite you’ll find folding of functions on functions to generate tests in a way that covers many different use cases. Yes it’s flexible, but it wasn’t always the easiest to debug. My solution was to write a simple surface layer of quick gotcha tests that covered some of the baseline functionality I expected. These were easier to debug and quicker to fail in a way that was easy to reason about.
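One of these quick checks might have looked something like this (the names and API details here are guesses, not Vouch’s actual code):

    it('resolves through a chain of thens', function () {
      return new Vouch(function (resolve) { resolve(1); })
        .then(function (value) { return value + 1; })
        .then(function (value) {
          assert.strictEqual(value, 2);
        });
    });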

It wasn’t as smooth sailing as I had expected. Early on I realized that there were a few architectural roadblocks and decided to branch off to explore other ideas within my existing boundaries. One of the awesome things about having a test suite was seeing how tackling one internal concept would suddenly unlock whole blocks of unrelated passing tests. Over the course of a few weeks I felt my understanding of promises and thenables level up.

Why stop there? I decided to go through the motions of adding it to npm, setting up Travis CI, and playing with some package release tools. I have a few outstanding issues to publish transpiled versions for older versions of node and for the browser. It would be great to submit it to the official A+ spec list of libraries, too. I’m also curious how well my implementation holds up performance-wise.

Alright, so it works and the test suite passes, but what was the point? I don’t actually expect people to use it; in fact I hope people don’t. Vouch was simply a chance for me to get a peek at what goes into making a library and to appreciate the depths of the spec. And most obviously, I was able to level up my knowledge of thenables and promises. I would highly encourage anyone to borrow a test suite and test their implementation and understanding with a pet project; you might be surprised with what you learn.

If you like the idea behind vouch go ahead and give it a star!

Quickly serve html files from node, php, or python

There will be times when you need to serve some files through a browser, and instead of setting up a local instance of Apache or MAMP it might just be easier to use something in the command line. I would recommend the node option, as it seems to have a few more options. Macs ship with Python and PHP, which makes those easier in some cases. Also, the PHP server will handle php files along with standard HTML.

Node

First install http-server globally via npm.
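    npm install -g http-server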

Then it’s as easy as
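    http-server -p 8000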

PHP

php -S <domain>:<port>

e.g. php -S localhost:8000

Python
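python -m SimpleHTTPServer <port>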

e.g. python -m SimpleHTTPServer 8000

js puzzle

I came across a tweet that had this bit of puzzling sample code:

[image: js puzzle]

Most of this made sense to me, except for the part where properties are assigned and then are either accessible or undefined. I had a hunch that it was related to something I blogged about previously.

It turns out that when using .call it’s actually returning an object. That first line is the equivalent of var five = new Number(5);. This means:
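    var five = new Number(5);
    five.three = 3;

    five.three;             // 3, objects can hold properties
    five instanceof Number; // true
    typeof five;            // "object"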

While it’s an object you can add your own properties, but as soon as it’s autoboxed/primitive-wrapped by the ++ it loses its ability to hold those properties. This is shown by the fact that the instanceof and typeof values are now different:
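    five++;                 // unboxed to the primitive 5, then incremented

    five.three;             // undefined
    five instanceof Number; // false
    typeof five;            // "number"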

The rest of the puzzle plays with the timing of return values and the difference between assigning number++ and ++number.
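Roughly:

    var n = 5;
    var a = n++; // a is 5, n is 6: returns the value, then increments
    var b = ++n; // b is 7, n is 7: increments, then returns the value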

At least that’s the way I understand it, let me know if I’ve missed anything.

PhantomJS 1.9 and KeyboardEvent

PhantomJS is awesome, and one common use case is using it as a headless browser for running a test suite. I noticed that I was getting different results in tests where code relied on fabricating a KeyboardEvent and dispatching it on an element. It looks like others have noticed that some of their events are missing, too. One proposed solution controls the type of event that is dispatched, but in all other cases I am pretty happy to use new KeyboardEvent(), and I would prefer not to write special code just to appease my tests.

As a workaround I overrode the global constructor from within the test environment, along these lines:
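    // a sketch of the override; only patch where the native constructor is unusable
    var supported = true;
    try {
      new window.KeyboardEvent('keydown');
    } catch (e) {
      supported = false;
    }

    if (!supported) {
      window.KeyboardEvent = function (type, init) {
        init = init || {};
        var event = document.createEvent('Events');
        event.initEvent(type, init.bubbles !== false, init.cancelable !== false);
        // copy over the keyboard fields the code under test reads
        event.key = init.key;
        event.keyCode = init.keyCode;
        event.which = init.keyCode;
        return event;
      };
    }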

This could be pretty dangerous depending on your use case, but at least it’s isolated to your test. I wasn’t sure what other method to use, but if you have one I would love to hear it in the comments. Also, Phantom 2.x should fix this, but it wasn’t an option in this case.

Javascript: The Good Parts

It’s been several months since I read JavaScript: The Good Parts, but I thought it was worth mentioning that this old classic is an excellent read. It also offers a more traditional look at JavaScript, which is important in understanding why certain changes are being made today.

We are really quite lucky that things like package managers, module loaders, and JavaScript features have matured to the point where they are being standardized, and that browsers (as well as the spec) are iterating at a rate that is making the language more of a pleasure to use. There are also things being added that are difficult or impossible to polyfill, like WeakMap and generators, that will be fun to play with. Crockford’s follow-up coverage, “The Better Parts”, is I think best shown at the Nordic.js 2014 conference, check it out:

ps: his take on class as being a “bad part” is interesting. I don’t currently have an opinion since it makes sense how it works behind the scenes. I do like the idea of Object.create, and it’s interesting how he finds this to be a security flaw; by not using this he didn’t need to use Object.create, which made things even simpler. This might be a bit more of a “functional” approach, made easier with modules and exports.