Scaling CSS with mixin-backed class names

I started my career working for digital agencies, where sharing styles across bigger content systems was a big part of my day-to-day. Starting with SMACSS and BEM we were able to create logical systems to tame our cascading styles. These efforts were made easier with the rise of style preprocessors like Less, Sass, and Stylus. More recently, various tooling has kept our CSS even leaner: we can check for unused styles, statically determine whether styles are being used, and run transforms against a CSS AST with any number of plugins.

One idea that changes how CSS is introduced is the family of CSS-in-JS flavours popularized by the React community. The functional approach of folding reusable CSS together and applying it to a target, usually a component, has an elegance packed with a huge productivity win.

This post aims to combine a few of these ideas that when used together can help keep CSS manageable when not using a scoped solution (CSS Modules, CSS-in-JS).

What’s the main issue with CSS?

I believe the main issue with CSS is the cascade. In theory the following should work well and scale:
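For example, a small set of class definitions like this (a contrived sketch; the class names and values are illustrative):

```scss
.button {
  padding: 8px 16px;
  border-radius: 4px;
  background-color: blue;
  color: white;
}

.button--secondary {
  background-color: gray;
}
```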

As soon as one class from another definition gets introduced though, search-field-button for example, we increase the chances of a fighting cascade. Its definition may not play nicely with the previous button definitions.

The Solution

The solution I propose relies on a few conventions and is something that can be implemented in an existing project as you go. The idea requires little overhead, and basically no opinion on your class naming convention.

The solution aims to:

  1. Make style definitions simpler to track
  2. Allow styles to be easily composed

The idea does rely on a few assumptions like:

  1. You are using a preprocessor, or PostCSS, with support for something similar to a sass @mixin
  2. You are using, or can introduce, something like PostCSS to perform some clean-up, see Handling conflicts

One Class – Easy to track definitions

Simply, use one class. This ensures a simple definition of your style rules.

This removes the ambiguity and confusion of how the cascade is applied (specificity, position of css in the file, css file load order, yikes, etc…). The class name is simply the only hook to your list of style definitions, nothing more.

Reusability via the mixin – Easy to compose

Using the mixin for reusability isn’t a new concept, it’s what it was designed for. I am, however, proposing a convention and workflow that should make it easier to follow the One Class guideline.

Consider the contrived example:
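The mixin partials might look something like this (file names and properties are hypothetical):

```scss
// _mixins/_rounded-corners.scss
@mixin rounded-corners {
  border-radius: 4px;
}

// _mixins/_brand-colors.scss
@mixin brand-colors {
  background-color: rebeccapurple;
  color: white;
}
```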

Now within the _button-primary.scss I can import and use my mixins as needed:
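A sketch of what that could look like (import paths assumed to match the layout above):

```scss
// _button-primary.scss
@import 'mixins/rounded-corners';
@import 'mixins/brand-colors';

.branded-button {
  @include rounded-corners;
  @include brand-colors;
  padding: 8px 16px;
}
```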

These mixins are expanded and extend the definition of a single class that would be used in HTML as <button class="branded-button"></button>

It’s nice if the class name corresponds with the file since there is a one-to-one relationship. This makes updating the class and tracking any mishaps pretty easy. Removing code also becomes easy because it’s a simple find/replace for the class name and file name.

This file structure keeps the mixins separate from their implementation hooks à la class names.

Extending to variants

Let’s imagine now that we want a variant of this “branded button” with a large font size. I’m sorry for the contrived example, but maybe you can see the extension in some real world scenario.
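Such a variant might look like this (a sketch; the mixin names carry over from the earlier contrived example):

```scss
// _branded-button-large.scss
.branded-button-large {
  @include rounded-corners;
  @include brand-colors;
  padding: 8px 16px;
  font-size: 24px;
}
```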

Since we knew we wanted to extend branded-button it was easy to track down its definition and copy it. If we wanted to, we could make a mixin that includes other mixins. That said, we should try to keep mixin-in-mixin nesting as flat as possible until it’s really necessary to group common mixin definitions together, but for example it would look something like:
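(A sketch, reusing the hypothetical mixins from above.)

```scss
@mixin branded-button {
  @include rounded-corners;
  @include brand-colors;
  padding: 8px 16px;
}

.branded-button-large {
  @include branded-button;
  font-size: 24px;
}
```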

My example doesn’t leverage the fact that mixins can be defined with arguments, which lets you lean on variables and configuration that can be passed in to give your variant some nicely tweaked variability.

Handling conflicts

Let’s say you have two mixins that both try to claim the same key/value space.
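For example (the property values here are made up to force a conflict):

```scss
@mixin main-button {
  padding: 8px 16px;
  background-color: blue;
  color: white;
}

@mixin special-button {
  border: 1px solid currentColor;
  background-color: rebeccapurple;
  color: gold;
}

.landing-page-button {
  @include main-button;
  @include special-button;
}
```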

and this generates a few conflicting properties:
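Assuming mixins along the lines above, the compiled output would contain something like:

```scss
.landing-page-button {
  padding: 8px 16px;
  background-color: blue;          /* from main-button */
  color: white;                    /* from main-button */
  border: 1px solid currentColor;  /* from special-button */
  background-color: rebeccapurple; /* from special-button (conflict) */
  color: gold;                     /* from special-button (conflict) */
}
```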

We are left with a mash of these two mixins, and for the most part that’s exactly what we want, except for the conflicting background-color and color. Let’s say we want the background-color from main-button and the color from special-button. Well, instead of relying on some hacky cascading overrides we get the opportunity to resolve the dispute ourselves.
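The resolution is just declaring the winners after the includes (the values here are assumptions from the sketch above):

```scss
.landing-page-button {
  @include main-button;
  @include special-button;
  /* explicitly resolve the disputed properties */
  background-color: blue; /* main-button wins */
  color: gold;            /* special-button wins */
}
```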

Now you’ve explicitly resolved how you want the landing-page-button to look. Maybe you see the follow-up issue though: the resulting compilation looks like:
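Roughly (with the assumed values from the sketch above):

```scss
.landing-page-button {
  /* …the non-conflicting properties… */
  background-color: blue;          /* main-button */
  background-color: rebeccapurple; /* special-button */
  background-color: blue;          /* explicit resolution */
  color: white;                    /* main-button */
  color: gold;                     /* special-button */
  color: gold;                     /* explicit resolution */
}
```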

We’re left with three declarations for each conflicting property. So while it’s clear what we are left with (and it’s even clearer in Chrome dev tools with the strikethroughs of overridden properties), we don’t want to ship all these extra declarations.

Luckily, if you allow me to introduce postcss-combine-duplicated-selectors, a PostCSS plugin, then it’s all covered. This plugin has an option, removeDuplicatedProperties, that when set to true will squash these extra declarations in our final CSS. In a development build I would leave them in so that it’s easier to see the layering of the definitions, but then clean everything up for production.


Your style definition might have a :hover pseudo-class, or a data attribute that is scoped within your main definition, or it might swap the class out entirely for another class that includes some of the same style definitions as the default state.


If you wanted to share your definitions, all you need to do is share your mixins. You don’t have to worry about the cascade, or about fighting shared classes from your “corporate shared stylesheet” or Bootstrap. You can import, include, and manage conflicts. If you went so far as publishing these mixins in an npm package you could share them across all your front-ends in a versioned manner. At the end of the day you’re just hooking the key/values together and applying them to the single class name you’ve chosen as the hook.

The exception to the “one class” rule

If you have a series of utility classes, each composed of a single property, to prototype a style, then you might have multiple classes. Once you need to repeat these classes though, you could move them into a style definition, backed by their individual properties or their individual mixin definitions, ultimately backed by just one class.
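For example, prototyping with hypothetical single-purpose utility classes:

```html
<button class="rounded shadow text-small">Save</button>
```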

might become:
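A single class backed by same-named mixins (the names here are carried over from hypothetical utilities like rounded, shadow, and text-small):

```scss
.save-button {
  @include rounded;
  @include shadow;
  @include text-small;
}
```

with the markup reduced to <button class="save-button">Save</button>.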

This idea mirrors the design pattern that is being used with Tailwind CSS.

Testing, an aside

This squashing of key/value pairs is much like the equivalent in JavaScript:
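A sketch of the idea, with object keys standing in for CSS properties (the names and values are illustrative):

```javascript
const mainButton = { backgroundColor: 'blue', color: 'white' };
const specialButton = { backgroundColor: 'rebeccapurple', color: 'red' };

// later sources win, just like later CSS declarations within a rule
const squashed = Object.assign({}, mainButton, specialButton, {
  backgroundColor: 'blue', // explicit resolution
});
// squashed is { backgroundColor: 'blue', color: 'red' }
```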

If we ended up with some final result that should really contain a certain style, we could use a JavaScript test suite to assert that squashed.color === 'red' and squashed.backgroundColor === 'blue'. If we had a style guide with our styles in practical usage we could ensure that conflicts and critical styles were asserted on with window.getComputedStyle.

This is something that should probably be explored in another blog post or side project though…

So that’s about it

So my proposal for beating the cascade is to not fight it. Use singular classes and mixins and make your life easier. Maybe we will have ways of raising exceptions when classes fight so that we can catch them at runtime in a dev environment. Who knows? The tooling around the front-end, and the awareness of how styles are being used, will only continue to get better. The CSS Object Model is being opened up through a browser API, and the idea of Houdini takes it a step further into what a few years ago would have been a pipe dream. The frontend is a fun place to be right now! Thanks for reading; if you have any comments/questions/suggestions reach out to me on Twitter. 👋🏽

Deploying Ember apps on the cheap with Dokku, Docker and Digital Ocean.

The rough idea

With Ember we are spoiled with an excellent ember-cli-deploy tool. Need to deploy somewhere? You can go shopping among the many supported deploy plugins. One company that has made deployment dead simple is Heroku. When I was looking to show off some local Ember apps I wanted something cheap and easy to set up. Heroku would be nice, but I think we can go cheaper.

Enter Dokku. It’s a project aimed at providing Heroku-style deploys by wrapping a Heroku-friendly Docker project called Herokuish. Dokku gives you a PaaS by wrapping containers with an nginx proxy router. It has great settings and plugins that help you extend it for a number of use cases. Because Dokku can detect buildpacks and leverage Herokuish, we can deploy via a git push, using Heroku buildpacks, and get a deployed container. With buildpacks you don’t actually need to know Docker or set up the container yourself.

The last piece of the puzzle is Digital Ocean. It provides affordable virtual machine hosting with an easy to understand interface and luckily for us a one-click install of Dokku on a droplet.

With this rough outline let’s get started.

Create your Ember project

Feel free to skip this step if you’ve already got an ember project.

We’ll use a stock ember project.
1. Go into a fresh folder, and run ember init
2. Let’s make sure we’re tracking this in a git repo, run git init
3. Let’s commit the empty ember project:
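Something along these lines:

```shell
git add .
git commit -m "Initial commit"
```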

Setup your digital ocean droplet

Now let’s get your Dokku digital ocean droplet going.

  1. Login to Digital Ocean.
  2. Click ‘create droplet’.
  3. Click the “One-click apps” tab.
  4. Choose Dokku 0.8.0 on 16.04
  5. Choose a size at $5/mo (let’s keep this cheap!)
  6. Pick your preferred region
  7. Add your ssh keys if you’ve got them, it’ll make ssh’ing in easier.
  8. Pick 1 droplet, and pick a hostname if you like.
  9. That’s it! Click “Create”

Under Droplets, check that your droplet is being created.


You should get an IP address for your droplet. Go ahead and ssh into your newly created droplet.

Let’s make sure dokku is installed alright by running:
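While ssh’ed in:

```shell
dokku help
```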

which should return with how to use dokku and available commands.

Setup Dokku

In your browser go to http://your.ip.address with “your.ip.address” being the IP address of your digital ocean droplet above.

You should see the Dokku setup screen.

Paste your public key, which may be the same public key you use for things like GitHub, unless you’ve generated a different one. It might already be filled in if you supplied Digital Ocean with a public key for the droplet. Make sure you have pasted something into the public key field; this page is only available once, after clicking “Finish Setup”. If you are trying to keep this cheap and plan on only using an IP address, make sure you leave “virtualhost naming” unchecked.

If you ever need to change any of this configuration you can do so while ssh’ed into your droplet, via the dokku command and poring over the decently written Dokku documentation. Or ask me on Twitter, I might be able to help, too.

Click “Finish Setup” when everything is configured.


Before we continue let’s take care of a few gotchas.


Firewall

Your security concerns may differ, but in order to not worry about the ports Dokku picks for the running applications I’m going to go ahead and disable the firewall.
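On the droplet (as root; prefix with sudo otherwise):

```shell
ufw disable
```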

It’s not hard to manage the ports with ufw; if you’re interested you can read up on managing Ubuntu’s firewall.

Memory swap

During your build you may run into memory issues which prevent it from finishing. Since we’re going the cheap route I’m going to add some swap, but if you wanted to you could use a droplet with more memory.
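A sketch of adding a 2GB swap file, following the Dokku and Digital Ocean guides (the size is a judgment call):

```shell
install -o root -g root -m 0600 /dev/null /swapfile
dd if=/dev/zero of=/swapfile bs=1k count=2048k
mkswap /swapfile
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab
```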

(Instructions via the Dokku advanced installation docs and Digital Ocean guides.)

Creating your app on dokku

While we are still ssh’ed into our digital ocean box let’s go ahead and setup the application on dokku.
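We’ll name the app “ember”:

```shell
dokku apps:create ember
```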

Configure your Ember project for dokku

This is kind of the cool part. Because dokku can be treated like Heroku we can use the wonderful work the people at Heroku have done.

  1. First, let’s install ember-cli-deploy by running  ember install ember-cli-deploy .
  2. Now install ember-cli-deploy-build by running  ember install ember-cli-deploy-build . This is the basic build plugin that takes care of the build process upon deployment.
  3. package.json  will have been modified and config/deploy.js  added. Let’s commit these files.
  4. Dokku does its best to automatically determine the Heroku buildpack for a given application, but since ours is Ember it needs a bit more setup than a regular node app. There are many different ways to specify the buildpack for an app with Dokku, but I prefer setting the .buildpacks file, because then it’s checked into git. In your project root run
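Something like the following; the exact buildpack URL here (a community Ember buildpack) is an assumption:

```shell
echo "https://github.com/tonycoco/heroku-buildpack-ember-cli" >> .buildpacks
```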

    which should create a .buildpacks file with the buildpack URL inside. If the file already existed the buildpack URL should be added to the bottom.
  5. Commit your .buildpacks  file
  6. The next step is to tell our project where to deploy. Dokku follows the Heroku-easy model of just git push. So we will add our dokku digital ocean droplet to our git remotes by running
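With the remote looking like:

```shell
git remote add dokku dokku@your.ip.address:ember
```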

    With “your.ip.address” being your digital ocean droplet’s IP address. Note: The user for the push is dokku, not root. After the IP address is a “:project-name”, in our case it is “ember”. So if you’re curious the breakdown is:
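Breaking the remote URL down:

```
dokku      @  your.ip.address  :  ember
(ssh user)    (droplet IP)        (app name on Dokku)
```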
  7. The last step is to deploy, run
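The push triggers the build:

```shell
git push dokku master
```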

    You should see lots of scrolling text, and after three to four minutes you should see one of the last lines say:
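Roughly along these lines (the exact wording varies by Dokku version):

```
=====> Application deployed:
       http://your.ip.address:<port>
```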

    Again, with “your.ip.address” being your droplet’s IP address.
    And there it is: we can see in the markup that we have an Ember application with our production build’s fingerprinted .js files.

Bonus steps

Who wants to remember a random port number? Not me. So let’s go ahead and swap that for something we can choose. Log in to your droplet via ssh.

If we wanted to access it on port 80 we would do:
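Using Dokku’s proxy ports commands; the random port number below is an assumption, check dokku proxy:ports ember for yours:

```shell
dokku proxy:ports-remove ember http:32771:5000
dokku proxy:ports-add ember http:80:5000
```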

Each command will reconfigure nginx, and after the second command you should be able to access the application at the given port.

That’s all folks

And that’s it. Hopefully you were able to get your Ember application deployed. There are probably some easier solutions, like just using Heroku itself, but it’s nice to know that there are options if you’re on a budget. Also, this can scale with you for other projects across other platforms and help introduce you to the world of Docker. You can access any of the running containers that Dokku sets up for you which is pretty neat and great if you absolutely need to tail some logs or access the environment directly for debugging.


Thanks to the developers at dokku, herokuish, heroku, and ember-cli-deploy. This was made pretty easy thanks to the work done by people from these projects. ✌🏽❤️

EmberConf 2017, being realistically optimistic

Update: While drafting this post over the last couple of weeks EmberConf 2017: State of the Union has been posted on the Ember Blog. It provides better context so go read that first.

I discovered Ember early on in the fabled times before ember-cli. Those were the days of globals, views, and lots of googling. As I’ve grown in my career I’ve carefully watched as Ember has risen to a slew of new challenges we’ve seen on the web, and quite successfully.

I was lucky enough to attend the first EmberConf, even before I moved to Portland. From that first EmberConf it seemed as if Ember was always on target, hitting home runs. ember-cli was announced at the first EmberConf and it was obvious this ecosystem was going to be something special. In no particular order the following years unveiled things like ES6 modules early on, component-driven design, the addon ecosystem, data down actions up, a powerful first-class router, htmlbars, glimmer, glimmer 2 and a slew of other wonderful magic. Ember was even able to provide a realistic upgrade path from 1.x to 2.0. They said it and then it happened.

It wasn’t until watching meet-up talks and seeing releases unfold that it became apparent that this was the year that maybe there were a few misses. Ember had scaled to a point where it had to slow down and consider the landscape as it continued to build the bark.

To nobody’s surprise, things like pods, routable components, and angle-bracket components didn’t make it. Glimmer turned out to include a regression in its initial render performance, and Glimmer 2 ended up being a total (amazing) rewrite. That isn’t to say there weren’t huge wins this year. We saw Engines released into core, we have a quite-stable FastBoot nearing 1.0, and we have tons of evidence of a growing ecosystem and community. We saw new design patterns like ember-concurrency emerge, which was also mentioned in almost every EmberConf 2017 talk.

Yehuda made an interesting point about Ember’s future, learning from this year’s shortcomings. The way I interpreted it, Ember has matured to a point where instead of promising new features it has to be more realistic in its optimism. And instead of new features, by continuing to expose stable internals and primitives the community is able to grow Ember’s bark itself. The most obvious example of this was the announcement of Glimmer (as a library). This would not have been possible without the rewrite of Glimmer 2 as a standalone drop-in replacement rendering engine. They built, within the existing walls of their Ember APIs, an entirely new “unopinionated” rendering engine that could stand alone. Godfrey is continuing this work with a new RFC that aims at exposing lower-level primitives of the Glimmer engine itself.

Every year EmberConf has given me confidence in the project’s ability to tackle new problems. Despite a few missed deadlines, Ember continues to re-evaluate the landscape and adapt to be better suited for the challenges to come. Sometimes it’s important to take a step back before you can continue forward, something I think Ember has been exceptional at.

I’ll have a follow-up post with some of my favourite talks and specific takeaways from this year’s conference.

Ember PSA: modules and scopes

I could have saved an hour of head scratching had I kept in mind a few basic principles. Hopefully this lesson of modules and shared scope will save someone else in the future.

Within Ember we get used to creating modules and exporting these object definitions.
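For example, a component definition along these lines (a sketch; the property names are illustrative):

```javascript
// app/components/super-component.js
import Ember from 'ember';

export default Ember.Component.extend({
  label: 'default label', // a primitive
  container: [],          // an array, created once when the module loads
});
```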

We’re lucky because these automatically get picked up and registered the way we need them. Components and models, for example, end up getting registered as factories so that we can easily create many instances on the fly.

We can see in super-component that we have a few properties, label and container. For each instance of the component we can do whatever we want to the label and only that instance’s label will be modified. However, when it comes to doing a pushObject (Ember’s version of push, so that it can track changes) into the array, all instances receive the pushed value since they are all pointing at the same array reference. This would also apply if we were modifying properties on an object that was created in the module’s object definition.

Another way to look at this is that we aren’t maintaining changes to a string, since changing a string produces a new string in JavaScript; strings are immutable. However, we do maintain the reference to the object or array, and change the things they point to, i.e. adding another object into the array while maintaining the reference to the array.
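The same pitfall can be shown without any framework, using a shared prototype (hypothetical names):

```javascript
const proto = {
  label: 'default',
  container: [], // one array, referenced by every instance
};

const a = Object.create(proto);
const b = Object.create(proto);

a.label = 'instance a';     // shadows `label` on `a` only
a.container.push('widget'); // mutates the single shared array

// b.label is still 'default', but b.container now contains 'widget'
```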

We can combat this by doing a few things. When we do need a shared reference, be explicit and put it outside the definition so that it’s obvious, like containerInScope in the example.

When we don’t want a shared reference, either pass it in on the component in the template, or use .set. When it’s the responsibility of the component to make a fresh instance available, set it explicitly on init like:
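(A sketch of the init approach.)

```javascript
import Ember from 'ember';

export default Ember.Component.extend({
  init() {
    this._super(...arguments);
    this.set('container', []); // a fresh array per instance
  },
});
```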

While these lessons aren’t specific to Ember, and essentially anybody exporting modules and relying on scoping could run into the same issue, I do feel Ember provides enough magic that it’s easy to forget there are still basic JavaScript pitfalls.

Lastly, I can’t blame Ember for this. It’s documented:

Arrays and objects defined directly on any Ember.Object are shared across all instances of that object.

It’s a silly mistake but one that got the best of me and was a good chance to review what is actually retained across module boundaries, and how these files are (re)used.

Why I made Vouch, my own A+ promise library

JavaScript promises have been around for a bit (especially in terms of “internet years”). On both the frontend and the backend we use them as a way of handling control flow, taming asynchronous craziness, and escaping the once-dreaded callback hell. They’ve been so great that we are starting to explore many other ideas around asynchronicity and complex UIs.

But let’s hold up for a second. We’ve all used promises, but how well do we really understand them? Myself, I thought I had a pretty solid understanding. After seeing a number of design patterns and using them as a first-class citizen in Ember for a while, what secrets could remain?

With that question in mind I went off to make my own promise library. I was lucky to find that Domenic had written a test suite that I could use to TDD my way through my little experiment. Now, if you look a bit closer at the test suite you’ll find folding of functions on functions to generate tests in a way that covers many different use cases. Yes, it’s flexible, but it wasn’t always the easiest to debug. My solution was to write a simple surface layer of quick gotcha tests that covered some of the baseline functionality I expected. These were easier to debug and quicker to fail in a way that was easy to reason about.
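For example, one of those quick gotcha tests might check a rule that is easy to get wrong (a sketch against the built-in Promise; swap in your own implementation): a then callback must never run before the current call stack unwinds.

```javascript
let called = false;

Promise.resolve().then(() => {
  called = true;
});

// synchronously after .then(), the callback must NOT have fired yet
console.log(called); // false
```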

It wasn’t as smooth sailing as I had expected. Early on I realized that there were a few architectural roadblocks and decided to branch off to explore other ideas within my existing boundaries. One of the awesome things about having a test suite was seeing how tackling one internal concept would unlock blocks of passing, seemingly unrelated tests. Over the course of a few weeks I felt my understanding of promises and thenables level up.

Why stop there? So I decided to go through the motions of adding it to npm, setting up Travis CI, and playing with some package release tools. I have a few outstanding issues to publish transpiled versions for older versions of node, and for the browser. It would be great to submit it to the official A+ spec list of libraries, too. I’m also curious how well my implementation holds up performance-wise.

Alright, so it works and the test suite passes, but what was the point? I don’t actually expect people to use it, in fact I hope people don’t. Vouch was simply a chance for me to get a peek at what goes into making a library, and to appreciate the depths of the spec. And most obviously I was able to level up my knowledge of thenables and promises. I would highly encourage anyone to borrow a test suite and test their implementation and understanding with a pet project; you might be surprised with what you’ll learn.

If you like the idea behind vouch go ahead and give it a star!

Quickly serve html files from node, php, or python

There will be times where you need to serve some files through a browser, and instead of setting up a local instance of Apache or MAMP, it might just be easier to use something in the command line. I would recommend the node option as it seems to have a few more options. Macs ship with Python and PHP, which makes those easier in some cases. Also, the PHP server will handle .php files along with standard HTML.


Node

First install http-server globally via npm.
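Assuming npm is available:

```shell
npm install -g http-server
```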

Then it’s as easy as
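Running it in the directory you want to serve:

```shell
http-server
```

which serves the current directory (on port 8080 by default; pass -p to choose another).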


PHP

php -S <domain>:<port>

ie: php -S localhost:8000


Python

python -m SimpleHTTPServer <port>

ie: python -m SimpleHTTPServer 8000

js puzzle

I came across a tweet that had a bit of puzzling sample code (shared as an image), playing with a Number object, expando properties, and the ++ operator.

Most of this made sense to me, except for the part where properties are assigned and then either accessible or undefined. I had a hunch that it was related to something I blogged about previously.

Turns out that when using .call it actually returns an object; that first line is the equivalent of var five = new Number(5);.

While it’s an object you can add your own properties, but as soon as it’s autoboxed/primitive-wrapped by the ++, it loses its ability to hold those properties. This is shown by the fact that the instanceof and typeof values change:
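A sketch of that behaviour (the variable name is from the puzzle; the property is made up):

```javascript
let five = new Number(5);
five.extra = 'expando';

// while it's a wrapper object:
//   typeof five === 'object', five instanceof Number === true

five++; // ToNumber() unwraps it: five is now the primitive 6

// afterwards:
//   typeof five === 'number', five instanceof Number === false
//   five.extra === undefined; the expando property is gone
```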

The rest of the puzzle is playing with the timing of return values and the difference of an assignment from number++ and ++number.

At least that’s the way I understand it, let me know if I’ve missed anything.

PhantomJS 1.9 and KeyboardEvent

PhantomJS is awesome, and one common use case is to use it as a headless browser for running a test suite. I noticed that I was getting different results in my tests where code relied on fabricating a KeyboardEvent and dispatching it on an element. Well, it looks like others have noticed that some of their events are missing, too. One proposed solution controls the type of event that is dispatched, but in all other cases I am pretty happy to use new KeyboardEvent(), and I would prefer not to write special code just to appease my tests.

As a workaround I did this:
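Something along these lines, in a test helper only (the detection and property names here are assumptions, not the exact original code):

```javascript
// PhantomJS 1.9 can't construct a usable KeyboardEvent, so swap in a
// plain Event carrying the fields our code reads
if (window.navigator.userAgent.indexOf('PhantomJS') !== -1) {
  window.KeyboardEvent = function KeyboardEvent(type, init) {
    var event = document.createEvent('Event');
    event.initEvent(type, true, true);
    if (init) {
      event.keyCode = init.keyCode;
      event.which = init.keyCode;
      event.key = init.key;
    }
    return event;
  };
}
```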

This could be pretty dangerous depending on your use case, but at least it’s isolated to your test. I wasn’t sure what other method to use, but if you have one I would love to hear it in the comments. Also, Phantom 2.x should fix this, but it wasn’t an option in this case.

JavaScript: The Good Parts

It’s been several months since I read JavaScript: The Good Parts, but I thought it was worth mentioning that this old classic is an excellent read. It also offers a more traditional look at JavaScript, which is important in understanding why certain changes are being made today.

We are really quite lucky that things like package managers, module loaders, and JavaScript features have matured to the point where they are being standardized, and browsers (as well as the spec) are iterating at a rate that is making the language more of a pleasure to use. There are also things being added that are difficult or impossible to polyfill, like WeakMap and generators, that will be fun to play with. Crockford’s follow-up coverage, “The Better Parts”, is I think best shown at the Nordic.js 2014 conference, check it out:

ps: his take on class as being a “bad part” is interesting. I don’t currently have an opinion since it makes sense how it works behind the scenes. I do like the idea of Object.create, and it’s interesting how he finds this a security flaw; by not using it he didn’t need to use Object.create, which made things even simpler. This might be a bit more of a “functional” approach, which is made easier with modules and exports.

A future without boot2docker, featuring Docker Machine

Docker has always had a few unofficially documented steps to getting things going on non-Linux environments. It usually went along the lines that if you weren’t on Linux, get Linux. This is understandable as docker is using LXC behind the scenes, and that requires Linux. A lot of web developers are using Apple hardware with OS X, like myself, and probably felt like it was a little more setup than necessary. Projects like boot2docker made this way easier, but that only solved setting up a docker host, or engine, for Windows and Mac OS X. What about all those cloud providers? Pre-built images offered by Digital Ocean, etc…

Luckily Docker saw this challenge and abstracted away the setup of any docker engine from the client. It’s called Docker Machine. They even provide migration steps for the boot2docker folk. Now we have an official resource that will work in tandem with Docker’s future plans.

Getting things going on a Mac is easier than ever, and Docker provides a Toolbox installation that is easy to download and run as an installer. I prefer to avoid installers and, as much as possible, let Homebrew handle my dependencies and manage updates. So below I’ll assume you have Homebrew already installed (and if you don’t, go get it; your life will get easier).

Easy Installation with Homebrew

Prerequisite: do you have Homebrew Cask installed? Cask lets you install installers via Homebrew.

With Homebrew Cask installed, in your terminal run:
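(This was the Cask syntax of the era; newer Homebrew versions use brew install --cask instead.)

```shell
brew cask install virtualbox
```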

VirtualBox will run the virtual machine that runs Linux, which will run Docker. Docker Machine supports other means of virtualization, but I’ve only used VirtualBox as it’s free and has been used for similar purposes by projects like Otto, Vagrant and boot2docker.

With VirtualBox ready, we just need Docker Machine, in your terminal run:
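Docker Machine is a regular formula:

```shell
brew install docker-machine
```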

Now you should have access to docker-machine  on the command line, and we can go ahead and setup a virtual machine that docker can use. Let’s create a docker engine, in your terminal run:
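Here dev is just a machine name, and the driver matches the VirtualBox install above:

```shell
docker-machine create --driver virtualbox dev
```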

This will create a docker machine named dev. You can take a look at the docker machines at your disposal by putting docker-machine ls in your terminal.

Now we have a virtual machine, configured with docker, and running. If you restart your computer, or notice after running docker-machine ls that the STATE isn’t Running, then all you need to do is run docker-machine start dev (in this case dev denotes the name of the engine).

The last step is to be able to actually execute commands against our Docker engine and do things like create containers. As Docker Machine will let you run commands against any number of Docker engines, whether you have multiple virtual machines or cloud instances, your local docker command needs to be wired up so that its commands are directed at the correct engine. This is an important distinction so I’ll repeat it: your local docker client, accessed via the docker command, is completely agnostic to the docker engine it is running commands against. The commands could be running against an engine locally or in the cloud; it just needs to be set up to point at the right engine. This makes it really powerful, having one API to manage containers across a slew of engines.

Docker Machine makes setting up your docker client easy:
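Evaluating its env output in your shell:

```shell
eval "$(docker-machine env dev)"
```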

(dev refers to the docker engine name that you can get from docker-machine ls.) This command sets up environment variables that your local docker uses; to see exactly what the eval is running behind the scenes, put just docker-machine env dev in your terminal.

With all this set up, you should be able to type docker info and see all the information your local docker client has about its current docker engine. At this point you’re free to use the docker command and have fun containerizing your apps.

Hope this made the Mac OS X with Docker setup a little clearer and easier, and provided a Homebrew way of setting things up. If anything didn’t make sense, or if I need to fix something, please let me know! Thanks and Happy Dockering!