A future without boot2docker, featuring Docker Machine

Docker has always had a few unofficially documented steps for getting things going on non-Linux environments. It usually went along the lines of: if you aren’t on Linux, get Linux. This is understandable, as Docker uses LXC behind the scenes, and that requires Linux. A lot of web developers, myself included, are using Apple hardware with OS X and probably felt like it was a little more setup than necessary. Projects like boot2docker made this way easier, but that only solved setting up a Docker host, or engine, for Windows and Mac OS X. What about all those cloud providers and the pre-built images offered by Digital Ocean, etc.?

Luckily Docker saw this challenge and built a way to easily set up and manage any Docker engine right from the client. It’s called Docker Machine. They even provide migration steps for the boot2docker folks. Now we have an official tool that will work in tandem with Docker’s future plans.

Getting things going on a Mac is easier than ever, and Docker provides a Toolbox installation that is easy to download and run as an installer. I prefer to avoid installers and, as much as possible, let Homebrew handle my dependencies and manage updates. The steps below assume you have Homebrew already installed (and if you don’t, go get it; your life will get easier).

Easy Installation with Homebrew

Prerequisite – is Homebrew Cask installed? Cask lets you install apps that normally ship as installers via Homebrew.

With Homebrew Cask installed, in your terminal run:
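The command isn’t shown here, but installing VirtualBox through Cask is a one-liner (note that on newer Homebrew versions the syntax is `brew install --cask virtualbox`):

```
brew cask install virtualbox
```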

VirtualBox will run the virtual machine that runs Linux, which in turn runs Docker. Docker Machine supports other means of virtualization, but I’ve only used VirtualBox, as it’s free and has been used for similar purposes by projects like Otto, Vagrant, and boot2docker.

With VirtualBox ready, we just need Docker Machine, in your terminal run:
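Docker Machine itself is a regular Homebrew formula:

```
brew install docker-machine
```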

Now you should have access to docker-machine on the command line, and we can go ahead and set up a virtual machine that Docker can use. Let’s create a Docker engine; in your terminal run:
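With the VirtualBox driver, creating an engine named dev looks like this:

```
docker-machine create --driver virtualbox dev
```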

This will create a docker machine named dev. You can take a look at the docker machines at your disposal by putting docker-machine ls in your terminal.

Now we have a virtual machine, configured with Docker, and running. If you restart your computer, or notice after running docker-machine ls that the STATE isn’t Running, then all you need to do is run docker-machine start dev (in this case dev denotes the name of the engine).

The last step is to be able to actually execute commands against our Docker engine and do things like create containers. Since Docker Machine lets you run commands against any number of Docker engines, whether you have multiple virtual machines or cloud instances, your local docker command needs to be wired up so that its commands are directed at the correct engine. This is an important distinction, so I’ll repeat it: your local Docker client, which you access with the docker command, is completely agnostic about the Docker engine it is running commands against. The commands could be running against an engine locally or in the cloud; the client just needs to be set up to point at the right engine. This makes it really powerful: one API to manage containers across a slew of engines.

Docker Machine makes setting up your Docker client easy; in your terminal run:
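That wiring is a single eval (the env subcommand prints export statements, and eval applies them to your current shell):

```
eval "$(docker-machine env dev)"
```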

(dev refers to the Docker engine name that you can get from docker-machine ls.) This command sets up environment variables that your local docker client uses. To see exactly what the eval is running behind the scenes, put just docker-machine env dev in your terminal.

With all this set up, you should be able to type docker info and see all the information your local Docker client has about its current Docker engine. At this point you’re free to use the docker command and have fun containerizing your apps.

Hope this made the Mac OS X with Docker setup a little clearer and easier, and provided a Homebrew way of setting things up. If anything didn’t make sense, or if I need to fix something, please let me know! Thanks and Happy Dockering!

Getting a handle on Ember’s templating and rendering story

I was at EmberConf 2015 where Tom and Yehuda announced Glimmer in their keynote. They showed off a dbmon implementation in Ember with HTMLbars. While many probably hadn’t noticed the performance issues with basic renders before, this demo made the problem easy to see, and showed that rendering could be better. React was proof that a solution was already in hand. Luckily they had been hard at work, and in Apple-esque fashion (maybe from Tom’s days at Apple) they announced the new rendering engine, Glimmer. Just like that, Ember was back in the rendering game as quickly as I had learned it wasn’t.

Ember’s templating has seen modest improvements in recent history, from removing the annoying metamorph script tags, to HTMLbars, to removing bindAttr and classBinding, and now a diffing engine that is even better in implementation than React’s. What is neat is that Ember is able to pull out the specific parts of a template that it can mark as “dynamic” and, when those values change, re-render (recursively if necessary) only the parts that changed.

This all seemed like magic, and because the way you write your Handlebars syntax hadn’t changed (much), it wasn’t quite obvious what had changed behind the scenes to make Ember so much better. Luckily some recent videos have come out that dive deep into the magic. So deep, in fact, that I might have to watch the “HTMLBars Deep Dive” video again to get a better idea of what is going on architecturally. Take a look at these videos, ordered from least to most granularity:

“Inside Glimmer” is a perfect precursor to the “HTMLBars Deep Dive”.

It’s obvious that these are not easy problems to solve, and it’s great to see Ember continue to evolve. These ideas aren’t even completely original, and while giving credit to other frameworks, Ember is able to adapt these ideas in a way that makes them even better. Thanks to everyone on the core team for their continued work on connecting all the pieces of Ember, making it a full-featured front-end stack.

Monolithic docker containers for local development environments

This post has a companion GitHub repo using WordPress as an example; feel free to take a look.

In agency work there isn’t the same liberty to deploy our lovely isolated Docker containers. Often those environments are the client’s, and they just want the git repo and the MySQL database. This does not excuse developers from doing everything possible to match the production environment in their local development environment.

Often what developers end up with is a working version of a stack consisting of Apache, MySQL, and PHP on their machine. Trying to add extensions or debug someone’s stack is usually pretty difficult because everyone ends up doing it a different way. Couple this with the fact that working on one environment, with one set of binaries and configuration, is often not going to reflect production in every project. Often things work well enough that these shortcomings are ignored. Configurations supporting multiple projects running out of sub-directories with one vhost, and .htaccess hacks, are often culprits of easily avoided bugs, too.

What is the solution? I think Vagrant comes really close, but it’s a little too heavy and doesn’t do enough to abstract away things like the memory, storage, and networking of the VM. Essentially most people just want a container with exposed ports, mounted volumes, and an isolated environment, and that’s Docker. Docker advocates splitting up your services across multiple containers, and that makes a lot of sense. However, I think it might be overkill for the basic PHP projects that a lot of web agencies get. I think there is a use-case for Docker running everything in a single, VM-esque container.

This single-container approach gives you a lot of advantages, like tracking your Dockerfile (and any future changes to the Docker image) in your git repo, being able to run Docker with your mounted project directory, and an overall quick and snappy setup. I have an example repo if you’re curious about an implementation of how this would work. Ideally, you would use Composer or some other package manager to track the framework and its dependencies, leaving only a Dockerfile, your manifest file declaring your packages, and your application code in your repo.
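As a rough illustration of the single-container idea (the base image, package names, and supervisord setup here are my own assumptions, not necessarily the companion repo’s exact contents), a Dockerfile might look like:

```dockerfile
# One "vm-esque" container: Apache, PHP, and MySQL together,
# kept alive by supervisord as the single foreground process.
FROM ubuntu:14.04

RUN apt-get update && apt-get install -y \
    apache2 \
    php5 \
    php5-mysql \
    mysql-server \
    supervisor

# Mount your project directory here with `docker run -v`
VOLUME /var/www/html

EXPOSE 80 3306

# A supervisord.conf in your repo declares the apache2 and mysqld programs
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]
```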

Be careful with those primitives types

This is probably a refresher for most, but I was curious about how JavaScript handles typing. After all, we have the String, Number, and Boolean global objects, whose wonderful prototypes give us some really handy functions. So we can do something like:
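For example, even a bare string literal can call methods defined on String.prototype:

```javascript
// A string primitive can call String.prototype methods directly
var message = "hello, I'm talking loudly";
console.log(message.toUpperCase()); // "HELLO, I'M TALKING LOUDLY"
console.log(message.split(',')[0]); // "hello"
```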

Neat, and because we have these global objects we can augment their prototypes to give us access to extra methods on every instance. For example:
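The exclamation method referenced later in this post might be written something like this (a sketch of my own, not necessarily the original code):

```javascript
// Augment String.prototype so every string gets an exclamation() method.
// (Modifying built-in prototypes like this demands caution in shared code.)
String.prototype.exclamation = function () {
  return this.toUpperCase() + '!';
};

console.log("hello, I'm talking loudly".exclamation());
// "HELLO, I'M TALKING LOUDLY!"
```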

Ember.js augments prototypes to make things a little easier and to quickly give you access to methods without having to pass values into an Ember object. This is something that requires a lot of responsibility, as there is some overhead involved, and generally you don’t want to surprise people who share the environment with you.

Augmenting prototypes is also useful to polyfill functionality that might not exist, like Array.map in older versions of IE.
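A simplified polyfill has the same shape (real polyfills like es5-shim handle more edge cases):

```javascript
// Only define map if the environment doesn't already have it
if (!Array.prototype.map) {
  Array.prototype.map = function (callback, thisArg) {
    var result = new Array(this.length);
    for (var i = 0; i < this.length; i++) {
      if (i in this) { // skip holes in sparse arrays
        result[i] = callback.call(thisArg, this[i], i, this);
      }
    }
    return result;
  };
}

console.log([1, 2, 3].map(function (n) { return n * 2; })); // [2, 4, 6]
```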

Prototypes also help with defining inheritance in JavaScript. We have useful operators like instanceof and typeof to help us make sense of these. Where things get tricky is when you have a primitive like "hello, I'm talking loudly" being a string primitive, but also having access to the String prototype, like how I added the exclamation method.

We would expect that since we are using a method on the String prototype, and "hello, I'm talking loudly" was able to access it, that "hello, I'm talking loudly" instanceof String would equal true, but it doesn’t. Oddly enough, typeof "hello, I'm talking loudly" equals "string", and new String("hello, I'm talking loudly") instanceof String equals true.

If all that seems a little confusing, it did to me, too. Here is a quick summary of what we’re looking at:
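In code, the situation looks like this:

```javascript
var primitive = "hello, I'm talking loudly";
var boxed = new String("hello, I'm talking loudly");

console.log(primitive instanceof String); // false: primitives aren't instances
console.log(typeof primitive);            // "string"
console.log(boxed instanceof String);     // true: new String() makes an object
console.log(typeof boxed);                // "object", not "string"!
```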

What is happening is that when you use “quotes”, or call the function without the new operator, you are actually working on the primitive. The primitive isn’t an Object and therefore can’t be an instance. When you aren’t using the new operator, the function simply returns the primitive. As you might recall, using the new operator on a function uses that function as a constructor and creates an instance that inherits from the function’s prototype. Check out the MDN resource for a better explanation, but essentially we’re dealing with an object.

How are we able to access these methods on a prototype, then? There is something called “autoboxing”, more commonly known as “primitive wrapper types”, happening behind the scenes that wires up the methods of a literal to its appropriate global constructor. You can also do things like this transparently, where these “objects” are handled appropriately:
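For instance, arithmetic mixes Number objects and number primitives without complaint, quietly converting the object back to a primitive:

```javascript
var boxedFive = new Number(5);

var sum = boxedFive + 5;             // the object converts back to a primitive
var product = boxedFive * 2;
var combined = boxedFive + new Number(1);

console.log(sum, product, combined); // 10 10 6
```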

Interestingly, the typeof for each of these ends up being 'number'.

The way these work also affects comparison operators:
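A few comparisons show the wrinkle:

```javascript
var primitive = 'hi';
var boxed = new String('hi');

console.log(primitive == boxed);   // true: loose equality coerces the object
console.log(primitive === boxed);  // false: "string" vs "object"
console.log(new String('hi') == new String('hi')); // false: two distinct objects
```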

In general, everything works the way you would expect it to, except when you’re dealing with matters of “type”. If what you have is a string, number, or boolean that was created with the new operator, you need to check instanceof String, Number, or Boolean. If it was created as a primitive, you need to check for typeof value returning 'string', 'number', or 'boolean'. You could check whether either returns true in a helper function, too.
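A small helper that accepts both forms could look like:

```javascript
// Returns true for both string primitives and String objects
function isString(value) {
  return typeof value === 'string' || value instanceof String;
}

console.log(isString('plain'));             // true
console.log(isString(new String('boxed'))); // true
console.log(isString(42));                  // false
```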

Check out this Stack Overflow post for some good discussion about checking the types of strings, and why new shouldn’t be used, considering the confusion and how unnecessary it is. While it might be bad style, I think it illustrates some of the details behind JavaScript’s inner workings. Also, I remember reading old JavaScript examples in books that use the new operator to try to ease people in from OOP backgrounds, so you can’t really escape that this exists. There are some interesting reads on autoboxing/“primitive wrapper types” to check out, too.

Like many parts of JavaScript, there are always little gotchas that keep things interesting. Luckily, as standards move forward and as people create libraries/frameworks/polyfills to pave the cowpaths, we will end up with an easier way to write JavaScript. I hope this made sense, and if I made any mistakes, or need to clarify anything, please let me know in the comments.

Raspberry Pi wifi sleep issue with Edimax EW-7811Un

Although I have been reasonably happy with my current AirPlay speaker setup, I ran into some issues where I couldn’t find the AirPlay speakers listed. I also couldn’t ssh in, and on the Pi directly, ifconfig wasn’t reporting an IP address either. I found myself having to continually run ifconfig wlan0 up.

It turns out the Edimax EW-7811Un wifi adapter on Raspbian has an issue with very conservative power management. A quick Google search turned up this forum post that worked for me.

Essentially, create a text file at /etc/modprobe.d/8192cu.conf with the following contents (may require sudo to write):
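The contents boil down to disabling the driver’s power management. The lines below are my recollection of the commonly shared fix, so verify the exact option names against the forum post:

```
# Disable power management and USB auto-suspend for the 8192cu driver
options 8192cu rtw_power_mgnt=0 rtw_enusbss=0
```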

Save that, restart, and hopefully all is fixed.

Another small quirk, but I’m still very happy with everything. In all this I also discovered that shairport-sync had been updated to 2.4 with some bugfixes, yay.

Finally, AirPlay Speakers

I’ve been toying with getting my AirPlay speakers set up with the right combination of pieces and parts over the past few years, and I’m happy to report that my tinkering has paid off. My last attempt, which I didn’t write about, involved using Rune Audio, an ambitious project that consolidates the OS, audio drivers, and media integrations (including AirPlay and Spotify!), all wrapped in an easy-to-access web interface. Who wouldn’t want a one-stop install that gets everything set up with all the bells and whistles?

Unfortunately, even after pushing through to the latest on their master branch, I wasn’t able to get a solid experience out of Rune Audio. The UI was slightly buggy, and more frustrating was the dropping of AirPlay, and although it seems they might have fixed some of these issues, I wanted something more barebones. The more I thought about it, the more I realized I just wanted AirPlay. I didn’t need a web interface for this thing; I would use my phone or Mac for controlling it, which is so much easier. If I want Spotify, I’ll AirPlay Spotify.

So, it was back to the drawing board. I started with a fresh install of Raspbian, set up the wifi dongle, set up the audio configuration to default to my USB DAC, and lastly set up Shairport. My previous instructions were for Shairport 1.0, but since then there’s a brand new fork in town, and it’s awesome. It’s called Shairport Sync, and it keeps multiple receivers (i.e. a multi-room setup) synchronized, so that every AirPlay receiver running Shairport Sync is kept, well, in sync. I only have the one device, so this wasn’t something I really needed, but aside from this new feature it was just so much easier to install and set up. My previous instructions for 1.x required some interpretation, but this setup worked perfectly based on the GitHub instructions.

Happy to say that my idea of a stable, custom AirPlay speaker setup is finally complete with a Raspberry Pi, Raspbian, a USB audio DAC, a wifi dongle, and Shairport Sync.

Callback currying, and futures (or a preface to Promises)

I have been diving into the different patterns that can be used to organize functional code. One such pattern is currying, a nod to Haskell Curry, which arises from the need to generate a function that has one, or more, of its parameters already set up. I was looking for some good examples of JavaScript logic where this is applied and came across this blog post, where I saw a really interesting pattern in the section “Currying the callback”:
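Bruno’s exact snippet isn’t reproduced here, but the shape of the pattern is roughly this (the names are my own illustration): fix the request details now, supply the response handler later.

```javascript
// "Currying the callback": calling get() bakes in the parameters and
// returns a function that accepts the eventual response handler.
function get(url) {
  var result = 'response from ' + url; // stand-in for a real async request
  return function (callback) {
    callback(result);
  };
}

var onUsers = get('/users');      // parameters are captured here
onUsers(function (response) {     // the handler arrives later, like .then()
  console.log(response);          // "response from /users"
});
```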

This little pattern looked really interesting to me, and I saw some parallels with promises: essentially, returning a function that you can call at any time to handle the response, just like when you call .then on a promise and pass in a function as a parameter.

To satisfy my curiosity about whether I was on the right track, I left a comment, and Bruno was kind enough to leave a response. Bruno confirmed my assumption, and left me with the realization that as we work on these concepts, one of the best ways forward is to formalize these patterns. These ideas start with libraries, move to specs, then browser implementations, and maybe even push the language to include new features that can’t simply be polyfilled.

In any case, if you want to get an idea of how currying is used in this particular instance, and to get a little insight into how promises achieve this management of async, check out Bruno’s blog post. It’s also refreshing to see the community supported by people like Bruno, who answer questions like mine on their blogs.

Cleaning Up Whitespace in your Github Diffs [Bookmarklet]

Ever had those pesky whitespace changes show up in your GitHub diffs? They’re usually just trailing whitespace, or from converting to tabs (or spaces, if you prefer). They often aren’t what you’re trying to focus on, and luckily GitHub has a way of removing them.

But adding a ?w=1 query param by hand is a hassle. So here’s a way to shave a few seconds off with a bookmarklet. Drag and drop this link to your bookmarks bar 〉 Remove GitHub Whitespaces

Here’s the code un-minified in case you’re wondering.
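The original un-minified source isn’t shown here, but the logic amounts to this (a hedged reconstruction, with the URL handling pulled into a testable function):

```javascript
// Append GitHub's ?w=1 whitespace flag to a diff URL
function withWhitespaceHidden(url) {
  return url + (url.indexOf('?') === -1 ? '?w=1' : '&w=1');
}

// As a bookmarklet, it would be wrapped up roughly like:
// javascript:(function(){location.href=withWhitespaceHidden(location.href);})();
console.log(withWhitespaceHidden('https://github.com/owner/repo/pull/1/files'));
// "https://github.com/owner/repo/pull/1/files?w=1"
```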

While we’re on the topic of whitespace, if you are using SublimeText, there’s a handy plugin called TrailingSpaces that can highlight and delete trailing whitespace from your files.


These days it’s hard to represent all the moving parts of a webpage in a single PDF. We have animations, layout, state changes, and many other dynamic pieces on the page that are too easily overlooked when approving a concept. The solution has been to spend more time on the wireframe process and ensure that the idea is built right in the browser, to capture not just what is on the screen, but what is happening on the screen. Our hope is that this extra time gives us the ability to stay on track with development, and not have one of those “Oh, I didn’t see x in the wireframe” moments.

These wireframes are often without aesthetic treatment so that the idea can be fleshed out before it’s decorated and polished. It is common that any layout or design will anticipate the need for images. At the wireframe stage it’s not likely these images are ready, so we use placeholders.

Placeholder images are easy enough for the designers who build things in Photoshop; a grey box takes a few seconds. But what about for us developers? There are services highlighted in this “Top 8 Placeholder Services” article, but they all rely on physical images.

I can foresee a few problems with these services:

  1. They load from an online resource.
    We aren’t always connected and able to pull these dynamically created assets.
  2. The image is available offline as a physical asset, but requires setup and referencing a file in markup.
    I have to save the file, add it to the web directory, then jump back to my editor and reference the file inline, track the physical file in source control, etc. Creating a placeholder should be fast.
  3. We are left with pixel dimensions.
    We won’t always know how we want the image to fill the space, and often it won’t be confined to a specific width and height in pixels. Why get hung up on dimensions when you can just get something that symbolizes a placeholder, and represents what you do know about your layout (a width/height ratio, width/height percentages, etc.)?
  4. It’s an image.
    We don’t need an image; we need a placeholder that represents an image. Images bring with them baggage that we aren’t ready for. We can do more and still get the idea of the image.
To address these issues I have come up with a side-project that I am calling, for lack of a better name, placeholder. You can check it out on github.

In a nutshell this gives you the following ways to define placeholders:

  • SASS mixin
  • Javascript
  • Polymer Element

Each of these methods gives you multiple ways to define how the placeholder takes shape, including:

  • Width and height
  • Width and ratio
  • Just a ratio

Also, on top of creating these placeholders, there are different styling helpers that can be applied to the containers to have the placeholder fit the visual language of your wireframes.
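To give a feel for the idea, here is a hypothetical sketch of the JavaScript flavor (the real API lives in the repo; this function name and options shape are my own illustration):

```javascript
// Compute inline styles for a placeholder from a width/height pair
// or a bare ratio (height divided by width).
function placeholderStyles(options) {
  var ratio = options.ratio ||
    (options.width && options.height ? options.height / options.width : 1);
  return {
    width: options.width ? options.width + 'px' : '100%',
    // padding-top in % is relative to the element's width: the classic
    // CSS trick for preserving an aspect ratio without fixed pixels.
    paddingTop: (ratio * 100) + '%',
    background: '#ccc'
  };
}

console.log(placeholderStyles({ ratio: 0.5 }).paddingTop); // "50%"
```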

The biggest benefits of doing it this way are that everything is in code and tracked in source control, it’s fast to bring in, it’s flexible in how you define the placeholder, and it can become part of the process that you and your team use to consistently create and use placeholders.

I still have features I want to add, and features that need to be documented. If you feel like it’s a good idea, give the project a ‘star’ on github, fork it, or just give it a try and give your feedback. I will continue to develop the idea further and smooth out any kinks.

Thanks for taking a look!

On the go in 2014

It was my intention to have a 2014 resolutions post of various topics and progress I wanted to make for the year. Well, things move quickly and we’re already well into 2014 with lots happening, so I’m going to do a short look back instead.

In January 2013 I started with the guys at Hybrid Forge, here in Edmonton, Alberta. I started off as an E-Commerce Analyst and ended the year as an E-Commerce Lead, specializing in setting up, theming, and developing Magento. What a platform! Magento, based on the Zend Framework, builds upon many layers that can easily have your head spinning. As my start in the web as a profession, Magento had no shortage of challenges. As time went on, I ended up working on other projects in the company, including some Drupal work. Outside of work, I was learning Symfony for a project, and also launched a few sites on Drupal. All the different ways these platforms accomplish similar goals gave me the foundation I needed to grasp the mechanisms used by larger codebases and frameworks.

About two months ago, I left Hybrid Forge to try to take the next step into the big world of web development. I was excited to see what else I could learn and accomplish. My friend Adam Braly, who works at Snapshot, had a fun opportunity for me to go down to Southern Oregon to work on a website. The guys at Snapshot were great, and gave me a chance to practice my skills with a new team and bounce around ideas about development and process. The website was being built in WordPress, and it was my first time seriously touching the platform. It was immediately apparent that there were some parallels between it and Drupal, although Drupal arguably has a more established foundation in the CMS space.

My 5-week Oregon adventure ended in Portland, which neighbors the town my family lives in. Portland is an amazing and awesome city, which I love to visit. With its fun and laid-back culture, it was the perfect place for the Ember.js conference, which I was able to attend. Back in August 2012, when I first decided to get back into web development, I wanted to play with a new idea, and JavaScript seemed like the perfect candidate. I decided to dive into Ember.js, making a basic app, getting a feel for how MVC could live in the browser, and how you could rely on data-binding to make your life much, much easier. The guys behind Ember, who also live in Portland, are big names and have contributed to some serious open source projects over the years. It was brilliant to see how far they and their team had come with Ember and where they plan to go. One of the best parts about attending the conference, which is the first tech conference I’ve been to, was seeing how welcoming everybody was. It’s easy to feel like you know the least in the room, but it’s empowering when that isn’t what matters.

These past few months, and even through the past year, I have come to realize that everyone is always learning. As we learn, we develop, which is often consumed by others, causing them to learn and develop. It’s this gigantic ever-hungry feedback loop, accelerated by the web and systems like Github. It’s alright not to know, as long as you’re humble and hungry to be better. I am looking forward to 2014 and being able to talk more about what I have learned and contributed. Stay tuned.