About me ::

My name is Colby DeHart. I live in Nashville, Tennessee. I love dogs, music, programming, bikes, all games, solving problems, and learning new things. Feel free to get in touch!


Setting up Neovim for CLJS :: Permalink - 21 Aug 2018

In this post I'll show you how I set up Neovim for ClojureScript development. First of all, make sure you have the latest versions of Neovim, lein, and Clojure.

Next you need to install vim-fireplace. I use vim-plug, so my setup looks like this:

Plug 'tpope/vim-fireplace'
" autoconnect to repls
Plug 'tpope/vim-classpath'
" static support for lein.
Plug 'tpope/vim-salve'

The fireplace README says to also set up cider-nrepl, so go ahead and do that by editing ~/.lein/profiles.clj so it looks like this:

{:user {:plugins [[cider/cider-nrepl "0.18.0"]]}}

Setting this up in this file allows us to use CIDER in every project. Next we need to set up Piggieback. Edit profiles.clj again to add the Piggieback dependency and its REPL middleware, so the whole file now looks like this:

{:user {:plugins [[cider/cider-nrepl "0.18.0"]]
        :dependencies [[cider/piggieback "0.3.8"]]
        :repl-options {:nrepl-middleware [cider.piggieback/wrap-cljs-repl]}}}

Now we need a ClojureScript project to work in. You can create a new CLJS project with live reloading using the Figwheel template:

lein new figwheel-main my-app

Move into the directory and start a repl.

cd my-app
lein repl

Next we need to start a ClojureScript repl with Piggieback. Just run this code in the Clojure repl.

user=> (do (require 'cljs.repl.nashorn) (cider.piggieback/cljs-repl (cljs.repl.nashorn/repl-env)))

Now, if everything has been set up properly, you should be autoconnected to the REPL when you open vim. If not, you might need to run the :Connect and/or :Piggieback commands (see the fireplace docs if you get tripped up here).

You will also need to run a figwheel repl in another terminal to get live code reloading while you develop.

lein fig:build


Easy Phoenix Contexts :: Permalink - 20 Feb 2018

The new version of Phoenix introduces a concept called contexts. Contexts are used to separate your app into specific domains and to house the business logic for your CRUDy interactions. I am a big fan of contexts, but the way they are presented in the templates includes a good amount of duplicated-ish code. For example, if I had an Accounts context that controlled a User schema and a Profile schema, I would need to create functions get_user and get_profile, which both take in an id and return a User or a Profile, respectively.

I found myself rewriting a lot of the same code every time I added a new schema to a context, so I made this little module that can be used with any Phoenix app to make contexts a little more concise.

defmodule MyAppWeb.Context do
  @moduledoc """
  The context module provides a few functions used throughout
  all of the contexts through its `__using__` macro. You can then
  override functions for certain schemas like so:

      defmodule MyApp.Users do
        use MyAppWeb.Context
        alias MyApp.Users.User

        def get(User, id), do: User |> super(id) |> Repo.preload([:url])

        # call `context_fallbacks/0` to keep the defaults for overridden fns
        context_fallbacks()
      end
  """

  defmacro __using__(_opts) do
    quote do
      import Ecto.{Query, Changeset}, warn: false
      import MyAppWeb.Context
      alias MyApp.Repo

      @spec list(Ecto.Queryable.t()) :: [Ecto.Schema.t()]
      def list(schema), do: Repo.all(schema)

      @spec get(Ecto.Queryable.t(), integer | binary) :: Ecto.Schema.t() | nil
      def get(schema, id), do: Repo.get(schema, id)

      @spec get_by(Ecto.Queryable.t(), keyword | map) :: Ecto.Schema.t() | nil
      def get_by(schema, clauses), do: Repo.get_by(schema, clauses)

      @spec create(Ecto.Queryable.t(), map) ::
              {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
      def create(schema, attrs \\ %{}) do
        schema
        |> struct()
        |> schema.changeset(attrs)
        |> Repo.insert()
      end

      @spec update(Ecto.Queryable.t(), Ecto.Schema.t(), map) ::
              {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
      def update(schema, %schema{} = entity, attrs) do
        entity
        |> schema.changeset(attrs)
        |> Repo.update()
      end

      @spec delete(Ecto.Schema.t()) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
      def delete(entity), do: Repo.delete(entity)

      defoverridable list: 1, get: 2, get_by: 2, create: 2, update: 3, delete: 1
    end
  end

  @doc """
  Call this macro at the end of your context file to fall back to
  the defaults for any of the main CRUD functions you have overridden.
  """
  defmacro context_fallbacks() do
    quote do
      def list(other), do: super(other)
      def get(other, id), do: super(other, id)
      def get_by(other, clauses), do: super(other, clauses)
      def create(other, params), do: super(other, params)
      def update(other, schema, params), do: super(other, schema, params)
      def delete(other), do: super(other)
    end
  end
end

The module has comments explaining how to use it, but basically you just put use MyAppWeb.Context at the top of your context module. Then you get all the basic CRUD operations for your Ecto schemas (this assumes you are using Ecto). For the example Accounts module, you would just call Accounts.get(User, 1) or Accounts.get(Profile, 1) to get a user or a profile, respectively.

If you need to add any functions, go for it, for example Accounts.get_current_user. If you need to override any of the basic functions for a schema (to add preloads, or any extra functionality when creating or updating a schema, for example), just make sure you call context_fallbacks at the end of your module file, so that your overrides won't destroy the functionality for the other schemas.
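For example, a minimal Accounts context built on this module might look like the following sketch (the schemas and the preload are illustrative, not from a real app):

```elixir
defmodule MyApp.Accounts do
  use MyAppWeb.Context
  alias MyApp.Accounts.{User, Profile}

  # override get/2 for User to preload its profile
  def get(User, id), do: User |> super(id) |> Repo.preload([:profile])

  # restore the defaults for Profile and everything else
  context_fallbacks()
end

# MyApp.Accounts.get(User, 1)    => a %User{} with profile preloaded, or nil
# MyApp.Accounts.get(Profile, 1) => a %Profile{}, or nil
```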


Type Safe Phoenix Controllers with Dialyzer :: Permalink - 31 Jul 2017

Phoenix 1.3 was just released. We have been using the RC version for a while at work and loving it. The addition of contexts has really cleaned up the way we think about structuring code. Another addition is the notion of a fallback controller. In case you haven't tried Phoenix 1.3 yet: the fallback controller lets you code only the happy path in your controllers, and anything returned that is not a Plug.Conn struct falls through to a different controller to be handled. Using this along with Dialyzer, we have been able to add a bit of type safety to our application.

Say we have a Phoenix controller that gets a user by id.

# lib/my_app_web/controllers/user_controller.ex

defmodule MyAppWeb.UserController do
  use MyAppWeb, :controller

  # new in Phoenix 1.3, this is our context for our Accounts entities
  alias MyApp.Accounts

  action_fallback MyAppWeb.FallbackController

  # notice this unhelpful spec, we'll fix this soon
  @spec get(Plug.Conn.t, map) :: any
  def get(conn, %{"id" => id}) do
    with user when not is_nil(user) <- Accounts.get_user(id) do
      render(conn, "show.json", user: user)
    end
  end
end
This is a pretty standard controller in Phoenix 1.3. The two noticeable changes from 1.2 are the alias of a context, which is basically just a module that handles your business logic for a given domain, and the action_fallback macro, which sets the fallback controller for this controller. The fallback controller looks something like this.

# lib/my_app_web/controllers/fallback_controller.ex

defmodule MyAppWeb.FallbackController do
  @moduledoc """
  Translates controller action results into valid `Plug.Conn` responses.

  See `Phoenix.Controller.action_fallback/1` for more details.
  """
  use MyAppWeb, :controller

  def call(conn, nil) do
    conn
    |> put_status(:not_found)
    |> render(MyAppWeb.ErrorView, :"404")
  end
end

So here we know that Accounts.get_user/1 will return either a User Ecto schema or nil. If we don't find the user, the nil passes through to the fallback controller and hits the call/2 function with nil as the second argument, rendering an ErrorView. This is the basic idea of the fallback controller.

Now we want to add some type safety to this application, so that we can make sure we handle all of the unhappy paths in our fallback controller. Edit the lib/my_app_web.ex file and add a controller_error type to the controller using macro, so that the type is accessible in all of our controllers.

# lib/my_app_web.ex

defmodule MyAppWeb do
  # ...
  def controller do
    quote do
      use Phoenix.Controller, namespace: MyAppWeb
      import Plug.Conn
      import MyAppWeb.Router.Helpers
      import MyAppWeb.Gettext

      @type controller_error :: nil
    end
  end
end

You will need Dialyxir to check your specs. Describing the installation and configuration of that tool is outside the scope of this post, but the docs are good and it is not too difficult. With this type in place, we can add accurate specs to our user and fallback controllers.

# lib/my_app_web/controllers/user_controller.ex

  @spec get(Plug.Conn.t, map) :: Plug.Conn.t | controller_error
  def get(conn, %{"id" => id}) do

# lib/my_app_web/controllers/fallback_controller.ex

  @spec call(Plug.Conn.t, controller_error) :: Plug.Conn.t
  def call(conn, nil) do

See, now we've ensured that our controller actions return either a Plug.Conn.t or a controller_error, and that the call/2 function in our fallback controller can handle every controller error we have.

Say we add a new action to the user controller to create a user.

# lib/my_app_web/controllers/user_controller.ex

  @spec create(Plug.Conn.t, map) :: Plug.Conn.t | controller_error
  def create(conn, %{"user" => user_params}) do
    with {:ok, user} <- Accounts.create_user(user_params) do
      conn
      |> put_status(:created)
      |> render("show.json", user: user)
    end
  end

We know that Accounts.create_user/1 always returns either {:ok, user}, where user is a User schema, or {:error, changeset}, where changeset is an Ecto changeset. If we run Dialyxir now, we get an error in the controller, because it sees that it is possible to return a value that is neither a Plug.Conn.t nor nil. All we need to do to fix this is update our controller_error type and handle the new error in our fallback controller.

# lib/my_app_web.ex

  # ...
      @type controller_error ::
              nil
              | {:error, Ecto.Changeset.t}

# lib/my_app_web/controllers/fallback_controller.ex

  def call(conn, {:error, %Ecto.Changeset{} = changeset}) do
    conn
    |> put_status(:unprocessable_entity)
    |> render(MyAppWeb.ChangesetView, "error.json", changeset: changeset)
  end

Nice, now we get no errors from Dialyzer. Just keep adding error types to controller_error, and handlers in your fallback controller, as you go, and you can feel confident coding only the happiest paths in your controllers.


Redux Side Effects In 12 to 16 Lines :: Permalink - 07 Jan 2017

I’ve been thinking (and perhaps overthinking) a bit about my redux workflow. Specifically how to handle side effects, such as async requests. I have used redux-thunk and redux-saga in the past. While they solve the problems of async redux well, something never felt quite right and I couldn’t put my finger on it.

Last week I came across this article on Mark’s Dev Blog that made me realize why I don’t like these solutions. This, along with using Elm for the last month or so, made me seek out a simpler solution. I got turned onto redux-loop which was closer to what I wanted but was a bit bulky and also allows batching actions, which I see as not so great (see this tweet). So I started writing a blog post titled…

Redux Side Effects Middleware in 12 lines

I was so young at this point. So foolish and bright-eyed. I posted this untested snippet into Slack in an attempt to handle async actions like Commands in Elm. Here's the (totally nonsense) code.

const cmdMiddleware = store => next => action => {
  const res = next(action)
  if (!Array.isArray(res)) return res
  let [ state, command ] = res
  if (typeof command === 'function') {
    Promise.resolve(command()).then(a => store.dispatch(a))
  } else if (Array.isArray(command)) {
    let [ cmd, ...args ] = command
    Promise.resolve(cmd(...args)).then(a => store.dispatch(a))
  }
  return state
}

You'll spot my error pretty quickly. I forgot what the return value of next is in a middleware: it is just the returned action, not the updated state, and the return value of this function has no bearing on state anyway.

The middle of this function (lines 3-10) was where I was on the right track. I wanted to be able to dispatch actions that were one of three things:

  • state - the updated state, just like normal
  • [state, cmd] - the updated state and a command, which will return an action to be dispatched, possibly async through a promise
  • [state, [cmd, ...args]] - same as before, but with the cmd and the args to be passed to it in an array.
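A reducer in this scheme might look like the following sketch (fetchTodos and the action types are invented for illustration):

```javascript
// A command: does the async work, then resolves to a follow-up action.
const fetchTodos = () =>
  Promise.resolve({ type: 'TODOS_FETCHED', todos: [] })

const reducer = (state = { loading: false, todos: [] }, action) => {
  switch (action.type) {
    case 'TODOS_FETCHED':
      return { loading: false, todos: action.todos }     // plain state
    case 'FETCH_TODOS':
      return [{ ...state, loading: true }, fetchTodos]   // [state, cmd]
    case 'FETCH_TODOS_WITH':
      return [{ ...state, loading: true }, [fetchTodos]] // [state, [cmd, ...args]]
    default:
      return state
  }
}
```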

I still needed to figure out how to intercept the actual reducer, though, and not the dispatch function. With great hubris, I titled a new blog post

Redux Side Effects Enhancer in 16 lines

Here I actually made an example application using create-react-app, tried a few things, and then found out about the store's replaceReducer and got pretty close.

const cmdEnhancer = createStore => (reducer, preloaded, enhancer) => {
  const store = createStore(reducer, preloaded, enhancer)
  store.replaceReducer((state, action) => {
    const next = reducer(state, action)
    if (!Array.isArray(next)) return next
    const [ newState, command ] = next
    if (typeof command === 'function') {
      Promise.resolve(command()).then(a => store.dispatch(a))
    } else if (Array.isArray(command)) {
      const [ cmd, ...args ] = command
      Promise.resolve(cmd(...args)).then(a => store.dispatch(a))
    }
    return newState
  })
  return store
}

Keyword here is pretty close. I loaded this enhancer into my simple application and it worked! I could return commands in my reducer that would get fired off. Everything worked exactly as expected. I even began to publish my blog post and begin to enjoy the rest of my weekend when I saw the error.

What happens when you use combineReducers or reduceReducers or anything a normal person using redux would use? This enhancer assumes you have a single reducer that returns one of the three possible return types. I fiddled with the enhancer and shut my laptop. It was too complicated to do in any number of lines worth bragging about. That is, until I changed the title a second time.

Redux Side Effects in 14 Lines

I came back and discarded enhancers and middlewares. I realized I needed access to all of the user's reducers to make this actually work, and the only place I could think to do that was in the reduceReducers function. And then I came up with this.

const reduceCommandReducers = (reducers, store) => {
  return (state, action) => reducers.reduce((s, r) => {
    const next = r(s, action)
    if (!Array.isArray(next)) return next
    const [ newState, command ] = next
    if (typeof command === 'function') {
      Promise.resolve(command()).then(a => store.dispatch(a))
    } else if (Array.isArray(command)) {
      const [ cmd, ...args ] = command
      Promise.resolve(cmd(...args)).then(a => store.dispatch(a))
    }
    return newState
  }, state)
}

This works with multiple reducers. All the async actions dispatch just as expected. You could take a similar approach with combineReducers as well; I just wasn't interested in doing it. The strange part is that you have to reduce your reducers after you create the store, and then use the replaceReducer function, like so

const store = createStore(state => state, {}, enhancer)
store.replaceReducer(reduceCommandReducers([...reducers], store))

This makes sense, because you have to give your reducer access to dispatch to let it produce more actions. That goes against a lot of the main ideas of redux, but it's inherent to the pattern.

All of this comes with the same caveats as redux-loop. Is it a good idea? Maybe. Does it put side effects in your reducers? Absotively. I just wanted to see if I could get a reasonable approach to async actions in an afternoon and learn a bit more about enhancers and the createStore function.

I have put up a repo that uses this function just to show that it works for a simple use case. It is probably broken. It probably doesn’t play well with other middlewares and reducers. It most likely introduces some strange race conditions. I did not test it and won’t. The reason is that I had already figured out how to do all of this much more simply.

Redux Side Effects Middleware in 12 Lines: Redux

I forgot to mention that my very first attempt at this was a middleware that put the commands in the action creators, not the reducer, which was much simpler and did not break the core tenets of redux. You basically dispatch an [action, cmd] pair instead of just an action to get the same effect.

const cmdMiddleware = store => next => action => {
  if (!Array.isArray(action)) return next(action)
  const [ act, command ] = action
  const res = next(act)
  if (typeof command === 'function') {
    Promise.resolve(command()).then(a => store.dispatch(a))
  } else if (Array.isArray(command)) {
    const [ cmd, ...args ] = command
    Promise.resolve(cmd(...args)).then(a => store.dispatch(a))
  }
  return res
}

This approach is probably better. You don't have to put side effects in reducers, and putting async bits in action creators isn't too far off from the thunks and sagas folks are already used to. Also, it is 12 lines, which means I wouldn't have had to change my post title. Three times.


Overcoming Bookmarking Syndrome in the New Year :: Permalink - 01 Jan 2016

I save a lot of bookmarks for tech. Like, a lot. I went through all of my saved tutorial bookmarks, YouTube 'Watch Later' videos, Udemy courses, Instapaper feed, and unread tech books on my Kindle and calculated that I have about 90 hours of learning content in the queue.

This list has been getting out of control for a while now. A Hacker News article gets saved in Instapaper, a two-hour conference talk posted on Twitter gets added to 'Watch Later', someone in Slack mentioning a new technology gets thrown into my haphazardly labeled TECH STUFF bookmark folder. It's really easy to do this, but the more I do it, the larger the queue gets, the more intimidating it gets and, sadly, the less likely I am to even try to whittle down this goliath.

My first action of the new year, this morning, was to catalog every nook and cranny of this mountain and filter out things that aren't relevant or that I probably never cared about in the first place (still not certain why I bookmarked a digital signal processing library in Haskell). After this, I set a goal: two months. Any longer and the amount of new tech I'd want to learn would clutter up my bookmarks again and create the same problem; any shorter and the effort per day would be too unpalatable.

In two months, doing about an hour and a half a day, I can get through learning Docker, figuring out org-mode, SICP, finally understanding what the heck machine learning is, two Elixir books, building an operating system in Rust, data science for Python, Rich Hickey's apparently amazing talk that I still haven't had time for, and about 20 other interesting articles, videos, and tutorials.

An hour and a half a day isn't easy. Some days you don't have that. Some days you don't feel like it. Sometimes you forget that you should only focus on this iliadic Bloomberg article and not look up and fret over the mountain you have set out to climb. I think I can handle it, though. I keep my Sundays free and can catch some of the slack from fall-behind days through the week. I'm setting a recurring event in my calendar and plan on batching out which articles and tutorials I'll get through each week. And I want to handle it, because it is important to me.

It is important because I really want to keep learning. I want to learn new languages. I want to learn how to build an AI that plays Street Fighter. I want to build my own digital synthesizer. I want to never again be bamboozled by what git command I want. I want to learn to make new things and how to distribute and deploy them. And I don’t want to get scared by the amount of stuff I want to learn.

I feel this 'bookmarking syndrome' puts too much confidence in a mythical 'one day' and is a poor coping mechanism for information overload. Maybe clearing out a bunch of bookmarks and a video playlist seems trivial, but for me, right now, it's important that I stop forming a habit of being so overwhelmed by all that I don't know that I don't even try to learn, and instead start clearing off my feeds, tinkering with new tools, and grokking in the new year.


Remote PHP Debugging in Vim :: Permalink - 19 Nov 2015

I love vim. Most people who use vim feel the same. It feels pure and simple. The commands make sense (after you learn them) and everything is configurable through plaintext files. It’s not for some people but for me it’s everything I need. Well, almost everything I need.

I tend to get envious of an IDE's integrated debugger when I really need one, so I went searching for the same functionality in my vim setup. I quickly found VDebug, which seems to be the only useful plugin for debugging in vim. I'm going to quickly walk through my setup for PHP debugging in vim (you can also use it for Ruby, Node, Perl, and Python, though I have not tried those yet).

First, you need to install the plugin and configure a few settings. I have recently switched to neovim and have replaced Vundle with Vim-Plug, so my setup looks like this.

Plug 'joonty/vdebug'
let g:vdebug_options = {}
let g:vdebug_options["port"] = 9000
let g:vdebug_options["break_on_open"] = 0

I found that I had to initialize the options dictionary, or I ran into problems assigning properties. I set the port to 9000 and turned off the break_on_open setting so that it doesn't break on the first line. I use Vagrant and a virtual machine for my PHP development, so I need to tell vim how to map from my host filesystem to the virtual machine's. I have a line later in my .nvimrc which sources a local config file, so I can use project-specific settings.

"Local Vimrc
if filereadable("./.lnvimrc")
    execute "source ./.lnvimrc"
endif

So in my PHP project I have a .lnvimrc that looks like this

let g:vdebug_options["path_maps"] = {
\    "/vagrant": "/Users/colby/Code/project-directory"
\ }

You will need to change the path to your project, of course. To be clear, that is the location of the project on my virtual machine on the left and on my host machine on the right. Okay, cool, that's all you need on the vim side of things. Now you need to set some things up on the PHP side.

So SSH into your VM and install XDebug. This is the PHP module that allows remote debugging. On an Ubuntu box, simply running sudo apt-get install php5-xdebug should be good enough; otherwise, the XDebug site has instructions for your particular distro. This should automatically create a file at "/etc/php5/conf.d/xdebug.ini", to which you will need to add the following.
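Mine looks something like this (the exact extension path and IP address will be different on your machine):

```ini
; the zend_extension line should already be there; your .so path will differ
zend_extension = /usr/lib/php5/20121212/
xdebug.remote_enable = 1
xdebug.remote_port = 9000
; your host machine's IP address, not this example one
xdebug.remote_host =
```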


The zend extension line should be autopopulated; don't copy the one above, because the location of your .so file may differ depending on your version of PHP. You need to put your host machine's IP address in the remote_host parameter; you can get it by running ifconfig (or ipconfig on Windows). Now you should be ready to debug!

You can press <F10> to toggle a breakpoint in your code and then press <F5> to start the debugger, which will wait 20 seconds for a connection. You will need to send a special flag in your request to tell PHP to start debugging: install the Chrome XDebug Helper extension to toggle this, or just add a query string parameter of XDEBUG_SESSION_START=1 to your request. After this, a debugging window should pop up in your editor; see the VDebug docs for instructions on how to step through the script and evaluate code. Happy debugging!


Make console apps with node :: Permalink - 24 Apr 2015

So you've got an idea for the next Ack, but you don't know how to write console applications! No worries: you can write console applications in JavaScript and publish them to npm pretty easily. I recently did this with a project called sfold, which lets you quickly scaffold files and folders for a project.


First you need to make an empty directory and run npm init. If you've never done this, it simply sets the directory up to hold a node project and creates a package.json file. For the rest of this tutorial, let's assume we want to make a console application called salute, which takes in a name and then prints "Hello, your_name" to the console.

Let's now make a main.js file, which will be the main file for our console app. These are its full contents.

#!/usr/bin/env node
'use strict';

console.log('Hello, ' + process.argv[2]);

The first line is a shebang, which says that the node program should be used to run this script. Then we just print the string "Hello, " plus the third argument. We want the third argument because, when this is run from the command line, the first argument is the path to the node executable and the second is the absolute path to your main.js file, so when calling salute colby, the third argument is actually 'colby'.
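A quick way to see this for yourself is to log the arguments (the exact paths will differ on your machine):

```javascript
#!/usr/bin/env node
// Running `salute colby` makes process.argv look roughly like:
//   [ '/usr/local/bin/node', '/home/you/salute/main.js', 'colby' ]
// so slicing off the first two entries leaves just the user's arguments.
var args = process.argv.slice(2);
console.log(args);
```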

## Running it

Now we need to edit the package.json a little. Delete the property called 'main' and add one called 'bin', which should look like this.

{
    "name": "salute",
    "bin": {
        "salute": "main.js"
    }
}

The bin attribute contains key-value pairs, where the key is the name of the command called from the command line, which we want to be salute. If you wanted to invoke your application by typing 'say_name', you would change salute to that here. The value is the location of the script to run, which for us is just main.js.

Now hop back into your terminal. To test this, we first need to link the package, which lets you run it locally: just run npm link. Your app should now be linked on your system, so you can run salute colby and it will print "Hello, colby" back to you. Great! Now we need to publish it.


## Publishing

If you haven't already, you need to go to the npm website and register an account. Then, from your terminal, you can log in with npm login and your credentials. After that, all you have to do is run npm publish and your application will be publicly available. All anyone has to do is run npm install --global salute (or whatever you named your app), and they can use your awesome command line application!


When v.1.0 :: Permalink - 05 Apr 2015

So I'm finally ready to announce that When v1.0 is ready! When was my first capstone at Nashville Software School, and it is a group activity planner. It's powered by Firebase with an Angular frontend, and you can go ahead and log in and use it here

Basically, the idea is that you log in and can create events for groups. You pick a name and a time range when the event could possibly happen, and the app generates a link. You send the link to whomever you want to attend. They put in their name and email and then mark their availability on a calendar widget. Then you, the creator of the event, can view the merged calendar of everyone's availability. If there is no possible way that every participant can attend the event, the app will sort the participants by busyness and find the optimal subset of participants.

Feel free to give it a spin and if you have any issues, you can submit an issue on the GitHub repo or put a comment below.


Functional JavaScript with Lodash :: Permalink - 08 Mar 2015

EDIT: I’ve redone my whole website since this post, so the game is no longer on here, but you can check it out by looking at the code.

I've been getting into breaking functions down into smaller chunks and writing more functional JavaScript. This was prompted by wanting to learn and utilize lodash better, as well as by teaching myself Python, which highly values collection manipulation and compact, functional methods. So, inspired by pythonic coding, I wrote the classic game Snake in JavaScript with lodash.

You can see the game here; I will be referencing it throughout the rest of the post. Now, this post title is a bit misleading: the code I wrote isn't super functional, but some parts of it do show how utilizing set theory can help write more concise code. Whatever, let's have a look.

First, a super simple example. When the user presses a key on the page with Snake, I want to act on it if it is an arrow key and return early otherwise. This is very easy in lodash.

$(window).on('keydown', function(e){
    //38, 37, 40, 39 : up, left, down, right
    if(!_.contains([37, 38, 39, 40], e.keyCode))
        return;
    //...
});

_.contains is a lodash function that takes an array and an item and returns true if the array contains the item. So if the array [37, 38, 39, 40] (the key codes of the arrow keys) does not contain the keyCode of the event, we return early. This is much simpler than checking each key with an equality test. Alright, a more complicated/cooler example now.

My game of Snake is based on a 16x16 grid. The snake and the apple are just collections of x,y coordinates. I also keep track of the head of the snake and the direction in a dir variable, which is an x,y vector: if the snake were moving up, dir would be [0,-1] (move 0 horizontally and 1 vertically upwards).

Whenever the snake moves, I have to see if the snake dies and, if so, restart the game. Here is the code for that.

var head = snake[snake.length-1],
    next =, i){ return el + dir[i]; });
if( _.any(next, function(val){ return val < 0 || val > 15; }) ||
    _.any(snake, function(val){ return _.isEqual(next, val); })){
    //Kill that snake
}

So first I get the next position the snake will move to by mapping the position of the head of the snake with the dir vector (in the map function, el is the coordinate and dir[i] is the corresponding vector component).

Next I check whether the snake is about to go off the map. The map is 16x16, with coordinates from 0 to 15 inclusive, so I use lodash's any method to see if either coordinate of next is greater than 15 or less than 0. _.any returns true if any item in the collection satisfies the condition in the function, which makes sense.

Then I have to find out if the snake has run into itself, which would also end the game. This is a bit more complicated, because I have to make sure the next coordinate is not equal to any of the snake's body parts' coordinates, but lodash makes this easy.

_.any(snake,function(val){ return _.isEqual(next, val) })

lodash's isEqual gives us a deep equals, so we can compare arrays, which is just awesome. With that known, it almost reads like English: if any item in snake isEqual to the next coordinate, return true. Okay, one more example.

function styleAt(x, y){
    if(_.any(snake, function(e){ return _.isEqual(e, [x,y]); }))
        return '#74D13D';
    if(_.isEqual(apple, [x,y]))
        return '#ED9898';
    return '#ccc';
}
This function colors each cell on the canvas. I pass in an x and y, and the function returns green if it is a snake cell, red if it is the apple cell, or grey if it is empty. The first if statement checks whether any element e in the snake isEqual to the [x,y] coordinate.

Hopefully these examples give you a few ideas on how you can use lodash and JavaScript's built-in collection methods like map and reduce. Set theory posits that, through simple functions like these, any one collection or set of data can be transformed into any other set of data. That makes them very useful, and I would encourage everyone to make use of them. It will make your life much easier.


Writing Node Scripts :: Permalink - 24 Feb 2015

You can write simple scripts in your package.json for your projects that run commands like jshint *.js or karma start, but you can also write your own JS scripts and run them with node, so that npm run new_post runs node ./scripts/new_post.js, for anything you need.

Here is the simple script I wrote to make a new post in Wintersmith.

var fs = require('fs'),
    prompt = require('prompt'),
    path = require('path'),
    changeCase = require('change-case');

prompt.start();

prompt.get(['title'], function(err, result) {
    var title = result.title,
        cleanTitle = changeCase.snake(title);

    fs.mkdirSync('contents/articles/' + cleanTitle);

    var content = '---\n';
    content += 'title: "' + title + '"\n';
    content += 'author: colby-dehart\n';
    content += 'template: article.jade\n';
    content += 'date: ' + printDate() + '\n';
    content += '---\n';

        'contents/articles/' + cleanTitle + '/index.markdown',
        content);
});

function printDate(){
    var d = new Date(),
        res = '';

    res += d.getFullYear() + '-';
    res += (d.getMonth()+1) + '-';
    res += d.getDate();

    return res;
}

I prompt the user (myself) for a title, then create a directory named with the title in snake case. Then I create some YAML front matter for the post and write it to a file in the new folder called index.markdown. I keep this script in a folder in my root named scripts, and then in my package.json I have.

"scripts": {
    "new_post": "node scripts/new_post"
}

Now, whenever I want to make a new post, I just run npm run new_post; I am prompted for a title, and all of the directory-making and front-matter generation is handled for me. This method is great for one-off tasks that wouldn't necessarily make sense in an automated task runner like gulp.
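If you're curious what change-case is doing for us here, the snake-casing for simple titles can be sketched like this (an approximation for illustration, not the library's actual implementation):

```javascript
// rough approximation of change-case's snake for plain English titles
function snakeCase(title) {
    return title
        .trim()
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, '_')  // runs of non-alphanumerics -> one underscore
        .replace(/^_+|_+$/g, '');     // strip leading/trailing underscores
}
// snakeCase('My Great Post') -> 'my_great_post'
```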