The Enthusiast Trench

25 Nov 2016

trench outside

The Enthusiast Trench is a metaphor for a topic/hobby/community/pastime that can’t easily be observed and understood by outsiders unless the curious party has a similar amount of interest or involvement.

There isn’t just one Enthusiast Trench. There are many trenches and they are easy to find if you are walking on the surface of the earth. It’s like the concept of rabbit holes, but “rabbit holing” is usually a pejorative about wasting time. Enthusiast Trenches are about interest, enthusiasm and the hidden nature of the payoff, which stays invisible until you spend enough time to appreciate it. At that point, you are in the trench and now you are unable to explain to outsiders what you have learned and witnessed in the Enthusiast Trench. The trench in this metaphor isn’t a pejorative. It isn’t related to dirt or digging. Enthusiast Trenches aren’t good or bad.

Anything whose fun can’t be explained is probably an Enthusiast Trench. When a person has to resort to metaphors, they are trying to think of things that surface people have seen and use those as stand-ins for things they have seen underground in the Trench.

trench inside

If you listened to someone talk about why they built a life-sized Lost in Space blinking computer replica, they might tell you “it was fun”, but if you probe “why” then they are going to have a rough time explaining it. The raw answer in their head is probably something like:

I didn’t think I’d be able to get the neon bulbs refresh time to be precise enough to look like the original Burrows props. But, after I did some tests and talked with some friends that I met (and have become good friends with since), I knew I could get the full scale version working. Then it was just a matter of time …

The Trench isn’t this project or this person. It’s the whole community of people doing projects like this. The Trench hides the real “why” behind a time and interest wall.

A community where mods, hacks or extensions are plentiful is a strong indicator of an Enthusiast Trench. The important thing about Enthusiast Trenches is not whether something is or isn’t one. It’s that it can’t be easily appreciated from the outside.

I can think of a lot of examples but some of the biggest trenches are the ones that are abstract and not physical. Photography is one, but it can be demonstrated physically (maybe not the process, but the product). The abstract trenches are really tricky. So, naturally, being a software person, I can think of a lot of software trenches.

Examples

A working irc client in minecraft using mods. minecraft irc

A raid-proof base in Rust (a survival/building game), designed in an external CAD program with mods. rust base

A development board with the PCB shape of a Lego minifig lego pcb

These examples pictured above are easily demonstrable because they are visual or physical. Abstract things are not.

Libs

This is true for software libraries in every language I can think of. Maybe I’m not in some of these communities. Maybe I haven’t been in these communities for a long time. I might ask the question “what are modern libraries to use in Java these days?” This is like calling down to someone in the trench after you have left. People are extending tunnels that can’t easily be explained.

Python Trench: “oh, nobody uses urllib2, everyone uses requests and there’s this great requests addon that makes uploads so easy, it really ..” (etc etc).

Maybe software libraries aren’t purely fun. But people can be enthusiastic about them because they are amazing in their eyes. If you are an outsider, you won’t be able to see the fun in the interior tunnels of their trench.

Fear of Missing Out

There is definitely a relation to the fear of missing out (FOMO). You could feel bad about not being in all trenches and many times I do. I don’t want to encourage FOMO. I don’t want to give FOMO any more fuel. I don’t really have a solution to FOMO and really that’s a different topic.

I follow the Cities: Skylines subreddit but I don’t play the game. I know people are having fun. I sort of understand the game mechanics and the game loop. But there are a lot of mods and deep mechanics I don’t get. This is true of a lot of games with “mods”. The community is digging its own trenches from within a trench by extending the game. But I really don’t grok the fun.

Sometimes, I just let the weight of the trenches flow over me and appreciate the complexity. Like looking at a landscape from really far away. It’s beautiful because it’s missing the details.

Pry and Slop

19 Apr 2016

If you are working on a gem that itself uses slop, then you might run into this error when adding pry, because the latest published pry gem uses slop 3.6 while you are probably using slop 4. Slop 4 and Slop 3 don’t have the same API.

require 'my_cool_gem_im_working_on'

Gem::ConflictError: Unable to activate my_cool_gem_im_working_on-0.2.0, 
because slop-3.6.0 conflicts with slop (~> 4.2)
from .../rubygems/specification.rb:2284:in `raise_if_conflicts'

On bundle install you’ll probably get a different error.

Resolving dependencies...
Bundler could not find compatible versions for gem "slop":
  In snapshot (Gemfile.lock):
    slop (= 4.2.1)

  In Gemfile:
    my_cool_gem_im_working_on was resolved to 0.2.0, which depends on
      slop (~> 4.2)

    pry (= 0.10.1) was resolved to 0.10.1, which depends on
      slop (~> 3.4)

Running `bundle update` will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.

This is true for pry 0.10.2 too. There are two options I’ve found that work:

Update Pry

tl;dr Do this

Install 0.10.3 or newer. Make sure your bundle is resolving to that exact version. This means

# your Gemfile
"pry", "= 0.10.3"

in your Gemfile. If you are working on a gem and don’t really have a Gemfile but have a gemspec file then put this dev dependency in your gemspec.

# your .gemspec file
spec.add_development_dependency "pry", '= 0.10.3'

Install From Master

You could also install pry from github master. This might show up as 0.10.3 depending on when you are reading this. Version numbers only increment when pry does a release. I found that pry git master did not have this issue.

Now the problem here is, if you are working on a gem yourself, you don’t have a Gemfile. As far as I know, you can’t install a gem from github source in a gemspec (which makes sense, because you are going to distribute a gem!). But perhaps you want pry temporarily in your gemspec like this:

# your_gem.gemspec
spec.add_development_dependency "pry", '= 0.10.3'

Here’s how you can install a gem from source in a gemspec temporarily.

# do what you want here but I clone into a global place called ~/src/vendor
mkdir -p ~/src/vendor
cd ~/src/vendor
git clone https://github.com/pry/pry
cd pry
gem build pry.gemspec
# it will spit out a pry gem with a version on it
gem install ./pry-0.10.3.gem  # or whatever `.gem` file is created

Now we have pry 0.10.3. Bundler doesn’t care that it came from pry master. So when it picks up on the spec.add_development_dependency it will install the version you already have. BUT BIG PROBLEM: you probably don’t want to commit this, because people will get the same error you got on bundle install if that version doesn’t resolve. As far as I can tell, this pry version works with slop, so perhaps you just want to use 0.10.3 and be done with this. I just wanted to illustrate how you can manipulate bundler.

Pry Vendored Slop

The reason this is happening is the Slop namespace. Pry fixed this in a commit associated with that issue. It’s fixed because they inlined the gem as Pry::Slop, so now Slop (your version) doesn’t conflict or get activated by pry.
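To picture the fix, the vendoring looks roughly like this (a sketch of the idea, not pry’s actual source):

# Sketch: pry carries its own renamed copy of slop 3.x under its namespace.
module Pry
  class Slop
    # ... vendored slop 3.x code lives here as Pry::Slop ...
  end
end

# Your gem is then free to activate and use the real slop 4:
require 'slop'
Slop.parse(ARGV)  # top-level Slop is yours; pry talks to Pry::Slop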

Hope this saves someone’s day! :)

Ruby Slop Example

06 Apr 2016

Slop 4

I had an older post about ruby and slop but that’s with Slop 3 which is basically locked to Ruby 1.9. No doubt, this post will bitrot too so please pay attention to the post date. The current ruby is about 2.3.0, slop 4.3 is current, it’s 2016 and Trump is going to be our next president.

It’s ok that you need help

I think the most confusing thing about slop is that it has great examples and documentation, but when you try to break it apart in a real app with small methods and single responsibilities, some things get weird. I think this is because of exception handling as control flow, but I’m not sure enough to say slop is doing something wrong that makes this weird. You’ll see what I mean in the example below.

I refer back to MY OWN BLOG quite often for slop examples so it’s ok that you need help.

Slop’s example

Let’s look at the example from the README.

opts = Slop.parse do |o|
  o.string '-h', '--host', 'a hostname'
  o.integer '--port', 'custom port', default: 80
  o.bool '-v', '--verbose', 'enable verbose mode'
  o.bool '-q', '--quiet', 'suppress output (quiet mode)'
  o.bool '-c', '--check-ssl-certificate', 'check SSL certificate for host'
  o.on '--version', 'print the version' do
    puts Slop::VERSION
    exit
  end
end

I disagree with -h here for hosts. I think -h should always be help, especially when switching contexts. When I switch to java or node or go or python, I have no idea what those communities’ standards are. I rely on what unix expects: dash aitch. I also disagree with this example because figuring out how to handle -h for help is the most confusing thing about using slop: you have to use exceptions as flow control (sort of an anti-pattern).

A real example

Let’s write a wrapper program called What the Fi? for our Internet connection. When Internet things get wonky, there are a few sites and tools I use to answer “is it just me?”. This wrapper will combine all those things into a CLI. We’ll use Slop 4.3 to parse the CLI options. We’ll even write tests!

The main structure of this program is subcommand based. It’s a particular type of CLI example similar to git where there are branches of main commands. After the main branching logic, you could have options on each of the subcommands but I’ll leave that as an exercise to you. I’d also recommend thor if you want to build a complicated CLI with subcommands. What I mean to say is, the following is just a CLI example that happens to follow this subcommand pattern.

Here’s how you use it.

# paste the script below into a file named whatthefi
# put it in ~/bin if you want, or anywhere in your $PATH
wget -O ~/bin/whatthefi https://raw.githubusercontent.com/squarism/whatthefi/master/whatthefi

# make it executable
chmod u+x ~/bin/whatthefi
whatthefi -h

The relevant Slop options are in this bit.

require 'slop'
require 'net/http'
require 'json'

class CLI

  # set up defaults in its own method
  def cli_flags
    options = Slop::Options.new
    options.banner =  "usage: tubes [options] ..."
    options.separator ""
    options.separator "Options:"
    options.boolean "-i", "--ip", "What is my ip?"
    options.string  "-p", "--port", "Can I get to a port?"
    options.string  "-d", "--down", "Is this URL down for everyone or just me?"

    options
  end

  def parse_arguments(command_line_options, parser)
    begin
      # slop has the advantage over optparse that it can do strings and not just ARGV
      result = parser.parse command_line_options
      result.to_hash

    # Very important to not bury this begin/rescue logic in another method
    # otherwise it will be difficult to check to see if -h or --help was passed
    # in this case -h acts as an unknown option as long as we don't define it
    # in cli_flags.
    rescue Slop::UnknownOption
      # print help
      puts cli_flags
      exit
      # If, for your program, you can't exit here, then reraise Slop::UnknownOption
      # raise a custom exception, push the rescue up to main or track that "help was invoked"
    end
  end

Notice that the rescue Slop::UnknownOption needed for Slop parsing is inside a method called parse_arguments. On many tools/projects I’ve worked on, this isn’t enough to handle all cases. In that situation, I’ll roll a custom error class and raise that instead. You could also skip the begin/rescue here and handle it higher up in main. If you find yourself losing data/context, it means you are at the wrong level of method calls. The slop examples don’t mention this explicitly, but I find this way to be the most unixy: it prints help on an unknown option, and it prints help if you don’t define -h or --help. If you have a -h option you want to use for something else, then use the on '--help' example Slop mentions.

o.on '--help' do
  puts o
  exit
end
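And if exiting inside parse_arguments is too aggressive for your program, the custom error class approach I mentioned looks something like this (a sketch; HelpWanted is a name I made up, not part of Slop):

# Sketch: surface "help was requested" to main instead of exiting deep down.
class HelpWanted < StandardError; end

def parse_arguments(command_line_options, parser)
  parser.parse(command_line_options).to_hash
rescue Slop::UnknownOption
  raise HelpWanted
end

def main(command_line_options = ARGV)
  parser = Slop::Parser.new(cli_flags)
  arguments = parse_arguments(command_line_options, parser)
  # ... dispatch on arguments ...
rescue HelpWanted
  puts cli_flags   # main decides how to print help and whether to exit
  exit 1
end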

Full Example

I hesitate to post the whole script here because it is very long. But here it is anyway. If you prefer to peruse a git repo like a sane and reasonable person, then here it is.

#!/usr/bin/env ruby

require 'slop'
require 'net/http'
require 'json'

class CLI

  # set up defaults in its own method
  def cli_flags
    options = Slop::Options.new
    options.banner =  "usage: tubes [options] ..."
    options.separator ""
    options.separator "Options:"
    options.boolean "-i", "--ip", "What is my ip?"
    options.string  "-p", "--port", "Can I get to a port?"
    options.string  "-d", "--down", "Is this URL down for everyone or just me?"

    options
  end

  def parse_arguments(command_line_options, parser)
    begin
      # slop has the advantage over optparse that it can do strings and not just ARGV
      result = parser.parse command_line_options
      result.to_hash

    # Very important to not bury this begin/rescue logic in another method
    # otherwise it will be difficult to check to see if -h or --help was passed
    # in this case -h acts as an unknown option as long as we don't define it
    # in cli_flags.
    rescue Slop::UnknownOption
      # print help
      puts cli_flags
      exit
      # If, for your program, you can't exit here, then reraise Slop::UnknownOption
      # raise a custom exception, push the rescue up to main or track that "help was invoked"
    end
  end

  def flags
    [:ip, :port, :down]
  end

  def flags_error
    switches = flags.collect {|f| "--#{f}"}
    puts cli_flags
    puts
    abort "please set one of #{switches}"
  end

  # In a cli app where you essentially have subcommands like git
  # this method makes sure that one of the main "modes" is set.
  # Something like:
  #   person --run
  #   person --walk
  #   person --stop
  def number_of_required_flags_set(arguments)
    # --ip isn't required
    minimum_flags = flags - [:ip]
    valid_flags = minimum_flags.collect {|a| arguments.fetch(a) }.compact
    valid_flags.count
  end

  # slop does not take on the job of requiring arguments to be set
  # this method represents our validation rules
  def validate_arguments(arguments)
    # --ip is false by default because it's a Slop boolean
    if number_of_required_flags_set(arguments) < 1 && !arguments.fetch(:ip)
      flags_error
    end
  end

  def set?(arguments, flag)
    !arguments.fetch(flag).nil?
  end

  # main style entry point
  def main(command_line_options=ARGV)
    parser = Slop::Parser.new cli_flags
    arguments = parse_arguments(command_line_options, parser)
    validate_arguments arguments

    # --ip is a boolean, it is set to false even if left off by slop
    if arguments.fetch(:ip)
      puts what_is_my_ip
    elsif set?(arguments, :port)
      puts portquiz arguments[:port]
    elsif set?(arguments, :down)
      puts is_it_up arguments[:down]
    end
  end

  def http_get(url)
    response = nil
    begin
      response = Net::HTTP.get(URI(url))
    rescue SocketError, Net::HTTPBadResponse, Net::HTTPHeaderSyntaxError, Net::ProtocolError => e
      puts e.inspect
    end
    response
  end

  # outside of scope but you can see these are action methods here
  # these could easily be broken out to classes
  def what_is_my_ip
    response = http_get("https://httpbin.org/ip")
    "Your IP is #{JSON.parse(response)['origin']}"
  end

  def portquiz(port)
    response = http_get("http://portquiz.net:#{port}")
    if response
      "I can get to port #{port} on the Internet.  :)"
    else
      "I can't reach port #{port} on the Internet.  :("
    end
  end

  def is_it_up(url)
    response = http_get("http://www.downforeveryoneorjustme.com/#{url}")

    # lazy html parsing to avoid nokogiri
    html_match = response.match(/class="domain"\>.*\<\/a\>(.*)\./)
    if html_match[1].include? "is up"
      "#{url} seems up.  :)"
    elsif html_match[1].include? "looks down"
      "#{url} seems down.  :("
    end
  end

end

# this kind of sucks, you shouldn't ever change your code to accommodate tests
# but this is a main
CLI.new.main if !defined?(RSpec)

If you look at main and validate_arguments, you’ll see that --ip being a boolean and not a string caused special logic to spew everywhere. It’s because it’s a switch and not a parameter with a string value (it’s not --ip=1.2.3.4, it’s just --ip or nothing). Because of this, we have to treat this option differently. Sometimes we need to know if it’s been set, but because Slop will set an unset boolean to false, we can’t check for nil like we do for all the other flags.
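You can see the difference quickly in pry (output here assumes Slop 4’s defaults, which is what the logic above relies on):

require 'slop'

options = Slop::Options.new
options.boolean "-i", "--ip", "What is my ip?"
options.string  "-d", "--down", "Is this URL down?"

# parse an empty command line
Slop::Parser.new(options).parse([]).to_hash
# => {:ip=>false, :down=>nil}
# booleans default to false, strings default to nil,
# so nil-checking only works for the string flags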

I hope this post helps the googlers write their CLIs. My older post about slop had bit-rotted and at the same time gotten high up on the google rankings. I hope I have avenged myself (against myself?). All hail the bit rot.

Serverspec and Packer

15 Mar 2016

Thoughtbot has an excellent and much desired article on getting Docker + Rspec + Serverspec wired up, but I couldn’t find anything about images generated from Packer. Packer generates its own images, so we can’t just build_from_dir('.'). Our images are already built at that point. We’re using Packer to run Chef and other things beyond what vanilla Docker can do.

The fix is really simple after I was poking around in pry looking at the serverspec API.

First of all, what am I even talking about? Serverspec is like rspec for your server. It has matchers and objects like

describe file('/etc/passwd') do
  it { should exist }
end

describe file('/var/run/unicorn.sock') do
  it { should be_socket }
end

So although we have application tests of varying styles and application monitors, serverspec allows us to test our server just like an integration test before we deploy. I had previously tried to go down this route with test kitchen to test our chef recipes but it was sort of picky about paths. Additionally, going with serverspec and docker doesn’t even require Chef. Chef has already been run at this point! What this means is fast tests. Just booting a docker image and running a command is fast.

# single test
$ time bundle exec rspec
1.415 total

Nice!

So how does this work? Well, like I said, the thoughtbot article is really good but I wanted to add to the ’net about packer specifically. The critical piece to make Serverspec work with a Docker image created from Packer is in your spec itself (spec/yer_image_name/yer_image_name_spec.rb).

# spec_helper and a lot of spec/ came from `serverspec-init`

require 'spec_helper'
require "docker"


describe "yer_packer_image" do

  before(:all) do
    image = Docker::Image.get("yer_packer_image")

    set :os, family: :debian   # this can be set per spec
    # describe package('httpd'), :if => os[:family] == 'redhat' do
    #   it { should be_installed }
    # end

    set :backend, :docker
    set :docker_image, image.id
  end

  it "has bash installed" do
    expect(package("bash")).to be_installed
  end

end

See that image = Docker::Image.get("yer_packer_image") bit in the before block? This is the difference between building an image (what the thoughtbot article does) and running an existing image. Since packer builds the image, we can just reuse the one we have in our local store. Then later, set :docker_image, image.id sets the image to use during the test. It knows about docker because of require "docker" (the docker-api gem). I’ll mention what versions of these gems I’m using at the time of this post since this might bit-rot.

docker-api (1.26.2)
rspec (3.4.0)
serverspec (2.31.0)
specinfra (2.53.1)  # from serverspec

An idea that didn’t work

Ok this is cool! How about we have packer run our tests after the packer build? Unfortunately, this is mostly useless. :( The tests will run, but a failure won’t do anything (it won’t fail the packer build).

Here’s the post-processor bit of our packer config. It just tells Packer to do things after it’s done building. The first bit is tag our image so we can push it out to our registry.

  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "your-company/yer_packer_image",
        "tag": "latest",
        "force": true
      },
      {
        "type": "shell-local",
        "inline": ["bundle exec rspec ../../spec/yer_packer_image"],
        "_useless": "don't do this"
      }
    ]
  ]

The path structure above is arbitrary. We have a project we’re currently working on that I’ll explain in another blog post or talk. The only specific thing about this file structure is that typically you’d want to do something like require 'spec_helper', but if you are building an image from a subdirectory and then running tests from another nested subdirectory, then you’ll need require_relative 'spec_helper'. I actually don’t know why this isn’t the default anyway.
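Concretely, the top of the spec ends up like this (paths here are illustrative, matching the layout above):

# spec/yer_packer_image/yer_packer_image_spec.rb
#
# require 'spec_helper' searches the load path, which depends on where
# rspec was invoked from; require_relative resolves against this file.
require_relative '../spec_helper'
require "docker"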

But like I said, running tests with Packer as a post processor doesn’t do anything. You could run it with PACKER_DEBUG or something but I don’t like any of that. I’ll be following up with a more complete workflow as we figure this out. So you don’t need to do this last bit with the post-processors. I just wanted to leave a breadcrumb for myself later.

Sidekiq Rate Limiting

12 Feb 2016

birds_on_a_wire

Sidekiq Enterprise has a rate limiting feature. Note that this is not throttling. The perfect use case is the exact one that’s mentioned in the wiki: limit outbound connections to an API. We had a need for this between two of our own services. I spiked a little bit and I thought the behavior was interesting so I thought I’d share.
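From memory, the limiter usage looks roughly like this (a sketch from my reading of the wiki; the option names may have drifted and PartnerApi is a made-up stand-in for the external service):

# Sketch: a concurrent limiter guarding calls to another service.
class SyncWorker
  include Sidekiq::Worker

  LIMITER = Sidekiq::Limiter.concurrent(:partner_api, 10, wait_timeout: 5)

  def perform(order_id)
    LIMITER.within_limit do
      # at most 10 of these blocks run at once, across all processes
      PartnerApi.push_order(order_id)
    end
  end
end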


Rails Style Route Parsing

20 Jan 2016

At one point a while back, I had a config file outside a rails app and what I wanted was something like this:

Given this mapping definition /order/:meal/:cheese, how can I turn these strings into parsed hashes? /order/hotdog/cheddar -> {meal:'hotdog', cheese:'cheddar'}

I knew that something in Rails was doing this. I just didn’t know what. I also didn’t know what assumptions or abstraction level it was working at.

Journey into Journey

The gem that handles parsing the routes file and creating a tree is journey. Journey used to be (years ago) a separate gem but is now integrated into action_dispatch, which itself is part of actionpack. So to install it you need to gem install actionpack (or use bundler), but to include it in your program you need to require 'action_dispatch/journey'. If you have any rails 4+ gem installed on your system, you don’t need to install anything. Action pack comes with rails.

require 'action_dispatch/journey'

# reorganize pattern matches into hashes
def hashify_match matches
  h = {}
  matches.names.each_with_index do |key, i|
    h[key.to_sym] = matches.captures[i]
  end
  h
end

pattern = ActionDispatch::Journey::Path::Pattern.from_string '/order/(:meal(/:cheese))'
matches = pattern.match '/order/hamburger/american'
puts hashify_match matches

matches = pattern.match '/order/hotdog/cheddar'
puts hashify_match matches

# {:meal=>"hamburger", :cheese=>"american"}
# {:meal=>"hotdog", :cheese=>"cheddar"}

We have to have hashify_match reorganize our objects because this is what pattern.match returns:

irb(main):001:0> matches = pattern.match '/order/hamburger/american'
=> #<ActionDispatch::Journey::Path::Pattern::MatchData:0x007f9d4d527aa0
 @match=#<MatchData "/order/hamburger/american" 1:"hamburger" 2:"american">,
 @names=["meal", "cheese"],
 @offsets=[0, 0, 0]>

So we have to turn these ordered matches into a hash.

irb(main):001:0> matches.names
=> ["meal", "cheese"]

irb(main):002:0> matches.captures
=> ["hamburger", "american"]

We could also zip the results together but we wouldn’t have symbolized keys.

irb(main):001:0> Hash[matches.names.zip(matches.captures)]
=> {"meal"=>"hamburger", "cheese"=>"american"}

You could symbolize them easily within a rails app or by including active support.

require 'active_support'
require 'active_support/core_ext'
Hash[matches.names.zip(matches.captures)].symbolize_keys

How to install a specific version of something in Homebrew

06 Jan 2016

cd /usr/local

# find some sha you want, I want mysql 5.6.26
git log -S'5.6.26' Library/Formula/mysql.rb
git checkout -b mysql-5.6.26 93a54a

brew install mysql
# oh no!  the CDN doesn't have 5.6.26 anymore!
# Homebrew pukes with a 404 error.  :(  :(  :(

# make homebrew's cache folder
mkdir -p ~/Library/Caches/Homebrew

# google for the tarball (the url doesn't matter as long as you trust it)
wget http://pkgs.fedoraproject.org/repo/pkgs/community-mysql/mysql-5.6.26.tar.gz/733e1817c88c16fb193176e76f5b818f/mysql-5.6.26.tar.gz -O ~/Library/Caches/Homebrew/mysql-5.6.26.tar.gz

brew install mysql
# This installs older versions of dependencies.
# You probably don't want to install old versions just for fun.
# Like, this will install some version of cmake for mysql 5.6.26 but
# idk what happens when you flip back to master and install
# something else that requires cmake.


# You can delete the branch when it's done (switch back to master first).
cd /usr/local
git checkout master
git branch -d mysql-5.6.26

# I assume you can use a newer version of cmake (or other deps)
# after the binary is built but I don't know.

Great dev log with vim and iTerm

13 Nov 2015

What did you do this week? Um. Uh. (remembering intensifies)

I have this problem a lot at work. I’m cranking on stuff, figuring things out day to day but if someone asks me what I’ve done, I have no clue. Being put on the spot sucks. When something sucks, it’s a problem. Put it on the tool sharpening list.

So what can we do? It’s pretty easy, just keep a diary. But there are some specifics that I’ve worked out because I’ve had Lists of Lists™ before. I’ve learned that Lists of Lists™ do not work.

I want to:

  • Keep it simple.
  • Have it be easy to use, non disruptive.
  • Actually use it. Something that I’m not going to hate, throw away or give up on.

A Nice Setup

iTerm allows you to launch a terminal with a global hot key and run a command. What’s better is that it stays out of your way when you click away.

iTerm Setup

Configure a new profile in iTerm. Set a command to run vim.

iterm_profile_creation

Make the profile pop up with a hot key.

iterm_hotkey

Voilà!

iterm_hotkey

Combine this with a quick vim script to insert the date headers (including knowing what weekends are) and it’s pretty nice.

Vim Setup

(completely optional)

Here’s a shortcut that will add a header like # 1999-99-99 at the top of the file. Assign it to a shortcut and hit that at the beginning of the day. It requires a sister ruby script to figure out what ‘tomorrow’ means. It’s aware of the weekend.

#!/usr/bin/env ruby
# put this in your path like ~/bin/tomorrow.rb or something
# make it executable: chmod u+x ~/bin/tomorrow.rb

require 'date'

# pass a date string like 2020-12-25 to this script and it will increment it
# by a day, or by three days when today is Monday (skipping the weekend)
starting_date = nil
if ARGV[0]
  starting_date = ARGV[0]
else
  starting_date = Date.today.to_s
end

original_date = Date.parse starting_date

# if today is Monday, increment by 3 because of the weekend
if (Date.today.wday == 1)
  puts original_date + 3
else
  puts original_date + 1
end

Vim shortcut

" Increment the date from yesterday.  Used for my development log (journal).
" If today is Monday, this should jump ahead three days.
function! NextDate()
  " get top line number
  let topline = getline(1)
  " trim the '# ' from a markdown header.  '# 2015-02-12' becomes '2015-02-12'
  let trimmed_line = substitute(topline, '\v^\#\s+(.*)', '\1', '')

  silent let next_date = system("tomorrow.rb " . trimmed_line)
  " trim newline from output
  let trimmed_next_date = substitute(next_date, '\n\+$', '', '')
  call append(0, [ ("# " . trimmed_next_date), "" ] )
  call append(1, "")
  execute "normal! ggjo"
endfunction
map <leader>N  :call NextDate()<CR>

Awesome Things This Does

No more remembering during standups

During standups or retros, I can convert this quickly into a summary:

  • What I worked on
  • What I’m waiting on

Whatever your format is, your log is what you did and you won’t forget stuff.

As a bonus, after using this log for a while, it can also show you how hard you’ve been working and keep you from being too hard on yourself. That thing you tried really hard on but forgot where you left it? Maybe you chunked it as a failure when it was not a failure. Maybe you left yourself enough detail to show:

I could keep going on this experiment but the point was proven. I ran into a limitation beyond my control. I tried many different options and approaches but the technology isn’t ready or something else is up.

As time goes on, this chunking effect is more dramatic. Wait until you forget how hard you tried.

No more forgetting that cool command you typed

Sometimes I browse my history to find that curl that worked. But which one? In my dev log, I’ll just paste in a command or the thing that actually worked. Maybe I was debugging something because I forgot something silly. Writing that down is like a tiny “hurrah” but also a breadcrumb to future me about what the hold-up was.

Weekend Me

I don’t think about work on the weekend. Monday me hates this. With a dev log, this isn’t a problem. I just review Friday and that’s enough to jog my memory.

Advice Time

I’ve been using this for a year and it’s been amazing. I’ve done it everyday. So let me hand out some advice.

  • Don’t create multiple files. If you work in multiple teams, don’t try to organize your thoughts into teams. Just split it up by day. Embrace the chaos. This is quick. Hit key, brain dump, hit key, keep working. If you hate this and it keeps you from logging, then change this advice. I think most people would hate having to categorize work into separate files.

  • Don’t worry about tagging or searching. I only tag things like TIL so they jump out, and not even for retrieval. Text search works fine. I have 7500 lines from 1.5 years of content and I can find anything just with vim text search.

  • Make it yours. If you don’t want to call it dev_log.md, call it something else.

  • Whatever you hate about this blog post, change it. The real idea is: solve a problem for you. In my case, and for most people on my team, that problem has been remembering what you did and remembering your wins.

Setting timezone with homebrew installed mysql

01 Nov 2015

I couldn’t find this information anywhere so I’m writing it. If you installed mysql (and I mean MariaDB) through homebrew then you might find some trouble when trying to set your timezone to UTC or GMT.

Edit /usr/local/etc/my.cnf. Add a new section at the bottom:

[mysqld]
default-time-zone='+00:00'

Restart mysql.

> select @@global.time_zone;
+--------------------+
| @@global.time_zone |
+--------------------+
| +00:00             |
+--------------------+

Alternatively, you could break this out into a file called /usr/local/etc/my.cnf.d/gmt.cnf

[mysqld]
default-time-zone='+00:00'

The plus sign is critical. If you find any tips/tricks related to this, send them to me on twitter. Contact information is on the sidebar.

Encoding in Ruby and Everywhere

08 Jul 2015

I gave a lightning talk at pdxruby recently. I was trying to explain the gotchas, but I was doing live coding in pry and there wasn’t enough time for me to figure out some nice succinct take-aways. My bigger point was something like “our industry seems to keep forgetting certain things”. This is not to say Yer Doin It Wrong. I just think it’s interesting that some things keep coming up because they are very rare.

  • How to generate an SSL cert
  • Encoding and utf-8
  • Database salts
  • HTTP and RFCs - I personally have forgotten or misremembered something

Even if you’ve done it many times, you haven’t done it recently (like just now) so we all forget. This theme is interesting! Different teams, people, states and projects … some common patterns maybe? Many times with these hard subjects, I often come across as “wrong!” and that’s not what I’m trying to do. I just want to point out where the key things are so that you can remember where to look to google some more or trigger your memory.

So, this encoding thing. Ruby 2.x changed lots of things. First, your source file is utf-8. Your strings are utf-8 by default. There’s more to it than that but it’s all pretty much utf-8 now. There’s also no iconv in stdlib anymore. It’s just .encode off the string class (we’ll get to that in a second).

Your Encoding Friends

Open up pry (if you don’t have pry, gem install pry). It’s all you’ll need. If you do ls Encoding, you’ll see a list of encodings that Ruby supports. You get this for free in every process. You don’t need to do anything special. You’ll notice that "".encoding is => #<Encoding:UTF-8>. That inspected Encoding:UTF-8 bit is coming from that list.

pry> ls Encoding
constants: ANSI_X3_4_1968  ASCII  ASCII_8BIT  BIG5  Big5  EMACS_MULE
  EUC_CN  EUC_JIS_2004  EUC_JISX0213  Emacs_Mule  ISO8859_6  ISO8859_7
  ISO8859_8  ISO8859_9  ISO_2022_JP  SJIS_DoCoMo  SJIS_KDDI
  SJIS_SOFTBANK  SJIS_SoftBank  ...  UTF_8

There are also shorthand versions of these encoding names that you can use, but I like using the constants where I can because they are namespaced under Encoding, which is more intention-revealing. So let’s write a file as utf-8 so I can explain the shorthand thing.

File.open('/tmp/awesome.txt', 'w:utf-8') {|file| file.puts "awesome" }

This is pretty straight-forward. It creates a file with awesome in it, encoded in utf-8.

$ cat /tmp/awesome.txt
awesome
Now write the same thing as iso-8859-1:

File.open('/tmp/awesome.txt', 'w:iso-8859-1') {|file| file.puts "awesome" }

You can’t say ‘w:latin-1’ here. Latin-1 is another name for iso-8859-1, but the name latin-1 doesn’t work in the file writing mode.

You can write the file in a few different encodings and the bytes come out exactly the same. There’s a historical reason for this. EBCDIC begat ASCII begat ANSI (sort of) begat Unicode. All along the way, the lowest bytes stayed backwards compatible.

# utf-8 written
$ xxd /tmp/awesome.txt
00000000: 6177 6573 6f6d 650a                      awesome.

# latin-1(iso-8859-1) written
$ xxd /tmp/awesome.txt
00000000: 6177 6573 6f6d 650a                      awesome.

# ascii written
$ xxd /tmp/awesome.txt
00000000: 6177 6573 6f6d 650a                      awesome.

This is also why English speaking programmers are surprised by encoding errors: you can get away with a lot by sticking with these low order bytes and remaining ignorant (slightly strong word, but intended in its opportunity sense). It’s only when “weird” data comes in that we have to think about encoding, right?

Here’s another friend. If you do Encoding::BINARY.to_s you’ll get ‘ASCII-8BIT’. This is the same as saying “I don’t know”. It’s not the same as Encoding::ASCII. You can tell because .to_s says ‘US-ASCII’. So .to_s can be handy here.
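For example:

Encoding::BINARY.to_s               # => "ASCII-8BIT"
Encoding::ASCII.to_s                # => "US-ASCII"
Encoding::BINARY == Encoding::ASCII # => false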

There is a method called .encode. This takes the place of Iconv in the stdlib. It works just like the unix command iconv: it takes bytes in one encoding and converts them into another. This isn’t the same as .force_encoding, as we’ll see in a second.

Now this is where culture/language trickiness comes in.

Lucky

All these things are the same bytes because we (sort of) got lucky on our history, where ASCII came from (A is for American) and kind of how computer keyboards and alphabets work. Someone had a good counter argument to this statement at the meetup and I agree. What I mean is, some of this is a bit culturally sensitive and complicated.

What I really mean is:

  • English works well on a keyboard
  • Keyboards are the fastest input device
  • ASCII was invented by English speakers
  • UTF-8 is extended ASCII
  • English was invented before the computer

So, world, I’m sorry (empathy not apology).

What Encoding Is

Take this string "\x20". It’s a space character. If you look at man ascii, you’ll see that hex 20 (decimal 32) is “ “ in ASCII. You might recognize this from %20 in URLs. The \x bit means hex, and URL encoding is hex too, so the 20 in %20 is the same 20. If I pick something higher in the codepage like "\xC3", things are going to get weird. “\xC3” by itself isn’t valid utf-8. And that’s fine until I try to do something with it. If I print it, it’s nothing. Puts just gives me the normal newline.

puts "\xC3"

=> nil
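As an aside, you can check the hex/decimal correspondence for "\x20" right in pry:

"\x20"         # => " "
" ".bytes      # => [32]
0x20           # => 32, hex 20 is decimal 32
"\x20" == " "  # => true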

If I combine it with \x20, that’s not valid. ASCII space is at the top of the UTF-8 codepage; I can’t just make up byte sequences. Or maybe I can and get lucky. But in this case, it prints the unknown utf-8 symbol: <?>. If I try to encode it instead, a different error message shows up:

pry> "\xC3".encode('us-ascii')
Encoding::InvalidByteSequenceError: incomplete "\xC3" on UTF-8
from (pry):107:in `encode'

pry> "\xC3\x20".encode('us-ascii')
Encoding::InvalidByteSequenceError: "\xC3" followed by " " on UTF-8
from (pry):108:in `encode'

Not that this can’t be done at all. If I use something that definitely fits in the ascii range (low bytes), everything is fine by implicit coincidence.

pry> "\x20".encode('us-ascii')
=> " "

So what’s going on? Let’s look at this new string “YAY”.

"YAY".bytes
=> [89, 65, 89]

So 89 is what in hex … um … piece of paper

89.to_s(16)
=> "59"

Right. So “YAY” is

"YAY".bytes.collect {|b| b.to_s(16) }
=> ["59", "41", "59"]

We can take this and get

"\x59\x41\x59"
=> "YAY"

"\x59\x41\x59".encoding
=> "\x59\x41\x59"

Because ASCII fits inside the beginning of utf-8.

"\x59\x41\x59".encode('ascii')
=> "YAY"
"\x59\x41\x59".encode('ascii').bytes
=> [89, 65, 89]
"\x59\x41\x59".encode('ascii').force_encoding('utf-8').bytes
=> [89, 65, 89]

We could do this all day and not flip a bit. It’s just not modifying the byte sequence and that’s really what the data is.

So that’s the happy path with ASCII. It just sort of luckily works because of history and other complicated things. The less happy path involves two things: what happens when Ruby loses control of the encoding it knows about, and what happens when non-ASCII data starts showing up.

This is the Korean word for wizard. I don’t know Korean btw. It’s just an easy alphabet and I think it’s neat.

wizard = "마법사"
wizard.bytes
=> [235, 167, 136, 235, 178, 149, 236, 130, 172]

Nothing in .bytes is going to be over 255 because bytes are 8-bit. You’ll never, ever see .bytes return anything over 255. So what’s the deal? Why are there more bytes there? Is it because Korean has more letters inside each of those characters? No, that guess doesn’t make sense when I do this with a single “character”:

"ㅅ".bytes
=> [227, 133, 133]

It’s because utf-8 is a variable-width encoding. ASCII fits in 1 byte. If we encode this to Encoding::UTF_16, it has four bytes. What we think of as a letter is irrelevant. It’s bytes and codepoints in an encoding scheme. ASCII/English just happens to be lucky at the top of the number chart.
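A quick way to see the variable width (all in utf-8):

"a".bytes    # => [97]                   1 byte, the ASCII range
"é".bytes    # => [195, 169]             2 bytes
"ㅅ".bytes   # => [227, 133, 133]        3 bytes
"😀".bytes   # => [240, 159, 152, 128]   4 bytes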

So let’s turn that single character into utf-16 (Java’s default).

"ㅅ".encode('utf-16').bytes
=> [254, 255, 49, 69]

But that doesn’t mean we should. And if we force this the wrong way, we’ll have a bad time. Ruby won’t change the bytes if you do .force_encoding. But it will if you .encode, as you can see. Which one you want depends on what you are trying to do.

Next, I’m going to show what you can do with all of this.

Data Corruption

Let’s take a more practical example. Let’s say a file was written in the wrong encoding. This could be a database backup file that you really care about. You could use iconv but let’s play in pry because it’s more fun and interactive.

Let’s set up the failure scenario.

File.open("/tmp/mysql-backup.sql", "w:UTF-8") {|file| file.puts wizard.force_encoding('iso-8859-1') }
import = File.open("/tmp/exported_garbage.txt", encoding:Encoding::ISO_8859_1).readlines.first
=> "\xC3\xAB\xC2\xB0\xC2\x94\xC3\xAB\xC2\x82\xC2\x98\xC3\xAB\xC2\x82\xC2\x98\n"

If you just try to .force_encoding it’s not going to work.

File.open("/tmp/mysql-backup.sql", "w:UTF-8") {|file| file.puts wizard.force_encoding('iso-8859-1') }
import.force_encoding('utf-8')
=> "ë°\u0094ë\u0082\u0098ë\u0082\u0098\n"
import.encoding
=> #<Encoding:UTF-8>

Interestingly, .force_encoding sticks. So let’s try again, knowing the path that the data took. We can reverse it:

  1. First the data was utf-8.
  2. Then it was forced to be latin1 but it’s in a utf-8 file.
  3. Then it was read as a latin1 file.

Since the read happened in Ruby-land, we can force_encoding away the file reading mistake. Now it’s a utf-8 string that was forced to latin1 in mistake 2. So we just have to re-encode those bytes back to latin1. Finally, it was utf-8 before mistake 1. We can just force_encoding for the last step, because those bytes were forced, not written externally or re-encoded.

pry> import.force_encoding('utf-8').  # undo the wrong file read
pry* encode('iso-8859-1').            # undo the file write
pry* force_encoding('utf-8')          # undo the force in the file.puts block

=> "바나나\n"

You can do it as one big line and play with this. Just make sure to check your encoding of your play variables. The variable import is now utf-8 so weird things will happen if you think it’s latin1. Re-read the file with readlines to reset your playtime.

UTF-8 Doesn’t Just Solve Everything

Base64 encodes to ASCII. So you’ll have very similar problems like above.

require 'base64'
encoded = Base64.encode64 'bacon is great'
=> "YmFjb24gaXMgZ3JlYXQ=\n"
decoded = Base64.decode64(encoded)
=> "bacon is great"
# Yay for ascii?

# Wait a minute ...
encoded = Base64.encode64 'ºå߬∂˚∆ƒ'
=> "wrrDpcOfwqziiILLmuKIhsaS\n"
decoded = Base64.decode64(encoded)
=> "\xC2\xBA\xC3\xA5\xC3\x9F\xC2\xAC\xE2\x88\x82\xCB\x9A\xE2\x88\x86\xC6\x92"
decoded.force_encoding('utf-8')
=> "ºå߬∂˚∆ƒ"
# The bytes didn't change, so force_encoding is correct here

Conclusion

Encoding is hard. It comes up a lot. I forget what I have learned. I hope this is a beacon to myself and others about some lessons and tricks. Playing with this stuff now might save you stress later when something real pops up. I’ve seen backups be useless and then saved with iconv tricks and Ruby’s encode method is the same thing.

New Gigs, New Digs

26 May 2015

Started a new gig about three weeks ago. Sad to leave the old team and friends. It was awesome and I grew in a lot of ways. But this new place is probably what I was looking for.

It’s way too early to call or judge or even sum up because it takes me about three months to settle into any new job and place. You might think that’s ridiculous but I’ve unscientifically tracked this and it holds up. Slow burn, man. It’s three months.

The new place is Goldstar. We sell discount tickets to fill events, and we have amazing customer service in and around this domain. On the tech side, the app is Rails and mobile with a set of amazing devs and ops peeps. We have expanded the tech team very recently and I’m one of the new recruits.

Learning a codebase is rough. Building a codebase and learning along the way is much more natural and comes with an advantage that needs to be cared for and not abused. “Can you not code?” This isn’t happening at the new place. I’m more amplifying what Katrina Owen said on Ruby Rogues about a book that explains downhill synthesis, uphill analysis. It’s way easier to understand a system when you’ve built it. Not even because the code is fresh in your mind, but because you hold the structure and general layout and design, connected by memories and breadcrumbs. When starting from the outside, it’s code spelunking. Even if there are tests. Most of the time, I’ll break a test and see what happens. And then fix it. This simulates the synthesis part: take it apart and put it back together. This didn’t really click until Katrina enlightened me. I thanked her on Twitter. She was happy. Happy time.

So let’s talk about something pretty serious. This perceived skill gap. It wasn’t as bad as I thought it would be. I’m hanging with smart people but have a massive case of impostor syndrome. But right before this job, I wondered if the small-shop world had accelerated way past the big-scary enterprise world. And it has. But it’s not a huge deal! Do you know why? Because C, Unix, TCP/IP, Sockets, CAP Theorem, I/O speeds, SOLID, ACID and all the other non-science-laws-this-is-the-best-we-can-do-guys stuff of computer science is forever. It’s the bedrock. It’s what’s really happening. And once you know or even have a previous story/tale about these things then learning today’s Hipsterware™ is no big deal.

What’s Riak? I don’t know! It’s consistent and available? Oh! It must be really slow. Yep! Great! Right there I can knock it out of a few use cases where Redis or Memcached would be put in there. I could blather on about this. It’s really not healthy. It’s pretty arrogant actually. Most of the work is not in the initial introduction and overview. It’s in the deep and long lived implementation where your cherished newlywed tech betrays you in your most dire moment of edge-case mortality. There are so many things that I think are really great because I haven’t seen them blow up in my face in prod. There are lots of things that I used to think are great, which now I say “yeah …” unsurely because I’ve seen them blow up or not be a good fit.

I still have many miles to walk. Here are some things that I’m predicting WITH MY MIND POWERS that I’ll learn and/or gain from this new gig.

  • vim - My vimfiles and dotfiles have been challenged. Not even an editor war. An editor civil war. Are leader keys evil? Is nerdtree evil? Yes!? What?! I submit. I yield! I see the speed at which you are navigating files. I have thought about your strategy before and not seen it in action. Fine. I will delete my .vimrc and use yours. I’ve done it before with janus. I can start over again. Each time, I learn something new. The goal now is to stay as close to vanilla vim as possible.

  • Ruby - looking forward to pairing with lots of folks. I’ve been hearing a lot of great discussion. Lots of end-game topics. “What is intention revealing? What is this actually doing? What is the difference between these two classes? Let’s measure how fast this runs if we try it this way.”

  • AWS - A bigger setup than I’m used to. VPC I’ve done. But not so many objects. Learning lots of integrations within AWS. Pulling and syncing to buckets and stuff. I’m sure I’ll be flexing the fog gem at some point.

  • Instrumentation - It’s a big deal. There are many cloud services in action. Some are overlapping. It’s neat. It’s real. Retros where we look at code climate scores. Custom dashboards. I donated a raspberry pi for the cause. “Hook it up to the TV! Give me real insight. Get it done. Yay!” Pretty sweet.

  • Automation and CM - Chef is being used in a really nice way. It’s changing and evolving. No sacred cows. Custom tooling. Chef server is a bit slow, so move everything out. Put state somewhere else. We need to beef up the custom bits of this. We’re also working on other tools around containers. There’s no single tool really. It’s very practical. No sacred tools. I’m very impressed with the ops folks. It’s kind of beautiful.

  • The Business - It’s so easy to drown in tech. I’m looking forward to seeing all the pieces come together and watch something real happen. Ernie Miller said it best:

    Humane Development, to me, means the acknowledgement that we are humans working with humans to develop software for the benefit of humans.

    To me, this is where you see the user story get run not by your test suite but by a real customer or person. It’s the best part for lots of reasons.

Everyone is really good. That’s the job. The digs part is, our rental is coming to a close and we’ve bought a house. The next time I post might be from a different location. But not so far from where I’m at now. We love Portland. I miss friends/family but we’re staying.

I hope this town becomes a tech sanctuary for Bay Area and Seattle burnouts.

Finding a Tree

01 May 2015

When I moved to Portland, I saw this tree on I-5 outside the city. It kind of stands by itself. Pokes out before you hit downtown. I always wondered where it was. Was it huge? Was it normal? How would I find it?

Since I’m trying to explore the city and I had a week off between jobs (oh yeah, that news), I thought I’d take a break from the tech blogging and tell you about this tree I finally found after all these months.

First, I can’t believe my luck on this new Google Maps feature. I had used the Google Earth desktop app before but I didn’t realize it’s the satellite view now. It’s pretty intense on the graphics card. My mac was overheating while my gaming PC was yawning. So it’s not laptop or old mac friendly. :)

So here’s a picture of the tree as you’d see it from I-5. tree_from_i5

Here’s the dead give-away shot from Google Earth. After that, it was easy to track down. All I had to do was zoom and pan to the tree poking out in the orange circle here: tree_from_google_earth

I went to the street it’s on and took a few photos. It looks just like it does from the highway. And it’s lined up with the city. tree_found

It actually turned out to be a bunch of trees all clumped together. tree_found_close_up

Awesome. What a fun use of the google earth view in google maps.

How to Run the Docker Registry 2.0

19 Apr 2015

The docker registry is a daemon/service that you can run privately to push and pull images locally. This is fantastic for sharing images within a team, caching images locally and hacking on docker without polluting the world. There’s been a python version of the docker registry for some time but recently there has been work to rewrite it in golang. I’ve been waiting for the golang port because go projects are usually very easy to install and provide binaries.

Running the docker registry 2.0 (hereafter called Distribution) isn’t intuitive. Typically on github you’ll do something like this:

git clone https://github.com/bleh/bleh
cd bleh
# ./configure && make | go run bleh.go | bundle && rake | pip install -r requirements.txt | bower
# or something ...

That doesn’t work here. Distribution is expecting to be in a go workspace. That means that this doesn’t work:

cd ~/src/vendor
git clone https://github.com/docker/distribution
cd distribution
# oooh there's a Makefile!
make

../../go/src/github.com/docker/distribution/manifest/verify.go:6:2:
cannot find package "github.com/Sirupsen/logrus" in any of:

Oh! It’s using godep. We’ll just grab the dependencies.

$ godep get
  can't load package: package ~/src/vendor/distribution: cannot find package "~/src/vendor/distribution"
  in any of:

Hmm. That doesn’t work either. Well, I know we could probably go get the whole thing but that’s going to put binaries in a golang workspace path. I kind of just want binaries in the current working directory. I’m not really sure what the intent here is. Distribution comes with a Dockerfile, so I’m assuming that this is the best option for getting it up and running on a server somewhere without having to set up a temporary golang dev environment (language runtimes as a program distribution anti-pattern).

More Than One Way To Do It

We can run docker build

$ git clone https://github.com/docker/distribution
$ cd distribution
$ docker build .

The problem with this approach is that it gives us an image that we have to further customize. The registry binary isn’t in the path and godep is in a special work dir as specified by the Dockerfile.

$ ls /go/src/github.com/docker/distribution/Godeps/_workspace/bin
godep

So we could go down that road but you’ll have to commit the changes and tag it yourself.

We can go get it

If we do this:

$ go get github.com/docker/distribution

This will put a binary in our path somewhere, but then we’ll have to put it on a server. This also requires us to cd into our gopath workspace and copy a binary out. Alternatively, we could clone Distribution into a tmp golang workspace, but this is effectively the same thing in a different location.

Originally this is how I tried it, but I later realized it’s worse than the option below (thoughts, anyone?).

We can pull the registry from docker hub

# create some directories - change to your tastes
$ mkdir /opt/docker_data/registry
$ cd /opt/docker_data
$ wget https://raw.githubusercontent.com/docker/distribution/master/cmd/registry/config.yml
$ mv config.yml registry.yml

$ docker run -d -p 5000:5000 -e STORAGE_PATH=/registry -v \
  /opt/docker_data:/data -v /opt/docker_data/registry:/registry \
  --restart=always --name docker_registry registry:2.0 /data/registry.yml

# set it to autostart, bob's your uncle
# see below for usage with boot2docker

This is pretty nice. We have to be careful to specify the registry:2.0 tag in order to get the golang version; latest will grab the python version. You can see this if you run the image interactively (not sure the 2.0 image can be run interactively because of ENTRYPOINT, but there’s your clue).

Boot2Docker and insecure-registry

Once you are running a private registry, it’s up to you to generate an SSL cert. If you are on a mac, you don’t even have a docker daemon locally, so this part is confusing. When you push to your registry, you’ll get a warning/error.

FATA[0000] Error response from daemon: v1 ping attempt failed with error: Get https://hostname:5000/v1/_ping: tls: oversized record received with length 20527. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry hostname:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/hostname:5000/ca.crt

This is like a browser warning about an unknown self-signed cert. But how do we add an exception? The official boot2docker docs have an answer, but I have an addition: add the port number to the hostname. If your server is called bleep, then add bleep:5000 to the boot2docker profile file at /var/lib/boot2docker/profile

# on your mac
$ boot2docker ssh "echo $'EXTRA_ARGS=\"--insecure-registry bleep:5000\"' | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"

# now you can push
$ docker push bleep:5000/busybox
The push refers to a repository [bleep:5000/busybox] (len: 1)
8c2e06607696: Image already exists
6ce2e90b0bc7: Image successfully pushed
cf2616975b4a: Image successfully pushed
Digest: sha256:3cc6b183efb34ff773f81ce230362ef67288375fdeb9cc8d50c221820fbe5e3b

# testing
$ docker run -it --rm bleep:5000/busybox /bin/busybox env

# on the host
$ ls /opt/docker_data/registry/docker/registry/v2/repositories
busybox

Deleting Images

I couldn’t get this to work. I commented on an issue, someone had the same question as me.

$ curl -XGET http://private-host:5000/v2/busybox/manifests/latest
# GET included for effect only  :)

The API doc says that DELETE should work. So I just substitute http verbs above and get an unexpected error:

$ curl -XDELETE http://private-host:5000/v2/busybox/manifests/latest
{"errors":[{"code":"UNSUPPORTED","message":"The operation is unsupported."}]}

So I’m not sure what’s up there.

Update: this is being tracked at this issue and it seems that deletes (of some kind) will be released in version 2.1

Sidenote

I ran into a weird issue while testing Distribution on ubuntu.

FATA[0000] Error response from daemon: Cannot start container
aa2a7765d5c67625fea17f1c5f0d8b90216418d44db95df5268822d8e3bcf21e: write /sys/fs/cgroup/cpu/docker/
aa2a7765d5c67625fea17f1c5f0d8b90216418d44db95df5268822d8e3bcf21e/cgroup.procs: no space left on device

My disk has plenty of space. docker info shows it using aufs, which I believe doesn’t have the same disk space limits that devicemapper (what CentOS uses) has. By that, I mean the docker host + kernel combination. I’m actually not sure where that interaction lies. I just know that docker info on different kernels shows different output.

So how do I start over? I have my docker data in a docker directory. In a case like this, when all your data is safe and secure, you can just blow away everything and start over. For me, this was the fix. I had some package garbage (docker.io and lxc-docker packages both installed). So I apt-get remove --purge’d everything and installed docker 1.6 through apt-get. That fixed this cgroup problem and kept my containers/data intact. I feel like this was an edge case.

Conclusion

The docker image is the best way to run the registry but it requires a tiny bit of setup beforehand. There are also some things that aren’t covered here:

  • SSL cert
  • Logstash support is mentioned as an output format
  • Redis caching
  • Authentication / reverse proxy / webgate

Regardless, I’m happy to see a Golang version of the registry.

Nil, If and Collect in Ruby

18 Dec 2014

Ruby’s if is an expression: it returns the value of whichever branch matches. If no branch matches (it falls through), it returns nil. I sometimes forget this. It’s not a big deal until it bites you.
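A quick reminder of what that looks like:

value = if 1 == 2
  "matched"
end

value
# => nil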

Here’s an overly simple example where this works very well.

numbers = [1,2,3,4,5].collect do |number|
  if number % 2 == 0
    "#{number}: even"
  else
    "#{number}: odd"
  end
end

numbers
# => ["1: odd", "2: even", "3: odd", "4: even", "5: odd"]

This is really handy because we don’t have to create a temporary variable somewhere and use each. But this is also a really clean case because there’s only odd and even. In other words, our if statement never falls through to return nil. Nil is a pain in Ruby. Avdi’s book Confident Ruby isn’t only about nil but it talks about nil a lot. Avdi’s screencasts about nil are a good place to learn more (and I often refer back to Avdi’s work).

Ok, back to our collect. When possible, I try to use collect to avoid mutation. Maybe it’s my concession to functional programming languages. Maybe it’s because state mutation in Ruby causes headaches. I think the most important reason for exploring this is knowing one more way to skin a cat. So let’s look at a more realistic example.

users = [
  { name: "Jay", enabled: false },
  { name: "Joan", enabled: false },
  { name: "John", enabled: true }
]

# enable everyone!
users.each do |user|
  user[:enabled] = true if user[:enabled] == false
end

# users
# {:name=>"Jay", :enabled=>true}
# {:name=>"Joan", :enabled=>true}
# {:name=>"John", :enabled=>true}

Meh. Mutation. It’s good to know one more way to do this: return a new collection.

users = [
  { name: "Jay", enabled: false },
  { name: "Joan", enabled: false },
  { name: "John", enabled: true }
]

# enable everyone!
enabled = users.collect do |user|
  user.merge({enabled:true}) if user[:enabled] == false
end

# enabled
# {:name=>"Jay", :enabled=>true}
# {:name=>"Joan", :enabled=>true}
# nil

Whoops. Where did John go? He fell through the if and came back as nil. Here’s that thing I was talking about: our collect needs to handle the fallthrough from the if.

users = [
  { name: "Jay", enabled: false },
  { name: "Joan", enabled: false },
  { name: "John", enabled: true }
]

# enable everyone!
enabled = users.collect do |user|
  if user[:enabled] == false
    user.merge({enabled:true})
  else
    user
  end
end

# enabled
# {:name=>"Jay", :enabled=>true}
# {:name=>"Joan", :enabled=>true}
# {:name=>"John", :enabled=>true}#

# users
# {:name=>"Jay", :enabled=>false}
# {:name=>"Joan", :enabled=>false}
# {:name=>"John", :enabled=>true}#

Great! You can see we didn’t mutate state. Now, one problem with taking this style to its logical end in Ruby is that Ruby doesn’t do tail call optimization by default. If you go to the ends of the earth with recursion, your stack will explode. But I still like this style when I can do it because I avoid changing state.
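To show what I mean about the stack, here’s a toy recursive version of the same enabling idea (just a sketch, not code from above). It’s fine on small arrays and blows up on big ones:

def enable_all(users)
  return [] if users.empty?
  head, *tail = users
  [head.merge(enabled: true)] + enable_all(tail)
end

enable_all([{ name: "Jay", enabled: false }])
# => [{:name=>"Jay", :enabled=>true}]

# enable_all(Array.new(200_000) { { enabled: false } })
# => SystemStackError: stack level too deep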

The end-game of this line of thinking is switching to, or at least playing with, a functional language like Clojure, Rust or Scala.

Mocking in Golang

28 Nov 2014

This was originally a Stack Overflow question but it got downvoted with the suggestion that it become a blog post. Ok! Here’s my toy program. I had it separated into package files but I figured that would make it harder for you guys to run yourselves. Please bear with me while I step through this.

The first iteration:

charge.go

package main

import "fmt"

type VisaGateway struct {
    Name string
    Url  string
}

func NewVisaGateway() *VisaGateway {
    return &VisaGateway{
        Name: "Visa",
        Url:  "visa.com...",
    }
}

func (v *VisaGateway) Charge() {
    fmt.Println("I am charging Visa -->")
}

type PaymentGateway interface {
    Charge()
}

func ChargeCustomer(g PaymentGateway) {
    g.Charge()
}

func main() {
    gateway := NewVisaGateway()
    ChargeCustomer(gateway)
}

Running it.

$ go run charge.go
I am charging Visa -->

We could test this with a mocking library (a separate question), but for now let’s just use this interface and write a test that passes in a fake gateway so our test suite doesn’t hit a real system.

charge_test.go

package main

import (
    "fmt"
    "testing"
)

type MockGateway struct {
    Name string
    Url  string
}

func (m *MockGateway) Charge() {
    fmt.Println("This is a fake gateway.  --> [no-op] <---")
    fmt.Println("Yay!  :) ")
}

func TestCharging(t *testing.T) {
    m := &MockGateway{}
    ChargeCustomer(m)
}

Great!

$ go test
This is a fake gateway.  --> [no-op] <---
Yay!  :)
PASS
ok      github.com/squarism/credit_card 0.010s

What I’d probably want to do is use a library to help with the mocking setup. Previously I was trying to use a dependency injection style without interfaces and it didn’t work out.

Imagine that my Charge() method signature looks more like this (from joneisen.tumblr.com):

func ChargeCustomer(args ...interface{})
// code to init args and defaults -- see blog post linked above

This doesn’t work because type checking args breaks when you pass in a mock object with no interface. I’m not even sure if interfaces would fix this.

I was hoping to have a default value of the real type/struct and then pass in a mock object in my test. That’s one nice side effect of default parameters plus dependency injection. But that’s a dynamic language style that I have to teach myself to let go of.
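The closest thing I’ve found to a default argument in Go is a thin wrapper that supplies the real gateway while keeping the injectable version around for tests. A sketch that slots into charge.go above (ChargeCustomerDefault is just a name I made up):

// production code calls the wrapper; tests call ChargeCustomer
// directly with a mock
func ChargeCustomerDefault() bool {
    return ChargeCustomer(NewVisaGateway())
}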

Ok. Let’s now use testify for mocking. Let’s add a return value so we have something to test.

charge.go

...

func (v *VisaGateway) Charge() bool {
    fmt.Println("I am charging Visa -->")
    return true
}

type PaymentGateway interface {
    Charge() bool
}

func ChargeCustomer(g PaymentGateway) bool {
    return g.Charge()
}

func main() {
    gateway := NewVisaGateway()
    ChargeCustomer(gateway)
}

...
charge_test.go

package main

import (
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
    "testing"
)

type MockGateway struct {
    mock.Mock
}

func (m *MockGateway) Charge() bool {
    args := m.Mock.Called()
    return args.Bool(0)
}

func TestCharging(t *testing.T) {
    // Charge() has a pointer receiver, so only *MockGateway satisfies
    // the PaymentGateway interface -- pass the mock by reference
    m := &MockGateway{}
    m.On("Charge").Return(true)

    r := ChargeCustomer(m)
    m.Mock.AssertExpectations(t)

    assert := assert.New(t)
    assert.True(r)
}

The run result:

$ go test
PASS
ok      github.com/squarism/credit_card 0.018s

In the mocking example we also added m.Mock.AssertExpectations. That is an additional check that captures and remembers the calls; if the wiring is wrong and the expected call never happens, the test will fail. For a while I was not asserting this and I would have had a test coverage gap. Another mistake I made while figuring out the AssertExpectations call was not passing the mock by reference (note the &MockGateway{} above). I continue to make this mistake because I’m pointer-nooby. For more information on this, see my question on stackoverflow.

Ok, so that’s my first foray into mocking in Go. Here are some questions:

  • Do you use a mocking library?
  • Do you see how default arguments wouldn’t work to help with mocking? You can’t really do DI that way. I’m ok with this. I just need to learn.
  • Do you like interfaces for (not only) testing reasons?
  • Do you like the interfaces version more than the mocking version?
  • It seems that mocking really needs an interface somewhere? Otherwise won’t you get a cannot use gateway (type *VisaGateway) as type PaymentGateway in argument to ChargeCustomer error? I might have gotten this wrong from the testify docs. It wasn’t obvious until I wrote this question.

If you have anything to say or answers to these questions, tweet me at @squarism.

… and once again, rubber ducking on stackoverflow. Writing the question made me figure it out.
