Josh Bavari's Ramblings

tech babbles on Ruby, JavaScript, Rails, PhoneGap/Cordova, Grunt, and more

Test Coverage Reports in Elixir


Lately I’ve been learning a ton more about Elixir and really working towards refactoring and hardening the system.

On my current project, I’ve got about 200 tests that exercise various parts of the system. Lately though, I’ve been trying to analyze which parts of the system aren’t being covered, and of course, there are tools to help with that.

The two I looked at were Coveralls and Coverex. I’m going to be using coverex in this post.

Getting started is a breeze – check the readme for that. I’ll cover it briefly here, by modifying our mix.exs file:

  # in `def project`, we add test_coverage
  test_coverage: [
    tool: Coverex.Task
  ],

  # in deps, add the dependency for only the test environment
  {:coverex, "~> 1.4.10", only: :test},

After setup, running mix test --cover generates some reports in your project’s ./cover folder – functions.html and modules.html. These give you your standard coverage reports with lines covered / ratio covered.

For my project, I had quite a few generated files from using exprotobuf. The coverage report was getting butchered because most of these generated files were never exercised by my tests.

According to the docs, we can add an ignore_modules key to the test_coverage keyword list, and the coverage reports will ignore those modules.

However, for my generated list of modules, I had quite the growing list to ignore and it quickly became unwieldy to put that list of modules in my mix.exs file.

Since we can’t access other modules from our mix file, I had a quick solution. I created a .coverignore file in the project directory and lumped in all the modules I wanted to ignore (taken from the generated modules.html file).

I ensured all the modules I wanted to ignore were all newline delimited (\n).
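
For illustration, the file is just one fully-qualified module name per line (these names are made up). Note the Elixir. prefix – String.to_atom/1 needs it to produce the actual module atom:

Elixir.MyApp.Proto.ImageMsg
Elixir.MyApp.Proto.SensorMsg
Elixir.MyApp.Proto.StatusMsg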

From there, I modified my mix.exs file as such:

  # Near the top. trim: true avoids an empty atom from a trailing newline
  @ignore_modules File.read!("./.coverignore")
                  |> String.split("\n", trim: true)
                  |> Enum.map(&String.to_atom/1)

  # in def project
  test_coverage: [
    tool: Coverex.Task,
    ignore_modules: @ignore_modules
  ],

Boom, that does it! Now we’ve got a manageable list of modules to ignore in a separate file so we can keep our mix file clean.

All in all, coverex is a great module, and I would suggest using it if you do not want to ship data to coveralls.

Hope this helps, happy coding. Cheers!

Multicast Service Discovery in Electron


I’ve been playing around with mDNS lately for broadcasting some services for applications to auto-connect with.

The first experiment I had was setting up a server that broadcasts a TCP endpoint for an Electron application to discover and connect for the application data.

This was so easily done that I challenged myself to see how fast I can whip out a blog post.

First, get an Ubuntu server up (I used a Vagrant VM).

Run the command:

sudo apt-get install avahi-utils

From here, the service for avahi (mdns) should be auto-started. Edit the configuration to enable broadcasting:

vim /etc/avahi/avahi-daemon.conf – here’s a config that’s minimally broadcasting only the IPv4 address:

[server]
host-name=webserver
domain-name=local
use-ipv4=yes
use-ipv6=no
allow-interfaces=eth1
deny-interfaces=eth0
ratelimit-interval-usec=1000000
ratelimit-burst=1000

[wide-area]
enable-wide-area=yes

[publish]
publish-addresses=yes
publish-hinfo=yes
publish-workstation=no
publish-domain=yes

Now, create a service configuration: vim /etc/avahi/services/mywebserver.service, with the following contents:

<service-group>
  <name>Webserver</name>
  <service>
    <type>_http._tcp</type>
    <port>80</port>
  </service>
</service-group>

Simple as that. Just restart the avahi-daemon – sudo service avahi-daemon restart.

This should now have your server broadcasting that it has a webserver running at port 80, named Webserver.

To check the service is broadcasting, run avahi-browse _http._tcp -tr – this should show your server as servername.local, with Webserver, pointing to its IP and port.

Example:

+   eth1 IPv4 webserver                              Web Site             local
=   eth1 IPv4 webserver                              Web Site             local
   hostname = [webserver.local]
   address = [192.168.0.101]
   port = [80]
   txt = []

Now for the Electron portion: in your application, install the node mdns module: npm install --save mdns.

This will add the node module to your project, but since it has native compilation steps, you must build it with electron-rebuild. Do this: npm install --save-dev electron-rebuild.

Then run: ./node_modules/.bin/electron-rebuild – this will rebuild the mdns module against Electron’s node version correctly.

To do the DNS lookups, simply run the steps from the node mdns README. Set the discovery type to http and it will find your service. From there, you can grab the address and then get the data from the web server (or html page redirection) as you so wish!

Happy coding!

Using Erlang Observer on a Remote Elixir Server


I’ve been using Elixir a ton at work and in some fun side projects and I’m absolutely in love with it.

One tool I especially love is the Erlang Observer tool, that shows you IO, memory, and CPU usage used by your app and the Erlang VM.

Once I got some apps deployed, I wanted to observe them remotely. I found a few Google forum posts and the IEx docs, but I wanted to wrap up this knowledge for when I need it in the future.

I’m going to monitor a Phoenix app in this quick blog post.

First, fire up your Phoenix server on say, a VPS, giving it a node name:

iex --name server@64.16.134.61 --cookie jbavari -S mix phoenix.server

Then on your remote viewing machine, say your Mac, run the following:

iex --name josh@192.168.1.1 --cookie jbavari

Now we’re set up to do some remote observations!

Fire up :observer.start on your local machine, which should open up the Erlang observer.

Now from the menu, select ‘Nodes’, then you should see your node there. If not, click the connect to node button, type in your server@64.16.134.61 node address and you should be able to view your node via the observer!
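
If you’d rather sanity check the connection from the console first, Node.connect/1 works from that same local iex session (assuming the node names and cookie from above):

iex> Node.connect(:"server@64.16.134.61")
true
iex> Node.list()
[:"server@64.16.134.61"]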

Enjoy!

Custom JSON Encoding in Phoenix


I recently have been working a lot using Leaflet.js to do some mapping.

In some of my models, I use the lovely Geo package for Elixir point and geospatial usage. I needed to add support for Poison to encode my model.

I’ve been serving GeoJSON from my models, and I needed an easier way to handle the JSON encoding. I’m sending some data out to a ZeroMQ socket, so I need to encode it by transforming my Geo struct into a form that I can encode as GeoJSON.

I modified my model in two ways. The first was adding the @derive directive to tell Poison to encode only certain fields.

The second was encoding the position with Geo.JSON.encode every time, without having to call it manually. You can see that in the defimpl below.

defmodule MyApp.Point do
  use MyApp.Web, :model

  # Option 1 - specify exactly which fields to encode
  @derive {Poison.Encoder, only: [:id, :name, :geo_json]}
  schema "points" do
    field :name, :string
    field :position, Geo.Point
    field :geo_json, :string, virtual: true

    timestamps
  end

  def encode_model(point) do
    %MyApp.Point{point | geo_json: Geo.JSON.encode(point.position) }
  end

  defimpl Poison.Encoder, for: MyApp.Point do
    def encode(point, options) do
      point = MyApp.Point.encode_model(point)
      Poison.Encoder.Map.encode(Map.take(point, [:id, :name, :geo_json]), options)
    end
  end
end
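
With that in place, encoding a point anywhere in the app is just a normal Poison call. A quick sketch (the field values here are made up):

point = %MyApp.Point{id: 1, name: "Home base", position: %Geo.Point{coordinates: {-97.7, 30.2}}}
Poison.encode!(point)
# => "{\"name\":\"Home base\",\"id\":1,\"geo_json\":{...}}"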

Cheers.

Adding Additional Static Paths in Phoenix


Phoenix is awesome.

A problem I ran into lately is how to add additional static paths to be served.

If you take a look in your lib/endpoint.ex file, you’ll see the plug used for adding static paths:

plug Plug.Static,
  at: "/", from: :electronify, gzip: false,
  only: ~w(css fonts images js favicon.ico robots.txt)

I wanted to add another folder to be served, ‘zips’, so I had to edit the only: line in the plug specification as such:

plug Plug.Static,
  at: "/", from: :electronify, gzip: false,
  only: ~w(css fonts images js favicon.ico robots.txt zips)

There you have it – now I can access the files in the zips folder (priv/static/zips) through the URL. Cheers!

Shipping Data With Protocol Buffers in Elixir


Lately, I’ve needed to ship data between various nodes in a variety of places on a problem I was working on. There were a few ways to get that data shipped across; the usual suspects are JSON, XML, and Google’s Protocol Buffers.

For this specific problem, we needed to get that data shared from C++ nodes to Elixir/Erlang.

Since the team was using Protocol buffers already, I decided to give them a run in Elixir using exprotobuf.

Note: the client for this experiment is on github.

The idea

The idea here is – we’ll capture pieces of data from one node and ship them to the server for processing. We define the data structure in a .proto file, turn our data into binary form by encoding it, and finally ship it to its destination. We could do the same thing with JSON, but we want the payload as light as possible.

We’ll use ZeroMQ (via the Elixir package exzmq) to ship the data, and exprotobuf to encode it as protocol buffers.

The process

First we define our protocol buffer format for an image message we want to send with data, its width, height, and bits per pixel:

message ImageMsg {
  optional bytes data = 1;
  optional int32 width = 2;
  optional int32 height = 3;
  optional int32 bpp = 4;
}

We set up our application to use exprotobuf in our mix.exs file:

def application do
    [applications: [:logger, :exzmq, :exprotobuf],
     mod: {Zmq2, []}]
end

as well as including it as a dependency:

defp deps do
  [
    {:exzmq, git: "https://github.com/zeromq/exzmq"},
    {:exprotobuf, "1.0.0-rc1"}
  ]
end

Finally we create an Elixir struct from this proto file as such:

defmodule Zmq2.Protobuf do
  use Protobuf, from: Path.wildcard(Path.expand("./proto/imagemsg.proto", __DIR__))
end

Now that we have our protobuf file read in, let’s get an image’s binary data, create an Elixir struct from our protobuf definition, and send that data over a ZeroMQ socket (using exzmq):

def check_file(file_path, socket) do
  IO.puts "Sending image from file path: #{Path.expand(file_path, __DIR__)}"

  case File.read(Path.expand(file_path)) do
    {:error, :enoent} ->
      IO.puts "No file at the path: #{file_path}"
    {:ok, img_data} ->
      send_image_data(socket, img_data)
  end
end

def send_image_data(socket, img_data) do
  img_message = Zmq2.Protobuf.ImageMsg.new(data: img_data)
  encoded_data = Zmq2.Protobuf.ImageMsg.encode(img_message)

  IO.puts "The encoded data: #{inspect encoded_data}"

  Exzmq.send(socket, [encoded_data])

  IO.puts "Sent request - awaiting reply\n\n"

  # {:ok, r} =
  case Exzmq.recv(socket) do
    {:ok, r} -> IO.puts("Received reply #{inspect r}")
    _ -> {:error, "No Reply"}
  end

end

And there we have it – a message serialized with protocol buffers and shipped. We can now apply this same strategy to any other protocol buffer messages we define, and ship them over any protocol we’d like.
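
For completeness, the receiving side is the mirror image – the same generated module gives us a decode. A minimal sketch, assuming the server’s exzmq socket is already set up and the message arrives as a single frame like the one we sent:

def receive_image(socket) do
  {:ok, [frame]} = Exzmq.recv(socket)
  img = Zmq2.Protobuf.ImageMsg.decode(frame)
  IO.puts "Received #{byte_size(img.data)} bytes of image data"
end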

Some inspiration

Along the R&D process, I came across David Beck’s blog. David has an experiment where he sends several million messages over TCP, exploring some ultra-efficient methods of sending messages – it’s a great read. He also covers ZeroMQ and protocol buffers in a post that goes more in depth into Protocol Buffers and some lessons learned.

Alas, we move on!

Cheers

Scoreboard Forms in React and Angular 2


As a developer, you should be focused on spending some of your own time learning and staying up to date with technology that is always moving.

I wanted to find a reason to hop into some of the ‘newer’ front-end frameworks, React and Angular 2, as well as some of the module bundlers browserify and webpack.

I had the opportunity to try out Angular 2 while it was still in alpha. With the recent announcement of Angular 2 going beta, I wanted to build a scoreboard form that went along with my Scoreboard project to evaluate the two frameworks.

This post will aim to build a simple scoreboard form in both frameworks, so you can see the same DOM interactions and the code it takes to form them.

Please also note, I’m still very much learning, and some code may not be ‘ideal’.

We’ll cover:

  • The scoreboard form idea
  • Learning the ideas behind the frameworks
  • The bundling systems
  • Angular 2 implementation (TypeScript)
  • React implementation (ES6)
  • The differences between the two
  • Pros and Cons of each (in my eyes)

All of the source code is available on my Github profile in the scoreboard-form repository.

The Scoreboard Form

A scoreboard is simple – you’ll enter the two team names, then specify a touchdown or field goal for either team.

That means we will need a few components: a scoreboard and a team.

The idea will be to build these components in React and Angular 2, having them use similar templates so they render to equivalent DOM structures.

Learning the ideas behind the frameworks

Both frameworks aim to contain all of the functionality and display into a component.

The idea will be to build a team component that displays a team, and a scoreboard component that displays both of those teams and has a submit method to post that data to our scoreboard API server.

The main difference we will see between the two frameworks is adapting to ES6 or TypeScript.

In either framework, we will create a class for the component in ES6/TypeScript, and then connect the template to it, and finally attach it in the DOM.

The bundling systems

We will use Browserify to pack up React into a single module, while using webpack to bundle up Angular 2 into a single bundle.

What is bundling, you say? It’s taking all the source for our components and bundling it up with the framework code into one JavaScript file. That way, we only need a single <script> tag instead of a whole load of them.

Angular 2 implementation

Angular 2 is built in TypeScript, which is a superset of JavaScript that adds optional static typing. We will build our component in TypeScript and transpile it down to ES5 JavaScript.

Building the Team component

In Angular 2, we need to use a decorator (see this blog post about TypeScript decorators) to specify a Team as a component that will render.

We will import the Component decorator and then apply it to our class. The Component decorator specifies which DOM element it will attach to (in this case, team), any inputs our control may have, what template it will use (specified as the template key), and what other directives will be used to parse our template.

Then, we have our class that defines the component itself with a name and score, methods to increase score (touchdown and fieldGoal), a toJson method, and finally callbacks to update itself from the parent component.

The team component:

import {Component, EventEmitter, Output} from 'angular2/core';
import {NgFor, NgIf, NgModel} from 'angular2/common';

@Component({
  directives: [NgFor, NgIf, NgModel],
  selector: 'team',
  template: `
    <div *ngIf="name == ''">
      <h3>No team set</h3>
      <input type="text" [(ngModel)]="nameInput" placeholder="Enter a name"/>
      <button type="submit" (click)="setName()">Set name</button>
    </div>
    <div *ngIf="name != ''">
      <h3>{{name}}</h3>
      <button (click)="touchdown($event)">Touchdown</button>
      <button (click)="fieldGoal($event)">Field Goal</button>
      <h4>Score: {{score}}</h4>
    </div>
  `
})
export class Team {
  @Output() updateTeam = new EventEmitter<Team>();
  // declare the fields so the TypeScript compiler knows about them
  nameInput: string;
  name: string;
  score: number;

  constructor() {
    this.nameInput = '';
    this.name = '';
    this.score = 0;
  }

  fieldGoal(e) {
    e.preventDefault();
    this.score += 3;
  }

  touchdown(e) {
    e.preventDefault();
    this.score += 7;
  }

  setName(nameInput) {
    this.name = this.nameInput;
    this.nameInput = '';
    if(this.updateTeam) {
      this.updateTeam.next(this);
    }
  }

  toJson() {
    return { name: this.name, score: this.score };
  }
}

Defining the scoreboard component

Now we need to display these teams side by side, with a callback to update information from the team component and a method to submit the scores to the API.

We’ll define the component as follows:

import {Component} from 'angular2/core';
import {Team} from '../team/team';

@Component({
  directives: [Team],
  selector: 'scoreboard',
  template: `
    <form (ngSubmit)="submitScore()">
      <div class="row">
        <div class="col-md-6">
          <h2>Home Team</h2>
          <team (updateTeam)="updateHomeTeam($event)" home="true"></team>
        </div>
        <div class="col-md-6">
          <h2>Visitor Team</h2>
          <team (updateTeam)="updateVisitorTeam($event)"></team>
        </div>
      </div>
      <div class="row">
        <button type="submit">Submit</button>
      </div>
      <div *ngIf="submitted">
        JSON payload: {{jsonPayload}}
      </div>
    </form>  
  `
})
export class Scoreboard {
  homeTeam: Team = new Team();
  visitorTeam: Team = new Team();
  submitted: boolean = false;
  jsonPayload: string = null;

  constructor() {
  }

  submitScore() {
    this.submitted = true;
    this.jsonPayload = JSON.stringify({ homeTeam: this.homeTeam.toJson(), visitorTeam: this.visitorTeam.toJson()});
  }

  updateHomeTeam(team: Team) {
    this.homeTeam = team;
  }

  updateVisitorTeam(team: Team) {
    this.visitorTeam = team;
  }
}

Pros and Cons

Pros

Cons

  • Angular 2 – docs are all over the place.
  • Main blogs linked from the docs site still use the old kebab-case style (e.g. *ng-if instead of *ngIf).
  • Webpack configuration – I didn’t include zone.js in my entries, and I could not get any DOM updates when my components changed.
  • It was unclear when to use two-way bindings and when one-way bindings were good enough.
  • No ‘why’ to what I’m doing – it aims to just follow the same ‘idea’ as Angular.js.
  • Plunkers aren’t up to date.

React implementation (ES6)

Now that we have the basic idea of the team and scoreboard, you’ll see React is very similar. Instead of having a decorator specify the template and the DOM element to attach to, we’ll specify a class that extends React.Component, a render method that returns the markup, and finally some bootstrap code to attach our class to a DOM element.

Defining the team component

import React from 'react';

export default class Team extends React.Component {
  constructor(props) {
    super(props);
    this.props = props;
    this.name = props.name;
    this.score = props.score || 0;
    this.setName = this.setName.bind(this);
    // this.state = {name: this.name, score: this.score};
    this.touchdown = this.touchdown.bind(this);
    this.fieldGoal = this.fieldGoal.bind(this);
  }

  fieldGoal(e) {
    e.preventDefault();
    this.score += 3;
    this.setState(this);
  }

  touchdown(e) {
    e.preventDefault();
    this.score += 7;
    this.setState(this);
  }

  setName(e) {
    e.preventDefault();
    this.name = this.refs.teamName.value;
    this.setState(this);
    this.props.update(this);
  }

  toJson() {
    return { name: this.name, score: this.score };
  }

  render() {
    if (!this.name) {
      return (
        <div>
          <h3>No team set</h3>
          <input type="text" ref="teamName" placeholder="Enter a name.." value={this.props.name}/>
          <button onClick={this.setName}>Set Name</button>
        </div>
      );
    } else {
      return (
        <div>
          <h3>{this.name}</h3>
          <button onClick={this.touchdown}>Touch Down</button>
          <button onClick={this.fieldGoal}>Field Goal</button>
          <h4>Score: {this.score}</h4>
        </div>
      );
    }
  }
}

Defining the Scoreboard component

import Team from './team.jsx';
import React from 'react';

export default class Scoreboard extends React.Component {
  constructor(props) {
    super(props);
    this.homeTeam = {};
    this.visitorTeam = {};
    this.url = this.props.url;
    this.submit = this.submit.bind(this);
    this.updateTeam = this.updateTeam.bind(this);
    this.submitted = false;
    this.jsonPayload = null;
  }

  submit(event) {
    event.preventDefault();
    this.submitted = true;
    this.setState(this);
    this.jsonPayload = JSON.stringify({ homeTeam: this.homeTeam.toJson(), visitorTeam: this.visitorTeam.toJson()});
  }

  updateTeam(team) {
    if (team.props.home) {
      this.homeTeam = team;
    } else {
      this.visitorTeam = team;
    }
  }

  render() {
    var jsonInformation = this.submitted ? (<div>JSON payload: {this.jsonPayload}</div>) : null;
    return (
      <form onSubmit={this.submit}>
        <div className="row">
          <div className="col-md-6">
            <h2>Home Team</h2>
            <Team home="true" update={this.updateTeam}></Team>
          </div>
          <div className="col-md-6">
            <h2>Visitor Team</h2>
            <Team update={this.updateTeam}></Team>
          </div>
        </div>
        <div className="row">
          <button type="submit">Submit</button>
        </div>
        {jsonInformation}
      </form>
    )
  }
}

Now you’ll see there’s nowhere we tell React which DOM node to attach our components to in the browser DOM.

This happens by the bootstrapping code:

import React from 'react';
import ReactDOM from 'react-dom';
import Scoreboard from '../component/scoreboard.jsx';

window.app = (function() {
  return ReactDOM.render(<Scoreboard/>, document.getElementById('react-scoreboard'));
})();

Now React knows to use our Scoreboard component (the one that was imported) and attach it to the DOM element with an id of react-scoreboard. Internally, the Scoreboard specifies its render method:

import Team from './team.jsx';
// .. snipped code ..
render() {
  var jsonInformation = this.submitted ? (<div>JSON payload: {this.jsonPayload}</div>) : null;
  return (
    <form onSubmit={this.submit}>
      <div className="row">
        <div className="col-md-6">
          <h2>Home Team</h2>
          <Team home="true" update={this.updateTeam}></Team>
        </div>
        <div className="col-md-6">
          <h2>Visitor Team</h2>
          <Team update={this.updateTeam}></Team>
        </div>
      </div>
      <div className="row">
        <button type="submit">Submit</button>
      </div>
      {jsonInformation}
    </form>
  )
}

Pros and Cons

Pros

  • React Dev Tools – inspect React components, super handy.
  • The dev docs talk about how to think in React – giving the why before the what really helped me understand the concepts.

Cons

  • Dev tooling is not straightforward – you have to decide on it yourself.
  • Figuring out how to plug rendering into state changes – you have to call this.setState({}) with some state information yourself.

Differences between the two

The main difference I can see is how Angular 2 uses the selector you specify to find the DOM element it attaches to.

React just uses JSX to specify the component, which you can pass properties into.

Angular 2 takes the approach of keeping state and doing stateful checks with its Virtual DOM diffing. However, the templating directives you use, like *ngIf, require a template of some sort, whereas in React you can just use JavaScript conditionals to render your DOM.

Conclusions

I really like the approach React takes. I also feel like React got to the Virtual DOM party a year earlier, and Angular 2 is really trying to keep up.

As far as intuition and ease of development goes, React was definitely easier. Even with my previous Angular 2 knowledge, it still took me longer to get up and going.

To give Angular 2 a fair shot, it is still in Beta. However, if I were to start a project today, it would be in React, due to the huge community that is building, the tooling available, and being backed by Facebook, one of the foremost leaders in user interface design and performance.

I hope this short write up helps! If you have any questions, please drop a comment and we’ll clear things up!

As a reminder, all of the code is available on Github – feel free to open an issue.

Cheers!

Using Brew to Install Old Versions


I just wanted to share a quick little tidbit on how to install older brew versions.

I was having some issues with an older version of Elixir failing tests (1.0.1), and the latest version (1.1.1) is working fine.

Just running brew install elixir gets the latest.

To get 1.0.1 installed, I first went to the homebrew github repo, looked at the Library/Formula folder, found the elixir.rb formula to install elixir, looked in the history, found 1.0.1, and then executed the following line:

brew install https://raw.githubusercontent.com/Homebrew/homebrew/8506ced146655c24920f3cc5b20e6bc9e6e703cc/Library/Formula/elixir.rb

That did it – I got 1.0.1 installed, and going back to 1.1.1 was super easy.

Hope this helps, enjoy! Cheers!

Adding PostgreSQL Geometric Type Support to Elixir


In the last week or so, I’ve had a blast playing around with Postgres geometric types to do basic earth distance queries.

From my favorite blog, Datachomp shows how to use radius queries in postgres to find the closest place to get a burrito fix. Since I’ve been on an Elixir kick lately, I figured it was time to contribute back to the open source world by adding first class burrito, err, geometric type support.

Initial reaction

I immediately made an Ecto model trying to use the point type in my model:

defmodule MyApp.LocationPoint do
  use MyApp.Web, :model

  schema "location_point" do
    field :name, :string
    field :date, Ecto.DateTime
    field :location, :point
    timestamps
  end

  @required_fields ~w(name date)
  @optional_fields ~w(location)

  @doc """
  Creates a changeset based on the `model` and `params`.

  If no params are provided, an invalid changeset is returned
  with no validation performed.
  """
  def changeset(model, params \\ :empty) do
    model
    |> cast(params, @required_fields, @optional_fields)
  end
end

Right away, when I ran the commands to retrieve this location from iex, it gave me some errors:

$ iex -S mix
iex> alias MyApp.Repo
iex> alias MyApp.LocationPoint
iex> Repo.all(LocationPoint)
** (ArgumentError) no extension found for oid `600`

Right away, I knew this mission was up to me to get point support into Postgrex.

In this post, I’ll outline how to add Postgres type support to the Elixir package postgrex. We will walk through adding Postgres’s point data type.

This post will cover:

  • How to see how postgres stores its types (built in and composite)
  • How postgrex does its type lookups
  • Finding the source type – adding it to postgres senders
  • Looking up postgres source code for data mapping
  • Adding new type Point type
  • Adding built in Type structs
  • Adding encode method
  • Adding decode method

How Postgres stores its types

Postgres stores its types in a special system table called pg_type (docs). It defines a few things about the type:

  • Its typelem – if nonzero, the element type that this type is an array of
  • Its typsend – the output conversion function (binary format), or 0 if none
  • Its typarray – the oid of the corresponding array type, which has its own send method

How Postgrex does type lookups

Postgrex at its core is a simple data adapter into PostgreSQL from Elixir. It’s an awesome library, and if you’re using Ecto, you’re already using it!

First, let’s look at how they are loading most types, by looking them up in the pg_type table in postgres:

  ### BOOTSTRAP TYPES AND EXTENSIONS ###

  @doc false
  def bootstrap_query(m, version) do
    {rngsubtype, join_range} =
      if version >= {9, 2, 0} do
        {"coalesce(r.rngsubtype, 0)",
         "LEFT JOIN pg_range AS r ON r.rngtypid = t.oid"}
      else
        {"0", ""}
      end

    """
    SELECT t.oid, t.typname, t.typsend, t.typreceive, t.typoutput, t.typinput,
           t.typelem, #{rngsubtype}, ARRAY (
      SELECT a.atttypid
      FROM pg_attribute AS a
      WHERE a.attrelid = t.typrelid AND a.attnum > 0 AND NOT a.attisdropped
      ORDER BY a.attnum
    )
    FROM pg_type AS t
    #{join_range}
    WHERE
      t.typname::text = ANY ((#{sql_array(m.type)})::text[]) OR
      t.typsend::text = ANY ((#{sql_array(m.send)})::text[]) OR
      t.typreceive::text = ANY ((#{sql_array(m.receive)})::text[]) OR
      t.typoutput::text = ANY ((#{sql_array(m.output)})::text[]) OR
      t.typinput::text = ANY ((#{sql_array(m.input)})::text[])
    """
  end

You can see that under the hood, we’re querying Postgres and asking it for its types, so we can do OID lookups and call the appropriate encoder/decoder methods. From here, we can match up our newly added type’s encoding/decoding methods.

Finding the source type – adding it to postgres senders

Find information about the geometrics category:

SELECT * from pg_type where typcategory = 'G';

We will see the point type has an oid of 600 and uses a send function of point_send. Other notable send types for geometries: lseg_send, path_send, box_send, poly_send, line_send, and circle_send.

Thus, we’ll update the send types in postgrex, located in the binary.ex file:

@senders ~w(boolsend bpcharsend textsend varcharsend byteasend
            int2send int4send int8send float4send float8send numeric_send
            uuid_send date_send time_send timetz_send timestamp_send
            timestamptz_send interval_send enum_send tidsend unknownsend
            inet_send cidr_send macaddr_send point_send
            ) ++ @oid_senders

Boom, that gets us the oid to encode/decode off of!

Looking up postgres source code for data mapping

I hopped into the Postgres source code and looked up the struct type for point, found here.

typedef struct
{
  double    x,
        y;
} Point;

Great, it’s just two floats – no big deal.

Adding the point struct

Let’s craft our Postgrex struct type in builtins.ex then!

defmodule Postgrex.Point do
  @moduledoc """
  Struct for Postgres point.

  ## Fields
    * `x`
    * `y`
  """
  require Decimal
  @type t :: %__MODULE__{x: float, y: float}

  defstruct [
    x: nil,
    y: nil]
end

Adding the encode method

Now since we are sending PostgreSQL binary data, we need to take our data and map it to its binary form, via an encode method.

However, postgrex is going to do a type lookup based on the types we queried above.

We’ll add an encode clause that uses pattern matching to make sure we’re handling the correct sender value.

def encode(%TypeInfo{type: "point", send: "point_send"}, %Postgrex.Point{} = point, _, _),
  do: encode_point(point)

As you can see, we encode when a TypeInfo struct is passed with type point and send point_send! Great – we just pass the point along to a helper that serializes its two floats into their binary form:

defp encode_point(%Postgrex.Point{x: x, y: y}),
  do: <<x::float64, y::float64>>

It just takes those two values, and serializes them down to their binary counterparts.

That now handles the test we’ve got to keep us honest:

test "encode point", context do
  assert [[%Postgrex.Point{x: -97, y: 100}]] == query("SELECT $1::point", [%Postgrex.Point{x: -97, y: 100}])
end

This test, as promised, takes a Postgrex.Point, encodes it to its binary form, and sends it off to Postgres. How beautiful.

Adding the decode method

Now, when we get binary values from Postgres, we need to map that to our Point type we’ve created.

Adding the functions to decode in binary.ex:

def decode(%TypeInfo{type: "point"}, binary, _, _),
  do: decode_point(binary)

# ..snip..

defp decode_point(<<x::float64, y::float64>>) do
  %Postgrex.Point{x: x, y: y}
end

The real meat and potatoes: we receive our binary parameter, match its individual segments as two 8-byte floats, and through that pattern matching map them onto our Postgrex.Point struct. QED.

And the test:

test "decode point", context do
  assert [[%Postgrex.Point{x: -97, y: 100}]] == query("SELECT point(-97, 100)::point", [])
  assert [[%Postgrex.Point{x: -97.5, y: 100.1}]] == query("SELECT point(-97.5, 100.1)::point", [])
end
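
With both directions wired up, the query that originally blew up now works. A sketch from iex (the row data here is made up, and assumes Ecto passes the location column through as a Postgrex.Point):

iex> Repo.all(LocationPoint)
[%MyApp.LocationPoint{name: "Burrito Spot",
   location: %Postgrex.Point{x: -97.74, y: 30.26}, ...}]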

Conclusion

Once I finally figured out which pieces were which, I was able to create the point type, its mappings, and the senders it required, easily mapping it all to the struct in Elixir.

I plan to keep working on postgrex, to add first class support for Postgres geometric types.

Cheers!

The Scoreboard Project


Lately I’ve been wanting to dig more into some technologies I’ve been wanting to explore and gain more experience. Not only this, but I wanted to make sure my dev workflow was still improving, my tools were getting sharpened, and I was re-establishing the best practices as much as I could.

Those specific technologies I wanted to dig into were:

  • Building a CLI in Ruby, using Thor
  • A Sinatra Modular API
  • Solidifying Sequel Model usage and JSON serialization
  • Building a dashboard using Dashing
  • Diving more into Rubocop for Ruby static code analysis
  • Automated Code Review using CodeClimate

I found a way to connect all the dots in what I’m calling the scoreboard project. I chose these technologies because they would let me shine up my ruby/sql skills without a framework carrying me the whole way. (Although they mostly did anyway!)

This blog post will go over the idea of making an API around scoreboards. There will be a simple CLI tool to gather scores on ‘games’. Those scores will be sent to the API, to store in Postgres. The dashboard project will then pull these values from postgres and display them in an easy to view manner.

This post

With this post, I’ll go over the individual pieces of the project, the interesting tidbits of each one, and finally a short retrospective on the entire project.

All told, the project took about a day and a half. It was fun, and I really enjoyed the experience over all.

The pieces

All of the projects are listed on my GitHub profile. I’ve been trying to keep most issues in the GitHub repo’s issues page for each respective project.

All of the projects are checked by CodeClimate, and I’ve been trying to keep up with rubocop’s rules on these as I go.

Scoreboard CLI

The idea for the CLI was to prompt the user for a home team and visitor team, then collect data about getting a touch down for the home team, for example: h:td.

It would keep prompting for more scoring plays until the user gives a SIGTERM or hits CTRL+D.

First I started by reading up on Thor, which was an absolute pleasure to work with. You can download it via gem install scoreboard.

To make it available via command line, I added this:

  spec.bindir        = "bin"
  spec.executables   = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }

Then in bin/scoreboard, we just require in our CLI and run it with the arguments:

#!/usr/bin/env ruby

require_relative "../lib/scoreboard/cli"

begin
  cli = Scoreboard::Cli.new
  cli.start(ARGV)
rescue => e
  raise e if $DEBUG
  STDERR.puts e.message
  STDERR.puts e.backtrace.join("\n")
  exit 1
end

A note on the SIGTERM exception handling

If you look in my STDIN.each_line loop where I read in scoring entries, you will see I rescue all Exceptions. This could be improved to catch the exact exception the SIGTERM raises, but for simplicity, I left it catching the general exception.

Scoreboard API

The API has a few paths, based on the /api/v1 namespace for requests.

You can access its teams or the entire scoreboard, via GET /api/v1/teams or GET /api/v1/scores. You can see the core Sinatra Application on github.

It was absolutely easy to set up the Sequel migrations to define the team table and the scoreboard table in postgres.

The main tying point was getting the Sequel models to serialize, which was solved in another blog post.

Scoreboard Dashboard

Dashing made it really easy to get started, set up a project, and get out the gate.

First I had to include Sequel to get at my data, and I added an Api model to ease the SQL bridge for me.

The main point here was the scoreboard.rb file which was scheduled to run every 5 seconds, gather data from some crafty queries, and send that data to the dashboard. Other than the HTML markup, this was the chunky part of it:

require 'sequel'


DB = Sequel.connect('postgres://localhost/scoreboard')
scoreboard = DB[:scoreboard]
team = DB[:team]

send_event('games-played', { value: scoreboard.count })

def teams_played
  DB[<<-eos
      select
        sum(value)::int as value,
        label
      from (
        select count(home_id) as value, name as label from team inner join scoreboard on team.id = scoreboard.home_id group by scoreboard.home_id, team.name

        UNION ALL

        select count(visitor_id) as value, name as label from team inner join scoreboard on team.id = scoreboard.visitor_id group by scoreboard.visitor_id, team.name
      ) sub
      group by value, label
      order by value desc
    eos
  ]

end

def team_scoreboard
  DB[<<-eos
      select 
        s.id,
        t.name as home_team, 
        t2.name as visitor_team, 
        home_score, 
        visitor_score 
      from team t 
      inner join scoreboard s on s.home_id = t.id 
      inner join team t2 on t2.id = s.visitor_id
      limit 10;
    eos
  ]
end

SCHEDULER.every '5s' do
  teams = teams_played.map do |item|
    {:label => item[:label], :value => item[:value]}
  end
  send_event('games-played', { value: scoreboard.count })
  send_event('teams', { items: teams })
  puts "Scoreboard: #{team_scoreboard.to_a}"

  send_event('scoreboard', { items: team_scoreboard.to_a })
end

Retrospective

  • What went right
  • What went wrong
  • What could be improved

What went right

  • The CLI came together smoothly. Thor was easy to get running.
  • Getting data to post to the API was a breeze
  • Sinatra and Sequel were easy to hoist up a simple API to take POST data and serve GET requests as JSON
  • Getting data into the dashboard was SUPER easy with Sequel, no need to do the ORM dance
  • Dashing was easy to create my own scoreboard component, using the data- type DOM attributes

What went wrong

  • Had some issues handling SIGTERM in the CLI
  • The CLI still doesn’t validate input
  • Getting JSON serialization off the bat with the Sinatra API was a little difficult
  • Dashing is very ‘opinionated’ and doesn’t give you much room to fit into an existing app
  • No tests were made
  • Nothing is deployed to servers yet

What could be improved

  • Minitest suite for CLI, API, and the Dashboard
  • Dashboard process tasks could be broken out to be more DRY
  • CLI needs to check and validate input
  • API needs to add in rollbar, new relic, or other metrics to help find errors
  • Deploy all the things!

Future plans

The plan is to keep working on this project, continue improving the tooling, and get other best practices in place. Finally, ship it all to DigitalOcean and enjoy the conveniences they provide.