Josh Bavari's Thoughts

Thoughts on technology and philosophy

On the Clock

less than a 1 minute read

I was at the grocery store the other day and heard one worker ask another if they were on the clock. For some reason, it spurred some thoughts of my own.

I was first caught off guard by this seemingly small comment because, in my own head, I was replaying an event from my own day at work. What does being on the clock mean for me? For hourly workers, it’s clear. The thought I was playing with here was the idea that most people are on the clock most of the time.

Does our work span even further into deeper parts of our minds? When you’re not at work, you may think of work, as I was, and you may also have work going on in your subconscious. These aren’t necessarily stressful events; they could even be positive things you enjoy about your work.

There are many books on the procrastination and avoidance of work. I don’t aim to dive into that. I wanted to approach the idea from a stoic angle. Does work ‘work’ us by making us think about it when we’re not doing it? If the mind were not consumed with work, what would we be doing otherwise?

In Meditations, Marcus states:

So you were born to feel “nice”? Instead of doing things and experiencing them? Don’t you see the plants, the birds, the ants and spiders and bees going about their individual tasks, putting the world in order, as best they can? And you’re not willing to do your job as a human being? Why aren’t you running to do what your nature demands?

That has sat deep with me lately, and I couldn’t stop myself from exploring this idea of work being a dread. For some, their work is more thrilling than the warm bed. That is not to say they do not still enjoy the bed; I am sure that is a natural instinct. What causes that override in one’s mind?

It must drive further in, from the purpose center of the brain and the being. When someone can turn their work into a form of play, and others cannot, we must ask what that driving force is. By understanding it, we may understand ourselves, and empower others to help themselves. “What is good for the hive is good for the bee”.

Is this mindset really everything? Certainly it is not constant; it ebbs and flows, as does nature. As Epictetus states, “there is nothing good or evil save in the will”. Is work a negative thing while pleasure is good? The yin to the yang of happiness is sadness, so the yin of work must have its yang in relaxation.

In today’s hyper-speed world of knowledge work, do we ever get off the clock? Would it even be possible to be off the clock, if one had some way to measure it?

Or rather, is the clock not binary, but again an ebb and flow of time throughout one’s life? In modern American society, most people work 40-60 hour weeks. With work taking up half of one’s waking life, it aligns with one’s own self-interest to focus on that balance as best as possible.

As I move forward in my working career, I aim to see it as my mission, my purpose – for what else would I be doing? I’m fortunate to work on challenging topics and evolving problems, surrounded by others on their own like missions. I aim to take the mindset of being fortunate to do these things and to experience them.

Embrace the clock. Find the flow.

The Victim or the Victor

less than a 1 minute read

When I was a young boy, my grandma would tell me stories of my grandfather going to war. Like most men during World War II, he did not want to leave his new bride and head to where death was ever looming. One thing I remember her saying to me as a life lesson is that you’re given a choice in life that almost always boils down to your attitude. She said, “just like your grandpa, you can choose to be a victim, or go get the courage to become a victor. The choice is always yours.”

What does this mean? It means that in any given situation, your belief about the situation will in turn guide your thoughts and actions. This belief is so strong that it shapes the way you think about and address the situation whether or not you are conscious of it. The obvious choice would be to see yourself as being given a challenge that you must rise to and become a victor over.

The opposite is also true – you could be given a challenge and say, ‘why me?’ From this victim mindset, you have now set yourself up for failure from the start. You’ve let the challenge become a burden.

That thought has always been on my mind throughout the years, but I have to admit I haven’t always been as courageous as I could have been. You always assume that when the time comes, you’ll be ready. The fact is, it hits you in the face while you’re still planning for it. “Everyone has a plan until they get punched in the mouth.” The point is, a mindset of preparation is the key to success. My study of stoicism has lately led me to living as if it has already happened, being unfazed by things we should expect to happen, and turning my ‘have to’s into ‘get to’s.

How do you break this cycle? You start by examining your internal thoughts. Once you are conscious of your thoughts, you can analyze how to course-correct. Until then, you might be in the default victim mindset and not even know it.

3 Years of Elixir: Reflections

less than a 1 minute read

Back in 2015, I had just started at CANVAS Technology and my task was clear: create a web application that could service many concurrent operations from users, robots, and other integration services. Prior to this new venture, I had spent the previous few years doing Ruby on Rails, Node.js, and mobile applications (Cordova, minimal Objective-C, Java/Android). Only a few months before joining CANVAS had I started playing with Elixir and Phoenix. I was excited and relieved to find something geared exactly for what we were embarking on.

What I want to outline in this post are the lessons I’ve learned using Elixir these last 3+ years, in the hope of helping others learn quickly.

Upgrade sooner than later

It definitely hasn’t been easy to upgrade: Elixir 1.3 –> 1.6, Ecto 1.0 –> 2.0, and Phoenix 0.9 –> 1.3 each brought their share of pain. This is mitigated by staying abreast of Elixir and Phoenix changes and trying to adopt them early.

GenServers are your friend – but use them only if you must

Abstract the API away from the Server. Dave Thomas has a good post on splitting the API, the server, and the implementation into separate modules in Elixir, as sketched below.
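Here’s a minimal sketch of that split, using a hypothetical Counter module – the names and logic are illustrative, not from any real codebase:

defmodule Counter do
  # Public API – callers never see GenServer details.
  def start_link(initial \\ 0) do
    GenServer.start_link(Counter.Server, initial, name: Counter.Server)
  end

  def increment, do: GenServer.cast(Counter.Server, :increment)
  def value, do: GenServer.call(Counter.Server, :value)
end

defmodule Counter.Server do
  use GenServer

  # The server is a thin shell; the logic lives in Counter.Impl.
  def init(initial), do: {:ok, initial}
  def handle_cast(:increment, count), do: {:noreply, Counter.Impl.increment(count)}
  def handle_call(:value, _from, count), do: {:reply, count, count}
end

defmodule Counter.Impl do
  # Pure functions – testable without starting a process.
  def increment(count), do: count + 1
end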

Testing pains with GenServers and Ecto’s concurrency model

Make sure to restart GenServers and supervisors between tests. For long-running GenServers that aren’t started in every setup fixture, you may need a longer connection ownership timeout, and you’ll need to explicitly allow those processes to use the test’s connection, as in the sketch below.
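A sketch of that, assuming Ecto’s SQL sandbox – MyApp.Repo and MyApp.Worker are hypothetical names:

# In a test's setup block: check out a connection, then explicitly
# allow the long-running GenServer's pid to use it.
setup do
  :ok = Ecto.Adapters.SQL.Sandbox.checkout(MyApp.Repo)
  worker = Process.whereis(MyApp.Worker)
  Ecto.Adapters.SQL.Sandbox.allow(MyApp.Repo, self(), worker)
  :ok
end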

Using docker for team / testing scenarios

We use docker-compose to stand up the stack and run tests with different environment variables. Preload any databases by putting their dumps into the Postgres container’s /tmp directory.

Testing browsers with Hound / ChromeDriver

Use Hound with ChromeDriver for browser-level tests; a sketch follows.
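A minimal sketch, assuming chromedriver is already running locally – the URL and title are placeholders:

# config/test.exs
config :hound, driver: "chrome_driver"

# A trivial browser test.
defmodule MyAppWeb.SmokeTest do
  use ExUnit.Case
  use Hound.Helpers

  hound_session()

  test "home page loads" do
    navigate_to("http://localhost:4001/")
    assert page_title() =~ "MyApp"
  end
end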

Do not code everything to the Repo itself

It’s not easy to cut off your database addiction. Introducing an intermediate context API that can cache results is a good first step; see the sketch below.
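A sketch of what that intermediate layer can look like – the Inventory context and Item schema are illustrative:

defmodule MyApp.Inventory do
  alias MyApp.{Repo, Item}

  # Callers go through the context, never straight to the Repo,
  # so a cache can later be slotted in behind this function.
  def get_item!(id) do
    Repo.get!(Item, id)
  end
end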

PubSub is your friend, use structs to pass messages

When publishing with cast/gproc, pass structs; don’t use bare tuples. Resist the simple solution and pass well-defined structs, as in the sketch below.
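For example – a sketch shown here with Phoenix.PubSub; the event struct is made up:

defmodule MyApp.Events.RobotMoved do
  # A defined message struct gives subscribers a contract to
  # pattern-match on, instead of a fragile tuple shape.
  defstruct [:robot_id, :x, :y]
end

event = %MyApp.Events.RobotMoved{robot_id: 7, x: 1.5, y: 3.25}
Phoenix.PubSub.broadcast(MyApp.PubSub, "robots", event)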

Learn ETS

Don’t reach for an external cache when the Erlang VM has one built in.
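A minimal ETS sketch – a named, public table used as a cache:

# Any process can read or write a named, public table.
:ets.new(:my_cache, [:set, :public, :named_table])
:ets.insert(:my_cache, {:answer, 42})
[{:answer, value}] = :ets.lookup(:my_cache, :answer)
# value => 42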

Use behaviours

Take a look at how the crowdfundr app does it. Code to interfaces, not implementations; use the impl approach, sketched below.
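A sketch of the impl approach – the Mailer behaviour and modules here are illustrative:

defmodule MyApp.Mailer do
  @callback deliver(String.t(), String.t()) :: :ok | {:error, term()}

  # The concrete module is chosen via config, so tests can swap in a mock.
  def deliver(to, body), do: impl().deliver(to, body)
  defp impl, do: Application.get_env(:my_app, :mailer_impl, MyApp.Mailer.SMTP)
end

defmodule MyApp.Mailer.SMTP do
  @behaviour MyApp.Mailer

  @impl true
  def deliver(_to, _body), do: :ok  # real delivery would happen here
end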

Nginx as a front-end for SSL termination

Security-wise, leave Nginx to handle SSL termination and the vulnerabilities that come with it, and let your app focus on the implementation.

Releases with Distillery

Ship those tarballs and let them fly – it’s easier and safer than shipping your source. The replacing of ENV vars probably deserves its own post: use a Config module that reads system environment variables at runtime instead of letting values get baked into your sys.config file; a sketch follows.
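A sketch of such a Config module – the variable name is illustrative:

defmodule MyApp.Config do
  # Reading the variable at call time means the release picks up
  # whatever is set on the host, not what was set at build time.
  def database_url do
    System.get_env("DATABASE_URL") || raise "DATABASE_URL is not set"
  end
end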

Clustering – using epmd / GenServers for node communication message passing

Look at swarm and libcluster, and know that clustering comes out of the box with Erlang/Elixir – epmd handles node discovery, and GenServers handle inter-node message passing.
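As a sketch, a libcluster gossip topology that lets nodes on the same network find each other automatically looks roughly like this (assuming libcluster is a dependency):

# config/config.exs
config :libcluster,
  topologies: [
    local_gossip: [strategy: Cluster.Strategy.Gossip]
  ]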

Puppet: A Testing Handbook

less than a 1 minute read

  • Explain why this post – the job required it, growing pains, and the need for stability and health of the repo
  • Why to test
  • What to test
  • How to test:
      • Lint – a Rake task
      • Parser – a Rake task with globbing to make runs easier
      • Rspec – spec-style tests to run on Puppet code (mainly syntax)
      • Beaker – run tests on a provisioned VM to ensure correctness

Cover:

  • puppet-lint
  • puppet parser validate <manifest.pp>
  • puppet-rspec – a gem that runs RSpec unit tests on a module (based on the compiled catalog)
  • Beaker


Part of my role at work is managing a fleet of robots, servers, and other infrastructure responsible for running our business. Due to the nature of our business, we run within customer warehouses.

Being a one-man operation, one of my main concerns is keeping all the systems stable and in proper working condition. I am not the only developer touching our Puppet code, but I’m the one responsible for the system(s).

As I embarked on the journey of adding reliability to our infrastructure, one thing became very clear: there’s a lot of information about testing Puppet, but most of it is fractured, out of date, or hard to understand. See this slideshow from 2016 about the state of testing Puppet.

I’m writing this post up to act as a handbook of sorts for testing Puppet, as well as a ‘repository’ of Puppet testing resources.

Testing resources

Test Coverage Reports in Elixir

less than a 1 minute read

Lately I’ve been learning a ton more about Elixir and really working towards refactoring and hardening the system.

On my current project, I’ve got about 200 tests that exercise various parts of the system. Lately, though, I’ve been trying to analyze which parts of the system aren’t being covered, and of course, there are tools to help with that.

The two I looked at were Coveralls and Coverex. I’m going to be using coverex in this post.

Getting started is a breeze – check the README for that. I’ll cover it briefly here by modifying our mix.exs file:

  # in `def project`, we add test_coverage
  test_coverage: [
    tool: Coverex.Task
  ],

  # in deps, add the dependency for only the test environment
  {:coverex, "~> 1.4.10", only: :test},

After setup, running mix test --cover generates some reports in your project’s ./cover folder – functions.html and modules.html. These give you your standard coverage reports with lines covered / ratio covered.

For my project, I had quite a few files generated using exprotobuf. The coverage report was getting butchered because many of these files weren’t exercised by my tests.

According to the docs, we can add an ignore_modules keyword to the test_coverage keyword list, and the coverage reports will ignore those modules.

However, for my generated list of modules, I had quite the growing list to ignore and it quickly became unwieldy to put that list of modules in my mix.exs file.

Since we can’t access other modules from our mix file, I went with a quick solution: I created a .coverignore file in the project directory and lumped into it all the modules I wanted to ignore (taken from the generated modules.html file).

I ensured all the modules I wanted to ignore were all newline delimited (\n).

From there, I modified my mix.exs file as such:

  # Near the top
  @ignore_modules File.read!("./.coverignore") |> String.split("\n") |> Enum.map(&(String.to_atom(&1)))

  # in def project
  test_coverage: [
    tool: Coverex.Task,
    ignore_modules: @ignore_modules
  ],

Boom, that does it! Now we’ve got a manageable list of modules to ignore in a separate file so we can keep our mix file clean.

All in all, coverex is a great module, and I would suggest using it if you do not want to ship data to coveralls.

Hope this helps, happy coding. Cheers!

Multicast Service Discovery in Electron

less than a 1 minute read

I’ve been playing around with mDNS lately for broadcasting some services for applications to auto-connect with.

The first experiment I had was setting up a server that broadcasts a TCP endpoint for an Electron application to discover and connect to for the application data.

This was so easily done that I challenged myself to see how fast I could whip out a blog post.

First, get an Ubuntu server up (I used a Vagrant VM).

Run the command:

sudo apt-get install avahi-utils

From here, the avahi (mDNS) service should be auto-started. Edit the configuration to enable broadcasting:

vim /etc/avahi/avahi-daemon.conf – here’s a config that’s minimally broadcasting only the IPv4 address:

[server]
host-name=webserver
domain-name=local
use-ipv4=yes
use-ipv6=no
allow-interfaces=eth1
deny-interfaces=eth0
ratelimit-interval-usec=1000000
ratelimit-burst=1000

[wide-area]
enable-wide-area=yes

[publish]
publish-addresses=yes
publish-hinfo=yes
publish-workstation=no
publish-domain=yes

Now, create a service configuration – vim /etc/avahi/services/mywebserver.service – with these contents:

<service-group>
  <name>Webserver</name>
  <service>
    <type>_http._tcp</type>
    <port>80</port>
  </service>
</service-group>

Simple as that. Just restart the avahi-daemon – sudo service avahi-daemon restart.

This should now have your server broadcasting that it has a webserver running at port 80, named Webserver.

To check that the service is broadcasting, run avahi-browse _http._tcp -tr – this should show your server as servername.local, named Webserver, pointing to its IP and port.

Example:

+   eth1 IPv4 webserver                              Web Site             local
=   eth1 IPv4 webserver                              Web Site             local
   hostname = [webserver.local]
   address = [192.168.0.101]
   port = [80]
   txt = []

Now for the electron portion, in your application, install the node mdns module: npm install --save mdns.

This will add the node module to your project, but since it has native compilation steps, you must build it with electron-rebuild. Install it: npm install --save-dev electron-rebuild.

Then run ./node_modules/.bin/electron-rebuild – this will rebuild the mdns module correctly for your Electron’s node version.

To do the DNS lookups, simply follow the steps in the node mdns README. Set the discovery type to http and it will find your service. From there, you can grab the address and fetch the data from the web server (or HTML page redirect) as you wish!

Happy coding!

Using Erlang Observer on a Remote Elixir Server

less than a 1 minute read

I’ve been using Elixir a ton at work and in some fun side projects and I’m absolutely in love with it.

One tool I especially love is the Erlang Observer, which shows you the IO, memory, and CPU usage of your app and the Erlang VM.

Once I got some apps deployed, I wanted to observe them remotely. I found a few Google forum posts and the IEx docs, but I wanted to wrap up this knowledge for when I need it in the future.

I’m going to monitor a Phoenix app in this quick blog post.

First, fire up your Phoenix server on, say, a VPS, giving it a node name:

iex --name server@64.16.134.61 --cookie jbavari -S mix phoenix.server

Then on your remote viewing machine, say your Mac, run the following:

iex --name josh@192.168.1.1 --cookie jbavari

Now we’re set up to do some remote observations!
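If you’d like to sanity-check the connection from the local IEx shell first (optional), Node.connect/1 and Node.list/0 will confirm the nodes can see each other:

# From the local IEx session:
Node.connect(:"server@64.16.134.61")  # returns true when the cookie matches
Node.list()                           # should now include the server node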

Fire up :observer.start on your local machine, which should open up the Erlang observer.

Now from the menu, select ‘Nodes’; you should see your node there. If not, click ‘Connect to node’, type in your server@64.16.134.61 node address, and you should be able to view your node via the observer!

Enjoy!

Custom JSON Encoding in Phoenix

less than a 1 minute read

I’ve recently been working a lot with Leaflet.js to do some mapping.

In some of my models, I use the lovely Geo package for Elixir for points and geospatial data. I needed to add support for Poison to encode my models.

I’ve been serving GeoJSON from my models, and I needed a way to make the JSON encoding much easier. I’m sending some data out to a ZeroMQ socket, so I need to encode it by transforming my Geo struct into something I can encode as GeoJSON.

I modified my model in two ways. The first was adding the @derive directive to tell Poison to encode only certain fields.

The second was having Geo.JSON.encode called on every encode without me having to do it manually. You can see that in the defimpl below.

defmodule MyApp.Point do
  use MyApp.Web, :model

  # Option 1 - specify exactly which fields to encode
  @derive {Poison.Encoder, only: [:id, :name, :geo_json]}
  schema "points" do
    field :name, :string
    field :position, Geo.Point
    field :geo_json, :string, virtual: true

    timestamps
  end

  def encode_model(point) do
    %MyApp.Point{point | geo_json: Geo.JSON.encode(point.position) }
  end

  defimpl Poison.Encoder, for: MyApp.Point do
    def encode(point, options) do
      point = MyApp.Point.encode_model(point)
      Poison.Encoder.Map.encode(Map.take(point, [:id, :name, :geo_json]), options)
    end
  end
end
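With the defimpl in place, a plain Poison.encode! call picks up the custom encoder – a rough sketch, with made-up field values:

point = %MyApp.Point{id: 1, name: "HQ", position: %Geo.Point{coordinates: {-105.27, 40.01}}}
Poison.encode!(point)
# => a JSON object containing only id, name, and the encoded geo_json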

Cheers.

Adding Additional Static Paths in Phoenix

less than a 1 minute read

Phoenix is awesome.

A problem I ran into lately is how to add additional static paths to be served.

If you take a look in your lib/endpoint.ex file, you’ll see the plug used for adding static paths:

plug Plug.Static,
  at: "/", from: :electronify, gzip: false,
  only: ~w(css fonts images js favicon.ico robots.txt)

I wanted to add another folder to be served, ‘zips’, so I edited the only: line in the plug specification as such:

plug Plug.Static,
  at: "/", from: :electronify, gzip: false,
  only: ~w(css fonts images js favicon.ico robots.txt zips)

There you have it – now I can access the files in priv/static/zips through the URL. Cheers!

Shipping Data With Protocol Buffers in Elixir

less than a 1 minute read

Lately, I’ve needed to ship data across various nodes for a problem I was working on. There were a few ways to get that data shipped; the usual suspects are JSON, XML, and Google’s Protocol Buffers.

For this specific problem, we needed the data shared from C++ nodes to Elixir/Erlang.

Since the team was using Protocol buffers already, I decided to give them a run in Elixir using exprotobuf.

Note: the client for this experiment is on github.

The idea

The idea here is that we’ll capture pieces of data on one node and ship them to the server for processing. We define the data structure in a .proto file, turn our data into binary form by encoding it, and finally ship it to its destination. We could do the same thing with JSON, but we want the data as light as possible.

We’ll use ZeroMQ – via the Elixir package exzmq – to ship the data, and exprotobuf to handle the protocol buffer encoding.

The process

First we define our protocol buffer format for an image message we want to send, with its data, width, height, and bits per pixel:

message ImageMsg {
  optional bytes data = 1;
  optional int32 width = 2;
  optional int32 height = 3;
  optional int32 bpp = 4;
}

We set up our application to use exprotobuf in our mix.exs file:

def application do
  [applications: [:logger, :exzmq, :exprotobuf],
   mod: {Zmq2, []}]
end

as well as including it as a dependency:

defp deps do
  [
    {:exzmq, git: "https://github.com/zeromq/exzmq"},
    {:exprotobuf, "1.0.0-rc1"}
  ]
end

Finally we create an Elixir struct from this proto file as such:

defmodule Zmq2.Protobuf do
  use Protobuf, from: Path.wildcard(Path.expand("./proto/imagemsg.proto", __DIR__))
end

Now that we have our protobuf file read in, let’s get an image’s binary data, create an Elixir struct from our protobuf definition, and send that data over a ZeroMQ socket (using exzmq):

def check_file(file_path, socket) do
  IO.puts "Sending image from file path: #{Path.expand(file_path, __DIR__)}"

  case File.read(Path.expand(file_path)) do
    {:error, :enoent} ->
      IO.puts "No file at the path: #{file_path}"
    {:ok, img_data} ->
      send_image_data(socket, img_data)
  end
end

def send_image_data(socket, img_data) do
  img_message = Zmq2.Protobuf.ImageMsg.new(data: img_data)
  encoded_data = Zmq2.Protobuf.ImageMsg.encode(img_message)

  IO.puts "The encoded data: #{inspect encoded_data}"

  Exzmq.send(socket, [encoded_data])

  IO.puts "Sent request - awaiting reply\n\n"

  case Exzmq.recv(socket) do
    {:ok, r} -> IO.puts("Received reply #{inspect r}")
    _ -> {:error, "No Reply"}
  end

end

And there we have it – a message serialized with protocol buffers. We can now apply the same strategy to any protocol buffer messages we define and ship them over any protocol we’d like.
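On the receiving end, decoding is the mirror image – a sketch, assuming the same exprotobuf-generated module:

# exprotobuf generates decode/1 alongside encode/1 on the message module.
def handle_message(encoded_data) do
  msg = Zmq2.Protobuf.ImageMsg.decode(encoded_data)
  IO.puts "Received image: #{msg.width}x#{msg.height} at #{msg.bpp} bpp"
end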

Some inspiration

Along the R&D process, I came across David Beck’s blog. David has an experiment where he sends several million messages over TCP, exploring some ultra-efficient methods of sending messages – it’s a great read. He also covers ZeroMQ and protocol buffers in posts that go more in depth on protocol buffers and some lessons learned.

Alas, we move on!

Cheers