Josh Bavari's Thoughts

Thoughts on technology and philosophy

2014 in Review

less than a 1 minute read

2014 has been an interesting year and I’d like to spend a minute to review it for myself as a reminder.

January started out with me working at my startup, RaiseMore. I wanted to make 2014 the year I shared the knowledge I had been gathering from our projects there. I made it my purpose for the year to help others as much as I can, as I truly believe we are all in this together. “Iron sharpens iron”.

I had been using Cordova and set some goals for the year to get more active and contribute to the project. It’s really easy – hit this link for more information about how to contribute. I started by grabbing some Jira tasks to improve the Cordova plugin registry. At the time, I thought the registry needed a facelift to help out the community.

As a startup in OKC, we had been using tech that at the time wasn’t popular in OKC. As a team, we were all focused heavily on a platform made up of an iOS/Android app, an API server, a database, and a few other back-end services. The technologies we used included Ruby, Rails, Sinatra, Postgres, Cordova, JavaScript, and some Grunt/Gulp build systems.

The biggest challenge we had as a small team of 4 devs was how to manage the systems. Since they were broken up into multiple projects, each of us had to own one portion deeply while keeping general knowledge of the other parts. Reflecting on this now – it worked really well for our team.

By March, I had spoken a few times at the Ruby group and a few times at the JavaScript group, and after some convincing and encouragement from a great friend, Rob Sullivan, I worked up the courage to submit some talks to the Kansas City Developer Conference in May.

I saw a post by the Apache Foundation proposing a tweet-off for a free ticket to ApacheCon 2014 in Denver. This would let me meet some of the great devs I had been collaborating and talking with through the Cordova IRC channel, mailing list, and Google Hangouts. I won the ticket, and with some help from friends, made it to Denver and met all the Cordova devs. Just like Rob always tells me – if you don’t ask, it will always be a ‘no’. Glad I was proactive and tweeted for the ticket!

May hit and I found myself in front of 100+ devs who had come to see my talk at Kansas City Dev Conf – I have to admit I was very nervous. After my talk, I got a ton of great questions, feedback, and general appreciation for sharing what I knew. A few hours later I gave a second talk, Moving Forward with Cordova Plugins, which covered how to understand and create plugins for Cordova projects, including pushing them to the registry.

After my second talk I met a now good friend, Ross Martin, and we still talk and collaborate on an awesome Ionic app he is making. Two big things in 2014 – sharing freely and talking through Twitter. It’s gold, folks.

Come July, I decided it was time for me to face my biggest fear yet – moving out of Oklahoma and living alone. I had begun interviewing and networking with others around the country involved in tech. I highly recommend this – I made some great connections, people to talk to, to help, to bounce ideas off of, and to generally respect. I decided to move to Boulder, Colorado, as I had fallen in love with the mountains.

Come October, I was selected to speak at Thunder Plains, which was a great reason to head back home to Oklahoma, present, and catch up with all the great technologists there. The place is packed full of amazing people working together as Techlahoma – Rob Sullivan, Jesse and Amanda Harlin, Vance Lucas, Jeremy Green, Jeff French, and way too many more to mention!

I got a job at Mondo Robot, where I worked for a few months with them on a handful of interesting projects from August until November. Through my interaction with the Cordova community, I came to find a job working for Drifty, which you may know by the awesome Ionic Framework.

I can honestly say working for Drifty has been amazing. All day long I get to work on something I really believe in, find meaning in, and most importantly, aligns with my goals of helping others. All day long I get to work on a hobby with others who are just as excited and driven to win as I am. I couldn’t ask for a better place to end up.

The year I turned 30, 2014, has come to an end. Looking back, I can say I’m happy with my progress, and I’m striving to keep up the habits that help me serve others to the best of my ability and keep giving back.

Here’s to an awesome 2015 for us all – let’s make it awesome.

A Field Guide to Snap.svg

less than a 1 minute read

This last weekend I spent a little time on a fun little side project to learn how to use Snap.svg. I was trying to take my friend Rob’s datachomp character and make it a little interactive.

After trying to do what I thought was a few simple little hacks with his PNG image, it turned out to be a great way to fully learn and understand SVG and the Snap.svg library.

I have to admit I did not fully understand what SVG was and what it was composed of. I wanted to compile a list of thoughts, links, blogs, and tutorials that helped me learn along the way.

What SVG is and what it isn’t

First of all, I had to learn that there are two image types – ones that scale (vector), and ones that are defined with strict sizes (bitmaps). For the longest time, I admit I thought they were basically the same.

The main vector format is SVG, while bitmap formats include JPEG, PNG, and GIF, to name a few.

You’d want to use an SVG when you need an image that can scale without looking distorted. You’d want to use a bitmap type when the size can remain the same.

One thing to note is that SVGs can contain bitmap images as well, as in this example:

<html>
  <body>
    <svg id="svg-node">
      <circle id="svg-element" />
      <image id="datachomp-arm" sketch:type="MSBitmapLayer" x="0" y="0" width="269" height="209" xlink:href="img/datachomp/arm.png"></image>
    </svg>
  </body>
</html>

Svg editors vs bitmap editors

My understanding is that most bitmap editors can’t do SVG. GIMP, Photoshop, and other editors like these are bitmap editors. Although they can create paths and export them, for the most part they cannot do SVG-type modifications.

Some SVG editors are Illustrator, Inkscape, and Fireworks, to name a few.

Most vector editors can import bitmap images and use them as an SVG element. My understanding is they can’t really modify them other than stretching/skewing them. However, I could be (and probably am) wrong about this. (I don’t pretend to be an expert at this!)

Svg understanding

To start, Mozilla Developer Network had a great set of documents to help understand SVG: what it is, what elements it’s composed of, and how to define shapes, paths, and transforms.

MDN SVG tutorial

From the article: Scalable Vector Graphics (SVG) is an XML markup language for describing two-dimensional vector graphics. SVG is essentially to graphics what XHTML is to text.

That being said, you’d be interested to know that a root svg element contains other elements inside it. Here’s a list of the available elements.

Using Snap.svg to make svg elements look alive

Modifying svg element attributes

You can access and modify any attribute on any element from Snap.svg. Examples could be the stroke, the width of the stroke, the x/y coordinates of the element, and many other attributes.

First, select the element (using Snap), then do a call to elem.attr({}):

Html:

<html>
  <body>
    <svg id="svg-node">
      <circle id="svg-element" />
    </svg>
  </body>
</html>

JavaScript:

var svgNode = Snap.select('#svg-node'),
    svgElement = svgNode.select('#svg-element');

svgElement.attr({
    fill: "#bada55",
    stroke: "#000",
    strokeWidth: 5,
    x: 50,
    y: 100
});

Transforms

Snap.svg defines some methods to help you transform your svg elements. It looks like this:

var datachomp = Snap.select("#datachomp"),
    arm = datachomp.select("#datachomp-arm");
var elementTransform = "t0,-80r360t-30,0r360t-30,30t-10,10";
arm.animate({transform: elementTransform}, 500, mina.elastic);

However, I was having some trouble understanding the transform string syntax. The author also created Raphael.js and provides some additional documentation on how to understand transform strings here.

Taken from the Raphael reference:

“ Each letter is a command. There are four commands: t is for translate, r is for rotate, s is for scale and m is for matrix.

There are also alternative ‘absolute’ translation, rotation and scale: T, R and S. They will not take previous transformation into account. For example, …T100,0 will always move element 100 px horisontally, while …t100,0 could move it vertically if there is r90 before. Just compare results of r90t100,0 and r90T100,0.

So, the example line above could be read like ‘translate by 100, 100; rotate 30° around 100, 100; scale twice around 100, 100; rotate 45° around centre; scale 1.5 times relative to centre’. As you can see rotate and scale commands have origin coordinates as optional parameters, the default is the centre point of the element. Matrix accepts six parameters. “
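
To see the difference between the relative and absolute commands, here is a small sketch of my own, assuming an existing #svg-node element like the one earlier:

// Two identical squares – only the transform string differs
var paper = Snap("#svg-node");
var a = paper.rect(10, 10, 40, 40).attr({ fill: "#bada55" });
var b = paper.rect(10, 10, 40, 40).attr({ fill: "#c0ffee" });

// Relative: the translate happens in the already-rotated coordinate system,
// so this square ends up moving vertically
a.transform("r90t100,0");

// Absolute: ignores the r90 before it, so this square always moves 100px horizontally
b.transform("r90T100,0");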

Paths

Again, I admit I knew very little about how to define a path. This document helped tremendously in understanding the different types of paths and how to define them.
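
As a tiny illustration of my own (not from the linked document), here is a triangle path built from M (move to), L (line to), and Z (close path) commands:

// A hypothetical path: move to (10,10), draw lines to (90,10) and (50,80), then close the shape
var paper = Snap("#svg-node");
var tracePath = paper.path("M10,10 L90,10 L50,80 Z").attr({
    fill: "none",
    stroke: "#000",
    strokeWidth: 2
});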

One task I wanted to do was make an svg element follow along a path. This CodePen helped tremendously with figuring out how to make an element follow along with a path.

Out of this Google Group thread, a code snippet comes up that helps:

//Snap.svg helper method to make an element trace a defined path

function animateAlongPath( path, element, start, dur ) {
    var len = Snap.path.getTotalLength( path );
    Snap.animate( start, len, function( value ) {
            var movePoint = Snap.path.getPointAtLength( path, value );
            element.attr({ x: movePoint.x, y: movePoint.y });
    }, dur);
};
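
A hypothetical call, using the tracePath from the snippet above and the arm element from earlier, might look like this:

// Move the arm along the path over 2 seconds, starting from the beginning of the path
var arm = Snap.select("#datachomp-arm");
animateAlongPath(tracePath, arm, 0, 2000);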

I found a blog post with a demo showing some additional paths and the tools used to create them, found here.

I found another little hack for creating paths using GIMP. First, create your path with the path tool. When you’re done, select the path you created from the toolbar (under the ‘Paths’ tab), right-click it, and select Export Path. That should give you an SVG file with the path inside it.

Svg vs Canvas

A question came up: when would you want to use SVG over something like canvas?

In this article, the author makes a good case for when you’d want to use each:

SVG Relies on Files, Canvas Uses Pure Scripting

SVG images are defined in XML. As a result, every SVG element is appended to the Document Object Model (DOM) and can be manipulated using a combination of JavaScript and CSS. Moreover, you can attach an event handler to an SVG element or update its properties based on another document event. Canvas, on the other hand, is a simple graphics API. It draws pixels (extremely well I might add) and nothing more. Hence, there's no way to alter existing drawings or react to events. If you want to update the Canvas image, you have to redraw it.

I’ll continue updating this post as I learn more. I hope this helps others learn these svg topics with ease.

Exploring Best Practices With Docker for Older Libraries

less than a 1 minute read

I’m not pretending to be an expert on what’s in this post – it’s merely a talking point to learn from.

Problem: I need to reassemble an old C++ project with some old libraries and files that may not be around (or have disappeared already).

First, there’s a big chunk of files used strictly for rendering a video, roughly 560 MB, some of which have since gone missing.

Then there are some old C++ libraries that a previous shell script fetched with wget, and those files are nowhere to be found.

Finally, there’s the need to rebuild the image used to render the files.

There are so many ways to attack this problem; I’m just going to cover my approaches. I’m open to new ones as well.

Potential solutions for rendering files

  • store on AWS S3
  • put into git repo
  • store on server somewhere

Let’s break down the pros/cons of these.

Store on AWS S3

PROS:

  • quick to add
  • cheap to store

CONS:

  • can go missing (and did)

Put into git repo

PROS:

  • versioning control with notes (none before)
  • the files give a story in time
  • cheap or free

CONS:

  • slow to pull repo (duh)
  • storing binary files (derp)

Store on server somewhere

PROS:

  • cheap to store
  • fast to access (local network)

CONS:

  • can go missing (and did)
  • no story to the files

Potential solutions for server image

  • single shell script to run for setting up image
  • dockerfile to build up the image with RUN commands
  • dockerfile to execute the single shell script

Some of the libraries this project depends on are no longer at the locations the previous setup shell script pulled them from. That means I have to do some kind of dependency management – whether that’s forking the libraries into a git repo I know will be solid, copying the files somewhere I can trust, or more simply committing them to my own repo (560 MB or more.. ugh).

This is my thought process – not sure if it’s right:

If your aim is to have something fully repeatable and easy to run again, go with the docker solution.

If your aim is to just get it done quickly, go with the shell script.

However, I still can’t quite decipher the pros/cons of a Dockerfile that just runs a single shell script.

Let’s dive deeper into the pros and cons of each.

Single shell script

Steps:

  • Create instance from Amazon AMI
  • create / test shell script
  • copy shell script to server
  • run shell script on server

PROS:

  • quick to run (once completed, overall time)
  • quick to tell you of errors
  • works on my machine

CONS:

  • not easily repeatable
  • may not work in another environment (things are assumed)
  • not always easy to debug
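
As a rough sketch (the package names, URLs, and paths here are placeholders), the shell-script approach is just the commands you would otherwise type by hand, captured in one file:

#!/bin/bash
set -e

# Install the build toolchain
sudo apt-get update && sudo apt-get install -y build-essential cmake

# Fetch the old libraries – the fragile part, since these URLs can (and did) disappear
wget http://example.com/old-cpp-lib.tar.gz -O /tmp/old-cpp-lib.tar.gz
tar -xzf /tmp/old-cpp-lib.tar.gz -C /opt

# Build the project
cd /opt/project && make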

Dockerfile with RUN commands

Steps:

  • install docker (if not already)
  • create Dockerfile with RUN commands
  • ADD dependencies to the docker container
  • docker build image
  • docker run image
  • bundle image to Amazon AMI
  • start instance
  • profit

PROS:

  • control the starting point environment
  • commands verified to work step by step
  • easily repeatable
  • quick to tell you of errors
  • fast after first run (cache)

CONS:

  • slow start up with downloads/updates/git clones/etc
  • costly for disk space
  • must install docker / boot2docker / etc
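
To make the RUN-command approach concrete, here is a minimal Dockerfile sketch – the base image, paths, and library locations are made up for illustration:

# Start from a known, fixed environment
FROM ubuntu:14.04

# Install the build toolchain (cached after the first build)
RUN apt-get update && apt-get install -y build-essential cmake

# ADD the vendored libraries and render assets we can no longer fetch reliably
ADD vendor/old-cpp-libs /opt/old-cpp-libs
ADD render-assets /opt/render-assets

# Build the project
ADD src /opt/project/src
RUN cd /opt/project/src && make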

Dockerfile to execute single shell script

Steps:

  • install docker (if not already)
  • create image from dockerfile
  • run image
  • create / test shell script in image
  • modify dockerfile – ADD shell script created in previous step

PROS:

  • quick to test out your commands

CONS:

  • harder to see the diffs between images when modifying the shell script

Managing Environment Variables for Your Ionic Application

less than a 1 minute read

I’ve been lucky enough to be developing with the Ionic framework lately. One issue I keep running into is: how do I manage environment variables (base API URL, debug flag, upload URL, etc.) across my code, in both tests and the application?

I’d like to share a little solution I’ve come up with. It may not be the BEST solution to take, but it has been working great for me.

The idea

I’d like to have some files that I can preprocess – say ‘AppSettings.js’ that will expose some variables for the rest of my application to use. This could contain those pesky variables that I will need to change frequently.

I put my preprocess file templates in a root folder named templates. That file contains my preprocess variables. Once it’s been preprocessed, the output is written to www/js/appsettings.js.

That preprocessed file will be used in both my index.html and my karma.conf.js for testing.

I use gulp a lot; however, you can use Grunt or just plain Node.js as well.

My AppSettings.js file:

AppSettings = {
  // @if NODE_ENV == 'DEVELOPMENT'
  baseApiUrl: 'http://localhost:4400/',
  debug: true
  // @endif
  // @if NODE_ENV == 'TEST'
  baseApiUrl: 'https://test.api-example.com/'
  // @endif
  // @if NODE_ENV == 'PRODUCTION'
  baseApiUrl: 'https://api-example.com/'
  // @endif
}

In my preprocess template, you can see some @if NODE_ENV == '...' statements beginning with // – the lines between @if and @endif are only kept in the output when that condition is true.
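
Elsewhere in the app, anything can read those settings directly. For example, a hypothetical Angular service might use them like this:

// AppSettings is a plain global defined by the generated www/js/appsettings.js
angular.module('services').factory('Api', function($http) {
  return {
    getUsers: function() {
      return $http.get(AppSettings.baseApiUrl + 'users');
    }
  };
});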

Gulp Preprocess Task

I like gulp-preprocess. Install it with npm install --save-dev gulp-preprocess.

My gulpfile contains 3 tasks – dev, test_env, and prod – looking like this:

var gulp = require('gulp');
var preprocess = require('gulp-preprocess');
gulp.task('dev', function() {
  gulp.src('./template/appsettings.js')
    .pipe(preprocess({context: { NODE_ENV: 'DEVELOPMENT', DEBUG: true}}))
    .pipe(gulp.dest('./www/js/'));
});

gulp.task('test_env', function() {
  gulp.src('./template/appsettings.js')
    .pipe(preprocess({context: { NODE_ENV: 'TEST', DEBUG: true}}))
    .pipe(gulp.dest('./www/js/'));
});

gulp.task('prod', function() {
  gulp.src('./template/appsettings.js')
    .pipe(preprocess({context: { NODE_ENV: 'PRODUCTION'}}))
    .pipe(gulp.dest('./www/js/'));
});

Invocation

Now I just have to fire off gulp dev for my development settings, gulp test_env for test settings, and gulp prod for production settings.

As I mentioned – this works great for my tests, as I include the preprocessed file in karma.conf.js so my tests can use AppSettings.baseApiUrl (make sure you have your tests call the dev task first!)
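
On the karma.conf.js side, it is just a matter of listing the generated file ahead of your specs – the paths here are only an example:

// karma.conf.js (excerpt)
files: [
  'www/lib/angular/angular.js',
  'www/lib/angular-mocks/angular-mocks.js',
  'www/js/appsettings.js',  // the output of `gulp dev`
  'www/js/**/*.js',
  'test/**/*.js'
],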

I hope this helps any who may have some environment variables they need to change between environments!

Making Rails Fixtures Across Postgres Schemas Play Nice

less than a 1 minute read

This past week or so I’ve had another run-in with using Rails to access data across Postgres schemas. I thought I would share some of the experiences I’ve had with the two.

I’m going to assume you’re comfortable with Rails (ActiveRecord with models) and Postgres (what is a schema and why you’d want to use one).

The wild wild west of data

I had a public schema for most of my data (people, preferences, etc.) and another schema, bikes, holding a store table fed by an external data feed that may or may not change format along the way.

To ActiveRecord – or not to be

First I tried making some simple ActiveRecord classes, all while admitting I’m really not that fond of ActiveRecord.

class BikeStore < ActiveRecord::Base
  # on Rails 3.2+ this would be: self.table_name = 'bikes.store'
  set_table_name 'bikes.store'
end

Easy right? It happens to work too, sweet.

Wait, I thought you wrote tests too?

Whoops, let’s get that set up too. I’ll start by setting up my fixtures (here are a few reasons I still use minitest & fixtures). I named my file bikeshop.yml and it looks like this:

first:
  id: 1
  name: 'Jims Bike shop'

And a quick test:

require 'test_helper'
class BikeStoreTest < ActiveSupport::TestCase
  before do
      @bike_store = bikesstore(:first)
  end

  test 'should get bike store name' do
      refute_nil @bike_store
  end
end

Right away you’ll get one of these errors:

  • There is no table named bikestore
  • There is no function named bikesstore

How do you even use or access the fixture data then?

Fixture naming with schemas

The trick is this: naming the fixture yaml file schema.table.yml – that properly sets up the fixture to let us get some test data. I called my file bikes.store.yml and that fixed things up.

Accessing the fixture data in a test

Still not sure on this one folks. Please someone comment and help the world out!

Hope this helps

Managing Cordova Plugins With package.json and Hooks

less than a 1 minute read

In a previous post, I blogged about how to manage plugins with variables. I wanted to expand on that some more and, this time, talk about how to use your package.json to manage your plugins with versions, as well as a way to reset your Cordova setup.

The problem

Whenever I start a new Cordova project, I begin by adding in all my plugins. Once they are added, I commit them all and push the repository with all the plugins in it.

My workflow is usually like this:

  • cordova create ProjectApp
  • cd ProjectApp
  • cordova platform add ios
  • cordova plugin add org.apache.cordova.camera
  • cordova plugin add org.apache.cordova.contacts
  • insert more plugin statements for every plugin we want
  • cordova run ios
  • cordova run android

Occasionally, I run into an issue when I’m using plugins that require native variables when installing. The prime example is the Facebook plugin – it requires the APP_ID to be passed to the cordova plugin add command with the option --variable APP_ID="some_id".

What I’d rather do

It’d be nice to have these plugins saved with their versions, so when the next user needs to pull the plugins or modify the installation, they can just edit the package.json and run a command to install them all. That way, we get some kind of versioning on our plugins.

Ideally, I want to just type cordova setup – have it look at my package.json file, and just begin installing what’s listed there.

Making the dream come true

First, let’s start by putting our platforms and plugins in our package.json like so:

{
  "name": "SampleApp",
  "version": "0.0.0",
  "description": "Sample App",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "platforms": [
    "ios",
    "android"
  ],
  "plugins": [
    "org.apache.cordova.camera",
    "org.apache.cordova.console",
    "org.apache.cordova.contacts",
    "org.apache.cordova.device",
    "org.apache.cordova.dialogs",
    "org.apache.cordova.file",
    "org.apache.cordova.file-transfer",
    "org.apache.cordova.geolocation",
    "org.apache.cordova.inappbrowser",
    "org.apache.cordova.media",
    "org.apache.cordova.media-capture",
    "org.apache.cordova.network-information",
    "org.apache.cordova.splashscreen",
    {
      "locator": "https://github.com/jbavari/cordova-facebook-connect.git",
      "variables": {
        "APP_ID": "some_id",
        "APP_NAME": "some_name"
      }
    }
  ],
  "devDependencies": {
    "load-grunt-tasks": "~0.4.0",
    "time-grunt": "~0.3.1",
    "grunt": "~0.4.4",
    "grunt-shell": "~0.6.4"
  },
  "dependencies": {}
}

Automating Platforms

Now, we’ll need a script that will look at our package.json and begin installing our platforms and plugins.

My platform installation script is located in the tasks directory named platforms.js, and looks like so:

#!/usr/bin/env node

//This script will add or remove all platforms listed in package.json
//usage: node platforms.js [add | remove]

var command = process.argv[2] || 'add';
var packageJson = require('../package.json');

var fs = require('fs');
var path = require('path');
var sys = require('sys')
var exec = require('child_process').exec;

packageJson.platforms.forEach(function(platform) {
    var platformCmd = 'cordova platform ' + command + ' ' + platform;
    exec(platformCmd);
});

Automating Plugins

My plugin installation script is also in my tasks directory, and named plugins.js:

#!/usr/bin/env node

//This script will add or remove all plugins listed in package.json

//usage: node plugins.js [ add | remove ]

var command = process.argv[2] || 'add';

var packageJson = require('../package.json');

var fs = require('fs');
var path = require('path');
var sys = require('sys')
var exec = require('child_process').exec;

function createAddRemoveStatement(plugin) {
    var pluginCmd = 'cordova plugin ' + command + ' ';
    if(typeof plugin === 'string') {
        pluginCmd += plugin;
    } else {
        if(command === 'add') {
            pluginCmd += plugin.locator + ' ';
            if(plugin.variables) {
                Object.keys(plugin.variables).forEach(function(variable){
                    pluginCmd += '--variable ' + variable + '="' + plugin.variables[variable] + '" ';
                });
            }
        } else {
            pluginCmd += plugin.id;
        }
    }

    return pluginCmd;
}

function processPlugin(index) {
    if(index >= packageJson.plugins.length)
        return;

    var plugin = packageJson.plugins[index];
    var pluginCommand = createAddRemoveStatement(plugin);
    console.log(pluginCommand);
    exec(pluginCommand, function(){
        processPlugin(index + 1);
    });
}

processPlugin(0);

Great. Now I don’t need to manually add or remove all the plugins, or worry about platforms. I can just run node tasks/platforms.js or node tasks/plugins.js to have my project set up as stated in my package.json file.
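
If you want a single command for all of it, you could also wire these scripts into npm scripts in the same package.json – the script names here are just an example:

"scripts": {
  "setup": "node tasks/platforms.js add && node tasks/plugins.js add",
  "teardown": "node tasks/plugins.js remove && node tasks/platforms.js remove"
},

Then npm run setup takes a fresh clone from zero to ready to run.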

Easier management for teams, I’d like to think.

Hope this helps others.

AngularJS - Testing HTTP Post Data

less than a 1 minute read

I’ve been doing a lot of testing lately in AngularJS, as I’m sure you can tell from my many posts as of late.

One thing I’m always curious about is whether or not I’m doing things correctly. Testing always helps reinforce this, as does publishing blogs and getting feedback from my peers.

Problem

Many times I’ll have an AngularJS service fire off an HTTP POST request to the server. I can’t even begin to tell you how often I butcher my POST request data.

I wrote a test to verify my post data was correct for the following function:

var sendPost = function sendPost(post) {
  var deferred = $q.defer();
  var publishPostPath = 'http://example.com/post';
  var postData = {
      event_user_id: Auth.currentUser().id,
      lat: location == null ? post.lat : 0,
      lon: location == null ? post.lon : 0,
      message: post.storyMessage,
      post_fb: post.postToFB,
      post_twitter: post.postToTwitter,
      post_team: post.postToTeam,
      tag: selectedTagId == null ? 0 : selectedTagId
  };

  $http.post(publishPostPath, postData).success(function(data) {
      if(data) {
          deferred.resolve(data);
      } else {
          deferred.reject(data);
      }
  }).error(function(error) {
      deferred.reject(error);
  });

  return deferred.promise;

}

Pretty simple, nothing too fancy.

I want to test this bad boy and make sure it’s passing the correct post data parameters. Luckily for us, AngularJS gives us our friendly $httpBackend tool to do things like this:

// Method declaration
$httpBackend.expect(method, url, [data], [headers]);

One thing to note is that the [data] argument can be a function – it receives the POST body as a string, i.e. the data object after it has been run through something like JSON.stringify.

A full test looks like this:

it('should have true returned for proper sendPost', function() {
  var post = {storyMessage: 'Hello', postToFB: true, postToTwitter: true, postToTeam: false};
  $httpBackend.when('POST', 'http://example.com/post',
      function(postData) {
          var jsonData = JSON.parse(postData);
          expect(jsonData.message).toBe(post.storyMessage);
          expect(jsonData.post_fb).toBe(post.postToFB);
          expect(jsonData.post_twitter).toBe(post.postToTwitter);
          expect(jsonData.post_team).toBe(post.postToTeam);
          return true;
      }
  ).respond(200, true );

  Feed.sendPost(post).then(function(d) {
      expect(d).toBeTruthy();
  });

  $httpBackend.flush();
});

Going forward, there should be no excuses as to why my HTTP post requests fail due to parameters being passed or set incorrectly.

I hope this helps any others looking to test their post data parameters.

AngularJS Project Structures

less than a 1 minute read

As I’m always trying to learn more about Angular and proper file structures, I just wanted to get a quick log of references for project layout information in AngularJS.

I’m particularly interested in the Ionic style layout, and how that should line up.

So far I’ve been reading:

Pushing Jobs to Sidekiq From Another Server

less than a 1 minute read

We use Sidekiq for our background job processing for videos, social integrations, and other tasks. It works great for what it does.

Due to some technical decisions at work, we have a few servers set up.

  • An API server
  • A job processing server
  • An analytical dashboard Rails server

The job processing server has all the Sidekiq worker models in it, as you’d expect. We did this to keep all processing in one central location.

One use case we have is sending all push notifications from a single location, the job server. However, we need to trigger some of those from our API or analytical dashboard.

The problem and solution

How do we get workers queued up from other servers without replicating the Worker class on those servers? Since Sidekiq uses Redis, we figured we’d make a simple RedisJobPusher class to push jobs onto the list in Redis that Sidekiq watches. Using this class, we can now queue jobs from other servers.

The class has a core method, push_to_queue, that other methods (push_leg_notification, etc.) call to push the worker name and parameters into Redis. The class below assumes it is able to connect to Redis.

It looks like this:

require 'redis'
require 'json'

class RedisJobPusher

  def self.push_leg_notification(user_id, event_id, message, title)
    params = [user_id, event_id, 'leg', message, title, nil]
    RedisJobPusher.push_to_queue('PushNotificationWorker', params)
  end

  def self.push_post_notification(user_id, event_id, message, title, event_user_social_id)
    params = [user_id, event_id, 'post', message, title, event_user_social_id]
    RedisJobPusher.push_to_queue('PushNotificationWorker', params)
  end

  def self.push_to_queue(worker_name, params)
    # using << rather than + because it cats instead of newing up string objects
    redisurl = 'redis://' << CONFIG[:redis_server] << ':6379' << '/' << CONFIG[:redis_db_num]

    msg = { 'class' => worker_name, 'args' => params, 'retry' => true }
    redis = Redis.new(:url => redisurl)
    redis.lpush("raisemore_sidekiq:queue:JobWorker", JSON.dump(msg))
  end

end

As you can see, there isn’t a lot going on here. Simple and easy. Just connect to Redis, do a quick lpush, and go on with your day.
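
Usage from the API or dashboard side is then a one-liner – the arguments here are only illustrative:

# From the API server, for example – no Worker class needed on this box
RedisJobPusher.push_leg_notification(user.id, event.id, 'You just finished a leg!', 'Nice work')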

Testing Interceptor Headers in AngularJS

less than a 1 minute read

In AngularJS, you can set up HTTP interceptors (middleware) to inject headers, among other things.

I had a service whose requests to our API I wanted to intercept, attaching a token that the API uses to verify the user. This should only happen once a token is set.

Today I paired with Jeff French. We figured out how to test the AngularJS interceptors properly.

Things I want to test:

  • Is the token only being attached when a token is set?
  • Are the headers actually being attached to requests via the interceptor?

The service and interceptor look like this:

angular.module('services')
.factory('RequestService', function RequestService(){
  var token = null;

  var setToken = function setToken(someToken) {
      token = someToken;
  }

  var getToken = function getToken() {
      return token;
  }

  var request = function request(config) {
      if (token) {
          // jqXHR.setRequestHeader('Authorization','Token token="' + app.user.api_key.access_token + '"');
          config.headers['Authorization'] = 'Token token="' + token + '"';
      }
      return config;
  }

  return {
      setToken: setToken,
      getToken: getToken,
      request: request
  }
})

.config(['$httpProvider', function($httpProvider) {
    $httpProvider.interceptors.push('RequestService');
}]);

With just a few simple tests, I’m asserting a few things:

  • I have no simple ParseErrors or ReferenceErrors
  • Modules are set up correctly
  • Interceptors are set up correctly
  • My service is actually setting the token correctly
  • The interceptor is attaching the header correctly
"use strict";

var httpProviderIt;

describe("Service Unit Tests", function() {

  beforeEach(function() {
    module('services', function ($httpProvider) {
      //save our interceptor
      httpProviderIt = $httpProvider;
    });

    inject(function (_AuthService_, _RequestService_, _$httpBackend_, _$http_) {
      RequestService = _RequestService_;
      AuthService = _AuthService_;
      $httpBackend = _$httpBackend_;
      $http = _$http_;
    })
  });

  var RequestService, AuthService;
  var $httpBackend, $http;
  var token = 'someToken';

  describe('RequestService Tests', function() {

    it('should have RequestService be defined', function () {
      expect(RequestService).toBeDefined();
    });

    it('should properly set an api token', function() {
      expect(RequestService.getToken()).toBeNull();
      RequestService.setToken(token);
      expect(RequestService.getToken()).toBe(token);
    });

    it('should save the users api token after saveUser', function() {
      spyOn(RequestService, 'setToken');
      AuthService.saveUser(apiResponse);
      expect(RequestService.setToken).toHaveBeenCalled();
    });

    it('should have no api token upon start up', function() {
      var token = RequestService.getToken();
      expect(token).toBeNull();
    });

    describe('HTTP tests', function () {

      it('should have the RequestService as an interceptor', function () {
          expect(httpProviderIt.interceptors).toContain('RequestService');
      });

      it('should have the token in the headers after setting', function() {
        RequestService.setToken(token);
        $httpBackend.when('GET', 'http://example.com', null, function(headers) {
          expect(headers.Authorization).toBe('Token token="' + token + '"');
          return true;
        }).respond(200, {name: 'example' });
        $http.get('http://example.com');
        $httpBackend.flush();
      });

      it('should not place a token in the http request headers if no token is set', function() {
        var config = RequestService.request({headers: {} });
        expect(config.headers['Authorization']).toBe(undefined);
      });

      it('should place a token in the http request headers after a token is set', function() {
        RequestService.setToken(token);
        var config = RequestService.request({headers: {} });
        expect(config.headers['Authorization']).toBe('Token token="' + token + '"');
      });
    }); //Mocked HTTP Requests

  }); //RequestService tests

});

Simple and sweet.