Josh Bavari's Ramblings

tech babbles on Ruby, Javascript, Rails, Phonegap/Cordova, Grunt, and more

Understanding Built Node Modules

less than a 1 minute read

If you’ve recently changed Node versions and begun running into issues with some of your modules, it might help to understand how native node modules work.

TL;DR: If you upgraded node, run npm rebuild or rm -rf node_modules && npm install.

Why: Some of the modules you use may have native bindings compiled against your current system runtime.

This put me on a quest to understand more about how native node modules are used. What I’m referring to is Node addons:

Addons are dynamically linked shared objects. They can provide glue to C and C++ libraries. The API (at the moment) is rather complex, involving knowledge of several libraries

Node.js statically compiles all its dependencies into the executable. When compiling your module, you don’t need to worry about linking to any of these libraries.
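For context, native addons are typically compiled by node-gyp against the installed Node’s headers, driven by a binding.gyp file. A minimal, hypothetical example (the target and source names are placeholders):

```json
{
  "targets": [
    {
      "target_name": "hello",
      "sources": [ "src/hello.cc" ]
    }
  ]
}
```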

Since I maintain the Ionic CLI, we have a dependency on a native node module, node-sass.

Node-sass relies on native C/C++ bindings to compile Sass down to CSS.

If you have Node 0.12, for example, install the ionic-cli, and then install Node 4.2.1, you may have issues running the CLI.

This is because the module built itself against the original version of Node and its bindings; when you install a new version of Node, those same bindings no longer work.
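You can see which binary interface a given Node binary expects via process.versions.modules; a native module compiled under a different ABI version won’t load until it is rebuilt. A quick sketch:

```javascript
// Each Node release embeds an ABI ("modules") version; native addons compiled
// against a different ABI version fail to load until rebuilt.
console.log('Node version:', process.version);
console.log('ABI version:', process.versions.modules);
```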

When you change versions of node, make sure you do a quick rm -rf node_modules on Mac (or delete the node_modules folder on Windows) and a fresh npm install.

If you want to read a little more, background information is shared by Chris Williams about his struggles maintaining the node-serialport module on this post.

Releasing Electron for Windows

less than a 1 minute read

Releasing Electron applications on Windows can be tricky, especially if you mainly use a Mac (like me). You also have to think about that pesky code signing step, needed to avoid the annoying ‘SmartScreen’ filter users may get.

Thankfully, there’s a great tool called Squirrel, made by Paul Betts, that does a ton of the heavy lifting for you – codesigning and all.

I got a ton of knowledge from the blog posts Creating a Windows Distribution of an Electron App using Squirrel and Using Electron Packager to Package an Electron App.

I wanted to curate a ton of knowledge in one place, so here we go.

I use a few tools to get this done on my Mac:

First, let’s look at the project layout:

Project Layout

/build # Installers go here
  /osx
  /resources # Icons, iconset, images, etc
  /win # Binaries to build, Package.nuspec file to specify configurations
  packager.json # File used by electron-builder to build OSX DMG file.
/dist # Distributions go here (.app, .exe, .dmg)
  /osx
  /win
/docs # Docs about project.
/node_modules # Modules used for building/packaging/testing
/scss # Sass for CSS compilation in www
/spec # Tests
  AppCtrl.spec.js
www # Source code for the application
  /css
  /data
  /img
  /js
  /lib
  /node_modules # Modules here used by the application itself.
  /templates

karma.conf.js # Configuration for tests.
livereload.js # Dev script to set up live reload in Electron
package.json # Main package.json with scripts/dependencies to package/build.

Process

First we’ll build the exe and associated files into a dist folder. From there, we take the win dist files and pack them into a Setup.exe file, where Squirrel does the heavy lifting to make installation a one-step process.

npm Scripts

We’ll use the npm script pack:win to put all our www files into a nice package (resources, exe, etc.) and output it to the dist folder.

The pack:win step just executes electron-packager with the relevant information. Note the asar=true flag – recommended because node_modules can get nested several levels deep, and the resulting file paths can be too long for certain Windows platforms.

Script:

{
  "scripts": {
    "pack:win": "electron-packager ./www \"Project\" --out=dist/win --platform=win32 --arch=ia32 --version=0.29.1 --icon=build/resources/icon.ico --version-string.CompanyName=\"My Company\" --version-string.ProductName=\"Project\" --version-string.FileDescription=\"Project\" --asar=true"
  }
}

Electron Build script

I used a simple build script in node to assist with some of the heavy lifting. I also recommend getting an Extended Validation certificate, as suggested in this blog post.

This will take the windows package in dist/win and create dist/win/Setup.exe.

#!/usr/bin/env node
// File is in root/build/win/build.js
// First call nuget pack Package.nuspec
// Then you'll have Project.<version>.nupkg
// Run Squirrel.exe --releaseify Project.<version>.nupkg --icon iconPath --loadingGif loadingGifPath
// resources in build/resources/

//Need to get around weird command line passes with windows paths
function addWindowsPathFix(path) {
  return ['"', path, '"'].join('');
}

var childProcess = require('child_process'),
  path = require('path'),
  packageJsonPath = path.join(__dirname, '..', '..', 'package.json'),
  packageJson = require(packageJsonPath),
  loadingGifPath = path.join(__dirname, '..', 'resources', 'windows-loader.png'),
  nugetPackageSpecPath = path.join(__dirname, 'Package.nuspec'),
  nugetPackageOutputPath = path.join(__dirname),
  nugetPackageName = ['Project', '.1.0.0', '.nupkg'].join(''),
  // nugetPackageName = ['Project', packageJson.version, '.nupkg'].join(''),
  nugetPackagePath = path.join(nugetPackageOutputPath, nugetPackageName),
  nugetExePath = path.join(__dirname, 'nuget.exe'),
  setupIconPath = path.join(__dirname, '..', 'resources', 'icon.ico'),
  setupReleasePath = path.join(__dirname, '..', '..', 'dist', 'win'),
  signatureCertificatePath = path.join(__dirname, 'Certificate.pfx'),
  signParams = ['"/a /f "', addWindowsPathFix(signatureCertificatePath), '" /p ', process.env.PRIVATE_CERT_PASSWORD, '"'].join(''),
  squirrelExePath = path.join(__dirname, 'Squirrel.exe');

  console.log('sign params', signParams);

var createNugetPackageCommand = [addWindowsPathFix(nugetExePath), 'pack', addWindowsPathFix(nugetPackageSpecPath), '-OutputDirectory', addWindowsPathFix(nugetPackageOutputPath)].join(' ');
var createSetupCommand = [
              addWindowsPathFix(squirrelExePath),
              '--releasify', addWindowsPathFix(nugetPackagePath),
              '--loadingGif', addWindowsPathFix(loadingGifPath),
              '--icon', addWindowsPathFix(setupIconPath),
              '--releaseDir', addWindowsPathFix(setupReleasePath),
              '--signWithParams', signParams
            ].join(' ');


console.log('Creating nuget package from nuget spec file:', nugetPackageSpecPath);
// console.log(createNugetPackageCommand);
childProcess.execSync(createNugetPackageCommand);
console.log('Created nuget package');

console.log('Building Setup.exe');
// console.log(createSetupCommand);
childProcess.execSync(createSetupCommand);
console.log('Built Setup.exe');

Hope this helps!

Lazy Loading Your Node Modules

less than a 1 minute read

While working at Ionic I’ve been focused on the Ionic CLI.

My first big refactor of the CLI was pulling out most of the 21 commands it offers into an external library (ionic-app-lib) that could be consumed by both the Ionic CLI and our GUI – Ionic Lab.

The refactor went rather smoothly.

However, one thing happened that was not expected – now that ionic-app-lib bundled all the commands together, requiring the module was slower than expected.

For example, whenever you ran: var IonicAppLib = require('ionic-app-lib'); – it would take a wee bit longer.

Here’s the index code for the ionic-app-lib module:

var browser = require('./lib/browser'),
    configXml = require('./lib/config-xml'),
    cordova = require('./lib/cordova'),
    events = require('./lib/events'),
    hooks = require('./lib/hooks'),
    info = require('./lib/info'),
    ioConfig = require('./lib/io-config'),
    login = require('./lib/login'),
    logging = require('./lib/logging'),
    multibar = require('./lib/multibar'),
    opbeat = require('./lib/opbeat'),
    project = require('./lib/project'),
    share = require('./lib/share'),
    semver = require('semver'),
    serve = require('./lib/serve'),
    settings = require('./lib/settings'),
    setup = require('./lib/setup'),
    start = require('./lib/start'),
    state = require('./lib/state'),
    upload = require('./lib/upload'),
    utils = require('./lib/utils');

module.exports = {
  browser: browser,
  configXml: configXml,
  cordova: cordova,
  events: events,
  hooks: hooks,
  info: info,
  ioConfig: ioConfig,
  login: login,
  logging: logging,
  multibar: multibar,
  opbeat: opbeat,
  project: project,
  share: share,
  semver: semver,
  serve: serve,
  settings: settings,
  setup: setup,
  start: start,
  state: state,
  upload: upload,
  utils: utils
}

As you can see, whenever this module is required, it requires even more modules in turn. That means more file reads to fulfill just to get this module working.

Also of note – any time a new command was added, it had to be exported by adding yet another require statement.

Lazy loading via JavaScript getters

While looking through other open source projects, I came across the idea of lazy loading your modules on demand.

One way to do this is with JavaScript getters: we won’t require a module until it is requested.

For example, the code snippet:

var IonicAppLib = require('ionic-app-lib');
var options = { port: 8100, liveReloadPort: 35729 };

//Do not load the serve command until it is requested as below:
IonicAppLib.serve.start(options);

What’s happening above – require('ionic-app-lib') is called, which sets up the getters for start, serve, run, etc. Then, when the command is called, the require for the module then happens, thereby getting the module loaded, and returning it to the caller.

Here’s that code to enforce the lazy loading of modules:

var fs = require('fs'),
    IonicAppLib = module.exports,
    path = require('path');

var camelCase = function camelCase(input) {
    return input.toLowerCase().replace(/-(.)/g, function(match, group1) {
        return group1.toUpperCase();
    });
};

//
// Setup all modules as lazy-loaded getters.
//
fs.readdirSync(path.join(__dirname, 'lib')).forEach(function (file) {
  file = file.replace('.js', '');
  var command;

  if (file.indexOf('-') > 0) {
    // console.log('file', file);
    command = camelCase(file);
  } else {
    command = file;
  }

  IonicAppLib.__defineGetter__(command, function () {
    return require('./lib/' + file);
  });
});

IonicAppLib.__defineGetter__('semver', function () {
  return require('semver');
});
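__defineGetter__ works but is deprecated; the same lazy loading can be sketched with Object.defineProperty, which also lets you cache the module after the first access. This is my own variant, not the code the CLI shipped ('path' stands in for one of the ./lib modules):

```javascript
var lazyExports = {};

function defineLazy(obj, name, loader) {
  Object.defineProperty(obj, name, {
    configurable: true,
    enumerable: true,
    get: function () {
      var mod = loader();
      // Replace the getter with the resolved value so the loader runs once.
      Object.defineProperty(obj, name, { value: mod, enumerable: true });
      return mod;
    }
  });
}

// Hypothetical lazy export; swap in require('./lib/serve') etc. in real code.
defineLazy(lazyExports, 'path', function () { return require('path'); });
```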

Testing

I threw together a quick test to ensure that all of the modules were still correctly being accessible:

var index = require('../index');

describe('index', function() {

  it('should have index defined', function() {
    expect(index).toBeDefined();
  });

  function testForProperty(input) {
    it('should have ' + input + ' available', function() {
      expect(index[input]).toBeDefined();
    });
  }

  var objs = ['browser', 'configXml', 'cordova', 'events', 'hooks', 'info',
              'ioConfig', 'login', 'logging', 'multibar', 'opbeat', 'project',
              'share', 'semver', 'serve', 'settings', 'setup', 'start', 'state',
              'stats', 'upload', 'utils'];

  // Doing it this way to give better failure messages.
  // Ensures all commands are currently available from the index.
  objs.forEach(function(obj) {
    // expect(index[obj], obj).toBeDefined();
    testForProperty(obj);
  });

});

Gotchas

For one – you’ll need to ensure your files adhere to some naming conventions. For our commands, we had some with hyphens (-) that we had to account for, as you can see above with if (file.indexOf('-') > 0).
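As a quick illustration, the camelCase helper above maps a hyphenated file name to the property name it gets exported under:

```javascript
// Same helper as in the loader above: 'config-xml' -> 'configXml'
var camelCase = function camelCase(input) {
  return input.toLowerCase().replace(/-(.)/g, function (match, group1) {
    return group1.toUpperCase();
  });
};

console.log(camelCase('config-xml')); // configXml
console.log(camelCase('io-config'));  // ioConfig
```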

Also – if you want to export other modules you can set up other getters, as I did with semver above.

If you want to short circuit lazy loading, go ahead and just export them as normal.

Performance

We saw roughly a 4x performance increase (830ms down to 200ms) by lazy loading the modules.

CLI run times:

Not lazy loading modules:   830ms
Lazy loading modules:       200ms

Enjoy!

Codesigning Electron Applications

less than a 1 minute read

Lately I’ve been busy at work creating and maintaining Ionic Lab. It’s been a fun and challenging problem using HTML/CSS/JavaScript to create native OSX/Windows applications.

I’m going to admit – I’ve gotten a few hybrid projects on the App store. Honestly though I had a lot of help.

This time I was mostly on my own.

I’m not great at native dev, and half the problems I encounter are with the platform I’m dealing with. By that I mean – on Android I deal with how Google does signing and releasing, and on Apple platforms with how Apple does signing and releasing.

I’m going to cover mainly Apple policies to release an app on your own with or without the App store. With Electron, we’re going to make a native build, so we’ll need to know how to do this.

Mac’s Gatekeeper

On Mac OSX, Gatekeeper checks all the applications you download and run to see if they are valid and trusted.

Certainly you’ve seen the message from an app you’ve downloaded: "App can't be opened because it is from an unidentified developer."

If you create an app and do not codesign it with a valid Apple dev account, your users will see this. It’s not a good thing.

How to codesign

The main tool of codesigning is the CLI tool codesign.

I really found a lot of help from OSX Code Signing in Depth.

It’s pretty clear right away what you need to run and how to verify what you need to run. I’d like to go over how to do it with Electron, specifically.

I posted the script below. I want to highlight the issues I ran into as a result of my ignorance.

One issue I ran into – I was using the “Mac Development” certificate to sign – and when I ran the verify command (codesign -vvvv -d "/path/to/MyApp.app") it gave me a good to go signal. When I ran the security CLI command (spctl --assess -vvvv "/path/to/MyApp.app"), it was rejected.

My error: I thought “Mac Development” was a “Developer-ID application”.

It’s not. I was an account admin. In the Apple Member Center for Certificate Administration, I could only set up a “Mac Development” type certificate. The Member Center would not let me set up a “Developer ID Application” certificate – you need a ‘team agent’ to set one up for you (that, or become a team agent yourself).

That being said – make sure you sign with a certificate of type “Developer ID Application”, and you’re good to go.

I set up my codesign script like the following. There are comments to help explain:

# Invoke this script with a relative `.app` path
# EX:
# codesign.sh "dist/osx/MyApp-darwin-x64/MyApp.app"

# I had better luck using the iPhoneOS codesign_allocate
export CODESIGN_ALLOCATE="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/codesign_allocate"

# Next two are specified in Apple docs:
# export CODESIGN_ALLOCATE="/Applications/Xcode.app/Contents/Developer/usr/bin/codesign_allocate"
# export CODESIGN_ALLOCATE="/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/codesign_allocate"

# However, neither worked, and gave error:
# /Users/JoshBavari/Development/ionic-gui/dist/osx/MyApp-darwin-x64/MyApp.app/Contents/Frameworks/Electron Framework.framework/Electron Framework: cannot find code object on disk

#Run the following to get a list of certs
# security find-identity
app="$PWD/$1"
identity="<ENTER_ID_OF_RESULT_FROM_SECURITY_FIND_IDENTITY_COMMAND>"

echo "### signing frameworks"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Electron Framework.framework/Electron Framework"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Electron Framework.framework/Versions/A"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Electron Framework.framework/Versions/Current/Electron Framework"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Electron Helper EH.app/Contents/MacOS/Electron Helper EH"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Electron Helper NP.app/Contents/MacOS/Electron Helper NP"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/MyApp Helper.app/Contents/MacOS/MyApp Helper"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Mantle.framework/Mantle"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Mantle.framework/Versions/A"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/ReactiveCocoa.framework/ReactiveCocoa"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/ReactiveCocoa.framework/Versions/A"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Squirrel.framework/Squirrel"
codesign --deep --force --verify --verbose --sign "$identity" "$app/Contents/Frameworks/Squirrel.framework/Versions/A"

echo "### signing app"
codesign --deep --force --verify --verbose --sign "$identity" "$app"


echo "### Zipping app"
ditto -c -k --sequesterRsrc --keepParent dist/osx/MyApp-darwin-x64/MyApp.app/ dist/osx/MyApp-Mac.zip

echo "### verifying signature",
codesign -vvvv -d "$app"
sudo spctl -a -vvvv "$app"

Pitfalls

Since I wasn’t very familiar with the Apple specifics, I’d like to highlight a few pitfalls I ran into due to my ignorance.

A ‘Developer-ID signed app’ means setting up a certificate (private key + cert) with “type” as “Developer ID Application”. This does NOT mean a “Mac Development” certificate. From the OSX Codesigning guide:

Like Gatekeeper, spctl will only accept Developer ID-signed apps and apps downloaded from the Mac App Store by default. It will reject apps signed with Mac App Store development or distribution certificates.

Issues

Most users say to specify this environment variable:

export CODESIGN_ALLOCATE="/Applications/Xcode.app/Contents/Developer/usr/bin/codesign_allocate"

For some reason, I couldn’t use the default codesign allocate as specified in the Github issue above. Instead, I had to go with this Environment variable for CODESIGN_ALLOCATE for iPhoneOS.platform:

export CODESIGN_ALLOCATE="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/codesign_allocate"

Hints

Only include signed code in directories that should contain signed code. Only include resources in directories that should contain resources. Do not use the --resource-rules flag or ResourceRules.plist. They have been obsoleted and will be rejected.

A little note on signing frameworks [5]:

Signing Frameworks

When you sign frameworks, you have to sign a specific version. So, let’s say your framework is called CSMail, you’d sign CSMail.framework/Versions/A. If you try and just sign the top level folder it will silently fail, as will CSMail.framework/Versions/Current (see “Symbolic Links” below).

Symbolic Links

Any symbolic links will be silently ignored and this extends to the path you give to the codesign command line utility. I think there’s actually a problem with symbolic links: you can add them to a Resources folder and it won’t invalidate the signature (whereas you cannot add normal files). I’ve reported this to Apple (rdar://problem/6050445).

Helpful links

  1. Apple Code Signing Overview
  2. Apple OS X Code Signing In Depth
  3. Apple Anatomy of Framework Bundles
  4. Apple codesign Man Page
  5. Chris Suter’s Blog – Code signing
  6. Stackoverflow – Creating Symlinks in OSX Frameworks
  7. How to sign your Mac OSX app for Gatekeeper
  8. Codesigning and Mavericks
  9. Atom Electron – Signing Mac App
  10. Codesign – useful info in Xcode > 5.0
  11. Electron for the Mac App Store
  12. nw.js issue about code signing.

Writing Unit Tests for Electron and AngularJS

less than a 1 minute read

Unit testing is something most of us devs don’t think much about – until we encounter some simple-to-solve bugs, or regressions in code that drive us crazy.

JavaScript testing itself is hard, with no clear-cut path to take. Most times, you’ll have to decide important things for yourself, such as which testing framework to use and the tools to go with it.

I enjoy the Jasmine testing framework right now. For my node projects, I like to use the node package jasmine-node. However, Electron is basically a web browser with node conveniences, so we want to test browser-related things.

Since Electron applications take a unique approach to combining elements from the browser with conveniences from node, such as require, __dirname, global and other keywords specific to node, testing gets a little more complicated.

I’m going to outline a few of the approaches I took. I’m sure they’re not perfect – I’m still learning, and I’m outlining what I’ve learned here.

Tools of the trade

I outlined some things I did to test AngularJS in a previous post. I pretty much use the same tools and set up:

npm install -g karma karma-jasmine karma-phantomjs-launcher karma-spec-reporter phantomjs

Now I’ve got my karma.conf.js file:

//..snip..
// list of files / patterns to load in the browser
files: [
  'www/lib/angular/angular.min.js',
  'node_modules/angular-mocks/angular-mocks.js',
  'www/js/**/*.js',
  'spec/**/*.js'
]
//..snip..

Now we’re set up to do some testing!

Exposing require to AngularJS service

I first wanted a one stop shop for all my node conveniences in one angular js service to contain what Electron provides.

Here’s my service:

angular.module('app.services')
.factory('NodeService', function() {
  var fixPath = require('fix-path'),
      fs = require('fs'),
      ipc = require('ipc'),
      opn = require('opn'),
      path = require('path'),
      shell = require('shell');

  //Fixes the path issue with node being run outside of this GUI  
  fixPath();
  process.env.PATH = process.env.PATH + ':/usr/local/bin';

  //Path from root -> 'www'
  //__dirname == 'www' dir
  var appJsonPath = path.join(__dirname, 'package.json');
  var appJson = require(appJsonPath);

  return {
    appJson: appJson,
    fixPath: fixPath,
    fs: fs,
    ipc: ipc,
    opn: opn,
    path: path,
    shell: shell
  };
});

Test set up for Service

Now, hopefully I have all my node conveniences in one place (require, __dirname, etc).

Let’s get a simple test up:

describe('#NodeService', function() {
  var NodeService;

  beforeEach(function() {
      //Ensure angular modules available
    module('app.services');
  });

  beforeEach(inject(function(_NodeService_) {
    NodeService = _NodeService_;
  }));

  it('should have node service defined', function() {
    expect(NodeService).toBeDefined();
  });
});

If we run this test without anything else, we’ll see immediately a problem:

ReferenceError: Can't find variable: require

My approach to this is simple – create a faked out global variable that represents require and does what you want, such as:

var fakePackageJson = { name: "Fake package.json name" };
window.require = function(requirePath) {
  console.log('Requiring:', requirePath);
  switch(requirePath) {
    case 'ipc':
      return ipcSpy;
    case 'fs':
      return fsSpy;
    case '/spec/package.json':
      return fakePackageJson;
  }
};
window.__dirname = '/some/fake/path';

Package.json test setup

Let’s define some quick scripts to run from our package.json to help others run our tests:

//..snip..
  "scripts": {
    "test": "karma start"
  }
//..snip

Now when we run our tests, we’ll have the faked out node modules passed back.

This is just one approach to faking out node modules when testing with Electron, AngularJS, and Jasmine.

Hope this helps.

Comparisons of nw.js and Electron

less than a 1 minute read

In the last few months, I’ve been playing around with two tools to help bridge the gap between the web and native desktop applications. There are two main tools that come to mind – nw.js (formerly known as Node Webkit) and Electron (formerly known as Atom Shell).

This post focuses on using both, the differences between the two, and focusing on issues that I’ve encountered.

Outline:

  • Getting started – package.json
  • Native Menus (application menu)
  • Shell execution (child processes)
  • Packaging / run
  • Icons
  • Performance

Nw.js

Getting started

Nw.js and Electron share a lot of the same steps for getting started. The only real difference between the two is how they are run, and how they handle the node process internally.

With Nw.js, your app is bundled together as one process. With Electron, the application is split up – the main node process handles running the browser windows, while the renderer process handles all things from the browser (the event loop).

To get running, download the nw.js app or the Electron app. Both of these applications look at the main attribute of your package.json file to get running.

Bootstrapping

For nw.js, the main attribute should specify which HTML file to load when your application launches. With Electron, your main attribute should specify a JavaScript file to be run.

You also specify attributes about the nw.js window that runs via the window attribute, things like toolbar, width, and height, notably.

With Electron, the JS file that you specify will launch the browser window and specify other attributes like width, height, and other window attributes.

For convenience’s sake, I also created an npm run script to execute the Nw.js app with my current folder. To run the node-webkit app, you simply type npm run nwjs. I also included a livereload script to watch my www folder and live-reload my changes in the nw.js app.

Here’s a quick look at the package.json file used to bootstrap nw.js:

{
  "name": "nwjs-app",
  "version": "1.0.0",
  "description": "",
  "main": "www/index.html",
  "scripts": {
    "nwjs": "/Applications/nwjs.app/Contents/MacOS/nwjs . & node livereload",
    "electron": "/Applications/Electron.app/Contents/MacOS/Electron . & node livereload"
  },
  "window": {
    "toolbar": true,
    "width": 800,
    "height": 500
  }
}

Here’s a quick look at the package.json file used to bootstrap Electron:

{
  "name": "nwjs-app",
  "version": "1.0.0",
  "description": "",
  "main": "src/main.js",
  "scripts": {
    "nwjs": "/Applications/nwjs.app/Contents/MacOS/nwjs . & node livereload",
    "electron": "/Applications/Electron.app/Contents/MacOS/Electron . & node livereload"
  },
  "window": {
    "toolbar": true,
    "width": 800,
    "height": 500
  }
}

Additionally for Electron, my main.js file looks like the following:

var app = require('app');  // Module to control application life.
var BrowserWindow = require('browser-window');  // Module to create native browser window.
var Menu = require('menu');
var ipc = require('ipc');

// var menu = new Menu();
// Report crashes to our server.
// require('crash-reporter').start();

// Keep a global reference of the window object, if you don't, the window will
// be closed automatically when the javascript object is GCed.
var mainWindow = null;
var menu;

var browserOptions = {
  height: 600,
  title: 'Electron App',
  width: 800
};

// Quit when all windows are closed.
app.on('window-all-closed', function() {
  if (process.platform != 'darwin')
    app.quit();
});

// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
app.on('ready', function() {
  // Create the browser window.
  mainWindow = new BrowserWindow(browserOptions);

  // and load the index.html of the app.
  mainWindow.loadUrl('file://' + __dirname + '/www/index.html');

  // Emitted when the window is closed.
  mainWindow.on('closed', function() {
    // Dereference the window object, usually you would store windows
    // in an array if your app supports multi windows, this is the time
    // when you should delete the corresponding element.
    mainWindow = null;
  });

  ipc.on('update-application-menu', function(event, template, keystrokesByCommand) {
    //Go through the templates, wrap their click events back to the browser
    console.log('update-application-menu - template');
    console.log(template);
    translateTemplate(template, keystrokesByCommand);
    menu = Menu.buildFromTemplate(template);
    Menu.setApplicationMenu(menu);
  });
});

Native Menus

Electron

Due to the way Electron is split into two processes – the main process (which handles native menus) and the browser process (mainly your app) – menus can only be set from the main process.

If you want your app to change your application menus, you’ll need to use the ipc module electron provides to get a message out to the main process to update the menus.

Other than that, the menu system is super easy if you wish to use static menus.

Nw.js

It’s dead simple. Since it’s all one bundled process, just call the set menu, and you’re good. It’s easy to set short cuts and modify the menus.

Shell execution

In nw.js, you’re good to go when it comes to making external shell calls.

When it comes to Electron, make sure you spawn your child processes with the pipe stdio option. Without that option, you may run into errors (due to the fact that Electron doesn’t have a stdout it manages easily).

Packaging / running

It’s really easy on both platforms. Just set up your package.json/index.html/main.js file and run the appropriate command.

I don’t have a lot of experience with nw.js packaging, so I can’t speak to that process.

For Electron, I like to use electron-prebuilt to run my www files as an app, electron-packager to package them into an .app file, and electron-builder to create installers (dmg/Setup.exe).

Icons

To get custom icons for your app files for Mac, you need an .icns file that bundles up all your icons in all the formats/sizes for your dock icon, your cmd+tab icon, and your running icon.

I used this as a walkthrough.

I first started with a size of 1024x1024 pixels, then used the following commands:

# Enter app.iconset, drop in icon.png as a 1024 x 1024 image.
# Run the following commands:
sips -z 16 16     icon.png --out ./icon_16x16.png
sips -z 32 32     icon.png --out ./icon_16x16@2x.png
sips -z 32 32     icon.png --out ./icon_32x32.png
sips -z 64 64     icon.png --out ./icon_32x32@2x.png
sips -z 128 128   icon.png --out ./icon_128x128.png
sips -z 256 256   icon.png --out ./icon_128x128@2x.png
sips -z 256 256   icon.png --out ./icon_256x256.png
sips -z 512 512   icon.png --out ./icon_256x256@2x.png
sips -z 512 512   icon.png --out ./icon_512x512.png
cp icon.png icon_512x512@2x.png

Then just run:

iconutil -c icns app.iconset -o ./app-dir/YourAppName.app/Contents/Resources/app.icns

You should now have your app with icons ready to go.

Performance

I didn’t see any major performance difference between the two platforms. It’s JavaScript, after all.

Closing words

Most of all, have fun with developing with these tools! They’re open source and free, so when you get a chance, share some knowledge, post an issue, respond to an issue, or even submit a PR.

We’re all in this together.

2014 in Review

less than a 1 minute read

2014 has been an interesting year and I’d like to spend a minute to review it for myself as a reminder.

January started out with me working for my startup, RaiseMore. I wanted to make 2014 the year I shared the knowledge I had been gathering from our projects at RaiseMore. My purpose for the year was to help others as much as I can, as I truly believe we are all in this together. “Iron sharpens iron.”

I had been using Cordova, and set some goals up for the year to get more active and contribute to the project. It’s really easy, hit this link for more information about how to contribute. I started by grabbing some Jira tasks to improve the Cordova plugin registry. At the time, I thought the registry needed a face lift to help out the community.

As a startup in OKC, we had been using tech that at the time wasn’t popular in OKC. As a team, we were focused heavily on a platform built of an iOS/Android app, an API server, a database, and a few other back end services. The technologies we used were Ruby, Rails, Sinatra, Postgres, Cordova, JavaScript, and some Grunt/Gulp build systems.

The biggest challenge we had as a small team of 4 devs was how to manage the systems. Since they were all broken up into multiple projects, we all had to care a lot about one portion as well as have general knowledge on the other parts. Reflecting on this now – this worked really well for our team.

By March, I had spoken a few times at the Ruby group and a few at the JavaScript group, and after some convincing and encouragement from a great friend, Rob Sullivan, I worked up the courage to submit some talks to the Kansas City Developer Conference in May.

I saw a post by the Apache foundation, proposing a tweet-off to get a free ticket to ApacheCon 2014 in Denver. This would let me meet some of the great devs I had been collaborating/talking with through the Cordova IRC/Mailing list/google hangouts. I won the ticket, and with some help from friends, made it to Denver and met all the Cordova devs. Just like Rob always tells me – if you don’t ask, then it will always be a ‘no’. Glad I was pro-active and tweeted for the ticket!

May hits and I find myself in front of 100+ devs who have come to see my talk at Kansas City Dev Conf. I have to admit I was very nervous. After my talk, I got a ton of great questions, feedback, and general appreciation for my sharing of knowledge. A few hours later I gave a second talk, Moving Forward with Cordova Plugins, about how to understand and create plugins for Cordova projects, including pushing them to the registry.

After my second talk is where I met a now good friend, Ross Martin, and we still talk and collaborate about an awesome Ionic app he is making. Two big things in 2014: sharing freely and talking through Twitter. It’s gold, folks.

Come July, I decided it was time to face my biggest fear yet: moving out of Oklahoma and living alone. I had begun interviewing and networking with others around the country involved in tech. I highly recommend this, as I made some great connections with people I can talk to, help, bounce ideas off, and just generally respect. I decided to move to Boulder, Colorado, as I had fallen in love with the mountains.

Come October, I had been selected to speak at Thunder Plains, which was a great reason to head back to my home town of Oklahoma, present, and catch up with all the great technologists in Oklahoma. That town is packed full of amazing people that are working together as techlahoma – Rob Sullivan, Jesse and Amanda Harlin, Vance Lucas, Jeremy Green, Jeff French, and way too many more to mention!

I got a job at Mondo Robot, where I worked for a few months with them on a handful of interesting projects from August until November. Through my interaction with the Cordova community, I came to find a job working for Drifty, which you may know by the awesome Ionic Framework.

I can honestly say working for Drifty has been amazing. All day long I get to work on something I really believe in, find meaning in, and most importantly, aligns with my goals of helping others. All day long I get to work on a hobby with others who are just as excited and driven to win as I am. I couldn’t ask for a better place to end up.

The year I turned 30, 2014, has come to an end. Looking back, I can say I’m happy with my progress, and I’m striving to continue the habits that keep me helping others to the best of my ability and giving back.

Here’s to an awesome 2015 for us all. Let’s make it awesome.

A Field Guide to Snap.svg

less than a 1 minute read

This last weekend I spent a little time on a fun little side project to learn how to use Snap.svg. I was trying to take my friend Rob’s datachomp character and make it a little interactive.

After trying to do what I thought was a few simple little hacks with his PNG image, it turned out to be a great way to fully learn and understand SVG and the Snap.svg library.

I have to admit I did not fully understand what SVG was and what it was composed of. I wanted to compile a list of thoughts, links, blogs, and tutorials that helped me learn along the way.

What SVG is and what it isn’t

First of all, I had to learn that there are two image types: ones that scale (vector) and ones that are defined with strict sizes (bitmaps). For the longest time, I admit, I thought they were basically the same.

Vectors are mainly svg, while bitmap types are jpeg, png, gif, to name a few.

You’d want to use an svg element when you need an image that can grow without looking skewed. You’d want to use a bitmap type when the size can remain the same.

One thing to note is, svg’s can contain bitmap images as well, as in this example:

<html>
  <body>
    <svg id="svg-node">
      <circle id="svg-element" />
      <image id="datachomp-arm" sketch:type="MSBitmapLayer" x="0" y="0" width="269" height="209" xlink:href="img/datachomp/arm.png"></image>
    </svg>
  </body>
</html>

Svg editors vs bitmap editors

My understanding is that most bitmap editors can’t do svg. GIMP, Photoshop, and other editors like these are bitmap editors. Although they can create paths and export them, for the most part they cannot do svg-type modifications.

Some svg editors are illustrator, inkscape, and fireworks, to name a few.

Most vector editors can import bitmap images and use them as an svg element. My understanding is they can’t really modify them other than stretching/skewing them. However, I could be (and probably am) wrong about this. (I don’t pretend to be an expert at this!)

Svg understanding

To start, Mozilla Developer Network had a great set of documents to help understand SVG: what it is, what elements it’s composed of, and how to define shapes, paths, and transforms.

MDN SVG tutorial

From the article: Scalable Vector Graphics (SVG) is an XML markup language for describing two-dimensional vector graphics. SVG is essentially to graphics what XHTML is to text.

That being said, you’d be interested to know that inside of a root svg element, it contains other elements. Here’s a list of those elements available.

Using Snap.svg to make svg elements look alive

Modifying svg element attributes

You can access and modify any attribute on any element from Snap.svg. Examples could be the stroke, the width of the stroke, the x/y coordinates of the element, and many other attributes.

First, select the element (using Snap), then do a call to elem.attr({}):

Html:

<html>
  <body>
    <svg id="svg-node">
      <circle id="svg-element" />
    </svg>
  </body>
</html>

JavaScript:

var svgNode = Snap.select('#svg-node'),
    svgElement = svgNode.select('#svg-element');

svgElement.attr({
    fill: "#bada55",
    stroke: "#000",
    strokeWidth: 5,
    x: 50,
    y: 100
});

Transforms

Snap.svg defines some methods to help you transform your svg elements. It looks like this:

var datachomp = Snap.select("#datachomp"),
    arm = datachomp.select("#datachomp-arm");
var elementTransform = "t0,-80r360t-30,0r360t-30,30t-10,10";
arm.animate({transform: elementTransform}, 500, mina.elastic);

However, I was having some trouble understanding the transform string syntax. The author also created Raphael.js and provides some additional documentation on how to understand transform strings here.

Taken from the Raphael reference:

“ Each letter is a command. There are four commands: t is for translate, r is for rotate, s is for scale and m is for matrix.

There are also alternative ‘absolute’ translation, rotation and scale: T, R and S. They will not take previous transformation into account. For example, …T100,0 will always move element 100 px horisontally, while …t100,0 could move it vertically if there is r90 before. Just compare results of r90t100,0 and r90T100,0.

So, the example line above could be read like ‘translate by 100, 100; rotate 30° around 100, 100; scale twice around 100, 100; rotate 45° around centre; scale 1.5 times relative to centre’. As you can see rotate and scale commands have origin coordinates as optional parameters, the default is the centre point of the element. Matrix accepts six parameters. “

Paths

Again, I admit I knew very little about how to define a path. This document helped tremendously in explaining the different types of paths and how to define them.

One task I wanted to accomplish was making an svg element follow along a path. This CodePen was a huge help in figuring out how to do that.

Out of this Google group thread comes a code snippet that helps:

//Snap.svg helper method to make an element trace a defined path

function animateAlongPath( path, element, start, dur ) {
    var len = Snap.path.getTotalLength( path );
    Snap.animate( start, len, function( value ) {
            var movePoint = Snap.path.getPointAtLength( path, value );
            element.attr({ x: movePoint.x, y: movePoint.y });
    }, dur);
}
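Calling the helper might look like this (the selectors, start offset, and duration are made up for illustration):

```javascript
var paper = Snap.select('#svg-node'),
    path = paper.select('#motion-path'),   // the <path> element to trace
    ball = paper.select('#svg-element');   // the element to move

// Animate the element from the start of the path to its end over 2 seconds.
animateAlongPath(path, ball, 0, 2000);
```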

I found a blog post with a demo that helped show some additional paths and how to use tools to create them, found here.

I found another little hack on how to create paths using GIMP. First, start to create your path with the path tool. When you’re done, select your path you created from the toolbar (under the ‘paths’ tab), right click it, and select export path. That should give you an svg file with the path inside of it.

Svg vs Canvas

A question came up, when would you want to use svg over something like the canvas?

After reading this article, the author makes a point for which you’d want to use:

SVG Relies on Files, Canvas Uses Pure Scripting

SVG images are defined in XML. As a result, every SVG element is appended to the Document Object Model (DOM) and can be manipulated using a combination of JavaScript and CSS. Moreover, you can attach an event handler to an SVG element or update its properties based on another document event. Canvas, on the other hand, is a simple graphics API. It draws pixels (extremely well, I might add) and nothing more. Hence, there's no way to alter existing drawings or react to events. If you want to update the Canvas image, you have to redraw it.

I’ll continue updating this post as I learn more. I hope this helps others learn these svg topics with ease.

Exploring Best Practices With Docker for Older Libraries

less than a 1 minute read

I’m not pretending to be an expert on what’s in this post; it’s merely a talking point to learn from.

Problem: I need to reassemble an old C++ project with some old libraries and files that may not be around (or have disappeared already).

First, there’s a big chunk of files used strictly for rendering a video, ~560MB, some of which had since gone missing.

Then there are some old C++ libraries that a previous shell script was fetching with wget, and the files are nowhere to be found.

Finally, there’s the need to rebuild the image used to render the files.

There are so many ways to attack this problem; I’m just going to cover my approaches. I’m open to new ones as well.

Potential solutions for rendering files

  • store on AWS S3
  • put into git repo
  • store on server somewhere

Let’s break down the pros/cons of each.

Store on AWS S3

PROS:

  • quick to add
  • cheap to store

CONS:

  • can go missing (and did)

Put into git repo

PROS:

  • versioning control with notes (none before)
  • the files give a story in time
  • cheap or free

CONS:

  • slow to pull repo (duh)
  • storing binary files (derp)

Store on server somewhere

PROS:

  • cheap to store
  • fast to access (local network)

CONS:

  • can go missing (and did)
  • no story to the files

Potential solutions for server image

  • single shell script to run for setting up image
  • dockerfile to build up the image with RUN commands
  • dockerfile to execute the single shell script

Some of the libraries this project depends on are no longer where a previous shell script expected to find them. That means I have to do some kind of dependency management, whether that’s forking the libraries into a git repo I know will be solid, copying the files somewhere I can trust, or more simply committing them to my own repo (560 MB or more... ugh).

This is my thought process; not sure if it’s right:

If your aim is to have something fully repeatable and easy to run again, go with the docker solution.

If your aim is to just get it done quickly, go with the shell script.

However, I still can’t quite pin down the pros/cons of a Dockerfile that just runs a single shell script.

Let’s dive deeper into the pros and cons of each.

Single shell script

Steps:

  • Create instance from Amazon AMI
  • create / test shell script
  • copy shell script to server
  • run shell script on server

PROS:

  • quick to run (once completed, overall time)
  • quick to tell you of errors
  • works on my machine

CONS:

  • not easily repeatable
  • may not work in another environment (things are assumed)
  • not always easy to debug

Dockerfile with RUN commands

Steps:

  • install docker (if not already)
  • create Dockerfile with RUN commands
  • ADD dependencies to the docker container
  • docker build image
  • docker run image
  • bundle image to Amazon AMI
  • start instance
  • profit

PROS:

  • control the starting point environment
  • commands verified to work step by step
  • easily repeatable
  • quick to tell you of errors
  • fast after first run (cache)

CONS:

  • slow start up with downloads/updates/git clones/etc
  • costly for disk space
  • must install docker / boot2docker / etc
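To make the comparison concrete, here’s roughly what the RUN-command flavor could look like. The base image, package names, and paths are all assumptions for illustration, not the actual project’s setup:

```dockerfile
FROM ubuntu:14.04

# Each RUN line becomes a cached layer, so a failing step
# doesn't force re-downloading everything before it.
RUN apt-get update && apt-get install -y build-essential git

# ADD bakes the vendored libraries into the image instead of
# wget-ing them from servers that may disappear.
ADD vendor/ /opt/render/vendor/
ADD render-src/ /opt/render/src/

WORKDIR /opt/render/src
RUN make

CMD ["./render"]
```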

Dockerfile to execute single shell script

Steps:

  • install docker (if not already)
  • create image from dockerfile
  • run image
  • create / test shell script in image
  • modify dockerfile – ADD shell script created in previous step

PROS:

  • quick to test out your commands

CONS:

  • harder to have the diffs between images when modifying shell script

Managing Environment Variables for Your Ionic Application

less than a 1 minute read

I’ve been lucky enough to be developing with the Ionic framework lately. One issue I keep running into is – how do I manage some environment variables (base api url, debug enabled, upload url, etc) across my code, both tests and application.

I’d like to share a little solution I’ve come up with. It may not be the BEST solution to take, but it has been working great for me.

The idea

I’d like to have some files that I can preprocess – say ‘AppSettings.js’ that will expose some variables for the rest of my application to use. This could contain those pesky variables that I will need to change frequently.

I put my preprocess file templates in a root folder named templates; each template contains my preprocess variables. Once preprocessed, the output is written to www/js/appsettings.js.

That preprocessed file will be used in both my index.html and my karma.conf.js for testing.

I harness gulp a lot; however, you can still use Grunt or just plain Node.js as well.

My AppSettings.js file:

AppSettings = {
  // @if NODE_ENV == 'DEVELOPMENT'
  baseApiUrl: 'http://localhost:4400/',
  debug: true
  // @endif
  // @if NODE_ENV == 'TEST'
  baseApiUrl: 'https://test.api-example.com/'
  // @endif
  // @if NODE_ENV == 'PRODUCTION'
  baseApiUrl: 'https://api-example.com/'
  // @endif
}

In my preprocess file, you can see some @if NODE_ENV == '' statements beginning with //. The block between an @if and its @endif is kept only when the condition is true. (Duh)

Gulp Preprocess Task

I like gulp-preprocess. Install it with npm install --save-dev gulp-preprocess.

My gulpfile contains 3 tasks – dev / test_env / and prod, looking like this:

var gulp = require('gulp');
var preprocess = require('gulp-preprocess');

gulp.task('dev', function() {
  return gulp.src('./template/appsettings.js')
    .pipe(preprocess({context: { NODE_ENV: 'DEVELOPMENT', DEBUG: true}}))
    .pipe(gulp.dest('./www/js/'));
});

gulp.task('test_env', function() {
  return gulp.src('./template/appsettings.js')
    .pipe(preprocess({context: { NODE_ENV: 'TEST', DEBUG: true}}))
    .pipe(gulp.dest('./www/js/'));
});

gulp.task('prod', function() {
  return gulp.src('./template/appsettings.js')
    .pipe(preprocess({context: { NODE_ENV: 'PRODUCTION'}}))
    .pipe(gulp.dest('./www/js/'));
});

Invocation

Now I just have to fire off gulp dev for my development settings, gulp test_env for test settings, and gulp prod for production settings.

As I mentioned – this works great for my tests, as I include the preprocessed file in karma.conf.js so my tests can use AppSettings.baseApiUrl (make sure you have your tests call the dev task first!)

I hope this helps any who may have some environment variables they need to change between environments!