Alexa skill with Ruby as OpenShift Container App

A custom Alexa skill (for Amazon Echo family devices) can be written in any programming language and hosted as a web service (Hosting a Custom Skill as a Web Service). There are a few requirements the web service must meet:

  • Must be internet accessible.
  • Must adhere to the Alexa Skills Kit interface (a JSON request-response protocol).
  • Must be secured with HTTPS and a valid certificate.

In this article I will show how to build a Ruby app that can handle requests sent by Alexa and deploy it as a container application on the Red Hat OpenShift cloud (free plan). Once the web service is up and running, we will create and configure the custom skill in the Amazon Developer Console.

You may be wondering whether we need to implement the Alexa Skills Kit interface from scratch, since all the documentation available from Amazon covers Node.js and AWS Lambda. No need: there are plenty of Ruby gems that implement the protocol. The one I chose is alexa_rubykit (thanks, Damian F). Recently there was a pull request adding Alexa's built-in dialog interfaces, which makes it very easy to create context-sensitive conversations using a minimal interaction model.

If I wanted to build a complex skill that requires database access, authorization, rich rendering and session management, I would start with Ruby on Rails and ActiveRecord, but in this case let's create a simple Rack application with only a few lines of code:

config.ru

require 'alexa_rubykit'

map '/alexa' do
  alexa = proc do |env|
    response = AlexaRubykit::Response.new
    response.add_speech('Ruby is running ready!')
    [200, {"Content-Type" => "application/json"}, [response.build_response]]
  end
  run alexa
end
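
This minimal app returns the same canned speech for every request. For a real skill you would also parse the incoming request body and branch on the request type. Here is a sketch of how that could look; AlexaRubykit.build_request and the request accessors follow the gem's README, but may differ between alexa_rubykit versions:

require 'json'
require 'alexa_rubykit'

map '/alexa' do
  alexa = proc do |env|
    # Alexa POSTs a JSON document; build a request object from it
    request = AlexaRubykit.build_request(JSON.parse(env['rack.input'].read))

    response = AlexaRubykit::Response.new
    case request.type
    when 'LAUNCH_REQUEST'
      response.add_speech('Ruby is running ready!')
    when 'INTENT_REQUEST'
      response.add_speech("You invoked #{request.name}.")
    end
    [200, {"Content-Type" => "application/json"}, [response.build_response]]
  end
  run alexa
end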

Create a Gemfile for use with Bundler (the best way to manage a Ruby application's gems).

Gemfile

source 'https://rubygems.org'
gem 'rack'
gem 'puma'
gem 'alexa_rubykit'

Start the app (run gem install bundler first if Bundler is not installed):

$ bundle install
$ rackup

Now http://localhost:9292/alexa will serve a valid Alexa response.
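
You can verify that from another terminal. A small sanity-check script (a sketch; the dig path follows the response structure produced by alexa_rubykit and may need adjusting for other gem versions):

check.rb

require 'net/http'
require 'json'

res = Net::HTTP.get_response(URI('http://localhost:9292/alexa'))
body = JSON.parse(res.body)
puts body.dig('response', 'outputSpeech', 'text') # expect "Ruby is running ready!"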

OpenShift Online

The Red Hat OpenShift platform provides a free plan that is excellent for our purposes: an easy way to deploy a Ruby app on a sub-domain covered by a wildcard SSL certificate. The most popular platform for examples around the Ruby community is Heroku, but I like to play with alternatives. OpenShift announced end of life for the v2 (gears-based) platform, and this is an excellent opportunity to try the new generation, container-based OpenShift 3 platform.

Create an account if you don’t have one and read the Beyond the Basics documentation section. (A hint: when I migrated my account to the new platform, it offered me a choice between the US East and Canada regions; it looks like the California cluster may be overloaded, because some configuration commands failed with the message “too many requests”, so I suggest choosing another option.) Install the CLI: open the web console at https://manage.openshift.com, click the help icon and go to Command Line Tools.

Copy the suggested oc login https://api..openshift.com --token=… login command, then create a new project (a project is a group of applications) and check its status:

$ oc new-project <project-name>

$ oc status

The easiest way to set up the app is to provide a GitHub repository. Follow the documentation from Beyond the Basics (not the one from the GitHub README). Here it is:

$ oc new-app https://github.com/AlexVangelov/ruby-ex --name alexa

For the app to be exposed to the internet, we need to create a route by executing:

$ oc expose svc/alexa

Now some confusion follows. Is it deploying? How does it detect that I want Ruby? Be patient; after a while the app will be up and running (hopefully). Some useful commands:

  • oc status
  • oc logs -f bc/alexa
  • oc get pods -w

I see my web service response at http://alexa-alexv.193b.starter-ca-central-1.openshiftapps.com/alexa. I want it to be available via HTTPS, but I was not able to find out how to do this with the command-line tools, so I did it manually from the web console (Applications -> Routes, Actions -> Edit).

[Screenshot: the route's TLS settings in the OpenShift web console]

Now the web service complies with all the requirements and we can create the Alexa skill.

Amazon Developer Console

Follow the documentation from Register an Alexa Skill. I created one with the invocation name “rubytest”. It is highly recommended to use the Skill Builder Beta, but for this demo application I can use a simple interaction model with the following schema.

{
  "intents": [
    {
      "slots": [
        {
          "name": "helper",
          "type": "LITERAL"
        }
      ],
      "intent": "rubytest"
    }
  ]
}
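
The schema alone is not enough: the interaction model also needs sample utterances that map spoken phrases to the intent. With old-style LITERAL slots the sample phrase is written inside the slot braces; a minimal (entirely hypothetical) Sample Utterances entry could look like this:

rubytest {do something|helper}
rubytest {tell me more|helper}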

In the next configuration section select Service Endpoint Type “HTTPS” and set the URL to your web service URL (mine is https://alexa-alexv.193b.starter-ca-central-1.openshiftapps.com/alexa).

In the SSL Certificate section select My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority.

Try the web service response with the Service Simulator in the Test section.

[Screenshot: the Service Simulator exchanging a request and response with the skill]

If you have an Alexa device registered with the same Amazon account, your unpublished skill is available to you. Say “Alexa, start rubytest” and it will reply with “Ruby is running ready!”.

Building a production skill requires following the Amazon recommendations, like verifying that the request really comes from Alexa by checking the signature, the timestamp and so on; maybe I will cover that in another article about the dialog interfaces I tried. They are cool, and the interaction model is much improved from a year ago. For example, a friend of mine tried to cheat my skill by confirming an intent with “that’s correct” instead of one of the common agreement phrases, and the dialog model accepted the confirmation even though it was not a described response.

10x for reading!

 


Gmail Add-on Making – Part 2 / SCA design pattern

In the first blog post, Gmail Add-on Making, I was dealing with the “Hello Add-on” project setup steps (tools used: TypeScript, Webpack and node-google-apps-script).

In this article I’m planning to play with the cards UI, OAuth2, API services and tests. The source code so far is tagged here: https://github.com/AlexVangelov/gmail-addon-oauth2-wp-ts/releases/tag/1.0.0.

If you didn’t go through the Gmail Add-ons documentation yet, do it now before you continue reading, to learn the concepts of cards, sections, widgets and actions.

Looking at the official samples (https://github.com/googlesamples/gmail-add-ons-samples), I’m not very happy with the project structure. All source files live in a single folder and the only hint of what a file is about is its name. Let’s try to organize it better and create a software pattern. Similar to MVC, I’m thinking about something like a Service-Card-Action pattern with the following folder structure:

  • /src
    • /actions – global functions/action handlers.
    • /cards – card builders as TypeScript classes.
    • /services – retrieve data and fetch API models as TypeScript classes.
    • /helpers – some common stuff for DRY (don’t repeat yourself) routines.

The app entry point in the manifest is gmail.contextualTriggers.onTriggerFunction, and in most examples it is called ‘GetContextualAddOn’. It is an action, so let’s rename and move the src/index.ts file into the actions folder.

src/actions/getContextualAddOn.ts
declare var global: any;

import { MainCard } from '../cards/MainCard';

global.GetContextualAddOn = (e: GoogleAppsScript.Card.ActionEvent)=> {
  let card = new MainCard(e);
  return [card.build()];
}

Some knowledge of TypeScript is required to follow the code, but I’ll try to explain what is going on. I chose TypeScript exactly so I could build cards and services as class instances. The card class is a kind of wrapper around the GoogleAppsScript.Card.CardBuilder interface. The action event is passed as a parameter to the constructor because of the add-on nature: all action parameters, form input values and other meaningful info (like clientPlatform) are accessible from the event. The idea is that the card constructor will add sections and widgets conditionally (I already experimented a little bit and this works well, you’ll see).

src/cards/MainCard.ts
import { Card } from "./Card";

export class MainCard extends Card {

  constructor(e: GoogleAppsScript.Card.ActionEvent) {
    super(e);
    this.buildHeader();
  }

  buildHeader() {
    let header = CardService.newCardHeader()
      .setTitle('Card Header')
    this._card.setHeader(header);
  }
}

All cards extend an abstract class that takes care of the shared routines. For now I only need the _card and _event instance variables to be available in every card.

src/cards/Card.ts
export abstract class Card {
  _event: GoogleAppsScript.Card.ActionEvent;
  _card: GoogleAppsScript.Card.CardBuilder;

  constructor(e: GoogleAppsScript.Card.ActionEvent) {
    this._event = e;
    this._card = CardService.newCardBuilder();
  }

  build() {
    return this._card.build();
  }
}

Now change src/index.ts to require the actions (add the @types/node package, or add ‘declare var require: any;’ so TypeScript does not complain). See the project transformation pushed with this git commit.

Now comes the interesting part: OAuth2 and Connecting to non-Google Services. I’m gonna reuse the same GitHub service as in the example, but using the SCA design pattern.

The project manifest needs changes to include the OAuth2 for Apps Script library, and I will use something not available in the example: the gmail.authorizationCheckFunction property.

Modify src/appsscript.json.ejs
{
  "dependencies": {
    "libraries": [{
      "userSymbol": "OAuth2",
      "libraryId": "1B7FSrk5Zi6L1rSxxTDgDEUsPzlukDsi4KGuTMorsTQHhGBzBkMun4iDF",
      "version": "21"
    }]
  },
...
  "oauthScopes": [
    ...
    "https://www.googleapis.com/auth/script.external_request",
    "https://www.googleapis.com/auth/script.storage"
  ],
  "gmail": {
    "authorizationCheckFunction": "githubAuthorizationCheck",
    "openLinkUrlPrefixes": [
      "https://github.com"
    ]
...

The githubAuthorizationCheck action will be called by the Gmail add-on platform prior to rendering the add-on. If the service is not authorized, it will throw an authorization exception (CardService.newAuthorizationException), which builds the AuthorizationCard.

src/actions/githubAuthorizationCheck.ts
declare var global: any;

import { githubAuth, authorizationException } from '../helpers';

global.githubAuthorizationCheck = ()=> {
  if (!githubAuth.hasAccess()) {
    return authorizationException();
  }
}

The OAuth2 library calls this a “service”, but it does not need to be a class in the services folder; it can be initialized once from static data, so I will make it a helper.

src/helpers/githubAuth.ts
declare var OAuth2: any;

export var githubAuth = OAuth2.createService("github")
  .setAuthorizationBaseUrl("https://github.com/login/oauth/authorize")
  .setTokenUrl("https://github.com/login/oauth/access_token")
  .setClientId("-APP_ID-")
  .setClientSecret("-SECRET-")
  .setCallbackFunction("githubAuthCallback")
  .setPropertyStore(PropertiesService.getUserProperties())
  .setCache(CacheService.getUserCache())
  .setScope("user user:email user:follow repo")
  .setParam("approval_prompt", "auto"); // this one is an undocumented bugfix

* Do not forget to configure the OAuth2 app as described in Redirect URI.

src/helpers/authorizationException.ts
import { githubAuth } from './githubAuth';

export function authorizationException () {
  CardService.newAuthorizationException()
    .setAuthorizationUrl(githubAuth.getAuthorizationUrl())
    .setResourceDisplayName("Github Resource")
    .setCustomUiCallback("createAuthorizationUi")
    .throwException();
}

Create the createAuthorizationUi action (see setCustomUiCallback above) and the AuthorizationCard:

src/actions/createAuthorizationUi.ts
declare var global: any;

import { AuthorizationCard } from '../cards';

global.createAuthorizationUi = (e)=> {
  let card = new AuthorizationCard(e);
  return [card.build()];
}
src/cards/AuthorizationCard.ts
import { Card } from "./Card";
import { githubAuth } from '../helpers';

export class AuthorizationCard extends Card {

  constructor(e: GoogleAppsScript.Card.ActionEvent) {
    super(e);
    this.buildHeader();
    this.buildSection();
  }

  buildHeader() {
    let header = CardService.newCardHeader()
      .setTitle('Authorization')
    this._card.setHeader(header);
  }

  buildSection() {
    let section = CardService.newCardSection()
      .addWidget(
        CardService.newTextParagraph().setText(
          'Please authorize access to your GitHub account.'
        )
      )
      .addWidget(
        CardService.newButtonSet().addButton(
          CardService.newTextButton()
            .setText("Authorize")
            .setAuthorizationAction(
              CardService.newAuthorizationAction()
                .setAuthorizationUrl(githubAuth.getAuthorizationUrl())
            )
        ) 
      )
    this._card.addSection(section);
  }
}

The next time the app is loaded, it will ask the user to grant permissions for the newly added scopes in the manifest, and then it will show the AuthorizationCard.

To complete the OAuth flow, a githubAuthCallback action is needed (see setCallbackFunction in src/helpers/githubAuth.ts) that will handle the authorization result:

src/actions/githubAuthCallback.ts
declare var global: any;

import { githubAuth } from '../helpers';

global.githubAuthCallback = (e)=> {
  let isAuthorized = githubAuth.handleCallback(e);
  if (isAuthorized) {
    return HtmlService.createHtmlOutput(
      'Success! '
    );
  } else {
    return HtmlService.createHtmlOutput('Denied');
  }
}

Here I would like to digress for a moment and talk a little bit about the OAuth2 flow, because I often have difficulty explaining it to others. By opening a new window and loading the authorization URL (or redirecting to it), the application transfers UI control to the browser and remains in a pending state. From this moment the dialogue takes place between the user and the 3rd-party app (GitHub). When the user has completed the authorization process, the foreign service redirects them back (the redirect URI) and the application regains control over the interface. The user may return with an error, with success, or not come back at all. During this process the Gmail add-on behavior is to show a loading spinner and to refresh the card once the OAuth popup window is closed.
The callback handler must store the result of the authorization. This is handled automatically by the OAuth2 library, which saves the token in PropertiesService.

* The property key used by the OAuth2 library is “oauth2._service_name_”; it is possible to extract some info included with the token response later by JSON.parse-ing the property value.

Now don’t give up (talking to myself). Sign out is as important as sign in, so let’s do it; I will leave the API service for the next article, because I feel like I started to sound edificationary (is there such a word?).

The OAuth2 library provides a reset method that will clear the stored tokens (* note that it does not call the revoke API).

Add to src/helpers/githubAuth.ts
...
export function githubAuthReset() {
  return githubAuth.reset();
}

It would be great to attach Sign out as a menu-item CardAction on the MainCard, which is shown when authorized.

Modify src/cards/MainCard.ts
  constructor(e: GoogleAppsScript.Card.ActionEvent) {
    ...
    this.buildCardActions();
  }

  buildCardActions() {
    let signOutAction = CardService.newCardAction()
      .setText('Sign out')
      .setOnClickAction(
        CardService.newAction()
          .setFunctionName('githubSignOut')
      );
    this._card.addCardAction(signOutAction);
  }
src/actions/githubSignOut.ts
declare var global: any;

import { githubAuthReset, authorizationException } from '../helpers';

global.githubSignOut = ()=> {
  githubAuthReset();
  return authorizationException();
}

And it works. But there is something weird… After signing out, when I refresh the email page the MainCard is shown again, which means that githubAuth.hasAccess() returns true. Looking at the source code of the OAuth2 library, everything looks correct. It deletes the property key and clears the cache. Then why? I checked the value of the “oauth2.github” key in PropertiesService and, surprisingly, it was still there. Maybe the deleteProperty method is asynchronous, and throwing the AuthorizationException immediately after it aborts the operation? Let’s try not to throw the authorization exception, but to rebuild the card and show a Notification instead:

global.githubSignOut = (e)=> {
  githubAuthReset();
  var card = new MainCard(e);
  return CardService.newActionResponseBuilder()
    .setNavigation(
      CardService.newNavigation().updateCard(card.build())
    )
    .setNotification(CardService.newNotification()
        .setType(CardService.NotificationType.INFO)
        .setText("Some info to display to user"))
    .build();
}

But the behavior is the same… Then I noticed that in the Apps Script console, under Resources -> Libraries, the latest available OAuth2 version is 24, while the project is using 21.

After changing the OAuth2 library version in the manifest, sign in/out works as expected!

Commit, push and 10x for reading!

Gmail Add-on Making

The Gmail add-on is a new toy (available since about a month ago). It is a micro application that runs within Gmail. So far I see 10 apps available in the G Suite Marketplace (category: works-with-gmail). As a user experience, the add-on is a side panel visible when you open a message in Gmail, and it gives a chance to add some extra functionality starting from the current email. It is not active while you search or list emails, and therefore it cannot be used as an always-active channel for receiving events from the outside world. But there is something interesting: it is accessible from both the web interface and the mobile Gmail app, as a native extension to the user’s email. Good, let’s try to build one!

I want to know how it works, to estimate what I can do. It looks like the program code is executed on Google servers and some internal protocol brings the UI into the browser. Every click triggers a spinner, so forget about a dynamic interface. There is no way to use your own HTML elements or CSS styles. We’ll live with that, since this is a direct channel to all Gmail users. The source code must be written in Google Apps Script (pure JavaScript with access to internal Google services), but the source is not accessible like in a repository; it is stored in Google Drive. The editor is online, and there is some concept of deployment and versioning that I will go deeper into later. I found 2 samples here: https://github.com/googlesamples/gmail-add-ons-samples. The sample integrating GitHub OAuth2 is something I’m interested in, but the deployment instructions are some kind of joke (create and copy-paste each file, hah). To do it the right way, the first thing I want is to set up a development environment locally on my computer, with tools to build, test and upload the project. I see a project on GitHub addressing the upload process: https://github.com/danthareja/node-google-apps-script. I think it’s a good idea to start my add-on coding with TypeScript and to build with Webpack.

Project setup

$ mkdir gmail-addon-oauth2-wp-ts
$ cd gmail-addon-oauth2-wp-ts
$ npm init # (set "main": "src/index.ts")
...
$ yarn add typescript webpack awesome-typescript-loader -D
$ echo "node_modules/" > .gitignore && git init
webpack.config.js
module.exports = {
  entry: './src/index.ts',
  output: {
    filename: 'Code.gs',
    path: __dirname + '/build'
  },
  resolve: {
    extensions: ['.ts']
  },
  module: {
    rules: [
      { test: /\.ts$/, loader: 'awesome-typescript-loader' }
    ]
  }
}
tsconfig.json
{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es5",
    "sourceMap": true,
    "outDir": "./build",
    "declaration": true
  },
  "exclude": [
    "node_modules"
  ],
  "files": [
    "src/index.ts"
  ]
}
src/index.ts
export function GetContextualAddOn(e: Event) {
}

Add a “build”: “webpack” entry to the script commands in package.json and the initial setup is done.

$ npm run build

Now for the add-on specifics… The CardService Apps Script typings are available with @types/google-apps-script, and node-google-apps-script will bring the deploy commands.

$ yarn add @types/google-apps-script node-google-apps-script -D

And it starts looking like a nice project…
The add-on needs a manifest file (see View > Show project manifest in the Apps Script console). The manifest can be pretty specific per environment and should be generated from a template during the build. I will use html-webpack-plugin as a template engine.

$ yarn add html-webpack-plugin -D

Add the plugin in webpack.config.js

const HtmlWebpackPlugin = require('html-webpack-plugin');
...
plugins: [
    new HtmlWebpackPlugin({
      filename: 'appsscript.json',
      template: './src/appsscript.json.ejs',
      chunks: [],
      addon: {
        name: 'Hello Gmail Add-on'
      }
    })
  ]
...
src/appsscript.json.ejs
{
  "oauthScopes": [
    "https://www.googleapis.com/auth/gmail.addons.execute",
    "https://www.googleapis.com/auth/gmail.readonly"
  ],
  "gmail": {
    "name": "<%= htmlWebpackPlugin.options.addon.name %>",
    "logoUrl": "https://www.gstatic.com/images/icons/material/system/2x/bookmark_black_24dp.png",
    "contextualTriggers": [{
      "unconditional": {
      },
      "onTriggerFunction": "GetContextualAddOn"
    }],
    "primaryColor": "#4285F4",
    "secondaryColor": "#4285F4",
    "version": "TRUSTED_TESTER_V2"
  }
}

Follow the instructions in https://github.com/danthareja/node-google-apps-script to get Google Drive credentials. I chose to use an Independent Developer Console Project and downloaded my credentials into the project folder (*add that file to .gitignore).

Run gapps auth and visit the printed authorization URL manually:

$ node_modules/.bin/gapps auth client_secret_.apps.googleusercontent.com.json

A “Successfully Authenticated with Google Drive!” message is shown in the console once you are authorized. Now I need to set up the project as described in Apps Script Project. It will generate a gapps config file, but I want to change the path to the “build” folder:

gapps.config.json
{
  "path": "build",
  "fileId": "my-apps-script-project-id"
}

Add a “deploy”: “gapps upload” script command to package.json and try it:

$ npm run build && npm run deploy

Pushing back up to Google Drive…
The latest files were successfully uploaded to your Apps Script project.

Nice! But instead of my appsscript.json content, I see some default manifest in the Google Script console… Looking at the node-google-apps-script issues, I see other people reporting that the file is not uploaded, so I forked the project and applied a quick fix (commit “add apppsscript.json upload”) to include .json files in the upload. I modified my package.json to install the package from my forked repository. There is a pull request fixing this in the original project; hopefully it will be merged by the time you read this article.

The owner of the project, danthareja, sounds like a nice guy, asking the community members to help with the maintenance of the project in his last commit. In fact I found another similar tool for uploading Apps Script files that looks active (https://github.com/MaartenDesnouck/google-apps-script), but for some reason my sympathy goes to the first one, and maybe I will offer my help if I keep dealing with Apps Script.

Change package.json to use my fixed version of gapps:

"node-google-apps-script": "git+https://github.com/AlexanderVangelov/node-google-apps-script.git"

Now it uploads both the manifest and the code, and I can try to run the add-on for the first time, as described in install unpublished add-ons. Getting the add-on ID is a bit tricky: in the Apps Script console, go to Publish -> Deploy from manifest. Do not create a deployment; just click the “Get ID” link to copy the ID and cancel everything. Installing unpublished add-ons is available for @gmail.com accounts: open Settings in Gmail, go to the Add-ons tab, click “Enable developer add-ons for my account”, paste the ID and click install.

Now I see the add-on activated when I open some email, asking for authorization. This is the internal Google OAuth2 flow that requires the user to grant the add-on access to the resources requested in the manifest. Access granted, but I see a strange error: it can’t find the entry function… It must be something with my Webpack configuration.

A quick internet search points me to the gas-webpack-plugin package, which explains that the entry point called from google.script.run “must be a top level function declaration”, so let’s give it a try.

$ yarn add gas-webpack-plugin -D

 webpack.config.js
const GasPlugin = require("gas-webpack-plugin");
...
  plugins: [
    new GasPlugin(),
...
New src/index.ts
declare var global: any;
global.GetContextualAddOn = (e)=> {
  let card = CardService.newCardBuilder()
    .setHeader(
      CardService.newCardHeader()
        .setTitle('Card Header')
    )
  return [card.build()];
}

$ npm run build && npm run deploy

Voila! The add-on shows a card header. I will continue with the app development in a separate article. My plan is to build the UI, connect a non-Google service with OAuth2 and access some external APIs from the add-on.

 

You can find the source code for this article on GitHub: https://github.com/AlexVangelov/gmail-addon-oauth2-wp-ts

10x for reading!

User Acceptance Testing and Automation as Atomic App

The germ of the technical problem

Is there a problem? Looking at some recent internet activity about test automation confirms that test environment stability problems are widespread (Definition of Done for Regression Test Automation Suite – Quick, Reliable and Credible by Alex Lavrenov; How Many Test Failures Are Acceptable? by Dave Farley). My philosophy is that before we start writing scripts to re-run failed tests, we have to think a little about whether we have done everything to provide a favorable environment for our test suite.

There are a lot of tools for acceptance testing and continuous integration, and all of them use a browser-specific API (a driver) to drive the testing process. The lists of drivers for the most popular desktop and mobile browsers are worth reading about.

We trust that the vendor-provided browser drivers are stable software, but they are not. They are very sensitive to the operating system and the windowing system. So when we say that we are gonna use Selenium, it’s not enough. Even more, if we switch to parallel testing without some special settings, it becomes a nightmare.

The germ of the spiritual problem

Very often this part of the development process can make a developer unhappy. After spending a lot of time writing test scenarios for your application, you wait for the satisfaction of a job well done. Oh, that never happens. Looking at failing tests every morning is really depressing.

Another trend is that when you work in a team, not all developers care about the tests. But the team should not force a spike developer to fix tests when he gets inspiration. Often this task remains for a special breed of programmers whom I greatly respect.

How to be a happy developer when your task is User Acceptance Testing?

Respect

Build it so that no one can say “the tests are broken”. The test suite must instill respect. If there are failures, it should mean that there is a real issue or the interface does not meet the requirements.

Guideline

First you need a stable, simple and easy-to-maintain test environment. Do not rush into developing tools on top of the test environment. Prepare and test the test environment itself. Ask the company for a virtual server specifically provided for those purposes, or build your own. Install the low-level tools and run a simple test 100 times, or all night long. If there is no 100% success, something is wrong. “Unable to obtain stable firefox connection in 60 seconds” or “Unable to connect to host 127.0.0.1 on port 7055 after 45000 ms” are typical error messages for an unstable environment.

Once you find the right configuration, fix the versions of all modules. Record all your steps by creating scripts that can set up the test environment from scratch.

Solution

A nice solution that I want to share is using a Vagrant box. Vagrant is a cross-OS tool that creates and configures virtual development environments. Vagrant Provisioning can help you document and preserve the test environment setup process. With Vagrant Synced Folders you can keep your project outside the test environment, work with your favorite editor and, finally, get the test execution results back on your host machine.

I’m a fan of Fedora Linux and I was pleasantly surprised when I found these two articles:
Using Fedora 22 Atomic Vagrant Boxes
Running Vagrant on Fedora 22

I was not familiar with what “Atomic” means, and reading more about Atomic App and the Nulecule Specification (describing a container-based application) convinced me that this is the right direction.

Technical Part

I’m a Rubyist, so I will give an example configuration with a Ruby project, but you can adapt it for other project natures too.
* Part of my plan to be a happy developer was to program in Ruby. (This is not a joke. When a Java or PHP developer becomes a Ruby and Ruby on Rails developer, he becomes a happy developer.)

Create an empty project and a Gemfile including RSpec, Capybara and Selenium WebDriver.

Gemfile

source 'https://rubygems.org'

gem 'rspec'
gem 'capybara'
gem 'selenium-webdriver'

Run from your project directory:

bundle install
bundle exec rspec --init

The last command will generate the .rspec and spec/spec_helper.rb files.
Now create a separate file, spec/test_helper.rb, for the initialization code and add require 'test_helper' at the top of the existing spec_helper.rb:

spec/test_helper.rb

require 'capybara/rspec'
require 'selenium-webdriver'

Capybara.default_driver = :selenium

Let’s create a test feature:

spec/features/uat_spec.rb

describe 'UAT', type: :feature do

  it "Find Definition from Wikipedia" do
    visit 'https://en.wikipedia.org'
    
    within '#simpleSearch' do
      fill_in 'search', with: 'UAT'
      click_on 'Go'
    end
    
    within '#bodyContent' do
      expect(page).to have_selector 'a', text: 'User acceptance testing'
      click_on 'User acceptance testing'
    end
    
    expect(page).to have_content 'verifying that a solution works for the user'
  end
end

Now we have a ready test environment and we can run the test (I told you Ruby is fun). Run it and you will see Firefox opening Wikipedia and executing the scenario:

bundle exec rspec
.

Finished in 7.25 seconds (files took 0.2968 seconds to load)
1 example, 0 failures

FirefoxDriver is the default, but you can easily switch to Chrome (having ChromeDriver installed) by changing spec/test_helper.rb:

Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, :browser => :chrome)
end

Capybara.default_driver = :chrome

We can run the test several times with a little Bash help, and I’m sure all runs will pass:

for i in 1 2 3 4 5; do bundle exec rspec; done;

But let’s go parallel!
Clone the test 4 times (change the describe title too) and add the parallel_tests gem to the Gemfile (don’t forget bundle install):

Gemfile

gem 'parallel_tests'

Now if you run 'for i in 1 2 3 4 5; do bundle exec parallel_rspec spec/features; done;' it will run all the tests simultaneously (in 4 processes if you have 4 CPUs), 5 times in a row, and you’ll be lucky if there are no failures.

Atomic App

Install Vagrant and prepare the f22atomic box image as described in http://fedoramagazine.org/using-fedora-22-atomic-vagrant-boxes or use a box of your choice.

Add a Vagrantfile to the project by executing:

vagrant init f22atomic
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! …

I decided to use the libvirt Vagrant provider, but the ‘virtualbox’ provider shows great results too.

Vagrantfile

Vagrant.configure(2) do |config|
  config.vm.box = "f22atomic"
  config.vm.provider "libvirt" do |libvirt|
    libvirt.memory = 2048
    libvirt.cpus = 4
  end
end

Let’s create the Vagrant provisioning scripts. We will provide headless browser testing with Xvfb and, in my case, RVM and Ruby.

Vagrantfile

config.vm.provision :shell, :path => "vagrant-install-xvfb.sh"
config.vm.provision :shell, :path => "vagrant-install-firefox.sh"
config.vm.provision :shell, :path => "vagrant-install-rvm.sh",  :args => "stable"
config.vm.provision :shell, :path => "vagrant-install-ruby.sh", :args => "2.2"

vagrant-install-xvfb.sh

#!/usr/bin/env bash

dnf install -y Xvfb

vagrant-install-firefox.sh

#!/usr/bin/env bash

dnf install -y firefox liberation-sans-fonts

vagrant-install-rvm.sh

#!/usr/bin/env bash

dnf install -y which #fix missing bash command for f22atomic
gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
curl -sSL https://get.rvm.io | bash -s $1

vagrant-install-ruby.sh

#!/usr/bin/env bash

source /usr/local/rvm/scripts/rvm

ruby_version=$1 # remember the version; the shift below changes $1
rvm use --install $ruby_version

shift

if (( $# ))
  then gem install $@ # any remaining arguments are treated as gems to install
fi

usermod -a -G rvm vagrant
rvm --default use $ruby_version
gem install bundler

Vagrant commands:

vagrant up #start the machine (provisioning runs the first time)
vagrant ssh #establish an SSH session to the running machine
vagrant halt #stop it
vagrant provision #re-run the provisioning scripts
vagrant destroy #stop and delete all resources

Now add the headless gem to the Gemfile and modify spec/test_helper.rb to use an Xvfb display.

Gemfile

gem 'headless'

spec/test_helper.rb

require 'headless'

ENV['UAT_TEST_NUMBER'] ||= "#{ (ENV['TEST_ENV_NUMBER'].to_s.empty? ? 1 : ENV['TEST_ENV_NUMBER']).to_i }" # parallel_tests sets TEST_ENV_NUMBER to '', '2', '3', ...

headless = Headless.new(
  display: "#{ 100 + ENV['UAT_TEST_NUMBER'].to_i }",
  reuse: true,
  dimensions: "1280x900x24"
) if ENV['HEADLESS']
RSpec.configure do |c|
  c.before(:suite) do
    if (ENV['HEADLESS'])
      p 'Starting Headless...'
      headless.start
    end
  end
  
  c.after(:suite) do
    if (ENV['HEADLESS'])
      headless.destroy
    end
  end
end

The headless configuration is triggered only if the HEADLESS environment variable is passed, so we can still run the tests in the usual way too.
Note the code display: "#{ 100 + ENV['UAT_TEST_NUMBER'].to_i }". With the parallel_tests gem we have the TEST_ENV_NUMBER environment variable, which is used here to start a separate Xvfb display for each set of tests.
* If you are going to use Jenkins to run the tests, include the Jenkins-provided BUILD_NUMBER variable in the calculation too, as sketched below.
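
A minimal sketch of what that calculation could look like (the *10 offset is a hypothetical choice; any scheme that keeps display numbers unique across concurrent builds will do):

build_number = ENV.fetch('BUILD_NUMBER', '0').to_i
test_number  = ENV['UAT_TEST_NUMBER'].to_i
headless = Headless.new(
  display: (100 + build_number * 10 + test_number).to_s, # unique per build and per parallel process
  reuse: true,
  dimensions: "1280x900x24"
) if ENV['HEADLESS']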

Inside the box we have the project mounted in the /vagrant folder by default (thanks to Vagrant Synced Folders). Now we can run the test suite like:

vagrant ssh
cd /vagrant
bundle install
HEADLESS=1 bundle exec parallel_rspec spec/features

And if you add --format ParallelTests::RSpec::SummaryLogger --out tmp/spec_summary.log to the parallel_rspec command, it will deliver the test results to the host’s <project>/tmp folder out of the box.

Vagrant is friends with Docker, so you can create a Dockerfile in your project and add more color to the test environment by providing aliases for the most common commands, but I do not intend to go deep into that.

Now the project is portable. You do not even need to have development tools installed to run the tests.

Is it stable?

It depends on your configuration. But this is a big step towards stability.

PS:

Oh, and there is an extra that other test environments can’t do so easily: video recording. Here is a quick example:

spec/test_helper.rb

headless = Headless.new(
  display: "#{ 100 + ENV['UAT_TEST_NUMBER'].to_i }",
  reuse: true,
  dimensions: "1280x900x24",
  video: {
    provider:   :ffmpeg,
    frame_rate: 12,
    codec:      :libx264,
    pid_file_name:  "/tmp/.headless_ffmpeg_#{ENV['UAT_TEST_NUMBER']}.pid",
    tmp_file_name:  "/tmp/.headless_ffmpeg_#{ENV['UAT_TEST_NUMBER']}.mp4" # temp video file, not the pid file
  }
) if ENV['HEADLESS']

RSpec.configure do |c|
  c.before(:each) do
    page.driver.browser.manage.window.resize_to(1280,900)
    headless.video.start_capture if ENV['HEADLESS']
  end
  
  c.after(:each) do |e|
    headless.video.stop_and_save "video/video_#{ENV['UAT_TEST_NUMBER']}_#{File.basename(e.metadata[:file_path])}.mp4" if ENV['HEADLESS']
  end
end

10x for reading!

Extface driver writing guide 2

The first post about Extface driver programming (Extface driver writing guide) was about the base send and receive methods. In this article we will create the second-layer methods that will allow us to reach the full device functionality. As I said before, we need to create a complex method that will:

  • build the packet for a specific command
  • send it to the device
  • re-transmit it if necessary
  • read and decode the response packet
  • check status bytes (returned with every command) for errors
  • and return unpacked data if everything is OK
  • or stop the execution by raising an exception

The following implementation is not so nice, but it works with the Daisy driver, which is very similar to the Datecs one. My desire now is to go live, and then we can optimize the code. First, we need to declare some constants for max retries and timeouts. The best place for them is at the beginning of the file, so we can easily tweak them later to tune the process. Then we create the “smart” frecv and fsend methods.

app/models/extface/driver/datecs/fp550.rb

RESPONSE_TIMEOUT = 3  #seconds
INVALID_FRAME_RETRIES = 6  #count (bad length, bad checksum)
ACKS_MAX_WAIT = 60 #count / nothing is forever
NAKS_MAX_COUNT = 3 #count
def frecv(timeout) # return Frame or nil
  if frame_bytes = pull(timeout)
    return Frame.new(frame_bytes.b)
  else
    errors.add :base, "No data received from device"
    return nil
  end
end

def fsend(cmd, data = "") # return data or nil
  packet_data = build_packet(cmd, data)
  result = false
  invalid_frames = 0
  nak_messages = 0
  push packet_data
  ACKS_MAX_WAIT.times do |retries|
    errors.clear
    if resp = frecv(RESPONSE_TIMEOUT)
      if resp.valid?
        human_status_errors(resp.status)
        if errors.empty?
          result = resp.data
          break
        else
          raise errors.full_messages.join(',')
        end
      else # ack, nak or broken packet
        if resp.nak?
          nak_messages += 1
          if nak_messages > NAKS_MAX_COUNT
            errors.add :base, "#{NAKS_MAX_COUNT} NAKs Received. Abort!"
            break
          end
        elsif !resp.ack?
          invalid_frames += 1
          if invalid_frames > INVALID_FRAME_RETRIES
            errors.add :base, "#{INVALID_FRAME_RETRIES} Broken Packets Received. Abort!"
            break
          end
        end
        push packet_data unless resp.ack? # re-transmit on NAK or broken packet
      end
    end
    errors.add :base, "#{ACKS_MAX_WAIT} ACKs Received. Abort!" if retries == ACKS_MAX_WAIT - 1 # give up waiting
  end
  return result
end

Now we should find a way to test this method. The Extface core uses a Redis server to communicate with the device. If Redis is available in test mode, we can run the command in a thread, then simulate the response, and then join the thread to test the result. The Extface core requires a job to be associated with the driver commands, otherwise it will raise an exception (this is part of the queue mechanism that takes care of the consistent execution of tasks; I was not planning to talk about it, but it is required for our test).

It is hard to create a fully functional test for the fsend method… I had to write an ActiveSupport::TestCase helper method, simulate_device_pull, to simulate the device connection. Here is the result:

test/models/extface/driver/datecs/fp550_test.rb

test "fsend" do
  job = extface_jobs(:one)
  job_thread = Thread.new do
    @driver.set_job(job)
    result = @driver.fsend(0x2C) # paper move command
  end
  simulate_device_pull(job)
  @driver.handle("\x01\x2C\x2F\x2D\x50\x04\x88\x80\xC0\x80\x80\xB0\x05\x30\x34\x35\x39\x03".b)
  simulate_device_pull(job)
  @driver.handle("\x01\x2C\x2F\x2D\x50\x04\x88\x80\xC0\x80\x80\xB0\x05\x30\x34\x35\x39\x03".b)
  job_thread.join
  assert @driver.errors.empty?
end

Hmm… I have to think about some kind of device simulator to be able to test the drivers properly. This test simply checks that nothing in the code raises an exception (a syntax error, for example). Anyway, it is still useful.

I started writing this article because I need this driver up and running in production. A friend of mine has prepared a real Datecs FP550 device, connected to a Windows computer in Bulgaria. It is 4700 miles away (about 8000 kilometers), but this is the magic of Extface: I will be able to control the device no matter where it is. I could not resist and did a simple paper cut command as a real test.

app/models/extface/driver/datecs/fp550.rb

def paper_cut
  device.session('Paper Cut') do |s|
    s.push build_packet(Printer::PAPER_CUT)
  end
end

app/views/extface/driver/datecs/fp550/_control.html.erb

<%= button_to 'Paper Cut', fiscal_device_path(@device), remote: true, name: :paper_cut, value: true %>

Commit, gem push, bundle update extface 😉 deploy! And surprisingly the command works.

The Extface gem provides a low-level communication log in ‘log/extface/:device_id/:driver_name.log’. Let’s take a look at it:

D, [2015-05-21T02:00:49.669430 #9703] DEBUG -- : --> 01 25 20 4A 58 05 30 30 3E 3C 03
D, [2015-05-21T02:00:51.375926 #9736] DEBUG -- : <-- 01 2B 20 4A 04 80 80 92 8F 80 B2 05 30 33 3F 31 03
D, [2015-05-21T02:00:51.391514 #9703] DEBUG -- : --> 01 24 21 2D 05 30 30 37 37 03
D, [2015-05-21T02:00:51.504858 #9727] DEBUG -- : <-- 16
D, [2015-05-21T02:00:51.573943 #9703] DEBUG -- : <-- 16
D, [2015-05-21T02:00:51.642706 #9727] DEBUG -- : <-- 16
D, [2015-05-21T02:00:51.714377 #9739] DEBUG -- : <-- 16 16
D, [2015-05-21T02:00:51.785687 #9712] DEBUG -- : <-- 16 16 01 2C 21 2D 46 04 80 80 92 8F 80 B2 05 30 34 31
D, [2015-05-21T02:00:51.855080 #9739] DEBUG -- : <-- 16 01 2C 21 2D 46 04 80 80 92 8F 80 B2 05 30 34 31 3C 03

This log contains the binary packet data as it is transferred over the TCP connection. First the driver sends a GET_STATUS (0x4A) command; if there is no error in the response (checking the status bits), the driver begins to execute the commands in the job session. We can see the PAPER_CUT (0x2D) command on row 3. Since the operation is delayed, the device sends several ACK (0x16) bytes until the paper is cut, and then returns the response packet. It looks like the response is transferred in two pieces, but our driver is ready to handle this behavior, and the job ends with the receipt of a 0x01..0x03 pattern packet.

The final step is to override the base methods in ‘/app/models/extface/driver/base/fiscal.rb’, which are required for a fiscal memory device driver model. That will come in the next article, coming soon.

10x for reading!

Sencha Touch Rails


Why mix a front-end MVC framework with a back-end MVC framework?

There is a trend in web programming toward increasingly widespread use of JavaScript MVC frameworks like AngularJS, Backbone or Sencha Touch. Since we have the power of the Ruby on Rails model-view-controller architectural pattern, what is the point of duplicating it on the front end? Server resources are expensive, and the clients are already powerful enough to deal with the visualization part of the application. Once the application is loaded, we can use the Rails back end more like an API, providing data in a permissive way and conserving our network and computing resources.

Therefore I upgraded my good old sencha-touch-rails gem and now I will explain a little how to use it.
It provides the GPL version of Sencha Touch to the Rails asset pipeline. Adding the gem to your project’s Gemfile is one line:
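
Gemfile

gem 'sencha-touch-rails'

Then you can load the JavaScript part of Sencha Touch in your application.js file with: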

app/assets/javascripts/application.js

//= require sencha-touch-rails

It will insert only the core of the Sencha Touch framework, but with Ext.Loader enabled, so the components will be loaded on the fly when they are requested.

Now let’s rename our application.css to application.scss, and we’ll be able to use @import rather than *= require (Sencha Touch uses a lot of Sass mixins and variables in different places, so @import is more friendly to it and does not require extra tuning and load-order attention).

app/assets/stylesheets/application.scss

@import "sencha-touch/themes/sencha-touch";

Sencha Touch comes with a set of ready-to-use themes (sencha-touch, cupertino, cupertino-classic, tizen, bb10, wp and mountainview).
As I start a fresh project, I wonder how to name my first controller… The minimum is not to create a controller at all. I’m gonna create a blank action with an enabled layout in my ApplicationController, and point root to ‘application#blank’. It works.

app/controller/application_controller.rb

def blank; render inline: "", layout: true; end

app/config/routes.rb

root 'application#blank'

We need a Sencha application initialization script, and it will be nice to use CoffeeScript. Create an init.coffee file:

app/assets/javascripts/init.coffee

Ext.application
  name: 'Sencha'
  launch: ->
    Ext.create "Ext.tab.Panel",
      fullscreen: true
      tabBarPosition: 'bottom'
      items: [
        title: 'Home'
        iconCls: 'home'
        html: 'Home'
      ,
        title: 'Settings'
        iconCls: 'settings'
        html: 'Settings'
      ]

That’s it. We have a nice-looking page with a bottom navigation bar and page transition effects.

Extface driver writing guide (Ruby)


I would like to show in practice how to write a new driver for the https://github.com/AlexVangelov/extface module.

My example device will be Datecs Fiscal Printer FP550, which requires fast two-way communication.

First, let’s take a glance at the protocol.
We have packet messages from host to printer, with a sequence number and a control sum, and packet or non-packet messages from printer to host:

0x15 (NAK) – means that we have to re-transmit the last packet with the same sequence number (of course not infinitely)
0x16 (ACK) – the device has a job to do and the host must wait (but again, nothing is forever)

For sending packets we need a function with 2 input parameters (cmd, data). The length, sequence number and checksum will be generated automatically.
For receiving data, we can decide that if the stream contains 0x15, one or more 0x16, or 0x03, it may be a valid packet and must be processed.

Fork the project and create a new device driver skeleton:

git clone git://github.com/AlexVangelov/extface.git
cd ./extface
bundle install
bundle exec bin/rails generate extface:driver datecs/fp550

The last command will create:
app/models/extface/driver/datecs/fp550.rb
app/views/extface/driver/datecs/fp550/_settings.html.erb
app/views/extface/driver/datecs/fp550/_control.html.erb
test/models/extface/driver/datecs/fp550_test.rb

Layer 1 (Send & Receive)

The Extface base driver functionality eliminates the need to think about how data is transferred through the network. Sending data is easy: just call push(some_data) and it will be delivered to the device. For receiving data we use data = pull(timeout_in_seconds), but before that we should tell the driver what to expect from the input stream. There is a built-in FIFO (first in, first out) buffer that contains everything received from the device, and it is served to the driver through the #handle(buffer) callback-like method. The method should return the number of bytes processed, which will be auto-deleted from the beginning of the buffer. If the received data is not enough to recognize a packet, the method may return nil and inspect the buffer next time, when fresh data has been appended to it. We have to override that method:

def handle(buffer)
  if i = buffer.index(/[\x03\x16\x15]/)   # find position of frame possible delimiter
    rpush buffer[0..i]                    # this will make data available for #pull(timeout) method
    return i+1                            # return number of bytes processed
  end
end

For an unpretentious driver, like simple terminal communication with a line delimiter, nothing more is needed. Just replace the regex with index("\r\n") and you will be able to talk with the device like:

device.session('Raw session') do |s|
  s.push "Extface rocks!"
  data = s.pull(5) # wait up to 5 seconds for a response
  s.push "Extface really rocks!" if data.present?
end

Back to the fiscal driver: check the frame recognition by writing the #handle method test (‘test/models/extface/driver/datecs/fp550_test.rb’). Serial communication is unstable and the driver must be ready to process any random bytes without raising an exception yet.

require 'test_helper'
module Extface
  class Driver::Datecs::Fp550Test < ActiveSupport::TestCase
    setup do
      @driver = extface_drivers(:datecs_fp550) # require device and driver fixtures
      @driver.flush # clear receive buffer
    end
    
    test "handle" do
      assert_equal nil, @driver.handle('bad packet')
      assert_equal 6, @driver.handle("\x01data\x03data"), "Frame not match"
      assert_equal 9, @driver.handle("pre\x01data\x03data"), "Frame with preamble not match"
      assert_equal 1, @driver.handle("\x16\x16\x01data\x03data"), "Frame with ACK not match"
      assert_equal 4, @driver.handle("pre\x15"), "NAK not match"
    end
  end
end

It’s time to declare all the command constants described in the device specification. Creating a separate file keeps the code clear and allows reusing it for future Datecs drivers.

app/models/extface/driver/datecs/commands_v1.rb

module Extface
  module Driver::Datecs::CommandsV1
    STX = 0x01
    PA1 = 0x05
    PA2 = 0x04
    ETX = 0x03
    NAK = 0x15
    SYN = 0x16
    module Init
      SET_MEMORY_SWITCHES         = 0x29
      SET_FOOTER                  = 0x2B
      ...
    end
    module Info
      GET_DATE_HOUR               = 0x3E
      ...
      GET_STATUS                  = 0x4A #Receiving the status bytes
    end
  end
end

Add include Extface::Driver::Datecs::CommandsV1 to the driver model.
We need a method for building packets, with 2 input parameters: a 1-byte command and a binary data string (optional). The auto-generated sequence number and the checksum calculation procedure do not need to be public.

def build_packet(cmd, data = "")
  "".b.tap() do |packet|
    packet << STX                    #Preamble. 1 byte long. Value: 01H.
    packet << 0x20 + 4 + data.length #Number of bytes from preamble (excluded) to post-amble (included) plus the fixed offset of 20H
    packet << sequence_number        #Sequence number of the frame. Length: 1 byte. Value: 20H - FFH.
    packet << cmd                    #Command. Length: 1 byte. Value: 20H - 7FH.
    packet << data                   #Data. Length: 0 - 218 bytes for host to printer.
    packet << PA1                    #Post-amble. Length: 1 byte. Value: 05H.
    packet << bcc(packet[1..-1])     #Control sum (0000H-FFFFH). Length: 4 bytes. Value of each byte: 30H-3FH.
    packet << ETX                    #Terminator. Length: 1 byte. Value: 03H.
  end
end

private

  def bcc(buffer)
    sum = 0
    buffer.each_byte{ |b| sum += b }
    "".b.tap() do |bcc|
      4.times do |halfbyte|
        bcc.insert 0, (0x30 + ((sum >> (halfbyte*4)) & 0x0f)).chr
      end
    end
  end

  def sequence_number
    @seq ||= 0x1f
    @seq += 1
    @seq = 0x1f if @seq == 0x7f
    @seq
  end

To test the packet generation, find a valid packet example in the documentation, or obtain one by monitoring the vendor driver communication.

test "build packet" do
  assert_equal "\x01\x24\x20\x4a\x05\x30\x30\x39\x33\x03".b, @driver.build_packet(0x4a), "packet without data"
  assert_equal "\x01\x25\x21\x4a\x58\x05\x30\x30\x3e\x3d\x03".b, @driver.build_packet(0x4a, 'X'), "packet with data"
end
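
The expected bytes can be verified by hand, too. A quick IRB session reproducing the bcc encoding from build_packet for the packet without data:

sum = 0x24 + 0x20 + 0x4a + 0x05 # length + sequence + command + post-amble => 0x93
4.times.map { |h| (0x30 + ((sum >> ((3 - h) * 4)) & 0x0f)).chr }.join
#=> "0093", i.e. "\x30\x30\x39\x33", matching the first assertion above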

To keep the pleasure of programming, we can try to send (unconditionally) a simple command, like paper cut, to a real device. Prepare a Rails application with the extface module included in the Gemfile with a relative path (gem 'extface', path: '../extface'), a model with has_extface_devices, and a route extface_for :model in the resources section (see the https://github.com/AlexVangelov/extface readme). Go to model_extface_path and create a new device. The new driver ‘Datecs FP550’ is now available for selection (group Fiscal Printers & Cash Registers). Copy the ‘Client Pull Url’ from the device page and run the extface client:

extface.exe http://localhost:3000/shops/1/shop_extface/a649a221ec1cebd0cacbc3ccf4846dba COM1,9600,8N1

Only a Windows software client exists so far, and if your development machine is Unix-based (like mine), you have to run it in a virtual Windows machine or on a separate computer. In this case replace ‘localhost’ with the IP address, and make sure your firewall settings will not block the port.

Our simple command will be push(build_packet(Printer::PAPER_CUT)). To make it available from the interface, we have to create a paper_cut method in the driver model, and add a link in _control.html.erb (it’s the driver control panel and we will come back to it when the driver is ready):

app/models/extface/driver/datecs/fp550.rb

def paper_cut
  device.session('Paper Cut') do |s|
    s.push build_packet(Printer::PAPER_CUT)
  end
end

app/views/extface/driver/datecs/fp550/_control.html.erb

<%= button_to 'Paper Cut', fiscal_device_path(@device), remote: true, name: :paper_cut, value: true %>

The paper cut button will be accessible in the control section of the device page. In ideal conditions it should work, but it’s just a test and it is not enough.
In the next layer of the driver we need to create a complex method that will build the packet, send it to the device, re-transmit it if necessary, read the response packet, check the status bytes (returned with every command) for errors, and return the unpacked data if everything is OK.
But before that, the driver should be able to receive packets from the device.

A good approach for dealing with a response packet is to convert it to an object with clean properties (data length, sequence number, command, data bytes, status, checksum and error messages). It can be a private subclass of the driver model, with ActiveModel::Validations included. Any errors of the packet will be accessible through the built-in Errors object after initialization. Nice!
Now I’m a little confused, because this will repeat similar functionality (the checksum calculation). Maybe building the packet should use the same subclass object… Anyway, for now we will make the checksum calculation a class method and reuse it. At the end we can play with optimizations and test the memory and processor consumption of the different variants.

class Frame
  include ActiveModel::Validations
  attr_reader :frame, :len, :seq, :cmd, :data, :status, :bcc
  
  validates_presence_of :frame, unless: :unpacked?
  validate :bcc_validation
  validate :len_validation
  
  def initialize(buffer)
    if match = buffer.match(/\x01(.{1})(.{1})(.{1})(.*)\x04(.{6})\x05(.{4})\x03/nm)
      @frame = match.to_a.first
      @len, @seq, @cmd, @data, @status, @bcc = match.captures
    else
      if buffer[/^\x16+$/] # only ACKs
        @ack = true
      elsif buffer.index("\x15")
        @nak = true
      end
    end
  end
  
  def ack?; !!@ack; end #should wait, response is yet to come
        
  def nak?; !!@nak; end #should retry command with same seq

  private
    def unpacked? # is it packed or unpacked message?
      @ack || @nak
    end

    def bcc_validation
      unless unpacked? # only packed frames carry a checksum
        calc_bcc = self.class.bcc frame[1..-6]
        errors.add(:bcc, I18n.t('errors.messages.invalid')) if bcc != calc_bcc
      end
    end
    
    def len_validation
      unless unpacked?
        errors.add(:len, I18n.t('errors.messages.invalid')) if frame.nil? || len.ord != (frame[1..-6].length + 0x20)
      end
    end
  
    class << self
      def bcc(buffer) #TODO remove old implementation
        sum = 0
        buffer.each_byte{ |b| sum += b }
        "".tap() do |bcc|
          4.times do |halfbyte|
            bcc.insert 0, (0x30 + ((sum >> (halfbyte*4)) & 0x0f)).chr
          end
        end
      end
    end
end

I’m not gonna talk about regular expressions in Ruby; you can extract the packet parts with any code you like. My personal practice is to play for some time with an online regexp tester and then put the expression in the code. The tests will then show whether it is correct. Find an example response packet and test the new class:

test "response frame" do
  frame_class = @driver.class::Frame
  assert frame_class.new("\x15").nak?, "NAK message failed"
  assert frame_class.new("\x16\x16").ack?, "ACK message failed"
  assert_nothing_raised do
    assert_equal false, frame_class.new("bad data\x01\x25\x21\x4asome broken packet\x58\x05\x30\x30\x3e\x3d\x03".b).valid?
  end
  frame = frame_class.new("\x16\x01\x2C\x2F\x2D\x50\x04\x88\x80\xC0\x80\x80\xB0\x05\x30\x34\x35\x39\x03".b)
  assert frame.valid?, "Valid frame not recognized"
  assert_equal "\x01\x2C\x2F\x2D\x50\x04\x88\x80\xC0\x80\x80\xB0\x05\x30\x34\x35\x39\x03".b, frame.frame
  assert_equal "\x2c".b, frame.len
  assert_equal "\x2f".b, frame.seq
  assert_equal "\x2d".b, frame.cmd
  assert_equal "\x50".b, frame.data
  assert_equal "\x88\x80\xC0\x80\x80\xB0".b, frame.status
  assert_equal "\x30\x34\x35\x39".b, frame.bcc
  #bad check sum
  frame = frame_class.new("\x01\x2C\x2F\x2D\x50\x04\x88\x80\xC0\x80\x80\xB0\x05\x30\x34\x35\x38\x03".b)
  assert_equal false, frame.valid?
  assert frame.errors.messages[:bcc]
  #bad length
  frame = frame_class.new("\x01\x2b\x2F\x2D\x50\x04\x88\x80\xC0\x80\x80\xB0\x05\x30\x34\x35\x38\x03".b)
  assert_equal false, frame.valid?
  assert frame.errors.messages[:len]
end

The last thing we need before we move to the next layer is the ability to decode the status bytes (included in each response packet). The messages must be human readable. Read the device documentation carefully and check the bits that must stop the session execution.

def human_status_errors(status) #inspect 6 bytes status
  status_0 = status[0].ord
  errors.add :base, "Fiscal Device General Error" unless (status_0 & 0x20).zero?
  ...
end

The topic is too spacious for a single article, so that’s it for now (to be continued).
10x for reading!
