
Testing Stripe transactions with Scala

I’m nearing completion on my latest project, Training Sleuth, and I’ve once again decided to use Stripe as my payment processor.  I used Stripe on my previous project, Rhino SchoolTracker, and have absolutely nothing negative to say about it :).

While with Rhino SchoolTracker I tested my Stripe payments manually, I decided I wanted to write some automated integration tests this time around (particularly since I’m using Scala for this project, which has a decidedly slower and more unwieldy edit-compile-test loop than Ruby does).

The Stripe API is actually split into two parts: a client-side JavaScript library (stripe.js), and a server-side API available in several of the popular server-side web languages (PHP, Java, Ruby, Python, and JavaScript for Node.js, last time I checked).  Anyway, the basic concept goes like this:

  1. You serve an HTML payment form (with fields for CC number, etc.) from your server to the client.
  2. When the client submits the form, instead of sending it to your server, you use stripe.js to grab the form data and send it to Stripe’s servers, which will validate the card and return a unique token (or an error message in the case of invalid/expired credit cards, etc.) via an ajax request.
  3. Once you have the stripe card token, you send it up to your server, do whatever processing you need to do on your end (grant access to your site, record an order, etc.), and then submit a charge to Stripe using the Stripe API.

The key feature of all of this is that the user’s credit card information never touches your server, so you don’t need to worry about PCI compliance and all the headaches that go with it (yes, Stripe does require you to use SSL, and despite their best efforts it is possible to mis-configure your server in such a way as to expose user payment info if you don’t know what you’re doing).

Now Stripe offers a test mode, which is what we’ll be using here, with a variety of test card numbers to simulate various conditions (successful charge, declined card, expired card, etc.).  The main problem I ran into writing automated tests in Scala was that I needed to use stripe.js to generate a card token before I could interact with the server-side (Java) API.

Enter Rhino, a JavaScript interpreter for Java.  Using Rhino, I was able to whip up some quick-and-dirty JavaScript to generate a Stripe token and call it from Scala.  Of course, Rhino alone wasn’t enough — I also needed to bring in Envjs and create some basic HTML to simulate a browser environment for stripe.js.

First, here’s my stripetest.js:
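The original script isn’t shown, but a minimal sketch looks something like this. I’m assuming the v2 Stripe.js API (`Stripe.card.createToken`; older versions used `Stripe.createToken`), a placeholder publishable test key, and file paths that match the test/resources layout described below:

```javascript
// Load the simulated browser environment (Envjs), then the test page,
// which in turn pulls in stripe.js -- paths here are assumptions.
load('src/test/resources/env.rhino.js');
window.location = 'src/test/resources/stripetest.html';

Stripe.setPublishableKey('pk_test_YOUR_TEST_KEY'); // placeholder key

// Ask Stripe for a card token using one of their standard test cards.
// The callback receives either a token id or an error message.
function generateToken(callback) {
  Stripe.card.createToken({
    number: '4242424242424242', // Stripe's "always succeeds" test card
    cvc: '123',
    exp_month: 12,
    exp_year: 2030
  }, function (status, response) {
    if (response.error) {
      callback(null, response.error.message);
    } else {
      callback(response.id, null); // response.id is the token, 'tok_...'
    }
  });
}
```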

You also need to provide some basic HTML.  I created a file called ‘stripetest.html’ which merely contained this:
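Something like the following is enough for Envjs to hand stripe.js a document to attach to (the script URL assumes the v2 library):

```html
<html>
  <head>
    <script type="text/javascript" src="https://js.stripe.com/v2/"></script>
  </head>
  <body></body>
</html>
```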

Simple, but this was enough to get things working.

I dropped these files (along with env.rhino.js which I obtained from the Envjs website) into my test/resources folder.

With all of that in place, I was able to write some specs2 tests:
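Here’s a sketch of what such a spec can look like. The `StripeTokenHelper` wrapper around the Rhino script is an assumption (any class that evaluates stripetest.js and returns the token will do), and the secret key is a placeholder:

```scala
import org.specs2.mutable.Specification
import com.stripe.Stripe
import com.stripe.model.Charge
import scala.collection.JavaConverters._

class StripeChargeSpec extends Specification {

  // Placeholder secret *test* key -- never a live key in specs
  Stripe.apiKey = "sk_test_YOUR_TEST_KEY"

  "Stripe" should {
    "successfully charge a valid test card" in {
      // Hypothetical helper that runs stripetest.js via Rhino and
      // returns the card token string
      val token = StripeTokenHelper.generateToken()

      val params = Map[String, AnyRef](
        "amount"   -> Int.box(5000), // $50.00, in cents
        "currency" -> "usd",
        "card"     -> token
      ).asJava

      val charge = Charge.create(params)
      charge.getPaid.booleanValue must beTrue
    }
  }
}
```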

There you go, kind of painful to get set up, but definitely nice to have.


Backbone and Full Calendar

My current project, Training Sleuth, involves scheduling and keeping track of various training events.  While not strictly necessary, I’m fairly certain potential users are going to want to view all of the events they’ve painstakingly entered into the app in a nice familiar calendar format.

I’m using Backbone for my application’s front-end (which interacts with a REST backend written in Scala), and unfortunately I couldn’t find a JavaScript calendar library that was designed to work with Backbone out of the box.  I briefly toyed with the idea of creating my own, but quickly rejected the idea when I realized such a thing would be a project unto itself, and would just serve as a distraction to getting my current app finished.  The next best thing to Backbone-Calendar, of course, would be a well-designed, highly configurable calendar that’s flexible enough to be used with just about anything.  Full Calendar is just such a library, and also happens to be widely used (and surprisingly well documented for an open source JavaScript library; perhaps that’s why it’s widely used).

FC is based on ‘event’ objects, and provides several different mechanisms for feeding events to the calendar.  For integrating FC’s event objects with my Backbone app, I used FC’s custom events function to translate between Backbone models and FC event objects.  The events method is called whenever FC needs more event data (for example, when the user is paging through months).

Here’s one of my view methods that renders a calendar, and defines my ‘events’ method (warning for JavaScript purists: I use CoffeeScript, and yes, I realize this means we can’t be friends):
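A sketch along those lines (element names, model attributes, and the `from`/`to` query parameters are assumptions; the `events` callback signature is FullCalendar v1’s `(start, end, callback)`):

```coffeescript
class CalendarView extends Backbone.View
  render: ->
    @$el.fullCalendar
      header:
        left: 'prev,next today'
        center: 'title'
        right: 'month,agendaWeek'
      # FC calls this whenever it needs events for the visible date range
      events: (start, end, callback) =>
        events = new EventList()
        events.fetch
          # the date/time range we're requesting event data for
          data: { from: start.getTime(), to: end.getTime() }
          success: (collection) ->
            # translate Backbone models into FC event objects
            callback collection.map (model) ->
              id: model.id
              title: model.get('title')
              start: model.get('start')
              end: model.get('end')
    @
```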

EventList is a fairly vanilla Backbone Collection that fetches event data (training session date/time, etc.) from my back-end REST service (the data object in the fetch statement defines the date/time range for which I’m requesting event data).  In all, it was surprisingly… easy.  I overrode the default CSS a bit to match the rest of my UI (I’m using Bootstrap 3), and the result doesn’t look half bad (IMHO):

[Screenshot: the Full Calendar view, restyled to match the Bootstrap 3 UI]


Graham’s Scan in Scala

Sometimes my job throws an interesting problem my way.  This week I was presented with a very odd geometry problem :)

I needed to generate KML files from geographic data, and one of my requirements was to represent certain geographic areas as polygons, the vertices of which would be supplied (along with the rest of the data) by another OSGi service running elsewhere.  Seems fairly straightforward — in KML, the vertices of a polygon are usually specified as follows:
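A typical KML polygon looks roughly like this (the coordinates here are made up; each triple is longitude,latitude,altitude, and the last triple repeats the first to close the ring):

```xml
<Polygon>
  <outerBoundaryIs>
    <LinearRing>
      <coordinates>
        -122.366278,47.617672,0
        -122.365248,47.617672,0
        -122.365248,47.618371,0
        -122.366278,47.617672,0
      </coordinates>
    </LinearRing>
  </outerBoundaryIs>
</Polygon>
```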

The coordinates tag requires at least four longitude/latitude/altitude triples to be specified, with the last coordinate being the same as the first.   Here is where the problem comes in — the order in which these coordinates are specified matters (they must be specified in counter-clockwise order).  Mixing up the order of the coordinates would have unpredictable results (e.g. crazy geometric shapes) when the data is later displayed via Google Earth (or some other application that supports KML files).  However, the area vertices are indeed fed to my KML generator in no particular order (and the services providing the data cannot be changed to guarantee a particular ordering).

So… how do I put the points in order?  “Surely this is a solved problem,” I thought, turning to the all-knowing internet.  A bit of searching turned up an algorithm called Graham’s Scan.  Basically, this algorithm takes a bag of random coordinates and generates a convex hull with vertices defined in counter-clockwise order (Note: This may not be suitable if you’re trying to faithfully recreate complex geometries; fortunately I’m mostly concerned with rectangular areas).  Roughly, the algorithm works as follows:

  1. Find the coordinate with the lowest y-value.
  2. Sort the remaining points by the polar angle that the line from the point found in step 1 to each point makes with the x-axis.
  3. Go through the list, evaluating 3 points at a time (you’re concerned with the angle of the turn made by the two resulting line segments).  Eliminate any non-counter-clockwise turns (i.e., non-convex corners).

I found several example implementations for this algorithm in various languages: C++, Java, etc.  Since I’m coding this up in Scala, I wasn’t too happy with any of those, and I couldn’t find an example in Scala to rip off (er, draw inspiration from).  However, I did manage to find an implementation in Haskell which I used as a rough guide.  Anyway, here’s my attempt at Graham’s Scan in Scala:
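A sketch of the approach (the tuple-based `Point` type and the lowest-x tie-break are my own choices):

```scala
object GrahamScan {
  type Point = (Double, Double)

  // Cross product of (p1 -> p2) and (p1 -> p3): positive means a
  // counter-clockwise (left) turn, negative clockwise, zero collinear.
  def ccw(p1: Point, p2: Point, p3: Point): Double =
    (p2._1 - p1._1) * (p3._2 - p1._2) - (p2._2 - p1._2) * (p3._1 - p1._1)

  def convexHull(points: List[Point]): List[Point] = {
    // Step 1: the pivot is the point with the lowest y (lowest x breaks ties)
    val pivot = points.minBy(p => (p._2, p._1))
    // Step 2: sort the rest by polar angle around the pivot
    val sorted = points.filterNot(_ == pivot)
      .sortBy(p => math.atan2(p._2 - pivot._2, p._1 - pivot._1))
    // Step 3: fold through the sorted points, popping any point that
    // would make a non-counter-clockwise turn
    val hull = sorted.foldLeft(List(pivot)) { (acc, p) =>
      var h = acc
      while (h.size >= 2 && ccw(h(1), h(0), p) <= 0) h = h.tail
      p :: h
    }
    hull.reverse // counter-clockwise order, starting at the pivot
  }
}
```

Feeding it the corners of a unit square plus an interior point, e.g. `GrahamScan.convexHull(List((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.5)))`, drops the interior point and returns the four corners in counter-clockwise order.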

I think you’ll find that’s a bit shorter than some of the imperative language implementations out there :)


Akka and Scalatra

On my current project, I’ve been using Akka to handle the Service Layer of my application while using Scalatra for my REST controllers.  This combination works quite well, though it took me a little bit of time to figure out how to integrate Scalatra and Akka.  The examples presented on the Scalatra website didn’t exactly work for me (it’s possible they’ve since been fixed).  But after some studying of the Akka and Scalatra API documentation and some good ol’ fashioned trial and error, I got to something that worked.  First, Akka actors are set up and initialized in ScalatraBootstrap.scala thusly:
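A sketch using the Akka 2.2-era routing API (the actor and controller names are placeholders):

```scala
import akka.actor.{ActorSystem, Props}
import akka.routing.SmallestMailboxRouter
import javax.servlet.ServletContext
import org.scalatra.LifeCycle

class ScalatraBootstrap extends LifeCycle {
  val system = ActorSystem("appActorSystem")

  // The router creates up to 10 child actors and delivers each incoming
  // message to whichever one currently has the emptiest mailbox
  val userActor = system.actorOf(
    Props[UserActor].withRouter(SmallestMailboxRouter(nrOfInstances = 10)),
    "userRouter")

  override def init(context: ServletContext) {
    context.mount(new UsersController(system, userActor), "/users")
  }

  override def destroy(context: ServletContext) {
    system.shutdown() // stop all actors when the webapp is torn down
  }
}
```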

I’m initializing each actor with a router (in this case, a SmallestMailboxRouter, though others, such as RoundRobinRouter, are also available).  The router will create up to 10 child actors and route incoming messages to the actor with the fewest messages in its mailbox.

The Scalatra controller responds to a request for a resource by sending a message to the appropriate actor (I’m using one Actor type per resource) using a Future and returning the result.  Scalatra provides an AsyncResult construct that helps here:
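A sketch of such an action (it assumes the controller mixes in `FutureSupport` with an implicit `ExecutionContext` such as `system.dispatcher`, and that the message classes look like the ones further down):

```scala
// Inside the controller:
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import org.scalatra.AsyncResult

get("/:id") {
  new AsyncResult {
    val is = {
      implicit val timeout = Timeout(10.seconds)
      // Ask the actor; it replies with Either[(Int, String), User]
      (userActor ? GetUser(params("id").toLong))
        .mapTo[Either[(Int, String), User]]
        .map {
          case Right(user)       => user            // serialized to JSON
          case Left((code, msg)) => halt(code, msg) // error code + message
        }
    }
  }
}
```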

My actor here happens to return an ‘Either’ type in response to a request.  By convention, a ‘Left’ response indicates an error condition (in this case a tuple containing the HTTP error code to return and a message), and a ‘Right’ response indicates success and contains the requested data (a ‘User’ object). The actor itself looks like this:
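A sketch of that shape (the message protocol and the `UserDao` persistence layer are assumptions):

```scala
import akka.actor.Actor

// Message protocol: one case class per operation
case class GetUser(id: Long)
case class CreateUser(user: User)

class UserActor extends Actor {
  // Each message is passed to a handler, and the handler's result goes
  // straight back to the message's sender (the controller)
  def receive = {
    case GetUser(id)      => sender ! handleGet(id)
    case CreateUser(user) => sender ! handleCreate(user)
  }

  // By convention: Left((httpCode, message)) on error, Right(data) on success
  def handleGet(id: Long): Either[(Int, String), User] =
    UserDao.find(id) match { // hypothetical persistence layer
      case Some(user) => Right(user)
      case None       => Left((404, s"User $id not found"))
    }

  def handleCreate(user: User): Either[(Int, String), User] =
    Right(UserDao.create(user))
}
```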

The message types are implemented as case classes, and enter the actor in the ‘receive’ method, which passes each message to a handler and returns the result to the message’s ‘sender’ (the controller).


Spring, OSGi, and dropped services

My current job has had me working with a number of technologies with which I was completely unfamiliar when I started. The Spring Framework and OSGi are two such technologies that I believe I’ve become fairly comfortable with, yet still manage to throw something new and/or bizarre at me every once in a while.

My latest issue involved convincing Spring to register an OSGi service. Seems simple enough, Spring offers pretty good OSGi support, and one can usually register a service by doing something like this:
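In Spring DM XML it looks roughly like this (class and interface names are placeholders, and the namespace declarations are omitted):

```xml
<bean id="theService" class="com.example.MyServiceImpl"/>

<osgi:service ref="theService" interface="com.example.MyService"/>
```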

Pretty straightforward.  It creates a bean called theService, then uses that bean to publish a service.  Spring automagically takes care of registering the new service in your container’s OSGi service registry (in my case, I’m running Virgo).  I’ve done something similar to this many times with no problem.  However, this time I needed to do something slightly more complex:
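The shape of the problem case, reconstructed from the description below (again, names are placeholders): beanA consumes an outside service via `osgi:reference`, and beanB, which depends on beanA, is published as a service.

```xml
<osgi:reference id="someOtherService" interface="com.example.OtherService"/>

<bean id="beanA" class="com.example.BeanA">
    <property name="otherService" ref="someOtherService"/>
</bean>

<bean id="beanB" class="com.example.BeanB">
    <property name="beanA" ref="beanA"/>
</bean>

<osgi:service ref="beanB" interface="com.example.ServiceB"/>
```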

Now for the problem… my “beanB” service was NOT being published.  I tried everything I could think of, switched my beans to constructor injection (no idea why that should work), verified that both my “beanA” and “beanB” beans were being created, etc., and everything looked fine… but the service wasn’t showing up!  Checking the logs… I found nothing.  No exceptions, no warnings, just the lack of the usual log message Spring generates when it publishes a new service.

So what was going on?  By using the good ol’ “comment things out until it works, then reverse the process until it breaks” method of debugging, I identified the first line of my services.xml file, the osgi:reference tag, as the source of the problem.

The osgi:reference tag denotes a service that is consumed, rather than published.  BeanA uses that service, and this is where the problem comes in.  That otherService was something I had yet to implement.  It wasn’t strictly necessary for “beanA” to work, and beanA was being initialized just fine, but the lack of “someOtherService” in the OSGi service registry was triggering an obscure feature of Spring (or at least it was obscure to me, it’s entirely possible I’m the last Spring/OSGi user on Earth to learn about this).

I found a description of the problem, and its solution, in the spring documentation:

7.5. Relationship Between The Service Exporter And Service Importer
An exported service may depend, either directly or indirectly, on other services in order to perform its function. If one of these services is considered a mandatory dependency (has cardinality 1..x) and the dependency can no longer be satisfied (because the backing service has gone away and there is no suitable replacement available) then the exported service that depends on it will be automatically unregistered from the service registry – meaning that it is no longer available to clients. If the mandatory dependency becomes satisfied once more (by registration of a suitable service), then the exported service will be re-registered in the service registry.
In other words, it doesn’t matter that my beanB service didn’t depend directly on someOtherService; the fact that an unpublished service was somewhere in beanB’s chain of dependencies meant that Spring was simply going to refuse to publish beanB as an OSGi service.  It’s easy to imagine a rather hilarious cascade of dropping services depending on how you have beans and OSGi services wired together.
The solution lies in an option to the osgi:reference tag, namely “cardinality.”  From the Spring docs:
The cardinality attribute is used to specify whether or not a matching service is required at all times. A cardinality value of 1..1 (the default) indicates that a matching service must always be available. A cardinality value of 0..1 indicates that a matching service is not required at all times (see section 4.2.1.6 for more details).

So, changing the first line in my xml to this:
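Presumably something like this (interface name is a placeholder):

```xml
<osgi:reference id="someOtherService" interface="com.example.OtherService" cardinality="0..1"/>
```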

fixed the issue.  In my opinion the default cardinality should be “0..1”, rather than “1..1”.  At the very least, if a mandatory service reference cannot be satisfied, some sort of conspicuous error log message ought to be generated.  But anyway, if you were having a similar problem and found your way here, hopefully I just saved you several hours of pain, anguish, and misery.

Getting Scala, HATEOAS, and JSON to work together

I’ve been working with Scala for the last few months on a new project, and I’ll confess that it’s starting to grow on me (this is in stark contrast to Java, which I’m liking less the more I learn about it).

My current project has me creating a REST API using Scalatra along with a front-end built with CoffeeScript and Backbone.js.  This definitely has a different feel to it than a typical web application built using one of the uber-frameworks like Rails.  The lack of tight integration between back-end and front-end has its advantages, but also introduces a few issues that must be sorted out.  One of these issues that I’ve recently happened upon involves controlling how a user may interact with resources on the server based on his or her access level (or ‘role’).  For example, if I have a database table called ‘people’, each row containing a record for a member of an organization, I probably want to control who can do what with said records.  Perhaps standard users are only allowed to view these people records, managers are allowed to edit them, and administrators may delete or create new records.

This is a trivial problem with a traditional web app, but in the case of a REST API, consider this:  I request a list of people records from my server by issuing a GET request to http://myserver.com/api/persons.  The server checks my credentials, and returns a list of 20 records of people in, say, the accounting department.  The client (whether it be a web app, mobile app, etc.) renders a nice, spiffy table full of people records.  The client interface also has several buttons that allow me to manipulate the data.  Buttons with such labels as ‘View Record’, ‘Edit Record’, ‘Delete Record’, etc.

Now we have an issue.  Let’s say I’m the manager of 6 people in the accounting department, but the other 14 belong to other managers.  It has been decided that managers should be able to view the records of other personnel in the organization, but should only be able to edit records for their own.  Further, only Administrators (let’s say HR folks) can delete a record. No problem, you might say, just have the server check the user’s role regarding a person record before executing a request to update or delete a record.  We can make this easy by adding a ‘manager_id’ field to the ‘person’ table identifying each person’s manager.

Of course, that would work fine.  The problem, however, is not in ‘correctness’ of the application, but in the user-friendliness of the client interface.  The client has no way of knowing your permissions in regards to each person record so it displays buttons for every possible action that can be taken for each and every one, relying on the server to sort things out on the back-end and return an error if you try to do something illegal.  It would be better if we could have the server send down a list of actions the authenticated user is allowed to take for each record, then we could simply not display (or grey-out) the related interface elements (buttons, drop-down items, etc.) for non-specified actions, giving the user an instant visual cue regarding what he’s allowed to do.  While we’re at it, why not send down a link to the REST call for each of the allowed actions as well?

This is where HATEOAS (Hypermedia as the Engine of Application State) comes in.  For a more thorough explanation, go to the Wikipedia page.  Basically, a HATEOAS compliant REST service requires the server to, along with the resource data itself, send a list of actions (and links) that may be performed on or with that resource.  It’s probably easier explained via example.

First, here’s a plain JSON object returned from a non-HATEOAS compliant service:
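Something like this (the fields are illustrative):

```json
{
  "id": 42,
  "firstName": "John",
  "lastName": "Doe",
  "department": "Accounting"
}
```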

Just a bag of data — no information regarding what I should, or what I’m allowed to do with it. Well, how about this:
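The same record with a HAL-style `_links` section might look like this (the `method` hints go beyond what bare HAL specifies, and the routes are illustrative):

```json
{
  "id": 42,
  "firstName": "John",
  "lastName": "Doe",
  "department": "Accounting",
  "_links": {
    "self":   { "href": "/api/persons/42" },
    "update": { "href": "/api/persons/42", "method": "PUT" },
    "delete": { "href": "/api/persons/42", "method": "DELETE" }
  }
}
```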

The _links section of this object tells me that I’m allowed to update AND delete this record, and provides links to the REST calls necessary to perform those actions. It also includes a link to itself. By the way, there are several “standard” formats out there for returning these links, I’m attempting to follow HAL. For more fun, you could also include the MIME-type for the data that each action would return (JSON, HTML, PDF, whatever).

The concept is rather simple, and definitely beats the hackish ideas I initially had for solving this issue. However, and this could just be my relative new-ness to Scala, it did take a bit of effort to figure out how to get the server to spit out correctly formatted JSON for the HAL links (I didn’t want my _links section to be sent as an array, for example, or the myriad other ways the Jackson default serializer decided to do it before I sorted it out). I eventually came up with something like this (ok, exactly this):
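The core of a link generator can be sketched like this (role names, the `isOwnReport` rule, and the routes are assumptions; serializing the resulting `Map` with json4s/Jackson then yields a `_links` object rather than an array):

```scala
// A single HAL-ish link; `method` is an extension beyond bare HAL
case class Link(href: String, method: String = "GET")

object HalLinks {
  // Build the _links map for a person record based on the caller's role:
  // everyone gets "self", managers can update their own reports,
  // admins can update and delete anything.
  def forPerson(id: Long, role: String, isOwnReport: Boolean): Map[String, Link] = {
    val self = Map("self" -> Link(s"/api/persons/$id"))
    val update =
      if (role == "admin" || (role == "manager" && isOwnReport))
        Map("update" -> Link(s"/api/persons/$id", "PUT"))
      else Map.empty[String, Link]
    val delete =
      if (role == "admin")
        Map("delete" -> Link(s"/api/persons/$id", "DELETE"))
      else Map.empty[String, Link]
    self ++ update ++ delete
  }
}
```

So a manager viewing one of their own people gets `self` and `update` links, while a plain user gets only `self`, and the client can grey out buttons accordingly.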

I use this code to generate each object’s _link section before pushing it down to the client. It’s not by any means a fully-realized HAL implementation, but it solves my main issue for now, and I can easily add more functionality as needed.


Scala and Scalatra

I’ve been using Ruby on Rails almost exclusively for my web projects over the last year or two. Recently, when I had an idea for a new project, I decided to try something a little different.

My current Rails project, Rhino SchoolTracker, is a traditional CRUD-type web application that is fairly well suited to the Rails way of doing things. For this new project, however, I wanted to completely decouple my server side code from my front-end web application.

My idea is to create a simple REST API for the back-end services, and build the web UI using Backbone and Bootstrap. This also has the benefit of providing significant flexibility for possible mobile clients later. For the server side stuff, I could have turned to Rails again, but that seemed like overkill when I would only be using a subset of its features.

I stumbled upon Scala while researching alternative server-side languages. While I would never use Java if I had a choice in the matter, the idea behind Scala is a good one. Fix the basic problems with Java (the language) and add functional programming support, all while retaining compatibility with the vast Java ecosystem and the ability to run on the mature (mostly, after all these years/decades) JVM. It should also be significantly faster and scale better than anything written in interpreted languages like Ruby or Python.

Scalatra

Scala has a number of web frameworks available to it.  Lift and Play are probably the most popular.  However, I wanted something lightweight, so I looked and found a minimalistic framework called Scalatra, which attempts to mimic the excellent Sinatra framework over in Ruby-land.  So, I decided to give it a shot.

Scalatra relies on the Simple Build Tool (sbt), and setting up a new project is fairly simple using g8:
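Assuming the `scalatra/scalatra-sbt` giter8 template the Scalatra docs pointed to at the time, it’s a one-liner (g8 then prompts for the project name, organization, etc.):

```shell
g8 scalatra/scalatra-sbt
```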

Firing up the build system is not difficult either, just execute the following in the project root directory:
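The template ships with an sbt launcher script:

```shell
./sbt    # or plain `sbt` if you have it installed globally
```

From the sbt prompt, `container:start` compiles everything and boots the app in an embedded Jetty.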

I’m using IntelliJ IDEA for my development environment, and it just so happens there’s a helper plugin for sbt called gen-idea that generates all of the proper project files. I believe there is a similar plugin for eclipse users, if you’re one of those people.

Adding dependencies to the project is surprisingly easy compared to say, maven, or ivy.  And when I say easy, I mean NO XML.  To add support for my database and json, for example, I add the following lines to my project’s build.scala file:
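A fragment along these lines goes into the project’s settings (the version numbers are illustrative, not gospel):

```scala
// In project/build.scala -- no XML in sight
libraryDependencies ++= Seq(
  "org.squeryl"  %% "squeryl"              % "0.9.5-6", // ORM
  "mysql"         % "mysql-connector-java" % "5.1.26",  // JDBC driver
  "c3p0"          % "c3p0"                 % "0.9.1.2", // connection pooling
  "org.scalatra" %% "scalatra-json"        % "2.2.1",   // JSON output support
  "org.json4s"   %% "json4s-jackson"       % "3.2.4"    // JSON serialization
)
```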

Squeryl is an ORM for Scala.  Not quite as easy to work with as ActiveRecord, but at least it’s not Hibernate.  c3p0 handles connection pooling.

Scalatra Routes

Scalatra handles routes much like Sinatra. Pretty easy actually, here’s a simple controller for a hypothetical record called “Person”:
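A sketch of such a controller (the `Person.findAll` model call and the `DatabaseSessionSupport` trait are described further down; names are assumptions):

```scala
import org.scalatra._
import org.scalatra.json._
import org.json4s.{DefaultFormats, Formats}

class PersonsController extends ScalatraServlet
    with JacksonJsonSupport with DatabaseSessionSupport {

  // tell json4s which formats to use when serializing
  protected implicit val jsonFormats: Formats = DefaultFormats

  // runs before every action: respond with JSON
  before() {
    contentType = formats("json")
  }

  get("/") {
    Person.findAll // serialized to JSON automatically
  }
}
```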

What does it do?  All requests to “/” — the servlet’s root, not necessarily the web root — result in a request to our Person model for all of the “Person” objects in the database. One thing that may not be obvious is that the response is sent as JSON… the before() filter automagically runs before all requests, setting the output type for each controller action to JSON. To enable this we have to mix in JacksonJsonSupport (it’s a Scala trait) and tell json4s which formats we want it to use when doing its serialization by setting that implicit variable (jsonFormats).

If you’re wondering how we register all of our servlets (i.e., controllers), Scalatra projects have a single ‘ScalatraBootstrap.scala’ file, that goes something like this:
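Roughly this shape (the `DatabaseInit` trait it mixes in is shown a bit later):

```scala
import javax.servlet.ServletContext
import org.scalatra.LifeCycle

class ScalatraBootstrap extends LifeCycle with DatabaseInit {

  override def init(context: ServletContext) {
    configureDb() // stand up the connection pool before mounting servlets
    context.mount(new PersonsController, "/persons")
  }

  override def destroy(context: ServletContext) {
    closeDbConnection()
  }
}
```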

So our Persons servlet is mounted at “/persons” — so a request to http://example.com/persons should result in retrieving our “Person” objects.

Database Support

In our ScalatraBootstrap class, you’ll also notice we call configureDb() in the init method (and a corresponding closeDbConnection() in the destroy method).  The application is stood up and torn down here, so this is the natural place to set up our database (and close it).  There’s a trait mixed into our ScalatraBootstrap class called DatabaseInit that provides these methods.  Here it is:
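A sketch using c3p0 plus Squeryl’s MySQL adapter (the Scalatra guides show the same pattern with H2):

```scala
import com.mchange.v2.c3p0.ComboPooledDataSource
import org.slf4j.LoggerFactory
import org.squeryl.adapters.MySQLAdapter
import org.squeryl.{Session, SessionFactory}

trait DatabaseInit {
  val logger = LoggerFactory.getLogger(getClass)

  // ComboPooledDataSource picks up c3p0.properties from the classpath
  val cpds = new ComboPooledDataSource

  def configureDb() {
    // Squeryl sessions draw connections from the c3p0 pool
    SessionFactory.concreteFactory = Some(() =>
      Session.create(cpds.getConnection, new MySQLAdapter))
    logger.info("Created c3p0 connection pool")
  }

  def closeDbConnection() {
    logger.info("Closing connection pool")
    cpds.close()
  }
}
```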

The usual properties needed to connect to the database are stored in a separate c3p0.properties file:
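Something like the following (database name and credentials are placeholders):

```properties
c3p0.driverClass=com.mysql.jdbc.Driver
c3p0.jdbcUrl=jdbc:mysql://localhost:3306/mydb
c3p0.user=dbuser
c3p0.password=secret
c3p0.minPoolSize=1
c3p0.maxPoolSize=10
```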

Easy enough, but what about the DatabaseSessionSupport trait that we mixed into the controller? Oh, here it is, lifted almost verbatim from the scalatra documentation:
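From memory of the Scalatra/Squeryl guide, it opens a Squeryl session before each request and closes it afterward, roughly like so:

```scala
import org.scalatra.ScalatraBase
import org.squeryl.{Session, SessionFactory}

object DatabaseSessionSupport {
  val key = "org.squeryl.Session" // request attribute key; name is arbitrary
}

trait DatabaseSessionSupport { this: ScalatraBase =>
  import DatabaseSessionSupport._

  def dbSession = request.get(key).orNull.asInstanceOf[Session]

  // open a session and bind it to this request's thread
  before() {
    request(key) = SessionFactory.newSession
    dbSession.bindToCurrentThread
  }

  // close it again once the action has run
  after() {
    dbSession.close
    dbSession.unbindFromCurrentThread
  }
}
```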

Finally, if you’re curious about our “Person” model, here it is:
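A sketch of the model (field names and the `AppSchema` object holding the `people` table are assumptions):

```scala
import java.sql.Timestamp
import org.squeryl.KeyedEntity
import org.squeryl.PrimitiveTypeMode._

class Person(
    val firstName: String,
    val lastName: String,
    val createdAt: Timestamp) // a Java type, used as if it were Scala's own
  extends KeyedEntity[Long] {
  val id: Long = 0
}

// The companion object: it can see class Person's private members,
// and must live in the same source file
object Person {
  def findAll: List[Person] = inTransaction {
    from(AppSchema.people)(p => select(p) orderBy (p.lastName asc)).toList
  }
}
```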

You’ll notice we’re using a Java type here, java.sql.Timestamp, as if it belonged in our scala code.  Neat, eh?  You also might have noticed that we have both a class and a singleton object named ‘Person’ in the same source file.  In Scala, the object ‘Person’ would be said to be the companion object of class ‘Person.’  A class and its companion object can access each other’s private members (and they must both be defined in the same source file).

Well, that’s enough code for one blog entry.  That wasn’t nearly as bad as I feared it would be.  I’ve definitely seen more convoluted ways of accomplishing much the same thing in other languages/frameworks (*cough* Java/Spring/Hibernate *cough*).  I’m enjoying Scala so far, hopefully it continues to grow on me.


LEMP, Debian, and WordPress

Back in 2011, when I started getting back into this whole web thing, I rented a VPS from the folks over at ServInt to run a website called RhythmScore (my first attempt at a web-based business — it failed, hard… maybe I’ll write about it sometime after the sting wears off).  Now, ServInt offers managed VPSes, which means they basically handle a lot of the routine system administration tasks for you.  At the time I thought this was a good thing since while I fancy myself a good programmer, as a sysadmin I know just enough to be dangerous.

As time went on, I upgraded the VPS with more RAM, disk space, etc., and also began hosting a few more websites and rails apps on it, including this blog.  Now, a managed VPS comes at a higher cost than a comparable unmanaged solution, and since I’ve become more comfortable tinkering with web servers, databases, etc., I decided to explore other options to host my blog (and possibly a few other simple sites).  In fact, my preference now would be to run multiple, cheap VPSes that can be configured especially for the applications that will be running on them.  With my ServInt VPS running $60, I was trying to run everything on it.  RhythmScore was a PHP website running with MySQL.  My old blog was a Rails app running the Refinery CMS.  I also was hosting a static website for a friend, as well as another rails app.  All of this of course using Apache, mod_php, and Passenger.

This time around I wanted to simplify things.  A single, lean, VPS for running PHP and static websites, and then an independent VPS for each rails app I want to deploy (because running multiple versions of rails on the same production server can be a pain — and yes, I do use rvm, but only on my development box).  In fact, for my upcoming rails app (second attempt at a web-based business), I plan to run an additional VPS running just the database server.

Anyway, I’m starting to ramble.  I located a company called 6sync that sells VPSes at a wide variety of price points.  Another plus is the ability to choose which Linux distro you want to run.  I’ve always been a Debian man myself, but ServInt (along with most other managed hosting providers) pretty much forces you into CentOS, which I hate, along with Fedora, RHEL, and any other rpm-based distro.

So, I bought a $15 ‘nano’ instance from 6sync and spun up a 32-bit (hey, the nano instance has only 256MB of RAM) Debian 6 server.  I ssh-ed (not sure if I can use ssh as a verb, but I’m going to anyway) into my server and discovered a nice minimal Debian install ready to be configured.

Being in the mood to experiment, I wanted to try using Nginx (pronounced engine-x), rather than Apache, because I heard Apache was old and busted and I wanted to be cool.  Unfortunately, I couldn’t find very many guides on setting up a LEMP (Where the E stands for “engine-x”, as opposed to the more well known LAMP)  stack on Debian, so I had to piece together information gleaned from various sources on the intertubes to come up with something that worked.

First, I added the dotdeb sources  to my /etc/apt/sources.list file:
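For Debian 6 (squeeze), dotdeb’s repository lines looked roughly like this:

```
deb http://packages.dotdeb.org squeeze all
deb-src http://packages.dotdeb.org squeeze all
```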

That done, I then made sure my package database was up to date:
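As root (or via sudo):

```shell
apt-get update
```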

Now, the first step is to install MySQL:
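On squeeze that’s just:

```shell
apt-get install mysql-server mysql-client
```

The installer prompts you to set a root database password along the way.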

There, that was easy. Next up is Nginx.
Now Nginx is pretty easy to install, though some people may want to compile from source because they are nerds, but I am not a nerd. I just spend all of my spare time programming and writing about it on my blog, because I’m not a nerd. Anyway:
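```shell
apt-get install nginx
```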

Of course, I’m also going to need PHP if I want to run WordPress:
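With dotdeb’s squeeze packages, PHP-FPM and the MySQL extension look like this:

```shell
apt-get install php5-fpm php5-mysql
```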

Now, there is one configuration change needed to php.ini to start with. In the file /etc/php5/fpm/php.ini, I found the line:
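The exact line isn’t shown above, but the standard nginx + PHP-FPM change from guides of that era is the `cgi.fix_pathinfo` setting, which ships commented out:

```ini
;cgi.fix_pathinfo=1
```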

and changed it to:
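```ini
cgi.fix_pathinfo=0
```

Leaving it at 1 can let nginx hand arbitrary files to the PHP interpreter, which is a known security foot-gun.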

Ok, that’s done. Now for the fun part of actually configuring the web server. Since I plan to run multiple websites from this VPS I’ll need to set up a few virtual hosts. I was actually surprised at how much easier this is to do on Nginx than Apache. For my first site, jamesadam.me, I set up a new user, jcadam, with a home directory at /home/jcadam as follows:
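Presumably something like:

```shell
adduser jcadam
mkdir /home/jcadam/public_html
chown jcadam:jcadam /home/jcadam/public_html
```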

Then, in the /etc/nginx/sites-available directory, I created a new file called www.jamesadam.me, and configured it thusly:
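A reconstruction of that vhost (the `fastcgi_pass` address assumes php5-fpm’s default TCP listener on port 9000; the `try_files` line is the one mentioned below that makes WordPress permalinks work):

```nginx
server {
    listen 80;
    server_name jamesadam.me www.jamesadam.me;
    root /home/jcadam/public_html;
    index index.php index.html;

    location / {
        # route pretty permalinks through WordPress's index.php
        try_files $uri $uri/ /index.php?$args;
    }

    # hand .php requests off to php5-fpm
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```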

I made a link to this file in the /etc/nginx/sites-enabled directory also:
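```shell
ln -s /etc/nginx/sites-available/www.jamesadam.me /etc/nginx/sites-enabled/www.jamesadam.me
/etc/init.d/nginx reload   # pick up the new vhost
```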

I want my web files to be served from /home/jcadam/public_html. The ‘location ~ \.php$’ section is needed to enable PHP support. That last line containing the ‘try_files’ command I added later, after I discovered that the pretty permalinks in WordPress wouldn’t work without it.
My first test was to create a test page at /home/jcadam/public_html/index.php that contained nothing but the phpinfo() command. Since that worked, I figured it was time to go on and install wordpress.
First, I downloaded wordpress:
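```shell
wget http://wordpress.org/latest.zip
```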

and unzipped it into my public_html directory.
Next, I suppose WordPress would like a database to work with. So, I logged into mysql as the root database user (which I set up when I installed MySQL) and issued the following commands:
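The database name, user, and password here are placeholders:

```sql
CREATE DATABASE wordpress;
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'a_strong_password';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_user'@'localhost';
FLUSH PRIVILEGES;
```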

After that, I connected to my server using a web browser and went though the WordPress web-based configuration with no trouble… though I did need to set up my wp-config.php file manually, but that was not difficult :)