Getting Scala, HATEOAS, and JSON to work together

I’ve been working with Scala for the last few months on a new project, and I’ll confess that it’s starting to grow on me (this is in stark contrast to Java, which I’m liking less the more I learn about it).

My current project has me creating a REST API using Scalatra, along with a front-end built with CoffeeScript and Backbone.js. This definitely has a different feel to it than a typical web application built using one of the uber-frameworks like Rails. The lack of tight integration between back-end and front-end has its advantages, but also introduces a few issues that must be sorted out. One issue I’ve recently happened upon involves controlling how a user may interact with resources on the server based on his or her access level (or ‘role’). For example, if I have a database table called ‘people’, with each row holding a record for a member of an organization, I probably want to control who can do what with said records. Perhaps standard users are only allowed to view these people records, managers are allowed to edit them, and administrators may delete or create records.

This is a trivial problem with a traditional web app, but in the case of a REST API, consider this: I request a list of people records from my server by issuing a GET request to http://myserver.com/api/persons. The server checks my credentials and returns a list of 20 records of people in, say, the accounting department. The client (whether it be a web app, mobile app, etc.) renders a nice, spiffy table full of people records. The client interface also has several buttons that allow me to manipulate the data: ‘View Record’, ‘Edit Record’, ‘Delete Record’, and so on.

Now we have an issue. Let’s say I’m the manager of 6 people in the accounting department, but the other 14 belong to other managers. It has been decided that managers should be able to view the records of other personnel in the organization, but should only be able to edit records for their own people. Further, only administrators (let’s say HR folks) can delete a record. No problem, you might say: just have the server check the user’s role regarding a person record before executing a request to update or delete it. We can make this easy by adding a ‘manager_id’ field to the ‘person’ table identifying each person’s manager.

Of course, that would work fine. The problem, however, is not in the ‘correctness’ of the application, but in the user-friendliness of the client interface. The client has no way of knowing your permissions for each person record, so it displays buttons for every possible action on each and every one, relying on the server to sort things out on the back-end and return an error if you try to do something illegal. It would be better if we could have the server send down a list of actions the authenticated user is allowed to take for each record; then we could simply not display (or grey out) the related interface elements (buttons, drop-down items, etc.) for non-specified actions, giving the user an instant visual cue regarding what he’s allowed to do. While we’re at it, why not send down a link to the REST call for each of the allowed actions as well?

This is where HATEOAS (Hypermedia As The Engine Of Application State) comes in. For a more thorough explanation, see the Wikipedia page. Basically, a HATEOAS-compliant REST service requires the server to send, along with the resource data itself, a list of actions (and links) that may be performed on or with that resource. It’s probably easiest to explain via example.

First, here’s a plain JSON object returned from a non-HATEOAS compliant service:

{
  "id":35,
  "employeeId":"7",
  "lastName":"NewGuy",
  "firstName":"Steve",
  "middleName":"",
  "email":"steve@acme.com",
  "title":"Clerk",
  "hireDate":"01/02/2013",
  "dateOfBirth":"01/01/1980"
}

Just a bag of data — no information regarding what I should do, or what I’m allowed to do, with it. Well, how about this:

{
  "_links": 
  { 
    "self": {"href":"/api/persons/35","method":"GET"},
    "update":{"href":"/api/persons/35","method":"PUT"},
    "delete":{"href":"/api/persons/35","method":"DELETE"}
  },
  "id":35,
  "employeeId":"7",
  "lastName":"NewGuy",
  "firstName":"Steve",
  "middleName":"",
  "email":"steve@acme.com",
  "title":"Clerk",
  "hireDate":"01/02/2013",
  "dateOfBirth":"01/01/1980"
}

The _links section of this object tells me that I’m allowed to update AND delete this record, and provides links to the REST calls necessary to perform those actions. It also includes a link to itself. By the way, there are several “standard” formats out there for returning these links; I’m attempting to follow HAL. For more fun, you could also include the MIME type of the data that each action would return (JSON, HTML, PDF, whatever).
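
For instance, each link could carry an optional “type” property (HAL allows one) hinting at the media type behind the link; the value here is purely illustrative:

"update": {"href":"/api/persons/35","method":"PUT","type":"application/json"}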

The concept is rather simple, and definitely beats the hackish ideas I initially had for solving this issue. However, and this could just be my relative newness to Scala, it did take a bit of effort to figure out how to get the server to spit out correctly formatted JSON for the HAL links (I didn’t want my _links section to be sent as an array, for example, or serialized in any of the myriad other ways the Jackson default serializer tried before I sorted it out). I eventually came up with something like this (ok, exactly this):

//package object full o' utility functions for creating some HAL-style HATEOAS links
package object Hateoas {
  //could add an additional field specifying MIME-type, for example
  case class Link(href: String, method: String)
  type HateoasLinks = Map[String, Link]

  //case class for a response containing a Collection of items
  case class ListResponse(_links: HateoasLinks, _embedded: Map[String, List[Any]])

  object HateoasLinkFactory {
    //could (should) add a function for generating a "custom" action link
    def createSelfLink(uri: String) = ("self" -> Link(uri, "GET"))

    //create Create!
    def createCreateLink(uri: String) = ("create" -> Link(uri, "POST"))

    def createUpdateLink(uri: String) = ("update" -> Link(uri, "PUT"))

    def createDeleteLink(uri: String) = ("delete" -> Link(uri, "DELETE"))
  }
}

I use this code to generate each object’s _link section before pushing it down to the client. It’s not by any means a fully-realized HAL implementation, but it solves my main issue for now, and I can easily add more functionality as needed.
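
To give a rough idea of how that’s used, here’s a sketch with simplified names (not my actual controller code; the two boolean flags stand in for whatever authorization checks you perform):

import Hateoas._

//Sketch: assemble the _links map for one person record based on the caller's permissions
def linksForPerson(id: Long, canEdit: Boolean, canDelete: Boolean): HateoasLinks = {
  val uri = s"/api/persons/$id"
  var links: HateoasLinks = Map(HateoasLinkFactory.createSelfLink(uri))
  if (canEdit) links += HateoasLinkFactory.createUpdateLink(uri)
  if (canDelete) links += HateoasLinkFactory.createDeleteLink(uri)
  links
}

//ListResponse covers the collection case, along these lines:
//ListResponse(Map(HateoasLinkFactory.createSelfLink("/api/persons")), Map("persons" -> people))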

Scala and Scalatra

I’ve been using Ruby on Rails almost exclusively for my web projects over the last year or two. Recently, when I had an idea for a new project, I decided to try something a little different.

My current Rails project, Rhino SchoolTracker, is a traditional CRUD-type web application that is fairly well suited to the Rails way of doing things. For this new project, however, I wanted to completely decouple my server side code from my front-end web application.

My idea is to create a simple REST API for the back-end services, and build the web UI using Backbone and Bootstrap. This also has the benefit of providing significant flexibility for possible mobile clients later. For the server side stuff, I could have turned to Rails again, but that seemed like overkill when I would only be using a subset of its features.

I stumbled upon Scala while researching alternative server-side languages. While I would never use Java if I had a choice in the matter, the idea behind Scala is a good one: fix the basic problems with Java (the language) and add functional programming support, all while retaining compatibility with the vast Java ecosystem and the ability to run on the mature (mostly, after all these years/decades) JVM. It should also be significantly faster and scale better than anything written in interpreted languages like Ruby or Python.
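
To illustrate the functional flavor I’m talking about, here’s a trivial, self-contained example (not code from the project):

case class Employee(firstName: String, lastName: String, age: Int)
val staff = List(Employee("Steve", "NewGuy", 33), Employee("Jane", "Doe", 28))
//keep the over-30s and project them to display names, no loops required
val names = staff.filter(_.age >= 30).map(e => s"${e.lastName}, ${e.firstName}")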

Scalatra

Scala has a number of web frameworks available to it.  Lift and Play are probably the most popular.  However, I wanted something lightweight, so I looked around and found a minimalistic framework called Scalatra, which attempts to mimic the excellent Sinatra framework over in Ruby-land.  So, I decided to give it a shot.

Scalatra relies on the Simple Build Tool (sbt), and setting up a new project is fairly simple using g8:

g8 scalatra/scalatra-sbt

Firing up the build system is not difficult either; just execute the following in the project root directory:

./sbt

I’m using IntelliJ IDEA for my development environment, and it just so happens there’s a helper plugin for sbt called gen-idea that generates all of the proper project files. I believe there is a similar plugin for eclipse users, if you’re one of those people.

Adding dependencies to the project is surprisingly easy compared to, say, Maven or Ivy.  And when I say easy, I mean NO XML.  To add support for my database and JSON, for example, I add the following lines to my project’s build.scala file:

"org.scalatra" %% "scalatra-json" % "2.2.1",
"org.json4s"   %% "json4s-jackson" % "3.2.4",
"org.json4s"   %% "json4s-ext"     % "3.2.4",
"org.squeryl"  %%  "squeryl" % "0.9.5-6",
"postgresql"   % "postgresql" % "9.1-901.jdbc4",
"c3p0"         % "c3p0" % "0.9.1.2",

Squeryl is an ORM for Scala.  It’s not quite as easy to work with as ActiveRecord, but at least it’s not Hibernate.  c3p0 handles connection pooling.
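
For context, these entries live inside the libraryDependencies sequence of the build definition the scalatra-sbt template generates; a minimal sketch of the relevant fragment (abbreviated to a few of the entries above):

//in project/build.scala (sketch; the template wraps this in its project settings)
libraryDependencies ++= Seq(
  "org.scalatra" %% "scalatra-json"  % "2.2.1",
  "org.json4s"   %% "json4s-jackson" % "3.2.4",
  "org.squeryl"  %% "squeryl"        % "0.9.5-6"
)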

Scalatra Routes

Scalatra handles routes much like Sinatra. Pretty easy, actually. Here’s a simple controller for a hypothetical record called “Person”:

import org.scalatra._
import org.json4s.{DefaultFormats, Formats}
import com.caffeinatedrhino.db.DatabaseSessionSupport
import com.caffeinatedrhino.testproj.models.Person
import org.scalatra.json.JacksonJsonSupport
import org.json4s.JsonAST.JValue

class PersonsController extends ScalatraServlet with DatabaseSessionSupport with JacksonJsonSupport {

  protected implicit val jsonFormats: Formats = DefaultFormats

  before() {
    contentType = formats("json")
  }

  get("/") {
    Person.allPersons
  }

}

What does it do? All requests to “/” (the servlet’s root, not necessarily the web root) result in a request to our Person model for all of the “Person” objects in the database. One thing that may not be obvious is that the response is sent as JSON: the before() filter automagically runs before all requests, setting the output type for each controller action to JSON. To enable this, we have to mix in JacksonJsonSupport (it’s a Scala trait) and tell json4s which formats we want it to use when doing its serialization by setting that implicit variable (jsonFormats).
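
Routes with parameters are just as painless. Here’s a sketch of a show action (assuming a hypothetical Person.find method on the model, which isn’t shown here):

get("/:id") {
  //params("id") arrives as a String; return a 404 if there's no such record
  Person.find(params("id").toLong) match {
    case Some(person) => person
    case None => halt(404)
  }
}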

If you’re wondering how we register all of our servlets(i.e., controllers), Scalatra projects have a single ‘ScalatraBootstrap.scala’ file, that goes something like this:

import com.caffeinatedrhino.testproj.controllers.PersonsController
import org.scalatra._
import javax.servlet.ServletContext
import com.caffeinatedrhino.db.DatabaseInit

class ScalatraBootstrap extends LifeCycle with DatabaseInit {
  override def init(context: ServletContext) {
    configureDb()
    context.mount(new PersonsController, "/persons")
  }

  override def destroy(context: ServletContext) {
    closeDbConnection()
  }
}

Our Persons servlet is mounted at “/persons”, so a request to http://example.com/persons should result in retrieving our “Person” objects.

Database Support

In our ScalatraBootstrap class, you’ll also notice we call configureDb() in the init method (and a corresponding closeDbConnection() in the destroy method).  The application is stood up and torn down here, so this is the natural place to set up our database (and close it).  There’s a trait mixed into our ScalatraBootstrap class called DatabaseInit that provides these methods.  Here it is:

import org.slf4j.LoggerFactory
import java.util.Properties
import com.mchange.v2.c3p0.ComboPooledDataSource
import org.squeryl.adapters.PostgreSqlAdapter
import org.squeryl.Session
import org.squeryl.SessionFactory

trait DatabaseInit{

  val logger = LoggerFactory.getLogger(getClass)
  var cpds = new ComboPooledDataSource

  def configureDb() {
    val props = new Properties
    props.load(getClass.getResourceAsStream("/c3p0.properties"))
    cpds.setProperties(props)
    SessionFactory.concreteFactory = Some (() => connection)

    def connection = {
      logger.info("Creating connection with c3p0 connection pool")
      Session.create(cpds.getConnection, new PostgreSqlAdapter)
    }
    logger.info("Created c3p0 connection pool")
  }

  def closeDbConnection() {
    logger.info("Closing c3p0 connection pool")
    cpds.close
  }

}

The usual properties needed to connect to the database are stored in a separate c3p0.properties file:

c3p0.driverClass=org.postgresql.Driver
c3p0.jdbcUrl=jdbc:postgresql://localhost:5432/testdb
user=testuser
password=testpass
c3p0.minPoolSize=1
c3p0.acquireIncrement=1
c3p0.maxPoolSize=50

Easy enough, but what about the DatabaseSessionSupport trait that we mixed into the controller? Oh, here it is, lifted almost verbatim from the Scalatra documentation:

package com.caffeinatedrhino.db

import org.squeryl.Session
import org.squeryl.SessionFactory
import org.scalatra._

object DatabaseSessionSupport {
  val key = {
    val n = getClass.getName
    if (n.endsWith("$")) n.dropRight(1) else n
  }
}

trait DatabaseSessionSupport { this: ScalatraBase =>
  import DatabaseSessionSupport._

  def dbSession = request.get(key).orNull.asInstanceOf[Session]

  before() {
    request(key) = SessionFactory.newSession
    dbSession.bindToCurrentThread
  }

  after() {
    dbSession.close
    dbSession.unbindFromCurrentThread
  }

}

Finally, if you’re curious about our “Person” model, here it is:

package com.caffeinatedrhino.testproj.models

import com.caffeinatedrhino.db.DBRecord

import org.squeryl.PrimitiveTypeMode._
import org.squeryl.{Query, Schema}
import org.squeryl.annotations.Column

import java.sql.Timestamp

class Person(val id: Long,
             @Column("USER_ID") val userID: Long,
             @Column("LAST_NAME") var lastName: String,
             @Column("FIRST_NAME") var firstName: String,
             @Column("DATE_OF_BIRTH") var dateOfBirth: Timestamp,
             @Column("CREATED_AT") val createdAt: Timestamp,
             @Column("UPDATED_AT") var updatedAt: Timestamp) extends DBRecord{
  def this() = this(0, 0, "NO_NAME", "NO_NAME", new Timestamp(0), new Timestamp(0), new Timestamp(0))
}

/**
 * Kind of a cross between a Schema and a DAO really.  But I'll call it a Dao anyway
 * because it pleases me to do so.
 */
object PersonDao extends Schema {
  val persons = table[Person]("PERSONS")

  on(persons)(p => declare(
    p.id is(autoIncremented, primaryKey)
  ))
}

object Person{
  def create(person: Person):Boolean = {
    inTransaction {
      val result = PersonDao.persons.insert(person)
      if(result.isPersisted){
        true
      } else {
        false
      }
    }
  }
  def allPersons = {
    from(PersonDao.persons)(p => select(p)).toList
  }
}

You’ll notice we’re using a Java type here, java.sql.Timestamp, as if it belonged in our Scala code.  Neat, eh?  You also might have noticed that we have both a class and a singleton object named ‘Person’ in the same source file.  In Scala, the object ‘Person’ would be said to be the companion object of class ‘Person.’  A class and its companion object can access each other’s private members (and they must both be defined in the same source file).
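
A contrived illustration of that last point (not from the project):

class Counter private (private val ticks: Int) {
  //the class can use the companion's factory method
  def next: Counter = Counter(ticks + 1)
}

object Counter {
  //the companion can call the private constructor
  def apply(start: Int): Counter = new Counter(start)
}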

Well, that’s enough code for one blog entry.  That wasn’t nearly as bad as I feared it would be.  I’ve definitely seen more convoluted ways of accomplishing much the same thing in other languages/frameworks (*cough* Java/Spring/Hibernate *cough*).  I’m enjoying Scala so far, hopefully it continues to grow on me.

Rhino SchoolTracker

I suppose it’s time I wrote a bit about my latest project, Rhino SchoolTracker, which I finally put up on the web a few days ago.  I’ve been working on this application for the last 6 months, and it’s definitely the largest ‘side’ project I’ve done (at least in Ruby on Rails).

The concept is fairly simple.  Last year, my wife was homeschooling two of our children and was using a collection of spreadsheets, word documents, and good ol’ pencil and paper to keep track of everything, including attendance, lesson plans, grades, etc.  Being a software engineer and thus a problem-solver by nature, I figured there had to be a decent software solution out there to handle the needs of homeschool educators.  Well, there are a few solutions out there, but we found them to be quite sub-par: a motley collection of Windows 95-era desktop applications (we don’t use Windows at the Adam house, aside from IE testing, of course) and a couple of uninspired web-based offerings that looked overly complicated, with dated, dreary UIs.

I thought I could do better, so I did (I hope).  I created a new Rails project, opened a fresh repository on GitHub, and got to work.  Most of my early UI concepts were sketched out on graph paper with a stubby #2 pencil, which seemed to suit me better than any of the software-based UI layout tools I tried (the strength of which is most likely the team collaboration features… useless to a team of one).

I started out using a combination of Blueprint CSS and jQuery UI for the frontend, but was never really happy with it.  Searching for a solution, I happened upon Twitter Bootstrap, a great CSS/JavaScript UI framework that puts jQuery UI to shame.  The bootstrap gem for RoR integrated seamlessly into my project, and within a few days I had ripped jQuery UI and Blueprint from my app and substituted Bootstrap.


After I had decided on the frameworks and toolkits I’d be using on the project, I settled in for many a late night and weekend in front of my computer coding.  The result, Rhino SchoolTracker, is a complete record-keeping system for homeschool parents: lesson plans, attendance, grades, etc.  I also needed to give my application the ability to generate printable reports, especially attendance sheets and transcripts.  The PDF format was a natural choice, and I chose a Ruby library called Prawn for the task of report generation.  It’s pretty nice, and I highly recommend it for any Ruby project involving PDF creation.

In a lot of ways, this project was a great learning experience.  I delved much deeper into the Ruby language and the Rails framework than I had in the past, and I also picked up some CoffeeScript and SCSS skills along the way.  As much fun as learning new things is, I must admit there was/is always a financial motivation to many of my projects.  I intended to make a great product, yes, but I also wanted to make a bit of money doing so.  Thus, as I was nearing the end of development on Rhino SchoolTracker, I had to think about how I was going to process the innumerable monthly subscription payments that were sure to come pouring in (maybe).  My only experience with processing payments online had been with PayPal (ick), so I was looking for something better this time around.  I wanted a simple way to manage monthly subscriptions to Rhino SchoolTracker, but without the hassles of PCI compliance or the cheesiness of tossing my users out to a third-party site to input their credit card information.  Enter Stripe.  It’s absolutely perfect for small software shops that want quick, (relatively) painless payment processing.  Using their API, I can create payment forms on my site, but all of the sensitive credit card information is sent to Stripe for processing and never touches my server, alleviating the need for PCI compliance.


Stripe does require your site to use SSL.  However, if your web application has users entering private information (such as a school record keeping system), you’ll be using SSL anyway (I would hope).

After much testing and bug fixing (always get another person to test your app, ideally a non-programmer if you have one around — my wife was happy to assist me with this), I took Rhino SchoolTracker live a few days ago.  I’ve registered an LLC with my state (Caffeinated Rhino, LLC), and I’m actually envisioning a small side business focusing on educational software, so I’m hoping to come up with other products in the future.

P.S. Navigating the byzantine laws and regulations required to start a business in Virginia is not nearly as straightforward as building a quality web application from scratch.  It’s something I’ll write about in the future, after I feel I have a decent handle on it (possibly never).

 

Setting up a Rails server on Debian 6

debianplusrubyI’ve been working on a Rails application for the better part of a year (which I plan to talk about in detail in a future post), and I’ve come to the point where I really must put the thing up on a live server in order to work out the last few details. My initial plan is to deploy my application to two virtual machines. One will host the rails application itself, while the other will host the application’s Postgres database. Eventually (hopefully) if I need to scale the application I can simply deploy additional app (easy) and/or database (less easy) VMs and set up yet another VM with something like HAProxy to route requests.

I’ve decided upon 6sync as my hosting provider, due to their good performance, decent prices, and linux distro options. I’ve also decided to use Debian 6 for all of my VMs (because I rather don’t like CentOS/RHEL). For my rails app server, I plan to use Passenger Standalone. The steps I followed were as follows:

1. Spin up a new VM from 6sync’s control panel with the following options: 64-bit, Debian 6, 256MB nano instance (when I’m done testing I will bump this up, and continue to upgrade it as needed). This will create a VM with a bare-bones Debian install.

2. Log in as root, open a terminal, and enter the following commands:

apt-get update
apt-get install sudo
apt-get install ruby1.9.1 #Note: This actually installs Ruby 1.9.2 on Debian, which is what I want
apt-get install build-essential #Needed for passenger
apt-get install ruby1.9.1-dev
apt-get install ri1.9.1
apt-get install graphviz

3. Now, for some reason installing these packages does not cause the proper symlinks to be created in /usr/bin, so to fix that:

ln -s /usr/bin/ruby1.9.1 /usr/bin/ruby
ln -s /usr/bin/gem1.9.1 /usr/bin/gem

4. Also, it would be nice to have the rubygems bin directory in my PATH — it’s actually kind of annoying that installing ruby didn’t do this automatically. So, open up the file /etc/profile and append the following to both PATH entries:

/var/lib/gems/1.9.1/bin

5. For the gems I’m using, I required a few prerequisites — you may or may not need these, though I’m guessing you probably do, as the gems that require them are fairly common:

#The first 3 are required for Passenger
apt-get install libcurl4-openssl-dev
apt-get install libssl-dev
apt-get install zlib1g-dev
#The nokogiri gem requires the following:
apt-get install libxml2 libxml2-dev
apt-get install libxslt-dev
#And the postgresql driver needs this:
apt-get install libpq-dev

6. Finally, install the passenger gem:

gem install passenger

Well, that wasn’t that hard… “passenger start -e production” will fire up passenger standalone on port 3000, which is ok for testing, but you’ll need to do something like “sudo passenger start -p 80 –user=non-root-user-name” to start it on port 80 when you’re ready to go live. I would plan to write a few scripts to automate things 🙂

LEMP, Debian, and WordPress

Back in 2011, when I started getting back into this whole web thing, I rented a VPS from the folks over at ServInt to run a website called RhythmScore (my first attempt at a web-based business — it failed, hard… maybe I’ll write about it sometime after the sting wears off).  Now, ServInt offers managed VPSes, which means they basically handle a lot of the routine system administration tasks for you.  At the time I thought this was a good thing since while I fancy myself a good programmer, as a sysadmin I know just enough to be dangerous.

As time went on, I upgraded the VPS with more ram, disk space, etc., and also began hosting a few more websites and rails apps on it, including this blog.  Now, a managed VPS comes at a higher cost than a comparable unmanaged solution, and since I’ve become more comfortable tinkering with web servers, databases, etc., I decided to explore other options to host my blog (and possibly a few other simple sites).  In fact, my preference now would be to run multiple, cheap VPSes that can be configured especially for the applications that will be running on them.  With my ServInt VPS running $60, I was trying to run everything on it.  RhythmScore was a PHP website running with MySQL.  My old blog was a Rails app running the Refinery CMS.  I also was hosting a static website for a friend, as well as another rails app.  All of this of course using Apache, mod_php, and Passenger.

This time around I wanted to simplify things.  A single, lean, VPS for running PHP and static websites, and then an independent VPS for each rails app I want to deploy (because running multiple versions of rails on the same production server can be a pain — and yes, I do use rvm, but only on my development box).  In fact, for my upcoming rails app (second attempt at a web-based business), I plan to run an additional VPS running just the database server.

Anyway, I’m starting to ramble.  I located a company called 6sync that sells VPSes at a wide variety of price points.  Another upshot is the ability to choose which linux distro you want to run.  I’ve always been a Debian man myself, but ServInt (along with most other managed hosting providers) pretty much forces you into CentOS, which I hate, along with Fedora, RHEL, and any other rpm-based distro.

So, I bought a $15 ‘nano’ instance from 6sync and spun up a 32-bit (hey, the nano instance has only 256MB of RAM) Debian 6 server.  I ssh-ed (not sure if I can use ssh as a verb, but I’m going to anyway) into my server and discovered a nice minimal Debian install ready to be configured.

Being in the mood to experiment, I wanted to try using Nginx (pronounced engine-x) rather than Apache, because I heard Apache was old and busted and I wanted to be cool.  Unfortunately, I couldn’t find very many guides on setting up a LEMP stack (where the E stands for “engine-x”, as opposed to the more well-known LAMP) on Debian, so I had to piece together information gleaned from various sources on the intertubes to come up with something that worked.

First, I added the dotdeb sources  to my /etc/apt/sources.list file:

deb http://packages.dotdeb.org stable all

That done, I then made sure my package database was up to date:

apt-get update

Now, the first step is to install MySQL:

apt-get install mysql-server mysql-client php5-mysql

There, that was easy. Next up is Nginx.
Now Nginx is pretty easy to install, though some people may want to compile from source because they are nerds, but I am not a nerd. I just spend all of my spare time programming and writing about it on my blog, because I’m not a nerd. Anyway:

apt-get install nginx

Of course, I’m also going to need PHP if I want to run WordPress:

apt-get install php5 php5-fpm

Now, there is one configuration change needed to php.ini to start with. In the file /etc/php5/fpm/php.ini, I found the line:

cgi.fix_pathinfo=1

and changed it to:

cgi.fix_pathinfo=0

Ok, that’s done. Now for the fun part of actually configuring the web server. Since I plan to run multiple websites from this VPS I’ll need to set up a few virtual hosts. I was actually surprised at how much easier this is to do on Nginx than Apache. For my first site, jamesadam.me, I set up a new user, jcadam, with a home directory at /home/jcadam as follows:

adduser --ingroup www-data jcadam

Then, in the /etc/nginx/sites-available directory, I created a new file called www.jamesadam.me, and configured it thusly:

server {
        listen 80;
        server_name jamesadam.me www.jamesadam.me;

        access_log /var/log/nginx/website.access_log;
        error_log /var/log/nginx/website.error_log;

        root /home/jcadam/public_html;
        index index.php index.htm index.html;

        location ~ \.php$ {
                fastcgi_pass   127.0.0.1:9000;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME /home/jcadam/public_html$fastcgi_script_name;
                include fastcgi_params;
        }

        location / {
                index index.php index.html index.htm;
                try_files $uri $uri/ /index.php?q=$uri&$args;
        }
}

I made a link to this file in the /etc/nginx/sites-enabled directory also:

ln -s /etc/nginx/sites-available/www.jamesadam.me /etc/nginx/sites-enabled

I want my web files to be served from /home/jcadam/public_html. The ‘location ~ \.php$’ section is needed to enable PHP support. That last line containing the ‘try_files’ command I added later, after I discovered that the pretty permalinks in WordPress wouldn’t work without it.
My first test was to create a test page at /home/jcadam/public_html/index.php that contained nothing but the phpinfo() command. Since that worked, I figured it was time to go on and install WordPress.
First, I downloaded WordPress:

wget http://wordpress.org/latest.tar.gz

and unzipped it into my public_html directory.
Next, I suppose WordPress would like a database to work with. So, I logged into mysql as the root database user (which I set up when I installed MySQL) and issued the following commands:

CREATE DATABASE wordpress;
CREATE USER the_user@localhost; #No, not my real user name :)
SET PASSWORD FOR the_user@localhost = PASSWORD("12345"); #No, not really my password.  Luggage combination, yes.
GRANT ALL PRIVILEGES ON wordpress.* TO the_user@localhost IDENTIFIED BY '12345';
FLUSH PRIVILEGES;
exit

After that, I connected to my server using a web browser and went through the WordPress web-based configuration with no trouble… though I did need to set up my wp-config.php file manually, but that was not difficult 🙂

In-app purchase scams

A few weeks ago, my wife’s iPhone 4 started to give out.  You see, she’s very abusive when it comes to cell phones (other items subject to her abuse include laptop computers, automobiles, my ego, and geese).  Her iPhone 3G found itself at the bottom of bathtubs, sinks, etc. more than once, and it wasn’t more than two months after she received her iPhone 4 that I was performing an LCD transplant on it.  So, now that her contract with AT&T was up, I decided to look for something a bit more durable.

My search yielded this thing:


The Samsung Rugby Pro.  A supposedly ruggedized smartphone running Android.  Since I’ve been in a bit of an anti-Apple mood lately, I managed to sell my wife on trying an Android phone.

The phone arrived in the mail, I activated it, did some initial setup (email, wifi, etc., all quite easy) and handed it to her with a “Here you go.”  The hardware itself is actually pretty good.  Android is snappy, and I personally love the customization options.  Problems soon arose, however, with her Google PlayStore account.

Within a few days, I noticed several large charges appearing on our credit card.  $50 here, $30 there, etc., all supposedly going through the Google PlayStore and associated with an app development company called Team Lava.  I immediately called my bank, canceled and reissued our credit cards, and reported the fraudulent charges.

A bit of research on these jokers at Team Lava revealed some interesting results.  Complaints against them are legion (they have a grade of F with the Better Business Bureau, for one).  Apparently their business model consists of creating free games and crafting them such that they trick users into making in-app purchases.  Large ones.  I mean, who would spend upwards of $200 to get ahead in a silly, casual game made for smartphones?

I searched my wife’s smartphone for any apps made by these Team Lava cretins and found one.  I deleted it immediately and advised my wife to never install anything by Team Lava ever again.  Not sure what happened here, but she swore she hadn’t made the purchases (at least not knowingly), so that’s good enough for me.  I also set a PIN on her phone for in-app purchases.  Which leads me to…

Why are in-app purchases enabled by default if they are so easy for unscrupulous developers to abuse?  Android has been around for years now, so why hasn’t this been fixed?  Also, given TeamLava’s reputation, why are they still permitted to sell their scamware on Google’s PlayStore?  I would say that reflects rather poorly on Google.  Though, given my previous interactions with Google from a ‘paying customer’ perspective (Adwords, etc.) I’m not really surprised.

I’ll probably get an Android phone myself when my iPhone 4 gives out, though I’m not sure I’ll be trusting Google with my credit card information again anytime soon.  I will give Apple this, my family and I have been using iOS devices for years (and iTunes for even longer than that) and have never had mysterious charges show up on our account.  That could just be luck, since this sort of thing has been a problem with iOS as well.

Testing ActiveScaffold

I’m currently neck-deep in a new project using Rails 3.2 and Active Scaffold.  If you’re unfamiliar with Active Scaffold, it’s a great plugin that takes care of most of your application’s CRUD functionality.  If you’re working on a data-heavy application, you should check it out.

Anyway, as great as Active Scaffold is, it certainly isn’t perfect.  The controller_generator script generates functional tests for you.  Nice, except that several of them don’t work out of the box — primarily the tests for creating and updating records.

For example, here is the create record test that was automagically generated for one of my controllers:

  test "should create student" do
    assert_difference('Student.count') do
      post :create, student: {
          age: @student.age,
          first_name: @student.first_name,
          gender: @student.gender,
          grade_level: @student.grade_level,
          last_name: @student.last_name }
    end
    assert_redirected_to student_path(assigns(:student))
  end

Nothing obviously amiss, right? Wrong.  Running the functional tests gets you the following error:

Error: test_should_create_student(StudentsControllerTest)
  NoMethodError: undefined method `each' for nil:NilClass

A little bit of troubleshooting reveals that the error is coming from deep within the bowels of Active Scaffold.  There was a time when I would have just jumped in and started hacking third party code, assuming that I was right and the API/library I was using was wrong.  These days I’m a little older and hopefully a little wiser, so I figured I should probably at least try to make changes to my code first.

A little bit of poking around on Active Scaffold’s github site reveals that I’m not the only person experiencing this issue.

Anyway, the solution is to change the params key from your model name to record, so the line:

     post :create, student: {

becomes:

     post :create, record: {

Rerunning the tests, I get a new error:

    ActionController::RoutingError: No route matches {:action=>"show", :controller=>"students"}

After I gaped dumbly (blinking occasionally) at both the test log and my code, I noticed that Active Scaffold had generated the last line of my test incorrectly:

      assert_redirected_to student_path(assigns(:student))

It’s kind of subtle, but that should be students_path, not the singular student_path.  Making that change now allows the test to pass.

Making similar changes to the “should update student” test allows that one to pass as well:

  test "should update student" do
    put :update, id: @student, record: {
        age: @student.age,
        first_name: @student.first_name,
        gender: @student.gender,
        grade_level: @student.grade_level,
        last_name: @student.last_name }
    assert_redirected_to students_path(assigns(:student))
  end

That should do it, right?  Um, not exactly.  The real bashing-head-into-keyboard-repeatedly error has yet to be dealt with.  You see, Active Scaffold generated a pretty standard-looking test for retrieving the index page:

  test "should get index" do
    get :index
    assert_response :success
    assert_not_nil assigns(:students)
  end

But running the test yields yet another problem:

=========================================================================
Failure:  expected to not be nil.
test_should_get_index(StudentsControllerTest)
test/functional/students_controller_test.rb:13:in `block in '
     10:   test "should get index" do
     11:     get :index, :current_user => @user
     12:     assert_response :success
  => 13:     assert_not_nil assigns(:students)
     14:   end
     15: 
     16:   test "should get new" do
=========================================================================

Say what?! C’mon… that should just work.  After all, getting the index page in a web browser works just fine.

Consider, however, what Active Scaffold is actually doing when you request the index page from a particular controller.  It doesn’t just display a list of records; it calls a :list action, which displays a list of records.  So perhaps we should test the ‘list’ functionality separately, while making sure that get :index at least renders the list template.

So, first we add a new test for :list:

  test "should get list" do
    get :list
    assert_response :success
    assert_template 'list'
    assert_not_nil assigns(:page)
    assert_not_nil assigns(:records)
  end

Then, we modify the get :index test as follows:

  test "should get index" do
    get :index
    assert_response :success
    assert_template 'list'
  end

Whew! Everything passes. Time to go get some more coffee…

Why I’m Switching from Mac to Linux

I started using Macs shortly after the release of OS X back in 2001. Not being a lifelong Mac user, I tended to view Macs running OS X as UNIX boxes with a great UI and the ability to run some mainstream software.

I’ve spent most of my career in software development working on various UNIX platforms, so I used my Macs more as UNIX workstations (lots of terminal windows and X11 apps open at any time) than as consumer machines. Since I have a dislike for all-in-one computers, my machine of choice for the longest time had been the PowerMac. Despite the top end Apple tower being a multi-processored monster, there was always a low-end model available that, while not as awesome, was always just as expandable, easy to work on, and reasonably priced. My first Mac was a second-hand PowerMac G3, followed several years later by a G4. Finally, I splurged and bought a mid-range dual processor G5 Tower. All were great machines, in fact, the G4 is still being put to use as a test server running Debian.

That changed when Apple switched processor architectures from PowerPC to x86. The PowerMac was replaced with the Mac Pro, a machine crammed with server-grade components that is unjustifiably expensive for all but the most demanding (and wealthy) users. In order to fill the gap, Apple beefed up their iMacs, which now sport large IPS screens, quad-core processors, and decent (though not high-end) GPUs. Priced out of my preferred machine configuration of tower-plus-dual-monitors, I decided to buy a 27″ quad-core iMac, a machine which proceeded to validate all of my long-held negative opinions regarding all-in-one computers, while shattering my long-suffering brand loyalty towards Apple.

A few months after bringing home my new iMac, I noticed a problem. Dark smudges were appearing INSIDE the LCD. Since the machine was under a year old, I took it to the nearest Apple Store. I lost the machine for several days, thus illustrating my number 1 beef against all-in-ones — no user-serviceable components. If the monitor (or any other part) dies, you can’t just swap it out for another and keep working, you get to take the whole machine in for service, causing you to get behind on whatever project you’re working on at the time. Anyway, I got the machine back with a new LCD, and figured it was just a defective component.

Next, the optical drive quit. I was in the middle of working on a project, and so decided not to take it back to the Apple Store right away (I was forced to dig an old external USB CD-R drive out of my closet). Not long after this however, the LCD started dying again in the same manner. So, back to the Apple store I went, and after about a week I had my machine back, with another new LCD.  Since I was now nearing the end of my 1-year warranty, I purchased AppleCare for the first time in my Mac-using life.

At this point, I had a serious conversation with myself about why I was spending so much money on Apple products when the quality had clearly become sub-par. I also asked myself what I could do on a Mac that I couldn’t do on a Linux box, and the answer was: not much. Maybe Photoshop, but that can be run on Windows, either via WINE or a Windows install on a virtual machine (or if the performance wasn’t good enough with the aforementioned options, I could always install both Linux and Windows on one machine and dual-boot).

For the time being, however, I had a once-again operational Mac so the issue wasn’t pressing. Then, it happened again. LCD number THREE bit the dust. Same failure. I walked into the Apple Store and had to fight back the urge to hurl the thing in the general direction of the “Genius” Bar and walk out. But alas, I just walked up to the counter, plopped my POS iMac down, and demanded a full-replacement. The “Genius” behind the counter gave me a quick deer-in-the-headlights look, then went to fetch the store manager.

I explained to the manager that my iMac had had its LCD replaced TWICE, it’s clearly an unreliable machine, and I need to get a new one. He then suggested that the damage was due to environmental factors and asked me if I smoke. I explained that I don’t smoke, that I use my computers in a suburban home office setting (with a few non-Apple LCDs that have never had a problem), and that no, I can’t set up a class-1 cleanroom in my house in which to use my Apple products.

After some back-and-forth, he told me that they can’t replace the machine, but they’ll happily replace the LCD again. I then told the “Genius” that after 10 years as a loyal Mac user, I would be moving my work flow to Linux, and that this iMac would be the last Mac I ever bought. But, as a matter of principle, I’m going to bring the machine back every few months for another $700 LCD replacement since I had purchased AppleCare. I left, went directly to a PC parts shop, and bought all the components I needed to build a nice Linux workstation.

I’m pretty sure Apple doesn’t care about users like me, as they seem to have turned their focus toward producing products for the consumer market while systematically dismantling their professional user base. The recent Final Cut Pro X introduction fiasco is a good example of this. There are even rumors circulating that Apple is considering killing the Mac Pro — they already killed the XServe (without warning), basically exiting the enterprise market and screwing over whatever corporate customers they had. The uncertainty over Apple’s future plans is driving away a lot of professional users. It’s kinda sad to watch, but I guess it won’t be my problem anymore.

Digital Download Sales with PayPal IPN, Rails, and Refinery CMS

While building an online store for one of my past projects using Rails and the Refinery CMS, I was presented with an interesting problem. I had a requirement to allow customers to purchase and download individual music tracks directly from an online store that also sells physical goods. On its surface, this sort of thing does not seem much different from selling tangible goods via the web: User selects items, places them in his/her cart, and checks out. After payment processing, the only thing different in the case of digital downloads would be in actually fulfilling the order.

The problem is not in getting the purchased tracks to the customer. It’s easy enough to email a link to the purchased files. However, a permanent link to your digital downloads would allow anyone to grab the files you’re trying to sell, for free. So, clearly a better solution is needed that will deny everyone but paying customers access to downloadable products.

A thought briefly occurred to me to just email the purchased tracks to the customer after a purchase was made. But that’s head-smackingly stupid for a number of reasons, not the least of which is the fact that many email providers place a size limit on attachments that would prevent this solution from working. Plus, what if a customer loses his emailed copy and wants to get the files he paid for a second time? Sure, you could tell him/her to just buy them again, but that’s not the kind of customer service that will engender loyalty and/or referrals.

A better solution would be to provide each customer with an access code that will allow him/her to access a download page containing all of the files purchased as part of a particular order. The Refinery CMS does a decent job of preventing unauthorized access to resources (our downloadable files, for example) that aren’t meant to be public, so a good amount of the work has already been done.  However, one requirement of this project introduced another problem: PayPal was being used for order processing. The issue with using a third-party payment processor like PayPal is that you can’t know for sure if a customer’s payment was processed successfully after being forwarded from the main store.

Fortunately, PayPal provides a service it calls Instant Payment Notification (IPN). Basically, it works like this:
1.) A customer is forwarded to PayPal from your website to complete his purchase.
2.) PayPal processes the payment, and then sends a notification to a listener you’ve set up on your site
3.) The listener verifies that the payment notification is valid by sending it, unaltered, to PayPal
4.) If the notification is legit, PayPal sends back a final, one-word response, either INVALID or VERIFIED.

The IPN also contains all the information forwarded to PayPal when transferring the user from your site, along with some details provided to PayPal, such as the customer’s email address.  So, if you set up your store correctly you should receive everything you need to fulfill the order.

So, armed with this information, the final process for digital download sales is as follows:

1.) Customer purchases files via PayPal
2.) PayPal sends an IPN to the website’s listener
3.) The listener verifies the IPN.
4.) If verified, the listener generates an access code and sends an email to the customer containing a link to a download page.

Setting up a listener is easy if you use ActiveMerchant, as it includes built-in support for PayPal. Here’s an example using a simple five-digit number for an access code (not the most secure thing in the world; you might want to use a longer string drawn from the full gamut of alphanumeric characters). We hash the access code using MD5 and store the hash in the database.  Here’s an example controller for a PaymentNotification model:

class PaymentNotificationsController < ApplicationController
  protect_from_forgery :except => [:create]

  def create
    notify = Paypal::Notification.new(request.raw_post)
    Rails.logger.info "In create"
    #Verify IPN with PayPal
    if notify.acknowledge
      Rails.logger.info "Acknowledge"
        if notify.complete?
          Rails.logger.info "payment complete"
          notification = PaymentNotification.create!(:params => params, 
            :cart_id => params[:invoice], :status => params[:payment_status], 
            :transaction_id => params[:txn_id])
          Cart.find(params[:invoice]).update_attribute(:purchased_at, 
            Time.now)
          #Generate access code
          random_number_generator = Random.new
          random = random_number_generator.rand(10000...99999)
          hashed_random = Digest::MD5.hexdigest(random.to_s)
          #store access code in database
          notification.update_attribute(:access_code, hashed_random)
          #Send Email notification
          OrderNotifier.received(notification, random).deliver
        else
          Rails.logger.error("Failed to verify Paypal's notification")
        end
    end
    render :nothing => true
  end
end

And if you’re curious about the PaymentNotification model itself, it’s pretty simple:


class PaymentNotification < ActiveRecord::Base
  validates_presence_of :transaction_id, :on => :create
  validates_presence_of :cart_id, :on => :create
  validates_uniqueness_of :transaction_id
  validates_uniqueness_of :cart_id

  serialize :params
end

The data we need is pulled from the IPN itself (contained in :params). The cart_id is the id number of a cart in the site’s database that contains the customer’s order. This was forwarded to PayPal when the customer checked out and is returned as params[:invoice]. The :transaction_id (returned as params[:txn_id]) is a unique identifier assigned to the order by PayPal.

OrderNotifier is just a child of ActionMailer, and it’s used to send the transaction_id and access code to the customer:

class OrderNotifier < ActionMailer::Base

  def received(notification, random)
    @notification = notification
    @code = random
    @greeting = "Hello"
    @cart = Cart.find(@notification.cart_id)
    mail :to => @notification.params[:payer_email]
    #mail :to => "nobody@nowhere.com"
  end
end

The email sent to the customer would contain a section that goes something like this:

To get your files, please visit the following link and enter your transaction ID and access code:
  http://www.example.com/carts/
  Transaction ID : 
  Access Code    :

When the customer visits the provided link, he’s presented with a form requesting the transaction ID and access code:


Entering the correct information would then present the user with a page containing links to the downloads he ordered. There are numerous possible variations on this basic system. For example, with a few small modifications one could easily limit the number of times a user could download his/her files to, say, five. There are also third-party solutions available to manage digital download sales, though I haven’t tried any yet.

Two Weeks with a Standing Desk

If many of the recent studies on the subject are to be believed, sitting all day is horrible for you. Regardless of how much we may exercise when we aren’t chained to our desks, sitting for long periods places us at greater risk for various cancers, obesity, and diabetes, not to mention neck and back problems.

Unfortunately for most of us, we don’t have much of a choice in the matter. I’m currently working a full-time job that requires me to sit in a veal-fattening pen all day writing code. I also spend several hours a day at home, seated in front of my iMac, working on web development projects. I do my best to get up and move around as much as possible, but I’m definitely spending several hours a day more than I should with my butt parked in a chair.

There isn’t much I can do about my work environment at the office (clearly, The Man is trying to kill me). At home, however, I figure I can do whatever I want (subject to the approval of my wife), so after reading about the new ‘standing desk’ fad in Silicon Valley, I decided to be Mr. Trendy and try it for myself.

My first thought was to price a few desks online, where I discovered much to my dismay that standing desks have a way of costing two to three times more than regular old sitting desks. I’m sure this is partly due to the new-found popularity of standing desks and partly due to the fact that standing desks either need to be customized to the height of the intended user (For some reason, human beings can vary quite a bit in the height department), or be made ‘adjustable’, which surely adds to manufacturing costs. I did find several good adjustable-height desks online, but the prices pretty much eliminated them from consideration.

Undeterred, I pulled out a piece of notebook paper and began sketching a design for a standing desk I could build myself. I took a few measurements, and decided on a height of 42″ for my desk (about an inch below my elbows). Then I realized another issue: While 42″ may be a good height for my keyboard and mouse, it would definitely be too low for my monitor.

According to OSHA’s ergonomic guidelines, the top of a monitor should be about at eye level. Personally, I can’t STAND that, but I’m a bit odd in that I like my monitors to be set high. At the office, my monitor is set on top of a stack of books, and I still don’t think I have it high enough. I unconsciously tend to slouch in my chair until the center of the screen is at or slightly above eye level. It’s possible this is due to the fact that I spent the formative years of my computing experience as a child sitting in an adult-sized chair at an adult-sized desk, peering up at a tiny Apple monochrome monitor that sat perched atop an Apple ][ and a set of chunky Disk ][ drives. I chose not to fight my subconscious conditioning and decided to build a split-level desk, with my 27-inch iMac sitting on a second platform about 6 inches above the keyboard/mouse platform.

So, sketches and measurements in hand, I headed over to Lowe’s to buy supplies. The legs of the table consisted of four 48″ 2x4s and two 42″ 2x4s. I decided on a width of 30 inches and a depth of 24, so I bought a 72×12 piece of pine board, cutting it into two 30×12 pieces to serve as the desktops for my two platforms. For bracing, I bought a few 1x3s, cutting two 30″ pieces to brace the back, and four 24″ pieces (two for each side) to brace the sides. To support the back of the keyboard platform, I bought two metal L-brackets, attaching one side of each bracket to the front legs of the monitor platform, and the other side to the bottom of the pine board serving as the keyboard platform. I also bought a box of 2-inch wood screws, and some half-inch ones for attaching the L-brackets.
After putting it all together, the result was better than I expected: kinda ugly, but sturdy.

I’ve got this set up in the little home office corner of my bedroom (my wife forbade me to put in the living room), and have been using it constantly for the last two weeks. I’ll admit it took a lot of getting used to, as after a few hours my feet started to burn and my calves started to ache. The discomfort faded as the weeks went by and now, I can say, I much prefer this to sitting in front of a desk for hours on end. Standing helps keep me alert and focused on the task at hand, and when I get into a flow, pounding out code or intently focused on a design, I hardly notice that I’ve been standing for a long period of time. I do, however, have a bar stool that I occasionally use when I’m too tired to stand after a hard leg workout.


So, my recommendation: Get a standing desk. If you have the money to spend and are concerned about aesthetics, go out and buy one. If you’re cheap like me, build one; at least you’ll be able to try out the experience at a low cost and decide if you want to shell out the big money for a nicer desk later.