Testing Stripe transactions with Scala

I’m nearing completion on my latest project, Training Sleuth, and I’ve once again decided to use Stripe as my payment processor.  I used Stripe on my previous project, Rhino SchoolTracker, and have absolutely nothing negative to say about it :).

With Rhino SchoolTracker I tested my Stripe payments manually, but this time around I decided to write some automated integration tests (particularly since I’m using Scala for this project, which has a decidedly slower and more unwieldy edit-compile-test loop than Ruby does).

The Stripe API is actually split into two parts: a client-side JavaScript library (stripe.js) and a server-side API available in several popular server-side languages (PHP, Java, Ruby, Python, and JavaScript for Node.js, last time I checked).  Anyway, the basic concept goes like this:

  1. You serve an HTML payment form (with fields for CC number, etc.) from your server to the client.
  2. When the client submits the form, instead of sending it to your server, you use stripe.js to grab the form data and send it to Stripe’s servers via an AJAX request; Stripe validates the card and returns a unique token (or an error message for invalid/expired credit cards, etc.).
  3. Once you have the stripe card token, you send it up to your server, do whatever processing you need to do on your end (grant access to your site, record an order, etc.), and then submit a charge to Stripe using the Stripe API.

The key feature of all of this is that the user’s credit card information never touches your server, so you don’t need to worry about PCI compliance and all the headaches that go with it (yes, Stripe does require you to use SSL, and despite their best efforts it is still possible to misconfigure your server in a way that exposes user payment info if you don’t know what you’re doing).
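
Step 3 above is the only place the server-side library comes into play. As a rough sketch (not from my actual codebase; the amount, description, and key placeholder are made up), charging the token with the Stripe Java library from Scala might look something like this:

import com.stripe.Stripe
import com.stripe.model.Charge
import scala.collection.JavaConverters._

object StripeCharger {
  //test-mode secret key (the publishable key stays on the client with stripe.js)
  Stripe.apiKey = "sk_test_PUT-YOUR-STRIPE-SECRET-TEST-KEY-HERE"

  //Charge a card token returned by stripe.js. Amount is in cents.
  def charge(cardToken: String, amountInCents: Int): Charge = {
    val params = Map[String, AnyRef](
      "amount"      -> Int.box(amountInCents),
      "currency"    -> "usd",
      "card"        -> cardToken,
      "description" -> "Training Sleuth subscription"
    )
    Charge.create(params.asJava)
  }
}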

Now Stripe offers a test mode, which is what we’ll be using here, with a variety of test card numbers to simulate various conditions (successful charge, declined card, expired card, etc.).  The main problem I ran into writing automated tests in Scala was that I needed to use stripe.js to generate a card token before I could interact with the server-side (Java) API.

Enter Rhino, a JavaScript interpreter for Java.  Using Rhino, I was able to whip up some quick-and-dirty JavaScript to generate a Stripe token and call it from Scala.  Of course, Rhino alone wasn’t enough; I also needed to bring in Envjs and create some basic HTML to simulate a browser environment for stripe.js.

First, here’s my stripetest.js:

Packages.org.mozilla.javascript.Context.getCurrentContext().setOptimizationLevel(-1);
load("resources/env.rhino.js");
load("https://js.stripe.com/v2/");
//Stripe REALLY wants some sort of HTML loaded, so here you go:
window.location = "resources/stripetest.html";
Stripe.setPublishableKey('pk_test_PUT-YOUR-STRIPE-TEST-KEY-HERE');
//cardNumber is injected from the Scala test (via global.put) before this script runs
var cardNumber;
//the createToken callback writes the returned token id here so the Scala test can read it
var token = "";
Stripe.card.createToken({
    number: cardNumber,
    cvc: '123',
    exp_month: '12',
    exp_year: '2016'
}, function(status, response){
    this.token = response['id'];
});

You also need to provide some basic HTML. I created a file called ‘stripetest.html’ that contains nothing more than this:

<!DOCTYPE html>
<html>
<head>
    <title></title>
</head>
<body>
</body>
</html>

Simple, but this was enough to get things working.

I dropped these files (along with env.rhino.js which I obtained from the Envjs website) into my test/resources folder.

With all of that in place, I was able to write some specs2 tests:

import org.mozilla.javascript.{Context, ContextFactory}
import org.mozilla.javascript.tools.shell.{Main, Global}
import org.specs2.mutable._

class SubscriptionServiceSpec extends Specification {
  //Get Stripe tokens via stripe.js running under Rhino
  val cx: Context = ContextFactory.getGlobal.enterContext()
  cx.setOptimizationLevel(-1)
  cx.setLanguageVersion(Context.VERSION_1_5)
  val global: Global = Main.getGlobal
  global.init(cx)

  "'SubscriptionServiceActor'" should {
    "handle Create Subscription Request with bad credit card" in {
      //stripe test pattern to simulate a bad card number
      global.put("cardNumber", global, "4000000000000002")
      Main.processSource(cx, "resources/stripetest.js")
      val badToken: String = global.get("token", global).toString
      //Now test and make sure you handle a bad credit card correctly
    }
    "handle Create Subscription Request with valid credit card" in {
      //stripe test pattern to simulate a valid credit card number
      global.put("cardNumber", global, "4242424242424242")
      Main.processSource(cx, "resources/stripetest.js")
      val stripeToken: String = global.get("token", global).toString
      //Now test that you can actually take money from customers 🙂
    }
  }
}

There you go, kind of painful to get set up, but definitely nice to have.

Graham’s Scan in Scala

Sometimes my job throws an interesting problem my way.  This week I was presented with a very odd geometry problem 🙂

I needed to generate KML files from geographic data, and one of my requirements was to represent certain geographic areas as polygons, the vertices of which would be supplied (along with the rest of the data) by another OSGi service running elsewhere.  Seems fairly straightforward.  In KML, the vertices of a polygon are usually specified as follows:

<Placemark>
  <name>LinearRing.kml</name>
  <Polygon>
    <outerBoundaryIs>
      <LinearRing>
        <coordinates>
          -122.365662,37.826988,0
          -122.365202,37.826302,0
          -122.364581,37.82655,0
          -122.365038,37.827237,0
          -122.365662,37.826988,0
        </coordinates>
      </LinearRing>
    </outerBoundaryIs>
  </Polygon>
</Placemark>

The coordinates tag requires at least four longitude/latitude/altitude triples to be specified, with the last coordinate being the same as the first.  Here is where the problem comes in: the order in which these coordinates are specified matters (they must be listed in counter-clockwise order).  Mixing up the order of the coordinates would produce unpredictable results (e.g., crazy geometric shapes) when the data is later displayed via Google Earth (or some other application that supports KML files).  However, the area vertices are indeed fed to my KML generator in no particular order (and the services providing the data cannot be changed to guarantee a particular ordering).

So… how do I put the points in order?  “Surely this is a solved problem,” I thought, turning to the all-knowing internet.  A bit of searching turned up an algorithm called Graham’s Scan.  Basically, this algorithm takes a bag of random coordinates and generates a convex hull with vertices defined in counter-clockwise order (note: this may not be suitable if you’re trying to faithfully recreate complex geometries; fortunately, I’m mostly concerned with rectangular areas).  Roughly, the algorithm works as follows:

  1. Find the coordinate with the lowest y-value.
  2. Sort the remaining points by polar angle, i.e., the angle between the x-axis and the line from the point found in step 1 to the point in question.
  3. Go through the sorted list, evaluating three points at a time (you’re concerned with the angle of the turn made by the two resulting line segments), and eliminate any non-counter-clockwise turns (i.e., non-convex corners).

I found several example implementations of this algorithm in various languages: C++, Java, etc.  Since I’m coding this up in Scala, I wasn’t too happy with any of those, and I couldn’t find an example in Scala to rip off (er, draw inspiration from).  However, I did manage to find an implementation in Haskell which I used as a rough guide.  Anyway, here’s my attempt at Graham’s Scan in Scala:

import scala.math.atan2

type Coordinate = (Double, Double) //Longitude, Latitude

protected def processArea(coords: List[Coordinate]): List[Coordinate] = {
  //returns > 0 if the points form a counter-clockwise turn,
  // < 0 if clockwise, and 0 if collinear
  def ccw(p1: Coordinate, p2: Coordinate, p3: Coordinate) =
    (p2._1 - p1._1) * (p3._2 - p1._2) - (p2._2 - p1._2) * (p3._1 - p1._1)

  //Scan the list of coordinates and keep only the convex-hull vertices
  def scan(theCoords: List[Coordinate]): List[Coordinate] = theCoords match {
    case xs if xs.size < 3 => xs
    case x :: y :: z :: xs if ccw(x, y, z) > 0 => x :: scan(y :: z :: xs)
    case x :: y :: z :: xs => scan(x :: z :: xs)
  }
  //find the coordinate with the lowest latitude
  val origin = coords.minBy(_._2)
  //sort the rest of the points by their polar angle (the angle between the line
  //defined by the origin and the current point, and the x-axis)
  val coordList = origin :: coords.filterNot(_ == origin).
    sortBy(point => atan2(point._2 - origin._2, point._1 - origin._1))
  //do the Graham scan
  scan(coordList)
}

I think you’ll find that’s a bit shorter than some of the imperative language implementations out there 🙂
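
To tie this back to the KML requirement, a small hypothetical helper (not part of the code above, reusing the Coordinate alias) could turn the hull into the contents of a <coordinates> tag, closing the ring by repeating the first vertex and assuming an altitude of 0:

def toKmlCoordinates(hull: List[Coordinate]): String = {
  //KML wants the last coordinate to repeat the first, so close the ring
  val ring = hull match {
    case head :: _ => hull :+ head
    case Nil       => Nil
  }
  //each vertex becomes a longitude,latitude,altitude triple
  ring.map { case (lon, lat) => s"$lon,$lat,0" }.mkString("\n")
}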

Akka and Scalatra

On my current project, I’ve been using Akka to handle the service layer of my application while using Scalatra for my REST controllers.  This combination works quite well, though it took me a little bit of time to figure out how to integrate Scalatra and Akka.  The examples presented on the Scalatra website didn’t exactly work for me (it’s possible they’ve since been fixed), but after some study of the Akka and Scalatra API documentation and some good ol’ fashioned trial and error, I got to something that worked.  First, Akka actors are set up and initialized in ScalatraBootstrap.scala thusly:

class ScalatraBootstrap extends LifeCycle with DatabaseInit {
  //initialize the Actor System
  val system = ActorSystem(actorSystemName)
  //initialize Service Actors
  val userServiceActor = system.actorOf(Props[UserServiceActor].withRouter(
    SmallestMailboxRouter(nrOfInstances = 10)), "userRouter")

  override def init(context: ServletContext) {

    //mount REST controllers
    context.mount(new UsersController(system, userServiceActor), usersPath)

  }

  override def destroy(context: ServletContext) {
    system.shutdown() // shut down the actor system
  }
}

I’m initializing each actor with a router (in this case a SmallestMailboxRouter, though others, such as a RoundRobinRouter, are also available).  The router will create up to 10 child actors and route each incoming message to the actor with the fewest messages in its mailbox.
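
For instance, swapping in a round-robin strategy would just mean changing the router (this assumes the same classic Akka 2.x router API used above):

//same idea, but messages are handed to the 10 child actors in rotation
val roundRobinUserServiceActor = system.actorOf(Props[UserServiceActor].withRouter(
  RoundRobinRouter(nrOfInstances = 10)), "userRouterRR")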

The Scalatra controller responds to a request for a resource by sending a message to the appropriate actor (I’m using one Actor type per resource) using a Future and returning the result.  Scalatra provides an AsyncResult construct that helps here:

  /** Get a specific user's information
    * User-id specified by id
    */
  get("/:id", operation(getUser)){
    basicAuthWithCustomerCheck() match {
      case None => //do nothing
      case Some(user) =>
        new AsyncResult{
          val is: Future[_] =
            ask(userServiceActor, new GetUserByIdMessage(user,
              params("id").toLong))(timeout.toMillis).
              mapTo[Either[(Int, String), UserDto]] map {
              case Right(userDto) => userDto
              case Left((errorCode, msg)) => response.sendError(errorCode, msg)
            }
        }
    }
  }
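
For reference, the route above assumes some plumbing in the controller class itself: Scalatra’s FutureSupport trait (for AsyncResult), an ExecutionContext, and a timeout for the ask.  A hedged sketch of that declaration (the 10-second value is arbitrary) might look like this:

import akka.actor.{ActorRef, ActorSystem}
import akka.pattern.ask
import org.scalatra.{FutureSupport, ScalatraServlet}
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

class UsersController(system: ActorSystem, userServiceActor: ActorRef)
  extends ScalatraServlet with FutureSupport {

  //FutureSupport needs an ExecutionContext for AsyncResult and the Future
  //combinators; reusing the actor system's dispatcher is the usual choice
  protected implicit def executor: ExecutionContext = system.dispatcher

  //how long we're willing to wait for the service actor to reply; the route
  //above passes this to ask() via timeout.toMillis
  val timeout: FiniteDuration = 10.seconds

  //routes such as the get("/:id") shown above go here
}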

My actor here happens to return an ‘Either’ type in response to a request.  By convention, a ‘Left’ response indicates an error condition (in this case a tuple containing the HTTP error code to return and a message), and a ‘Right’ response indicates success and contains the requested data (a ‘UserDto’ object here). The actor itself looks like this:

case class GetUserByIdMessage(user: User, id: Long)

class UserServiceActor extends Actor{

  def receive = {
    case getUserByIdMessage: GetUserByIdMessage =>
      sender ! handleGetUserByIdMessage(getUserByIdMessage)
  }

  def handleGetUserByIdMessage(getUserByIdMessage: GetUserByIdMessage):
  Either[(Int, String), UserDto] = {
    //Process request.  Return an (error code, message) tuple on failure, and the
    //data on success  
  }
}

The message types are implemented as case classes, and enter the actor in the ‘receive’ method, which passes each message to a handler and returns the result to the message’s ‘sender’ (the controller).

Getting Scala, HATEOAS, and JSON to work together

I’ve been working with Scala for the last few months on a new project, and I’ll confess that it’s starting to grow on me (this is in stark contrast to Java, which I’m liking less the more I learn about it).

My current project has me creating a REST API using Scalatra along with a front-end built with CoffeeScript and Backbone.js.  This definitely has a different feel to it than a typical web application built using one of the uber-frameworks like Rails.  The lack of tight integration between back-end and front-end has its advantages, but it also introduces a few issues that must be sorted out.  One of these issues, which I’ve recently happened upon, involves controlling how a user may interact with resources on the server based on his or her access level (or ‘role’).  For example, if I have a database table called ‘people’, with each row holding a record for a member of an organization, I probably want to control who can do what with those records.  Perhaps standard users are only allowed to view these people records, managers are allowed to edit them, and administrators may delete or create new records.

This is a trivial problem with a traditional web app, but in the case of a REST API, consider this:  I request a list of people records from my server by issuing a GET request to http://myserver.com/api/persons.  The server checks my credentials, and returns a list of 20 records of people in, say, the accounting department.  The client (whether it be a web app, mobile app, etc.) renders a nice, spiffy table full of people records.  The client interface also has several buttons that allow me to manipulate the data.  Buttons with such labels as ‘View Record’, ‘Edit Record’, ‘Delete Record’, etc.

Now we have an issue.  Let’s say I’m the manager of 6 people in the accounting department, but the other 14 belong to other managers.  It has been decided that managers should be able to view the records of other personnel in the organization, but should only be able to edit the records of their own people.  Further, only administrators (let’s say HR folks) can delete a record. No problem, you might say: just have the server check the user’s role for a given person record before executing a request to update or delete it.  We can make this easy by adding a ‘manager_id’ field to the ‘person’ table identifying each person’s manager.

Of course, that would work fine.  The problem, however, is not in the ‘correctness’ of the application, but in the user-friendliness of the client interface.  The client has no way of knowing your permissions with regard to each person record, so it displays buttons for every possible action on each and every one, relying on the server to sort things out on the back-end and return an error if you try to do something illegal.  It would be better if we could have the server send down a list of actions the authenticated user is allowed to take for each record; then we could simply not display (or grey out) the related interface elements (buttons, drop-down items, etc.) for non-permitted actions, giving the user an instant visual cue about what he’s allowed to do.  While we’re at it, why not send down a link to the REST call for each of the allowed actions as well?

This is where HATEOAS (Hypermedia As The Engine Of Application State) comes in.  For a more thorough explanation, go to the Wikipedia page.  Basically, a HATEOAS-compliant REST service requires the server to send, along with the resource data itself, a list of actions (and links) that may be performed on or with that resource.  It’s probably easiest to explain via example.

First, here’s a plain JSON object returned from a non-HATEOAS compliant service:

{
  "id":35,
  "employeeId":"7",
  "lastName":"NewGuy",
  "firstName":"Steve",
  "middleName":"",
  "email":"steve@acme.com",
  "title":"Clerk",
  "hireDate":"01/02/2013",
  "dateOfBirth":"01/01/1980"
}

Just a bag of data, with no information about what I should do with it or what I’m allowed to do with it. Well, how about this:

{
  "_links": 
  { 
    "self": {"href":"/api/persons/35","method":"GET"},
    "update":{"href":"/api/persons/35","method":"PUT"},
    "delete":{"href":"/api/persons/35","method":"DELETE"}
  },
  "id":35,
  "employeeId":"7",
  "lastName":"NewGuy",
  "firstName":"Steve",
  "middleName":"",
  "email":"steve@acme.com",
  "title":"Clerk",
  "hireDate":"01/02/2013",
  "dateOfBirth":"01/01/1980"
}

The _links section of this object tells me that I’m allowed to update AND delete this record, and it provides links to the REST calls necessary to perform those actions. It also includes a link to itself. By the way, there are several “standard” formats out there for returning these links; I’m attempting to follow HAL. For more fun, you could also include the MIME type for the data that each action would return (JSON, HTML, PDF, whatever).

The concept is rather simple, and it definitely beats the hackish ideas I initially had for solving this issue. However, and this could just be my relative newness to Scala, it did take a bit of effort to figure out how to get the server to spit out correctly formatted JSON for the HAL links (I didn’t want my _links section to be sent as an array, for example, or serialized in any of the myriad other ways the Jackson default serializer tried before I sorted it out). I eventually came up with something like this (ok, exactly this):

//package object full o' utility functions for creating some HAL-style HATEOAS links
package object Hateoas{
  //could add an additional field specifying MIME-type, for example
  case class Link(href: String, method: String)
  type HateoasLinks = Map[String, Link]
  //case class for a response containing a Collection of items
  case class ListResponse(_links: HateoasLinks, _embedded: Map[String, List[Any]])
  object HateoasLinkFactory{
    //could (should) add a function for generating a "custom" action link
    def createSelfLink(uri: String) = {
      ("self" -> new Link(uri, "GET")) 
    }
    //create Create!
    def createCreateLink(uri: String) = {
      ("create" -> new Link(uri, "POST")) 
    }

    def createUpdateLink(uri: String) = {
      ("update" -> new Link(uri, "PUT"))
    }

    def createDeleteLink(uri: String) = {
      ("delete" -> new Link(uri, "DELETE"))
    }
  }
}

I use this code to generate each object’s _link section before pushing it down to the client. It’s not by any means a fully-realized HAL implementation, but it solves my main issue for now, and I can easily add more functionality as needed.
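
As a rough illustration of how I might wire this up (the permission flags here are hypothetical), building the _links map for a single person record could look something like this:

import Hateoas._

//assemble the HAL-style links for a single person record, based on what the
//authenticated user is allowed to do with it
def linksFor(personId: Long, canEdit: Boolean, canDelete: Boolean): HateoasLinks = {
  val uri = s"/api/persons/$personId"
  val self: HateoasLinks = Map(HateoasLinkFactory.createSelfLink(uri))
  val withUpdate = if (canEdit) self + HateoasLinkFactory.createUpdateLink(uri) else self
  if (canDelete) withUpdate + HateoasLinkFactory.createDeleteLink(uri) else withUpdate
}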

Scala and Scalatra

I’ve been using Ruby on Rails almost exclusively for my web projects over the last year or two. Recently, when I had an idea for a new project, I decided to try something a little different.

My current Rails project, Rhino SchoolTracker, is a traditional CRUD-type web application that is fairly well suited to the Rails way of doing things. For this new project, however, I wanted to completely decouple my server side code from my front-end web application.

My idea is to create a simple REST API for the back-end services, and build the web UI using Backbone and Bootstrap. This also has the benefit of providing significant flexibility for possible mobile clients later. For the server side stuff, I could have turned to Rails again, but that seemed like overkill when I would only be using a subset of its features.

I stumbled upon Scala while researching alternative server-side languages. While I would never use Java if I had a choice in the matter, the idea behind Scala is a good one: fix the basic problems with Java (the language) and add functional programming support, all while retaining compatibility with the vast Java ecosystem and the ability to run on the mature (mostly, after all these years/decades) JVM. It should also be significantly faster and scale better than anything written in interpreted languages like Ruby or Python.

Scalatra

Scala has a number of web frameworks available to it.  Lift and Play are probably the most popular.  However, I wanted something lightweight, so I looked and found a minimalistic framework called Scalatra, which attempts to mimic the excellent Sinatra framework over in Ruby-land.  So, I decided to give it a shot.

Scalatra relies on the Simple Build Tool (sbt), and setting up a new project is fairly simple using a giter8 (g8) template:

g8 scalatra/scalatra-sbt

Firing up the build system is not difficult either; just execute the following in the project root directory:

./sbt

I’m using IntelliJ IDEA for my development environment, and it just so happens there’s a helper plugin for sbt called gen-idea that generates all of the proper project files. I believe there is a similar plugin for Eclipse users, if you’re one of those people.
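
If you go that route, the gen-idea plugin is wired in with a one-liner in your sbt plugins file; the version shown below is just illustrative, so check for the current one:

//project/plugins.sbt (or the global ~/.sbt/plugins/ equivalent)
addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.5.2")

After that, running gen-idea from the sbt prompt generates the IntelliJ project files.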

Adding dependencies to the project is surprisingly easy compared to, say, Maven or Ivy.  And when I say easy, I mean NO XML.  To add support for my database and JSON, for example, I add the following lines to my project’s build.scala file:

"org.scalatra" %% "scalatra-json" % "2.2.1",
"org.json4s"   %% "json4s-jackson" % "3.2.4",
"org.json4s"   %% "json4s-ext"     % "3.2.4",
"org.squeryl"  %%  "squeryl" % "0.9.5-6",
"postgresql"   % "postgresql" % "9.1-901.jdbc4",
"c3p0"         % "c3p0" % "0.9.1.2",

Squeryl is an ORM for Scala.  It’s not quite as easy to work with as ActiveRecord, but at least it’s not Hibernate.  c3p0 handles connection pooling.
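
In case it isn’t obvious where those lines go: they sit inside the libraryDependencies sequence in project/build.scala, roughly like this (the surrounding project settings come from the g8 template and are omitted here):

//in project/build.scala, inside the project's settings (g8 template layout)
libraryDependencies ++= Seq(
  //...plus the scalatra, jetty, and logging entries the template already gives you...
  "org.scalatra" %% "scalatra-json" % "2.2.1",
  "org.json4s"   %% "json4s-jackson" % "3.2.4",
  "org.json4s"   %% "json4s-ext"     % "3.2.4",
  "org.squeryl"  %%  "squeryl" % "0.9.5-6",
  "postgresql"   % "postgresql" % "9.1-901.jdbc4",
  "c3p0"         % "c3p0" % "0.9.1.2"
)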

Scalatra Routes

Scalatra handles routes much like Sinatra. Pretty easy, actually. Here’s a simple controller for a hypothetical record called “Person”:

import org.scalatra._
import org.json4s.{DefaultFormats, Formats}
import com.caffeinatedrhino.db.DatabaseSessionSupport
import com.caffeinatedrhino.testproj.models.Person
import org.scalatra.json.JacksonJsonSupport
import org.json4s.JsonAST.JValue

class PersonsController extends ScalatraServlet with DatabaseSessionSupport with JacksonJsonSupport {

  protected implicit val jsonFormats: Formats = DefaultFormats

  before() {
    contentType = formats("json")
  }

  get("/") {
    Person.allPersons
  }

}

What does it do? All requests to “/” (the servlet’s root, not necessarily the web root) result in a request to our Person model for all of the “Person” objects in the database. One thing that may not be obvious is that the response is sent as JSON: the before() filter automagically runs before every request, setting the output type for each controller action to JSON. To enable this we have to mix in JacksonJsonSupport (it’s a Scala trait) and tell json4s which formats we want it to use for serialization by setting that implicit variable (jsonFormats).

If you’re wondering how we register all of our servlets (i.e., controllers), Scalatra projects have a single ‘ScalatraBootstrap.scala’ file that goes something like this:

import com.caffeinatedrhino.testproj.controllers.PersonsController
import org.scalatra._
import javax.servlet.ServletContext
import com.caffeinatedrhino.db.DatabaseInit

class ScalatraBootstrap extends LifeCycle with DatabaseInit {
  override def init(context: ServletContext) {
    configureDb()
    context.mount(new PersonsController, "/persons")
  }

  override def destroy(context: ServletContext) {
    closeDbConnection()
  }
}

Our Persons servlet is mounted at “/persons”, so a request to http://example.com/persons should result in retrieving our “Person” objects.

Database Support

In our ScalatraBootstrap class, you’ll also notice we call configureDb() in the init method (and a corresponding closeDbConnection() in the destroy method).  The application is stood up and torn down here, so this is the natural place to set up our database connection (and close it).  There’s a trait mixed into our ScalatraBootstrap class called DatabaseInit that provides these methods.  Here it is:

import org.slf4j.LoggerFactory
import java.util.Properties
import com.mchange.v2.c3p0.ComboPooledDataSource
import org.squeryl.adapters.PostgreSqlAdapter
import org.squeryl.Session
import org.squeryl.SessionFactory

trait DatabaseInit{

  val logger = LoggerFactory.getLogger(getClass)
  val cpds = new ComboPooledDataSource

  def configureDb() {
    val props = new Properties
    props.load(getClass.getResourceAsStream("/c3p0.properties"))
    cpds.setProperties(props)
    SessionFactory.concreteFactory = Some (() => connection)

    def connection = {
      logger.info("Creating connection with c3p0 connection pool")
      Session.create(cpds.getConnection, new PostgreSqlAdapter)
    }
    logger.info("Created c3p0 connection pool")
  }

  def closeDbConnection() {
    logger.info("Closing c3p0 connection pool")
    cpds.close
  }

}

The usual properties needed to connect to the database are stored in a separate c3p0.properties file:

c3p0.driverClass=org.postgresql.Driver
c3p0.jdbcUrl=jdbc:postgresql://localhost:5432/testdb
user=testuser
password=testpass
c3p0.minPoolSize=1
c3p0.acquireIncrement=1
c3p0.maxPoolSize=50

Easy enough, but what about the DatabaseSessionSupport trait that we mixed into the controller? Oh, here it is, lifted almost verbatim from the Scalatra documentation:

package com.caffeinatedrhino.db

import org.squeryl.Session
import org.squeryl.SessionFactory
import org.scalatra._

object DatabaseSessionSupport {
  val key = {
    val n = getClass.getName
    if (n.endsWith("$")) n.dropRight(1) else n
  }
}

trait DatabaseSessionSupport { this: ScalatraBase =>
  import DatabaseSessionSupport._

  def dbSession = request.get(key).orNull.asInstanceOf[Session]

  before() {
    request(key) = SessionFactory.newSession
    dbSession.bindToCurrentThread
  }

  after() {
    dbSession.close
    dbSession.unbindFromCurrentThread
  }

}

Finally, if you’re curious about our “Person” model, here it is:

package com.caffeinatedrhino.testproj.models

import com.caffeinatedrhino.db.DBRecord

import org.squeryl.PrimitiveTypeMode._
import org.squeryl.{Query, Schema}
import org.squeryl.annotations.Column

import java.sql.Timestamp

class Person(val id: Long,
             @Column("USER_ID") val userID: Long,
             @Column("LAST_NAME") var lastName: String,
             @Column("FIRST_NAME") var firstName: String,
             @Column("DATE_OF_BIRTH") var dateOfBirth: Timestamp,
             @Column("CREATED_AT") val createdAt: Timestamp,
             @Column("UPDATED_AT") var updatedAt: Timestamp) extends DBRecord{
  def this() = this(0, 0, "NO_NAME", "NO_NAME", new Timestamp(0), new Timestamp(0), new Timestamp(0))
}

/**
 * Kind of a cross between a Schema and a DAO really.  But I'll call it a Dao anyway
 * because it pleases me to do so.
 */
object PersonDao extends Schema {
  val persons = table[Person]("PERSONS")

  on(persons)(p => declare(
    p.id is(autoIncremented, primaryKey)
  ))
}

object Person{
  def create(person: Person): Boolean = {
    inTransaction {
      val result = PersonDao.persons.insert(person)
      result.isPersisted
    }
  }
  def allPersons = {
    from(PersonDao.persons)(p => select(p)).toList
  }
}

You’ll notice we’re using a Java type here, java.sql.Timestamp, as if it belonged in our Scala code.  Neat, eh?  You also might have noticed that we have both a class and a singleton object named ‘Person’ in the same source file.  In Scala, the object ‘Person’ is said to be the companion object of class ‘Person’.  A class and its companion object can access each other’s private members (and they must both be defined in the same source file).
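
A contrived little example (nothing to do with this project) to illustrate that last point:

class Counter(private val start: Int)

object Counter {
  //the companion object can see the class's private constructor parameter
  def startOf(c: Counter): Int = c.start
}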

Well, that’s enough code for one blog entry.  That wasn’t nearly as bad as I feared it would be.  I’ve definitely seen more convoluted ways of accomplishing much the same thing in other languages/frameworks (*cough* Java/Spring/Hibernate *cough*).  I’m enjoying Scala so far, hopefully it continues to grow on me.