The Ironism

The lair of Lars J. Nilsson. Contains random musings on beer, writing and this thing we call life.

July 2014


Slick with Quasar Actors


Following up on my little NIO framework Slick, I was curious how Quasar actors would perform.

Let’s imagine a poker table: up to 10 people are seated, and actions need to be coordinated between them. In reality it could be any game; it’s the coordination between multiple actors that is the interesting problem. This is what lies at the heart of our own Cubeia Firebase server.

Someone needs to be responsible for seating new connections; let’s call it a lobby actor. Each “table” should have a maximum of 10 connections, and when someone connects they should automatically be directed to an existing non-full table, or to a new one. After that we’ll keep it simple and have each table act as an echo server. Here then is the lobby actor:
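A sketch of what it looks like, using Quasar’s BasicActor and the RequestReplyHelper request / response pattern. The SeatRequest message, the MAX_SEATS constant and the “table-n” naming scheme are my own illustrative choices, not fixed by anything in Quasar:

```java
import co.paralleluniverse.actors.BasicActor;
import co.paralleluniverse.actors.behaviors.RequestMessage;
import co.paralleluniverse.actors.behaviors.RequestReplyHelper;
import co.paralleluniverse.fibers.SuspendExecution;

// Illustrative request message: "seat me somewhere"; the reply is the
// registered name of the table the connection was seated at.
class SeatRequest extends RequestMessage<String> { }

class LobbyActor extends BasicActor<SeatRequest, Void> {

    private static final int MAX_SEATS = 10;

    private int tableId = -1;        // id of the current table
    private int seated = MAX_SEATS;  // seats taken; "full" forces a new table

    LobbyActor() {
        super("lobby"); // the string name we register under
    }

    @Override
    protected Void doRun() throws InterruptedException, SuspendExecution {
        register(); // makes the actor reachable from anywhere in the JVM
        for (;;) {
            SeatRequest req = receive(); // block on the inbox
            if (seated == MAX_SEATS) {   // current table is full: open a new one
                tableId++;
                seated = 0;
                spawnTable("table-" + tableId);
            }
            seated++;
            // the request/response helper routes the answer back to the caller
            RequestReplyHelper.reply(req, "table-" + tableId);
        }
    }

    // Hook for spawning the table actor under the given name; kept as a
    // no-op here so the lobby sketch stands on its own.
    protected void spawnTable(String name) {
    }
}
```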

First we register the actor under a string name, which makes it possible to get a reference to it from anywhere in the JVM. Then we listen for messages on the actor’s inbox; in this case the messages are a special type Quasar provides for request / response patterns. The rest is trivial.

Here’s what the table looks like:
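Again a sketch rather than gospel; the EchoRequest message wrapping the raw bytes is an illustrative name of mine:

```java
import co.paralleluniverse.actors.BasicActor;
import co.paralleluniverse.actors.behaviors.RequestMessage;
import co.paralleluniverse.actors.behaviors.RequestReplyHelper;
import co.paralleluniverse.fibers.SuspendExecution;

import java.nio.ByteBuffer;

// Illustrative request message wrapping the incoming bytes.
class EchoRequest extends RequestMessage<ByteBuffer> {
    private final ByteBuffer payload;

    EchoRequest(ByteBuffer payload) {
        this.payload = payload;
    }

    ByteBuffer getPayload() {
        return payload;
    }
}

class TableActor extends BasicActor<EchoRequest, Void> {

    TableActor(String id) {
        super(id); // e.g. "table-0"; doubles as the registry name
    }

    @Override
    protected Void doRun() throws InterruptedException, SuspendExecution {
        register(); // register the table under its id
        for (;;) {
            EchoRequest req = receive();
            // a real game would broadcast to all seated players here;
            // we just echo the payload back to the sender
            RequestReplyHelper.reply(req, req.getPayload());
        }
    }
}
```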

This is just a simple echo server, but now as an actor (of course, if this was a real game, the table would have to broadcast to all seated players). Again we register the table under its ID, listen to the inbox and use the Quasar request / response helper to send back the answer.

Not too bloody hard, is it?

To connect this to the slick server we only need to write a pipe that forwards the byte buffers to the actor.
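Sketched here as a per-connection pipe that remembers which table the lobby assigned, with EchoRequest being the illustrative byte-buffer request from the table sketch:

```java
import co.paralleluniverse.actors.ActorRef;
import co.paralleluniverse.actors.ActorRegistry;
import co.paralleluniverse.actors.behaviors.RequestReplyHelper;
import co.paralleluniverse.fibers.SuspendExecution;

import java.nio.ByteBuffer;

// Illustrative pipe: one instance per connection, remembering which
// table the lobby seated us at.
class ActorPipe {

    private final String tableId; // handed out by the lobby on connect

    ActorPipe(String tableId) {
        this.tableId = tableId;
    }

    ByteBuffer handle(ByteBuffer msg) throws InterruptedException, SuspendExecution {
        // look up the table actor by its registered name...
        ActorRef<EchoRequest> table = ActorRegistry.getActor(tableId);
        // ...and do a blocking request/response round trip
        return RequestReplyHelper.call(table, new EchoRequest(msg));
    }
}
```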

The ActorRegistry is Quasar’s registry of all registered actors. OK, so static references aren’t that cool, as they are hard to unit test; if this was real production code you’d have to hide them behind factory interfaces to make it testable. Other than that? Piece of cake.

The server looks much like the last one.
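Leaving the Slick wiring aside, the actor side of the bootstrap is just spawning the lobby and asking it for a seat per connection. The class and message names here are illustrative:

```java
import co.paralleluniverse.actors.ActorRef;
import co.paralleluniverse.actors.behaviors.RequestReplyHelper;

class ServerMain {
    public static void main(String[] args) throws Exception {
        // spawn the lobby before accepting connections
        ActorRef<SeatRequest> lobby = new LobbyActor().spawn();

        // for each new connection the server asks the lobby for a seat...
        String tableId = RequestReplyHelper.call(lobby, new SeatRequest());

        // ...and attaches a pipe bound to that table to the connection
        System.out.println("connection seated at " + tableId);
    }
}
```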

And that’s it! Again I’m impressed: the APIs make Quasar really simple to use, and figuring out and writing this code took less than an hour. Neat!

How does it perform then? Well, there is obviously a latency penalty involved. In the last post we had negligible latencies for our echo calls, but then there was no contention. This time ten clients have to contend for the table actor’s time, and Quasar needs to manage that. The clients still pushed 1 RPS (request per second) each, so loading up 1000 clients you’d expect a throughput of roughly 1000 RPS at the server. The latencies at low load were about 40 ms though, and that’s enough to lower your overall throughput. We’re also paying a CPU price for the coordination, and predictably it was significantly higher: at 5000 RPS I was close to the limit, and at 6000 RPS I hit CPU starvation. The server degraded reasonably well with enough memory, but latencies started creeping upwards.

But again, this is not about raw performance; rather it’s about real-world scalability. The only downsides here were the rather high latency of the message passing between the actors, and the CPU load. But Quasar is still at version 0.5, so I’m sure this will be trimmed quickly, and hey: 5000 RPS with reasonable latency is still plenty for most scenarios!

This is fun!

The proprietor of this blog. Lunchtime poet, former opera singer, computer programmer. But not always in that order. Ask me again tomorrow.
