Following up on my little NIO framework Slick, I was curious about how Quasar actors would perform.
Let’s imagine a poker table: up to 10 people are seated, and actions need to be coordinated between them. In reality it could be any game: it’s the coordination between multiple actors that is the interesting problem. This is what lies at the heart of our own Cubeia Firebase server.
There needs to be someone responsible for seating new connections; let’s call it a lobby actor. Each “table” should have a maximum of 10 connections, and when someone connects they should automatically be directed either to an existing table that isn’t full or to a new one. After that we’ll keep it simple and have each table act as an echo server. Here, then, is the lobby actor:
```java
import co.paralleluniverse.actors.BasicActor;
import co.paralleluniverse.actors.behaviors.RequestReplyHelper;
import co.paralleluniverse.fibers.SuspendExecution;

public class LobbyActor extends BasicActor<SeatingRequest, Void> {

    private static final int MAX_SEATED = 10;
    public static final String LOBBY_NAME = "lobby";

    private int tableIdTicker = 0;
    private String currentActor;
    private int currentSeated = -1;

    @Override
    protected Void doRun() throws InterruptedException, SuspendExecution {
        // register under a well-known name
        register(LOBBY_NAME);
        while (true) {
            // receive from inbox
            SeatingRequest ref = receive();
            // find "current" table to seat at
            String name = getCurrentActor();
            // reply to caller
            RequestReplyHelper.reply(ref, name);
        }
    }

    private String getCurrentActor() {
        // if this is the first request, or the current table is full
        if (currentSeated == -1 || currentSeated == MAX_SEATED) {
            // create a new name - just a ticker
            currentActor = String.valueOf(tableIdTicker++);
            // create and start a new table
            new TableActor(currentActor).spawn();
            currentSeated = 0;
        }
        currentSeated++;
        return currentActor;
    }
}
```
The first thing we do in doRun() is register the actor under a string name, which makes it possible to get a reference to it from anywhere in the JVM. Then we listen for messages on the actor’s inbox. In this case the message is a special request type that Quasar provides for request/response patterns. The rest is trivial.
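The request class itself isn’t shown here, but a minimal sketch could look like the following, assuming it extends Quasar’s RequestMessage base class (the one RequestReplyHelper works with):

```java
import co.paralleluniverse.actors.behaviors.RequestMessage;

// Minimal sketch, not from the original code: the lobby only
// hands out a table id, so the request carries no payload and
// the reply type is String.
public class SeatingRequest extends RequestMessage<String> { }
```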
Here’s what the table looks like:
```java
import co.paralleluniverse.actors.BasicActor;
import co.paralleluniverse.actors.behaviors.RequestReplyHelper;
import co.paralleluniverse.fibers.SuspendExecution;

public class TableActor extends BasicActor<EchoRequest, Void> {

    private final String id;

    public TableActor(String id) {
        this.id = id;
    }

    @Override
    protected Void doRun() throws InterruptedException, SuspendExecution {
        // register under the table id so others can find us
        register(id);
        while (true) {
            EchoRequest req = receive();
            // this is an echo - just answer with the request payload
            RequestReplyHelper.reply(req, req.getRequest());
        }
    }
}
```
This is just a simple echo server – but now as an actor (of course, if this were a real game, the table would have to broadcast to all seated players). Again we register the table, this time with its ID, listen to the inbox, and use Quasar’s request/response helper class to send back the answer.
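The echo request isn’t shown here either; a minimal sketch, again assuming the RequestMessage base class, this time with the reply typed as a ByteBuffer:

```java
import java.nio.ByteBuffer;
import co.paralleluniverse.actors.behaviors.RequestMessage;

// Minimal sketch, not from the original code: wraps the incoming
// bytes and declares a ByteBuffer reply type for the echo.
public class EchoRequest extends RequestMessage<ByteBuffer> {

    private final ByteBuffer request;

    public EchoRequest(ByteBuffer request) {
        this.request = request;
    }

    public ByteBuffer getRequest() {
        return request;
    }
}
```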
Not too bloody hard, is it?
To connect this to the Slick server we only need to write a pipe that forwards the byte buffers to the actor.
```java
import java.io.IOException;
import java.nio.ByteBuffer;

import co.paralleluniverse.actors.ActorRef;
import co.paralleluniverse.actors.ActorRegistry;
import co.paralleluniverse.actors.behaviors.RequestReplyHelper;
import co.paralleluniverse.fibers.SuspendExecution;

// Slick imports (TerminalPipe, Sink, ByteBufferEvent, ByteBufferSink, Channel) omitted

public class EchoPipe extends TerminalPipe<ByteBufferEvent> {

    private Sink<ByteBufferEvent> output;

    @Override
    public void attachDownstreamOuput(Sink<ByteBufferEvent> output) {
        this.output = output;
    }

    @Override
    public Sink<ByteBufferEvent> getUpstreamInput() {
        return new ByteBufferSink() {

            @Override
            public void offer(ByteBufferEvent e) throws SuspendExecution, InterruptedException, IOException {
                // ask the lobby actor for a table
                String id = getTableId(e.getChannel());
                // get hold of the table actor
                ActorRef<Object> table = ActorRegistry.getActor(id);
                // get the response from the table and offer it back to the client
                ByteBuffer b = RequestReplyHelper.call(table, new EchoRequest(e.getAttachment()));
                output.offer(new ByteBufferEvent(e.getChannel(), b));
            }

            private String getTableId(Channel channel) throws SuspendExecution, InterruptedException {
                // we're storing the table id in the channel
                String id = channel.getProperties().get("table");
                if (id == null) {
                    // we have no table - ask the lobby for one
                    ActorRef<Object> lobby = ActorRegistry.getActor(LobbyActor.LOBBY_NAME);
                    id = RequestReplyHelper.call(lobby, new SeatingRequest());
                    // store the table id in the channel
                    channel.getProperties().put("table", id);
                }
                return id;
            }
        };
    }
}
```
The ActorRegistry is Quasar’s registry for all registered actors. OK, so static references aren’t that cool, as they’re hard to unit test. If this were real production code you’d have to hide them behind factory interfaces to make it testable. Other than that? Piece of cake.
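For example, a minimal sketch of such a factory interface (the names here are made up for illustration, not from the code above):

```java
import co.paralleluniverse.actors.ActorRef;
import co.paralleluniverse.actors.ActorRegistry;
import co.paralleluniverse.fibers.SuspendExecution;

// Minimal sketch, not from the original code: hide the static
// registry behind an interface so tests can plug in a fake lookup.
public interface ActorLookup {
    ActorRef<Object> find(String name) throws SuspendExecution, InterruptedException;
}

// Production implementation that simply delegates to the registry.
class RegistryActorLookup implements ActorLookup {
    @Override
    public ActorRef<Object> find(String name) throws SuspendExecution, InterruptedException {
        return ActorRegistry.getActor(name);
    }
}
```

The EchoPipe would then take an ActorLookup in its constructor instead of calling ActorRegistry directly.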
The server looks much like the last one.
```java
import java.net.InetSocketAddress;

import co.paralleluniverse.fibers.SuspendExecution;

// Slick imports (ChannelListener, Channel, VariableFrameFaucet, CoreServer) omitted

public class TestServer {

    private static class TestListener implements ChannelListener {

        @Override
        public boolean connect(Channel channel) throws SuspendExecution {
            // use a frame with variable length determined by a prefixed int field
            VariableFrameFaucet framer = VariableFrameFaucet.createIntFrameFaucet();
            // attach our echo pipe!
            framer.attachPipeline(new EchoPipe());
            framer.attachChannel(channel);
            return true;
        }

        @Override
        public void closed(Channel ch) { }
    }

    public static void main(String[] args) {
        try {
            // spawn the lobby
            new LobbyActor().spawn();
            CoreServer coreServer = new CoreServer(new InetSocketAddress("localhost", 6666), new TestListener());
            coreServer.start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
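To poke at the server, here’s a quick test client sketch (mine, not from the original code), assuming the int prefix holds the length of the payload that follows it, for both request and reply:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical test client for the echo server above.
public class TestClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 6666)) {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream in = new DataInputStream(socket.getInputStream());
            byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
            // write the int length prefix followed by the payload
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
            // read the echoed frame back: length prefix, then payload
            byte[] reply = new byte[in.readInt()];
            in.readFully(reply);
            System.out.println(new String(reply, StandardCharsets.UTF_8));
        }
    }
}
```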
And that’s it! Again I’m impressed: the APIs make Quasar really simple to use, and figuring out and writing this code took less than an hour. Neat!
How does it perform, then? Well, there is obviously a latency penalty involved. In the last post we had negligible latencies for our echo calls, but then there was no contention. This time ten clients have to contend for the table actor’s time, and Quasar needs to manage that. The clients still pushed 1 RPS (request per second) per client, so loading up 1000 clients you’d expect a throughput of roughly 1000 RPS at the server. The latencies at low load were about 40 ms though, and that’s enough to lower your overall throughput. We’re also paying a CPU price for the coordination, and predictably it was significantly higher: at 5000 RPS I was close to the limit, and at 6000 RPS I hit CPU starvation – the server degraded reasonably gracefully with enough memory, but latencies started creeping upwards.
But again, this is not about raw performance, it’s about real-world scalability. The only downsides here were the rather high latency for the message passing between the actors, and the CPU load. But Quasar is still at version 0.5, so I’m sure this will be trimmed quickly, and hey: 5000 RPS with reasonable latency is still plenty for most scenarios!
This is fun!