Explaining the Whiteboard

Creating a Game Server

A simplified diagram of the architecture that we’re about to discuss

So you’re expected to write a backend for a game. If you’ve never done it before, you freak out a bit. “A game?! That’s a lot of information going around. Where do I even start?” Hopefully, by explaining the whiteboard, I can help the me’s of the future.

A game isn’t special. It’s still an application; it just has more data and maybe some more specific logic. Depending on the game, it might need speed and throughput as priorities. Say you’re working on an MMO, for example. Here’s how I would work through it.


TCP or UDP? I would go with UDP here because the information is constantly updating, and although UDP doesn’t guarantee that every packet arrives, a few dropped packets are well worth the speed. The non-game aspects, like leaderboards and achievements, can be handled with a simple REST API.
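To make the trade-off concrete, here is a minimal loopback sketch in Python: the “server” and “client” are just two UDP sockets on one machine, and the port and payload are placeholders I chose for illustration.

```python
import socket

# "Server": a UDP socket bound to a port, waiting for datagrams.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))      # let the OS pick a free port
host, port = server.getsockname()

# "Client": sendto() fires the datagram and returns immediately --
# no handshake and no delivery guarantee, which is exactly the
# latency trade-off discussed above.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"POSITION 12.5 3.0", (host, port))

data, addr = server.recvfrom(2048)  # 2048-byte receive buffer
client.close()
server.close()
```

Compare this with TCP, where a connection must be established and every byte acknowledged before the application sees it.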


How is this going to scale? If it really is a Massively Multiplayer game, you can’t have it breaking every 100 users. How do we keep the latency down and still do all the heavier work we need? This is where the idea of separating the UDP server and the workers comes in. For the game we’re building, we use Redis to hold incoming data in a queue so the workers can pop it off and process it without putting load on the UDP server itself. This also lets multiple workers scale out without ever conflicting with each other. You can use any language that can read from Redis, but I urge using a compiled language so you capitalize on the speed.
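A sketch of the worker loop, in Python for brevity. The queue names, the JSON payloads, and the `FakeRedis` stand-in are all my assumptions; in production `r` would be a real `redis.Redis()` client, and the pattern works because Redis list pops are atomic, so two workers never grab the same packet.

```python
import json

def handle_packet(raw):
    """Stand-in for real game logic: acknowledge the packet's sequence number."""
    packet = json.loads(raw)
    return json.dumps({"ack": packet["sequence"]})

def drain_queue(r, in_key="packets:in", out_key="packets:out"):
    """Pop raw packets off the inbound list, push responses to the outbound one.

    `r` is anything exposing Redis-style lpop/rpush. Because LPOP is atomic,
    any number of workers can run this loop concurrently without conflicts.
    """
    while True:
        raw = r.lpop(in_key)
        if raw is None:
            break                      # queue drained
        r.rpush(out_key, handle_packet(raw))

class FakeRedis:
    """Tiny in-memory stand-in so the sketch runs without a Redis server."""
    def __init__(self):
        self.lists = {}
    def rpush(self, key, value):
        self.lists.setdefault(key, []).append(value)
    def lpop(self, key):
        items = self.lists.get(key, [])
        return items.pop(0) if items else None

r = FakeRedis()
r.rpush("packets:in", json.dumps({"sequence": 1, "x": 2.0}))
r.rpush("packets:in", json.dumps({"sequence": 2, "x": 2.5}))
drain_queue(r)
```

In a real deployment each worker would block on the queue (e.g. `BLPOP`) instead of polling, and the UDP server would read the responses off `packets:out`.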

Game State

You can store the game state in any database. I don’t have strong pointers here, except to use a database that’s as fast as possible, since low latency is a priority.

The Technical Details

Docker: We’re using Docker Compose to unify all of the services running on the server itself. This also allows us to scale any of the systems we want, like the workers, and keep some of the services in their own private subnet.
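A hypothetical Compose file for this architecture might look like the following; the service names, images, and ports are illustrative, not from the actual project.

```yaml
# Hypothetical docker-compose.yml -- names and ports are placeholders.
services:
  udp-server:
    build: ./udp-server
    ports:
      - "9999:9999/udp"     # the game port, exposed to clients
    networks: [backend]

  worker:
    build: ./worker
    networks: [backend]     # no published ports: workers stay private

  redis:
    image: redis:7
    networks: [backend]     # reachable only on the internal network

  api:
    build: ./api
    ports:
      - "8080:8080"         # REST API: leaderboards, public key, etc.
    networks: [backend]

networks:
  backend: {}               # the private subnet shared by the services
```

Scaling the workers is then a one-liner: `docker compose up --scale worker=4`.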

Defining a Packet

Protocol Buffers are perfect for this. They give you a way to define a packet structure outside of the code and will compile into code in nearly any language. For example, I can make a protobuf called Packet and define its parameters like so:

message Packet {
  enum PacketType {
    AUTH = 0;      // For handshake
    POSITION = 1;  // For world positions
    PLAYER = 2;    // For player data like actions, spells used, etc.
    HEARTBEAT = 3; // UDP heartbeat
  }

  PacketType type = 1;       // The type of packet, as defined above
  uint32 sequence = 2;       // The packet number: first sent is 1, second is 2, etc.
  fixed32 messageLength = 3; // Number of bytes in message; set second to last; always 4 bytes on the wire
  bytes message = 4;         // The inner protobuf message, AES-encrypted (RSA-encrypted AES key if type == AUTH); see Security below
  fixed32 timestamp = 5;     // Seconds since epoch
  fixed32 crc = 6;           // The CRC32 checksum of the packet
  string playerID = 7;       // A UUID for the user
}

Then creating a packet is as simple as creating a Packet object, setting its members as needed, then using its built-in serialize function. The server and the client both keep an exact copy of this protobuf file, so they always share the same definitions for data traveling back and forth.


Heartbeats

Since UDP has no mechanism for making sure a connection is alive, a heartbeat is used to say, “Hey, this connection is still alive, don’t forget about me.” In real-time applications, this can be done by sending the player coordinates or something else that updates at a nearly fixed rate. For applications that aren’t real-time, a simple ping/pong packet can be sent every x seconds; if the application hasn’t gotten a response after x*5 or so seconds, the connection isn’t alive.
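The x*5 timeout rule fits in a few lines of Python. The interval values are my assumptions (x = 5 seconds), and in a real server you would update `last_seen` on every inbound packet, not just pings.

```python
import time

PING_INTERVAL = 5              # x: seconds between pings (assumed value)
TIMEOUT = PING_INTERVAL * 5    # the x*5 rule: dead after 25s of silence

class Connection:
    """Tracks the last time any packet was seen from a client."""
    def __init__(self, now=None):
        self.last_seen = time.monotonic() if now is None else now

    def heartbeat(self, now=None):
        # Call this whenever a packet (ping, position update, ...) arrives.
        self.last_seen = time.monotonic() if now is None else now

    def is_alive(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_seen < TIMEOUT

conn = Connection(now=0.0)
conn.heartbeat(now=10.0)       # client last pinged at t = 10s
```

The `now` parameters exist only to make the logic easy to test; in production you would rely on `time.monotonic()` so wall-clock adjustments can’t kill live connections.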

UDP Server

The UDP server is a surprisingly simple part of the application. It’s basically a router that receives packets and throws them into the Redis queue as fast as possible. The reason we want it so fast is that, on high-bandwidth connections, the socket buffer can overflow if we take too long to process a packet. The worker comes in, does the hard work, and puts the response packet into a Redis queue for the server to send back to the client from a separate thread. To check for corruption, use a CRC32 checksum in your packets: it’s computationally light and makes it easy to catch malformed packets. To catch dropped packets, check the sequence field and make sure packets are processed in order.
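The CRC check is a few lines with Python’s standard `zlib.crc32`. This sketch appends the checksum to the raw body; exactly which bytes the Packet’s `crc` field covers is a design choice I’m glossing over here.

```python
import zlib

def attach_crc(body: bytes) -> bytes:
    """Append the CRC32 of the body as four big-endian bytes."""
    return body + zlib.crc32(body).to_bytes(4, "big")

def verify_crc(packet: bytes):
    """Return the body if the checksum matches; otherwise None (drop it)."""
    body, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    return body if zlib.crc32(body) == crc else None

packet = attach_crc(b"POSITION 12.5 3.0")
ok = verify_crc(packet)              # intact packet: body comes back
bad = verify_crc(b"X" + packet[1:])  # first byte corrupted in transit
```

CRC32 is cheap enough to run on every packet, and it is guaranteed to catch any single corrupted byte (it detects all burst errors up to 32 bits), which is exactly the failure mode you care about on UDP.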


Security

One of the big ones. You don’t want just any person to be able to connect to your game server and start sending malicious packets, and cheating can destroy a game. First, we don’t trust the client with important parameters such as lives or magic level; instead, the game logic runs in the workers to avoid any potential cheating. This is an effective way to keep a client from doing anything too shady. The other security measure is against packet sniffing. On Docker container creation, an RSA key pair is generated, and the public key is made available through the REST API. The client generates an AES key, encrypts it with the public key, and sends it to the server in an Auth packet. The server then decrypts the AES key and uses it to create a secure transport layer. This is done on every new connection.
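The key exchange can be sketched with the third-party `cryptography` package. The key sizes, OAEP padding, and the choice of AES-GCM for the transport layer are my assumptions; the article only specifies “RSA-wrapped AES.”

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server, at container creation: generate the RSA key pair.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = server_key.public_key()   # served to clients via the REST API

# Client: generate an AES session key and wrap it with the public key.
aes_key = AESGCM.generate_key(bit_length=128)
wrapped_key = public_key.encrypt(aes_key, OAEP)   # goes in the AUTH packet

# Server: unwrap the AES key from the AUTH packet.
session_key = server_key.decrypt(wrapped_key, OAEP)

# From here on, both sides encrypt packet payloads with the shared key.
nonce = os.urandom(12)                 # fresh nonce for every message
ciphertext = AESGCM(session_key).encrypt(nonce, b"POSITION 12.5 3.0", None)
plaintext = AESGCM(aes_key).decrypt(nonce, ciphertext, None)
```

AES-GCM also authenticates each message, so a tampered ciphertext fails to decrypt rather than silently producing garbage, which pairs nicely with the CRC check on the outer packet.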


REST API

This is used for querying non-game resources such as leaderboards, achievements, records, and the like. Since this isn’t part of the “game server” and is more of a companion, it doesn’t need the performance of the UDP server.

I attempted to keep this article as generic as possible, since it’s a case study on the theory of a game server rather than a direct how-to. The concepts expressed are cross-platform and give a baseline of good practices.

Jacob McSwain is a DevOps Engineer at Clevyr, Inc. Clevyr makes software using all the buzzwords like AI, Machine Learning, Augmented Reality and Virtual Reality.

