Building Real-Time Chat with Go and WebSockets

the hub that holds it all together

so i built a real-time chat app in Go called Chattorum. the idea was pretty standard — users join chat rooms, send messages, see messages from everyone else in real time. but the implementation taught me a lot about how Go's concurrency model maps perfectly onto the WebSocket problem.

the core of the whole thing is what i call the hub pattern. there's a single goroutine running in the background that manages every active WebSocket connection. clients register, unregister, and broadcast messages — all through channels. no mutexes, no locks, just channels doing what channels do best.

type Hub struct {
    clients    map[*Client]bool // set of currently connected clients
    broadcast  chan []byte      // inbound messages to fan out to everyone
    register   chan *Client     // clients joining
    unregister chan *Client     // clients leaving
}

func (h *Hub) run() {
    for {
        select {
        case client := <-h.register:
            h.clients[client] = true
        case client := <-h.unregister:
            if _, ok := h.clients[client]; ok {
                delete(h.clients, client)
                close(client.send)
            }
        case message := <-h.broadcast:
            for client := range h.clients {
                select {
                case client.send <- message:
                default:
                    // this client's send buffer is full; drop it
                    // rather than let it stall the whole hub
                    close(client.send)
                    delete(h.clients, client)
                }
            }
        }
    }
}

this is the entire brain of the chat server. the select statement blocks until one of the channels has something, then it handles it. a new user connects? add them to the map. someone disconnects? remove them and close their send channel. a message comes in? fan it out to every connected client. the one guard worth calling out is the inner select: if a client's send buffer is full, the hub drops that client instead of blocking, so a single stalled connection can't freeze broadcasts for everyone else. the beauty is that because only one goroutine touches the clients map, there's zero contention.

upgrading http to websockets

the WebSocket handler uses gorilla/websocket to upgrade a regular HTTP connection. once upgraded, each client gets two goroutines — one for reading messages off the socket and one for writing messages back. the read pump pushes incoming messages to the hub's broadcast channel, and the write pump pulls from the client's personal send channel.

// the default Upgrader rejects cross-origin browser requests;
// set CheckOrigin here if the frontend lives on a different host
var upgrader = websocket.Upgrader{
    ReadBufferSize:  1024,
    WriteBufferSize: 1024,
}

func serveWs(hub *Hub, w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Println(err)
        return
    }
    client := &Client{hub: hub, conn: conn, send: make(chan []byte, 256)}
    client.hub.register <- client

    // one goroutine writes to the socket, one reads from it
    go client.writePump()
    go client.readPump()
}
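
the pumps themselves aren't shown above, so here's a rough sketch of the client side. this is trimmed down compared to what you'd actually ship (no ping/pong keepalives, no read limits or deadlines), but it shows the shape: readPump feeds the hub, writePump drains the client's personal send channel.

// Client sits between a single WebSocket connection and the hub.
type Client struct {
    hub  *Hub
    conn *websocket.Conn
    send chan []byte // buffered channel of outbound messages
}

// readPump pumps messages from the socket to the hub.
func (c *Client) readPump() {
    defer func() {
        c.hub.unregister <- c
        c.conn.Close()
    }()
    for {
        _, message, err := c.conn.ReadMessage()
        if err != nil {
            return // the client went away or the connection broke
        }
        c.hub.broadcast <- message
    }
}

// writePump pumps messages from the client's send channel to the socket.
func (c *Client) writePump() {
    defer c.conn.Close()
    for message := range c.send {
        if err := c.conn.WriteMessage(websocket.TextMessage, message); err != nil {
            return
        }
    }
    // the hub closed the send channel; tell the peer we're done
    c.conn.WriteMessage(websocket.CloseMessage, []byte{})
}

the nice property is that closing a client's send channel (which only the hub does) unwinds everything: writePump's range loop ends and closes the connection, readPump errors out and unregisters, and the hub's ok check makes the second unregister a no-op.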

each connection is cheap. two goroutines per client, a buffered channel, and a pointer in a map. Go handles thousands of these without breaking a sweat. i load tested it with a few hundred concurrent connections on my laptop and the memory usage barely moved.
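
for completeness, wiring it up is just a few lines in main. this is a sketch (the newHub constructor and the :8080 address are mine, not necessarily what the repo does), but there genuinely isn't much more to it:

func newHub() *Hub {
    return &Hub{
        clients:    make(map[*Client]bool),
        broadcast:  make(chan []byte),
        register:   make(chan *Client),
        unregister: make(chan *Client),
    }
}

func main() {
    hub := newHub()
    go hub.run() // the one goroutine that owns the clients map

    http.HandleFunc("/ws", func(w http.ResponseWriter, r *http.Request) {
        serveWs(hub, w, r)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}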

the stock bot and rabbitmq

here's where it gets interesting. i wanted a /stock=AAPL command that fetches a real stock quote and posts it to the chat. the naive approach would be to handle that inline — parse the command, call some API, format the response, broadcast it. but that means if the stock API is slow or down, it blocks the message handler. bad.

instead i went with RabbitMQ. when the chat server sees a /stock= command, it publishes a message to a queue and moves on. a completely separate microservice consumes from that queue, fetches the stock quote from an external API, and publishes the result back on a different queue. the chat server picks it up and broadcasts it like any other message.
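
the messaging code on the chat server side is small. here's a sketch of what it might look like with the rabbitmq/amqp091-go client (imported as amqp); the queue names stock_requests and stock_results are placeholders i'm using for illustration, and it assumes both queues were declared at startup:

// publishStockRequest drops the ticker onto the request queue and returns
// immediately, so the chat handler never waits on the stock API.
func publishStockRequest(ch *amqp.Channel, text string) error {
    code := strings.TrimPrefix(text, "/stock=") // e.g. "AAPL"
    return ch.Publish(
        "",               // default exchange
        "stock_requests", // routing key doubles as the queue name
        false, false,
        amqp.Publishing{ContentType: "text/plain", Body: []byte(code)},
    )
}

// consumeStockResults feeds bot replies into the hub like any other message.
func consumeStockResults(ch *amqp.Channel, hub *Hub) error {
    results, err := ch.Consume("stock_results", "", true, false, false, false, nil)
    if err != nil {
        return err
    }
    go func() {
        for r := range results {
            hub.broadcast <- r.Body
        }
    }()
    return nil
}

the important part is that publishStockRequest returns as soon as the broker accepts the message, and the results consumer just feeds the hub's broadcast channel, so a bot reply goes out exactly like a normal chat message.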

the decoupling here is the whole point. the stock bot service can crash, restart, or be deployed independently. the chat keeps working. if the stock API rate limits us, messages just queue up. if we want to add a /weather= command tomorrow, we spin up another consumer. the chat server doesn't change at all.
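
the bot itself is the mirror image, just another consumer loop. in this sketch fetchQuote is a placeholder for whatever external API call the real bot makes:

// the bot's whole job: pull a ticker off the request queue, look it up,
// push a formatted chat line onto the results queue.
func runBot(ch *amqp.Channel) error {
    requests, err := ch.Consume("stock_requests", "", true, false, false, false, nil)
    if err != nil {
        return err
    }
    for req := range requests {
        code := string(req.Body)
        quote, err := fetchQuote(code) // placeholder for the external API call
        reply := fmt.Sprintf("sorry, no quote available for %s", code)
        if err == nil {
            reply = fmt.Sprintf("%s quote is $%.2f per share", strings.ToUpper(code), quote)
        }
        if err := ch.Publish("", "stock_results", false, false,
            amqp.Publishing{ContentType: "text/plain", Body: []byte(reply)}); err != nil {
            return err
        }
    }
    return nil
}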

this is the kind of architecture decision that feels like overkill for a side project until something actually breaks. i had the stock bot crash during testing because of a malformed API response. chat didn't even hiccup. users kept sending messages. the bot came back up, processed the queued requests, and posted the quotes a few seconds late. that's exactly the behavior you want.

testing the whole thing

i went a little overboard on testing — 630+ unit tests and 63 end-to-end tests. the unit tests cover everything from message parsing to hub registration logic. the e2e tests spin up the full server, connect real WebSocket clients, send messages, and verify they arrive at the right destinations.
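
an e2e test in this setup needs surprisingly little scaffolding. here's a sketch using httptest and the gorilla dialer; it leans on the newHub helper from the earlier sketch and skips some of the timing care the real tests presumably take:

func TestBroadcastReachesOtherClients(t *testing.T) {
    hub := newHub()
    go hub.run()

    // stand up a real HTTP server around the WebSocket handler
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        serveWs(hub, w, r)
    }))
    defer srv.Close()
    wsURL := "ws" + strings.TrimPrefix(srv.URL, "http")

    // connect two real WebSocket clients
    alice, _, err := websocket.DefaultDialer.Dial(wsURL, nil)
    if err != nil {
        t.Fatal(err)
    }
    defer alice.Close()
    bob, _, err := websocket.DefaultDialer.Dial(wsURL, nil)
    if err != nil {
        t.Fatal(err)
    }
    defer bob.Close()

    // a message sent by alice should arrive on bob's connection
    want := "hello room"
    if err := alice.WriteMessage(websocket.TextMessage, []byte(want)); err != nil {
        t.Fatal(err)
    }
    bob.SetReadDeadline(time.Now().Add(2 * time.Second)) // fail instead of hanging
    _, got, err := bob.ReadMessage()
    if err != nil {
        t.Fatal(err)
    }
    if string(got) != want {
        t.Fatalf("got %q, want %q", got, want)
    }
}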

the e2e tests were the most valuable honestly. there's a whole class of bugs that only show up when you have multiple concurrent connections — race conditions in the hub, messages arriving out of order, clients not getting cleaned up properly on disconnect. table-driven tests in Go made it easy to cover a ton of scenarios without the test file becoming unreadable.
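
for the unit tests, the table-driven pattern looks roughly like this. parseStockCommand is a stand-in name for however the real code parses commands, included here so the example is self-contained:

// parseStockCommand is a stand-in for however Chattorum actually parses
// commands: it pulls the ticker out of a "/stock=XYZ" message.
func parseStockCommand(text string) (string, bool) {
    code := strings.TrimPrefix(text, "/stock=")
    if code == text || code == "" {
        return "", false
    }
    return code, true
}

func TestParseStockCommand(t *testing.T) {
    tests := []struct {
        name     string
        input    string
        wantCode string
        wantOK   bool
    }{
        {"valid command", "/stock=AAPL", "AAPL", true},
        {"lowercase ticker", "/stock=msft", "msft", true},
        {"missing ticker", "/stock=", "", false},
        {"plain chat message", "hello everyone", "", false},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            code, ok := parseStockCommand(tt.input)
            if code != tt.wantCode || ok != tt.wantOK {
                t.Errorf("parseStockCommand(%q) = %q, %v; want %q, %v",
                    tt.input, code, ok, tt.wantCode, tt.wantOK)
            }
        })
    }
}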

what i took away

Go's concurrency primitives are genuinely good for this kind of problem. the hub pattern with channels feels natural in a way that callback-based WebSocket servers in Node never did for me. and the goroutine-per-connection model means you write straightforward sequential code instead of juggling event loops.

the RabbitMQ piece taught me that decoupling isn't just an architecture astronaut thing. it's a practical tool that makes your system more resilient. even in a side project, having services that can fail independently changes how you think about error handling. you stop trying to prevent every failure and start designing for recovery instead.

Chattorum is nothing groundbreaking. but building it from scratch — the hub, the WebSocket lifecycle, the message queue integration — gave me a much deeper understanding of how real-time systems actually work under the hood.