Building a real-time leaderboard
I handle both backend and frontend for a team running a Web3-based ranking service. (I'm not a fan of the term "full-stack.") Users collect and fuse NFTs to climb the ranks, earning rewards based on their seasonal placement. It's basically a "level up" loop — and the leaderboard was my headache.
The Initial Setup
The ranking system started simple. The API only returned three things:
- Top 100 ranked players
- My rank
- Total participant count
```typescript
const playerScores = await scoreService.getSeasonScoreRecords({
  seasonId: currentSeason.id,
  limit: scoreCalculationLimit,
});

// Finding my rank (the brute-force way...)
const myScoreIndex = playerScores.findIndex(
  (score) => score.user?.walletAddress === userAddress
);
// findIndex returns -1 when I'm unranked, so myPosition falls back to 0
const myPosition =
  myScoreIndex + 1 <= scoreCalculationLimit ? myScoreIndex + 1 : 0;
```
Here's where the problems began. Rankings were calculated for the top 1,000, which meant sorting and iterating through all 1,000 players just to find one rank. This API was called from the user profile, the leaderboard page, and several other places.
The API architecture was messy too. Every time a user queried their score, the server hit the blockchain for on-chain data and updated the DB. Response times grew whenever blockchain lookups were involved. And there were more issues piling up:
- Overfetching: the user profile didn't need the full ranker list, but got it anyway
- No score change history
- No caching whatsoever
This wasn't just a caching problem. The entire ranking system needed a redesign — better performance, better UX, easier operations, and something that wouldn't need a major overhaul as the user base grew.
I started with the low-hanging fruit. First, I split the API. One endpoint for the ranker list, another for just my rank. The frontend was updated to call only what each page needed. That solved the overfetching.
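The split can be sketched as two thin handlers over a shared service. This is an illustration only — the interface and handler names are mine, not the actual endpoints:

```typescript
// Hypothetical service interface standing in for the real ranking service.
interface RankingService {
  getTopPlayers(seasonId: string, count: number): Promise<string[]>;
  getPlayerRanking(seasonId: string, playerId: string): Promise<number | null>;
}

// Backs the leaderboard page: the full ranker list, nothing else.
async function handleRankerList(svc: RankingService, seasonId: string) {
  return { rankers: await svc.getTopPlayers(seasonId, 100) };
}

// Backs the profile page: just my rank, no list.
async function handleMyRank(
  svc: RankingService,
  seasonId: string,
  playerId: string
) {
  return { rank: await svc.getPlayerRanking(seasonId, playerId) };
}
```

With the endpoints separated, each page pays only for the data it actually renders.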
Next was static data. Season metadata changes every 4–6 weeks — no reason to query it every time. Cached it in Redis. The real question was how to manage the ranker list. Arrays were painfully slow for lookups and updates. So I looked for alternatives.
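The season-metadata cache is a plain cache-aside pattern. A minimal sketch, assuming an ioredis-style client — the key name, TTL, and helper are illustrative, not the production code:

```typescript
// Minimal client surface we rely on (matches ioredis's get/set signatures).
type CacheClient = {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, mode: 'EX', ttlSeconds: number): Promise<unknown>;
};

async function getSeasonMetadata(
  redis: CacheClient,
  seasonId: string,
  loadFromDb: (id: string) => Promise<object>
): Promise<object> {
  const key = `season:meta:${seasonId}`;

  // Cache hit: skip the DB entirely.
  const cached = await redis.get(key);
  if (cached !== null) return JSON.parse(cached);

  // Cache miss: load once, then store with a TTL. Seasons last 4–6 weeks,
  // so even a short TTL absorbs nearly all reads.
  const meta = await loadFromDb(seasonId);
  await redis.set(key, JSON.stringify(meta), 'EX', 3600);
  return meta;
}
```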
Switching to Redis Sorted Sets
Implementing rankings with arrays was where the pain started. In terms of performance:
- Insert new score: O(n) — binary search finds the slot in O(log n), but shifting elements to make room costs O(n)
- Update score: O(n) — find the entry, then move it to its new position
- Look up a specific user's rank: O(n) — scan from the start
Then I found Redis Sorted Sets.
- Insert/Update: O(log n) — thanks to Skip Lists
- Rank lookup: O(log n)
- Range query (top 100): O(log n + m)
Sure, with only 1,000 users, the difference isn't dramatic. But the service had only been live for a month. Growth was inevitable. If the effort to prepare is roughly the same, it's worth thinking ahead.
How Does a Skip List Work?
To find the 50th element in a sorted array, you'd count from the beginning, one by one. But you wouldn't flip through a dictionary starting from "A" to find "skip." You'd open it somewhere in the middle and decide which direction to go.
A Skip List works the same way. It organizes data in multiple layers and searches top-down. Say we're looking for 40:
- Top layer: see 10 and 50 — "40 is between these."
- Middle layer: see 10, 30, 50 — "40 is between 30 and 50."
- Bottom layer: found it.
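The layered search above can be sketched in a few lines. This is an illustration of the idea only, not how Redis implements it internally:

```typescript
// Minimal skip-list *search* sketch. Each node keeps one forward pointer
// per layer; next[0] is the full sorted list, higher layers skip nodes.
interface SkipNode {
  value: number;
  next: (SkipNode | null)[]; // next[level]
}

function skipListSearch(head: SkipNode, target: number): boolean {
  let node = head;
  // Start at the top layer: move right while the next value is still below
  // the target, and drop down a layer whenever we would overshoot.
  for (let level = head.next.length - 1; level >= 0; level--) {
    while (node.next[level] !== null && node.next[level]!.value < target) {
      node = node.next[level]!;
    }
  }
  // At the bottom layer, the target (if present) is the immediate successor.
  const candidate = node.next[0];
  return candidate !== null && candidate.value === target;
}
```

Insertion works the same way, except each new node is randomly promoted into higher layers — that randomness is what keeps the expected search cost at O(log n).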
How I Actually Used It
Enough theory — let's look at real code. Redis Sorted Sets offer a surprisingly intuitive API.
```typescript
// Update player score
async updatePlayerPoints(seasonId: string, playerId: string, points: number) {
  const key = `leaderboard:${seasonId}`
  await this.redis.zadd(key, points, playerId)
}

// Get my rank
async getPlayerRanking(seasonId: string, playerId: string) {
  const key = `leaderboard:${seasonId}`
  const position = await this.redis.zrevrank(key, playerId)
  return position === null ? null : position + 1
}

// Get top players
async getTopPlayers(seasonId: string, count: number = 1000) {
  const key = `leaderboard:${seasonId}`
  return await this.redis.zrevrange(key, 0, count - 1, 'WITHSCORES')
}
```
Results and Takeaways
In numbers: P99 response time dropped from 5.38s to 1.8s. Average response time improved from 999ms to 771ms. Personally, I find P99 more meaningful — it means the system holds up even when users flood in at the end of a season. Never forget: even 1% of users having a bad experience can turn them into your loudest critics.
Behind this seemingly simple change was a long journey. Identifying problems, finding solutions, implementing, then iterating again. For this kind of work — changing the wheels on a moving car — step by step is everything. First, I split the APIs to reduce the blast radius of the leaderboard overhaul. Then, while introducing Redis, I cached static data too. Redis Sorted Sets alone didn't solve everything. The drop from 5.38s to 1.8s was the cumulative result of multiple improvements. The optimization of how on-chain data was queried played a significant part as well. No silver bullet — just many small improvements that added up to meaningful change.
When you do this kind of work, you sometimes hear "it's working fine, why keep touching it?" But seeing the user experience improve makes it worth it. I'm a pretty demanding user myself sometimes, so fixing the things that bother me is as much for me as anyone.