How to Reduce Navigation Latency in Your Web App with Client-Side Caching and Service Workers

Introduction

When working through a backlog—opening an item, jumping to a linked thread, then back to the list—latency isn't just a metric; it's a context switch. Even small delays accumulate and hit hardest when developers are trying to stay in flow. The bottleneck isn't that your app is “slow” in isolation; it's that too many navigations still pay the cost of redundant data fetching, breaking flow repeatedly. This guide walks you through the same approach used to modernize GitHub Issues navigation: shifting work to the client, optimizing perceived latency, and making navigation feel instant without a full backend rewrite.

Source: github.blog

What You Need

- A web app with frequent list-to-detail navigation (the running example is GitHub Issues)
- Browser DevTools for recording performance profiles
- Working knowledge of IndexedDB and service workers
- A Real User Monitoring (RUM) tool to compare before-and-after metrics

Step-by-Step Guide

Step 1: Identify Navigation Bottlenecks

Before changing anything, measure the current performance of your most common navigation paths. Use the Performance tab in DevTools to record user flows—opening a detail page from a list, going back, and cross-linking. Look for repeated server roundtrips, redundant API calls, and client boot times. The key metric you'll optimize for is perceived latency: the time from user action to meaningful on-screen content. In the GitHub Issues case, the team found that navigations paid the full cost of server rendering, network fetches, and client boot even when data hadn't changed.
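To make "perceived latency" concrete, you can instrument navigations with the User Timing API and summarize the samples. The sketch below uses standard performance.mark/performance.measure calls; the mark names and the p95 summary are illustrative choices, not part of any specific framework.

```javascript
// Mark the moment the user acts (click, keypress) ...
function startNavigation(id) {
  performance.mark(`nav-start:${id}`);
}

// ... and the moment meaningful content is on screen.
// Returns the elapsed duration in milliseconds.
function endNavigation(id) {
  performance.mark(`nav-end:${id}`);
  const measure = performance.measure(
    `nav:${id}`, `nav-start:${id}`, `nav-end:${id}`
  );
  return measure.duration;
}

// For flow, the tail matters more than the mean: summarize with p95.
function p95(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(0.95 * sorted.length))];
}
```

Call startNavigation in your click handler and endNavigation once the detail view has rendered its primary content, then track p95 per navigation path.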

Step 2: Implement a Client-Side Caching Layer with IndexedDB

The core of the solution is a local cache that stores fetched data so subsequent navigations can render instantly. Use IndexedDB because it supports large amounts of structured data and survives page reloads. Build a cache interface with methods like get(key), set(key, data, ttl), and invalidate(pattern). Store each resource (issue, pull request, etc.) with its original timestamp and an expiration time. When a navigation occurs, serve data from the cache first, rendering immediately, then revalidate in the background by fetching fresh data from the server and updating the cache. This "stale-while-revalidate" pattern hides loading states on repeat visits; first visits still pay the full network cost.
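A minimal sketch of the cache interface and the stale-while-revalidate flow follows. For clarity the backing store is an in-memory Map; in production the same interface would wrap an IndexedDB object store (e.g. via the idb library). The class and function names are assumptions for illustration.

```javascript
// TTL cache with the get/set/invalidate interface described above.
// The injectable clock (now) makes expiry logic easy to test.
class NavCache {
  constructor(now = () => Date.now()) {
    this.store = new Map(); // stand-in for an IndexedDB object store
    this.now = now;
  }
  set(key, data, ttlMs) {
    this.store.set(key, {
      data,
      storedAt: this.now(),
      expiresAt: this.now() + ttlMs,
    });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return null;
    return { data: entry.data, stale: this.now() > entry.expiresAt };
  }
  invalidate(pattern) { // pattern is a RegExp over cache keys
    for (const key of this.store.keys()) {
      if (pattern.test(key)) this.store.delete(key);
    }
  }
}

// Stale-while-revalidate: render cached data immediately,
// then refresh from the server only when missing or expired.
async function loadWithRevalidate(cache, key, fetchFresh, render, ttlMs = 60_000) {
  const hit = cache.get(key);
  if (hit) render(hit.data); // instant render from cache
  if (!hit || hit.stale) {
    const fresh = await fetchFresh(key); // background revalidation
    cache.set(key, fresh, ttlMs);
    render(fresh);
  }
}
```

On a cache hit the user sees content immediately; the second render only happens when the background fetch returns changed (or previously missing) data.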

Step 3: Add a Preheating Strategy to Improve Cache Hit Rates

To minimize cache misses, preheat the cache with data likely to be needed soon. Use heuristics based on user behavior: when a user views a list page, fetch the details for the first few items in the background. In GitHub Issues, the team examined real usage patterns to predict which issues would be opened next. Implement a work queue that prioritizes preheating based on signals like viewport position, hover, or recent activity. Preheating must be efficient—avoid spamming requests by batching and throttling. The goal is to have data ready before the user clicks.
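One way to implement that work queue is a priority-ordered, deduplicating batch queue, sketched below. The class name, batch size, and priority scheme are assumptions; the point is that signals (viewport, hover, recent activity) only enqueue candidates, and a single flush issues one small batched request.

```javascript
// Preheat queue: dedupes candidate keys, keeps the highest priority
// seen for each, and flushes them in small batches so prefetching
// never floods the network.
class PreheatQueue {
  constructor(fetchBatch, { batchSize = 5 } = {}) {
    this.fetchBatch = fetchBatch;   // e.g. (keys) => one batched API call
    this.batchSize = batchSize;
    this.pending = new Map();       // key -> best priority seen
    this.done = new Set();          // keys already preheated
  }
  enqueue(key, priority = 0) {
    if (this.done.has(key)) return; // already warmed: skip
    const prev = this.pending.get(key) ?? -Infinity;
    this.pending.set(key, Math.max(prev, priority));
  }
  flush() {
    const batch = [...this.pending.entries()]
      .sort((a, b) => b[1] - a[1])  // highest priority first
      .slice(0, this.batchSize)
      .map(([key]) => key);
    for (const key of batch) {
      this.pending.delete(key);
      this.done.add(key);
    }
    if (batch.length) this.fetchBatch(batch); // one batched request
    return batch;
  }
}
```

In the browser you would call enqueue from IntersectionObserver and hover handlers, and call flush on a throttled timer or requestIdleCallback so preheating yields to user-initiated work.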

Step 4: Integrate a Service Worker for Offline and Hard Navigations

A service worker intercepts network requests and can serve cached responses even when the browser navigates to a new page (hard navigation). Register a service worker that, on install, caches essential app shell assets. On fetch, use a cache-first strategy for API endpoints that return data used in navigation. In GitHub Issues, the service worker made cached data available on hard navigations—when the user types a URL directly, clicks a back button (which may trigger a page reload), or navigates from an external link. The service worker also helps with network failures, showing cached content with a note that it may be outdated. Ensure your service worker respects cache headers and provides a way to clear stale data.
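A sketch of such a service worker is below. The cache name, app-shell asset list, and API URL patterns are assumptions for illustration; adapt them to your routes. The guard around the event listeners simply keeps the routing helper importable outside a worker context.

```javascript
// sw.js — service worker sketch for hard navigations.
const CACHE_NAME = 'nav-cache-v1';          // bump to invalidate everything
const APP_SHELL = ['/', '/app.js', '/app.css']; // illustrative asset list

// Decide whether a request is navigation data we want to cache.
function isNavigationData(url) {
  const { pathname } = new URL(url, 'https://example.com');
  return /^\/api\/(issues|pulls)\//.test(pathname);
}

// Register handlers only in an actual worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    // Precache the app shell so hard navigations boot without the network.
    event.waitUntil(caches.open(CACHE_NAME).then((c) => c.addAll(APP_SHELL)));
  });

  self.addEventListener('fetch', (event) => {
    if (!isNavigationData(event.request.url)) return;
    event.respondWith(
      caches.open(CACHE_NAME).then(async (cache) => {
        const cached = await cache.match(event.request);
        // Cache-first: answer from cache, refresh in the background;
        // on network failure, fall back to whatever we have cached.
        const network = fetch(event.request)
          .then((res) => { cache.put(event.request, res.clone()); return res; })
          .catch(() => cached);
        return cached || network;
      })
    );
  });
}
```

Register it from your page with navigator.serviceWorker.register('/sw.js'); pair cached fallbacks with a visible "content may be outdated" indicator in the UI.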


Step 5: Measure and Optimize Perceived Performance

Use Real User Monitoring (RUM) tools to track metrics like Largest Contentful Paint (LCP) and First Input Delay (FID, now superseded by Interaction to Next Paint) for navigation paths. Compare before and after: you should see a dramatic reduction in time-to-interactive for repeat visits. In GitHub Issues, the results showed that navigations that previously took hundreds of milliseconds often felt instant. But beware of tradeoffs: client-side caching increases memory and disk usage, requires cache invalidation logic, and can serve stale data if not revalidated quickly. Monitor cache hit rates and adjust TTLs accordingly. Also, test edge cases: hard reloads, incognito mode, and multiple tabs.
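Hit-rate monitoring can be as simple as counting lookups alongside the cache. The sketch below is a hypothetical tracker, not part of any RUM product; in the browser you might flush its numbers periodically via navigator.sendBeacon to your analytics endpoint.

```javascript
// Track cache effectiveness: record() each lookup, then report
// hitRate() to your RUM pipeline and tune TTLs from real traffic.
class CacheStats {
  constructor() {
    this.hits = 0;
    this.misses = 0;
  }
  record(hit) {
    if (hit) this.hits += 1;
    else this.misses += 1;
  }
  hitRate() {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}
```

A falling hit rate after a TTL change is a signal you tightened expiry too far; a high hit rate with frequent stale renders suggests the TTL is too long.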

Conclusion

By following these steps, you can transform a data-heavy web app from feeling “slow” to feeling “instant” for the most common navigation paths—without a full rewrite. The principles are directly transferable: shift work to the client, render from local data, and revalidate asynchronously. Your users will thank you with fewer context switches and more productive flow.
