
Web Performance Optimization: Practical Tips That Matter

performance · web-vitals · optimization

Performance is not a luxury. It is a fundamental aspect of user experience that directly affects bounce rates, conversion, and even search engine rankings. As an indie developer who has shipped multiple web projects, I have learned that performance optimization is not something you bolt on at the end — it needs to be woven into every decision you make from the start.

In this post, I will walk you through the practical performance optimization techniques I actually use in my projects. No abstract theory. Just things that make a measurable difference.

Understanding Core Web Vitals

Google's Core Web Vitals are the three metrics that matter most for user-perceived performance. If you only track three numbers, make it these.

Largest Contentful Paint (LCP) measures how long it takes for the largest visible element — typically a hero image or heading — to finish rendering. A good LCP is under 2.5 seconds. Anything above 4 seconds is considered poor.

First Input Delay (FID) measures the time between when a user first interacts with your page (clicking a button, tapping a link) and when the browser actually begins processing that interaction. The threshold for "good" is under 100 milliseconds. Note that in March 2024 Google replaced FID with Interaction to Next Paint (INP), which measures responsiveness across all interactions, not just the first one; a good INP is under 200 milliseconds.

Cumulative Layout Shift (CLS) measures how much the page layout shifts unexpectedly during loading. You know that frustrating experience where you are about to tap a button and the page jumps, causing you to click an ad instead? That is what CLS captures. Keep it under 0.1.
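These thresholds are easy to encode as a small rating helper. A sketch (the helper name is mine; the "poor" cut-offs of 300 ms for FID and 0.25 for CLS are Google's published thresholds, not stated above):

```javascript
// Bucket a metric value into good / needs-improvement / poor,
// using Google's published Core Web Vitals thresholds.
const THRESHOLDS = {
  LCP: [2500, 4000], // milliseconds
  FID: [100, 300],   // milliseconds
  CLS: [0.1, 0.25],  // unitless
};

function rate(name, value) {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```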

To measure these in the real world, use the web-vitals library:

import { onLCP, onFID, onCLS } from 'web-vitals';

onLCP(console.log);
onFID(console.log); // note: onFID was removed in web-vitals v4; use onINP there
onCLS(console.log);

For lab testing, Chrome DevTools' Lighthouse panel gives you all three metrics plus actionable suggestions. I run Lighthouse on every major deploy.

Image Optimization: The Biggest Win

Images are usually the heaviest assets on any web page. Optimizing them is often the single highest-impact change you can make.

Use modern formats. WebP offers 25-35% better compression than JPEG at comparable quality. AVIF goes even further, often achieving 50% smaller file sizes. Both have excellent browser support now.

Serve responsive images. Do not send a 2400px-wide hero image to a phone with a 390px-wide screen. Use the srcset attribute and the <picture> element:

<picture>
  <source srcset="/hero.avif" type="image/avif" />
  <source srcset="/hero.webp" type="image/webp" />
  <img
    src="/hero.jpg"
    srcset="/hero-400.jpg 400w, /hero-800.jpg 800w, /hero-1200.jpg 1200w"
    sizes="(max-width: 600px) 400px, (max-width: 1000px) 800px, 1200px"
    alt="Hero image"
    width="1200"
    height="600"
  />
</picture>

Always set width and height attributes. This lets the browser reserve the correct space before the image loads, preventing layout shift (CLS). This one small habit eliminates a huge source of CLS problems.

Lazy load offscreen images. The native loading="lazy" attribute is supported in all modern browsers. There is no reason not to use it for images below the fold. However, do not lazy load your LCP image — that will hurt your LCP score.

If you are using Next.js, the next/image component handles format conversion, responsive sizing, and lazy loading automatically. It is one of the framework's best features.

Code Splitting and Bundle Analysis

Shipping a 500KB JavaScript bundle to render a landing page is a common mistake. Code splitting lets you break your application into smaller chunks that load on demand.

Route-based splitting is the most straightforward approach. In React with React Router or Next.js, each page becomes its own chunk by default. But you can go further.

Component-level splitting with React.lazy and Suspense lets you defer loading heavy components:

import React, { Suspense } from 'react';

const HeavyChart = React.lazy(() => import('./HeavyChart'));

function Dashboard() {
  return (
    <Suspense fallback={<ChartSkeleton />}>
      <HeavyChart />
    </Suspense>
  );
}

Dynamic imports work beyond React. Any import() call creates a split point:

button.addEventListener('click', async () => {
  // the module is fetched only when the user actually clicks
  const { processData } = await import('./heavyProcessor.js');
  processData(data); // assumes `data` is already in scope
});

To understand what is actually in your bundles, use analysis tools. For webpack, webpack-bundle-analyzer creates a treemap visualization. For Vite, rollup-plugin-visualizer does the same. I was shocked the first time I ran one of these and discovered that a date formatting library was adding 70KB to my bundle when I only used one function from it.
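If you are on Vite, wiring up the visualizer is a one-liner in the config. A sketch (the output filename is my choice):

```javascript
// vite.config.js — emit a treemap of the production bundle after each build
import { defineConfig } from 'vite';
import { visualizer } from 'rollup-plugin-visualizer';

export default defineConfig({
  plugins: [
    visualizer({ filename: 'bundle-stats.html', gzipSize: true }),
  ],
});
```

Open the generated HTML file after a production build to see which dependencies dominate each chunk.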

Tree shaking is your friend here. Import only what you need: import { format } from 'date-fns' instead of import * as dateFns from 'date-fns'. And make sure your bundler's tree shaking is actually working — some libraries do not support it well.

Font Optimization

Custom fonts are a common source of performance problems. Here is how to handle them properly.

Use font-display: swap. This tells the browser to show text immediately using a fallback font, then swap in the custom font once it loads. Users see content right away instead of staring at invisible text:

@font-face {
  font-family: 'CustomFont';
  src: url('/fonts/custom.woff2') format('woff2');
  font-display: swap;
}

Preload critical fonts. For fonts used above the fold, add a preload hint:

<link rel="preload" href="/fonts/custom.woff2" as="font" type="font/woff2" crossorigin />

Subset your fonts. If you only use Latin characters, do not ship the full Unicode range. Tools like glyphhanger or Google Fonts' built-in subsetting can reduce font file sizes dramatically. I have seen fonts go from 200KB to 20KB after subsetting.

Use WOFF2. It offers the best compression for web fonts and has universal browser support. There is no reason to ship TTF or OTF files for the web anymore.

Consider system fonts. The system font stack (-apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif) loads instantly and looks native on every platform. For body text, this is often the best choice.

Caching Strategies

Effective caching can make returning visits nearly instant.

Set long cache lifetimes for hashed assets. If your bundler adds content hashes to filenames (like main.a1b2c3.js), you can safely cache them for a year:

Cache-Control: public, max-age=31536000, immutable

The immutable directive tells the browser not to even bother revalidating — the hash guarantees the content has not changed.

Use short cache lifetimes for HTML. Your HTML files should always be revalidated so users get the latest version:

Cache-Control: public, max-age=0, must-revalidate
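The two policies above can be captured in a tiny server-side helper. A sketch (the helper name and hash regex are mine; adjust the pattern to your bundler's naming scheme):

```javascript
// Pick a Cache-Control header from the request path.
// Hashed filenames (e.g. main.a1b2c3.js) are immutable; HTML never is.
function cacheControlFor(path) {
  const isHashed = /\.[0-9a-f]{6,}\.(js|css|woff2|png|webp|avif)$/.test(path);
  if (isHashed) return 'public, max-age=31536000, immutable';
  if (path.endsWith('.html') || path === '/') {
    return 'public, max-age=0, must-revalidate';
  }
  return 'public, max-age=3600'; // modest default for everything else
}
```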

Leverage service workers for offline-first caching. Workbox makes this manageable:

import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';

precacheAndRoute(self.__WB_MANIFEST);

registerRoute(
  ({ request }) => request.destination === 'image',
  new StaleWhileRevalidate({ cacheName: 'images' })
);

The "stale while revalidate" strategy is my favorite for most assets: serve the cached version immediately, then update the cache in the background.
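Stripped of the Cache API, the strategy reduces to "return what you have, refresh in the background." A toy version over a plain Map, purely for illustration (Workbox handles expiry, errors, and the real Cache API for you):

```javascript
// Toy stale-while-revalidate: serve the cached value immediately,
// refresh the cache in the background for the next caller.
const cache = new Map();

async function staleWhileRevalidate(key, fetchFresh) {
  const cached = cache.get(key);
  const refresh = fetchFresh(key).then((fresh) => {
    cache.set(key, fresh);
    return fresh;
  });
  // Only the very first request (a cache miss) waits on the network.
  return cached !== undefined ? cached : refresh;
}
```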

CDN Usage

A Content Delivery Network serves your assets from edge servers geographically close to your users. This reduces latency significantly, especially for users far from your origin server.

Put static assets on a CDN. Images, fonts, CSS, and JavaScript files benefit the most. Services like Cloudflare, AWS CloudFront, and Vercel's Edge Network handle this automatically.

Consider edge rendering. Modern platforms let you run server-side code at the edge, bringing dynamic content closer to users too. Vercel Edge Functions and Cloudflare Workers are examples.

Use HTTP/2 or HTTP/3. Modern CDNs support these protocols, which allow multiplexing multiple requests over a single connection. This eliminates the old need to bundle everything into as few files as possible — though you should still avoid hundreds of tiny requests.

The Critical Rendering Path

Understanding how browsers render pages helps you make better optimization decisions.

The browser must complete these steps before showing content: parse HTML, fetch and parse CSS, build the render tree, compute layout, and paint pixels. Anything that blocks this pipeline delays the first render.

CSS is render-blocking by default. The browser will not render anything until all CSS in the <head> is loaded and parsed. Keep your critical CSS small, and consider inlining it directly in the HTML for the fastest possible first paint. Tools like critical can extract the CSS needed for above-the-fold content automatically.

JavaScript blocks parsing by default. Scripts in the <head> without async or defer stop the HTML parser until they download and execute. Always use defer for scripts that do not need to run immediately:

<script src="/app.js" defer></script>

The difference between async and defer: async scripts execute as soon as they download (in any order), while defer scripts execute after HTML parsing is complete (in document order). For application code, defer is usually what you want.

Measuring Performance in Production

Lab testing with Lighthouse is useful, but real user data tells the true story. Real User Monitoring (RUM) captures performance data from actual users on real devices and networks.

Google's CrUX (Chrome User Experience Report) provides real-world Core Web Vitals data aggregated from Chrome users. You can access it through PageSpeed Insights or the CrUX API.

For custom RUM, use the PerformanceObserver API or the web-vitals library to collect metrics and send them to your analytics service:

import { onLCP, onINP, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  fetch('/api/vitals', {
    method: 'POST',
    keepalive: true, // lets the request survive page unload
    body: JSON.stringify({
      name: metric.name,
      value: metric.value,
      id: metric.id,
    }),
  });
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);

Set performance budgets. Decide upfront that your JavaScript bundle must stay under 200KB, your LCP must be under 2 seconds, and your CLS must be under 0.05. Then monitor these in CI. Tools like bundlesize and Lighthouse CI can fail your build if you exceed your budgets.
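As a concrete sketch, a Lighthouse CI assertion config for those budgets might look like this (audit and resource-summary names follow Lighthouse CI's documented assertion keys; the numbers are the budgets from above):

```javascript
// lighthouserc.js — fail the build when the budgets above are exceeded
module.exports = {
  ci: {
    assert: {
      assertions: {
        'largest-contentful-paint': ['error', { maxNumericValue: 2000 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.05 }],
        // Script bytes only, so it maps to the 200KB JavaScript budget
        'resource-summary:script:size': ['error', { maxNumericValue: 200 * 1024 }],
      },
    },
  },
};
```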

Quick Wins Checklist

Here are the optimizations I apply to every project as a baseline:

  1. Enable gzip or Brotli compression on the server
  2. Set proper cache headers for all asset types
  3. Use WebP/AVIF images with proper srcset and sizes
  4. Lazy load offscreen images and iframes
  5. Preload the LCP image and critical fonts
  6. Use font-display: swap for all custom fonts
  7. Defer non-critical JavaScript
  8. Inline critical CSS
  9. Set explicit width and height on images and videos
  10. Run bundle analysis monthly and remove unused dependencies

My Performance Workflow

When I build a new project, I follow this workflow:

  1. Start with Lighthouse. Get a baseline score before any optimization.
  2. Fix the biggest problem first. Usually it is images or a bloated JavaScript bundle.
  3. Measure again. Verify the improvement with Lighthouse and real user data.
  4. Set budgets. Add performance budgets to CI so regressions get caught early.
  5. Monitor continuously. Check CrUX data and RUM metrics weekly.

Performance optimization is iterative. You will never be "done." But by building good habits and measuring consistently, you can maintain fast, responsive websites that keep users happy and search engines satisfied.

The bottom line: every millisecond matters. Users notice when a site is fast, and they definitely notice when it is slow. The techniques in this post are not exotic — they are the fundamentals that every web developer should apply to every project. Start with the biggest wins (images and JavaScript), measure your progress, and keep iterating.