    API Payload Size Optimization Rules

    Comprehensive coding rules focused on minimizing API payload size and improving transfer efficiency in JavaScript/TypeScript back-ends (Node.js, Serverless, GraphQL, REST).

    Stop Bleeding Bandwidth: Your API Payload Optimization Playbook

    Your API just hit a 413. Your mobile users are abandoning requests. Your AWS Lambda costs are climbing because you're shipping megabytes of JSON when kilobytes would do. Sound familiar?

    The Hidden Cost of Bloated APIs

    Every extra byte in your API responses translates to real costs: slower mobile experiences, higher bandwidth bills, increased Lambda execution time, and users hitting the back button. The math is brutal—a 500KB JSON response that could be 50KB isn't just 10x larger, it's potentially 10x slower to transmit and parse.

    Modern APIs ship way too much data. We return entire user objects when we need just an ID. We fetch all fields when the client displays three. We send uncompressed JSON over the wire like it's 2010.

    What These Rules Actually Do

    These Cursor Rules transform your API development workflow into a payload-first approach. Instead of building APIs that accidentally ship bloated responses, you'll build APIs that are architected for efficiency from day one.

    The rules establish four core optimization strategies:

    Compression First: Automatic gzip/Brotli compression for all text payloads over 1KB, with proper AWS API Gateway integration and content negotiation.

    Field Selection Everywhere: Built-in TypeScript utilities for partial field selection, GraphQL resolvers that strip unspecified fields, and consistent REST query parameter patterns.

    Smart Size Limits: Defensive payload size checking at every boundary, with clear error handling and fallback strategies (like S3 presigned URLs for large transfers).

    Binary When It Matters: Seamless Protocol Buffer integration for high-frequency endpoints where every byte counts.
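
    For the size-limit fallback mentioned in the third strategy, a minimal sketch might look like the following, assuming the v2 `aws-sdk/clients/s3` client referenced in the rules below; the `respondOrOffload` helper name, the bucket argument, and the five-minute expiry are illustrative:

    // Respond inline when the payload is small, otherwise park it in S3 and
    // redirect the client to a short-lived presigned GET URL.
    import type { Response } from 'express';
    import S3 from 'aws-sdk/clients/s3';
    import { randomUUID } from 'crypto';
    
    const s3 = new S3();
    const MAX_PAYLOAD = 1 * 1024 * 1024; // 1 MiB hard limit, matching the rules
    
    export async function respondOrOffload(res: Response, data: unknown, bucket: string) {
      const body = JSON.stringify(data);
      if (Buffer.byteLength(body) <= MAX_PAYLOAD) {
        return res.json(data); // small enough to send inline
      }
      const key = `large-responses/${randomUUID()}.json`;
      await s3.putObject({ Bucket: bucket, Key: key, Body: body, ContentType: 'application/json' }).promise();
      const url = await s3.getSignedUrlPromise('getObject', { Bucket: bucket, Key: key, Expires: 300 });
      return res.redirect(302, url); // client follows the redirect straight to S3
    }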

    Key Benefits You'll See Immediately

    60-80% Smaller Payloads: Compression alone typically reduces JSON payloads by 60-80%. Add field selection and you're looking at 90%+ reductions for oversized responses.

    Faster Mobile Performance: Smaller payloads mean faster parsing, lower battery drain, and better user experience on slower connections.

    Lower AWS Costs: Reduced Lambda execution time, lower API Gateway data transfer costs, and fewer timeout-related retries.

    Bulletproof Error Handling: No more mysterious 413 errors or timeouts. Clear HTTP status codes with actionable remediation guidance.

    Real Developer Workflows

    Before: The Bloated User Profile API

    // Returns 2MB of user data including full post history
    app.get('/api/users/:id', async (req, res) => {
      const user = await db.users.findById(req.params.id, {
        include: ['posts', 'comments', 'followers', 'following']
      });
      res.json(user); // 😱 2MB response
    });
    

    After: Optimized with Field Selection

    // Returns only requested fields, compressed
    app.get('/api/users/:id', compression(), async (req, res) => {
      const fields = parseFields(req.query.fields, USER_ALLOWED_FIELDS);
      const user = await db.users.findById(req.params.id, { select: fields });
      
      const payload = selectFields(user, fields);
      assertPayloadSize(payload, MAX_USER_PAYLOAD);
      
      res.json(payload); // 🎉 12KB response (compressed to 3KB)
    });
    

    GraphQL Field Stripping

    // Automatically removes unselected fields before serialization
    const resolvers = {
      User: {
        posts: stripUnselectedFields(async (parent, args, context, info) => {
          const selectedFields = getSelectedFields(info);
          return fetchPosts(parent.id, selectedFields);
        })
      }
    };
    

    Binary Protocol Buffer for High-Frequency APIs

    // Switch to protobuf for APIs called 1000+ times/minute
    app.get('/api/metrics/realtime', async (req, res) => {
      const metrics = await getRealtimeMetrics();
      
      if (req.accepts('application/x-protobuf')) {
        const buffer = MetricsProto.encode(metrics).finish();
        res.contentType('application/x-protobuf').send(buffer); // 70% smaller
      } else {
        res.json(metrics); // fallback
      }
    });
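
    The `MetricsProto` encoder above has to be built somewhere; a minimal sketch using `protobufjs`, where the `schemas/metrics.proto` path and the `metrics.Metrics` message name are assumptions:

    // Load and cache the compiled message type once per cold start.
    import * as protobuf from 'protobufjs';
    
    let metricsType: protobuf.Type | undefined;
    
    export async function loadMetricsType(): Promise<protobuf.Type> {
      if (!metricsType) {
        const root = await protobuf.load('schemas/metrics.proto'); // path is an assumption
        metricsType = root.lookupType('metrics.Metrics');          // message name is an assumption
      }
      return metricsType;
    }
    
    // Verify a plain object against the schema, then encode it to a compact binary buffer.
    export async function encodeMetrics(payload: Record<string, unknown>): Promise<Uint8Array> {
      const type = await loadMetricsType();
      const problem = type.verify(payload);
      if (problem) throw new Error(`Invalid metrics payload: ${problem}`);
      return type.encode(type.fromObject(payload)).finish();
    }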
    

    Implementation Guide

    1. Install and Configure Core Dependencies

    # compression (Express) and @fastify/compress cover both server stacks; zlib is built into Node.js
    npm install compression @fastify/compress protobufjs http-errors
    npm install -D size-limit @size-limit/preset-big-lib
    

    2. Add the Cursor Rules to Your Project

    Copy the rules configuration into your .cursorrules file in your project root.

    3. Set Up Payload Size Monitoring

    // utils/payload/size-check.ts
    import createHttpError from 'http-errors';
    
    export const MAX_PAYLOAD = 1 * 1024 * 1024; // 1MB
    
    export function assertPayloadSize(data: any, maxSize = MAX_PAYLOAD) {
      const size = Buffer.byteLength(JSON.stringify(data));
      if (size > maxSize) {
        throw createHttpError(413, {
          code: 'PAYLOAD_TOO_LARGE',
          maxBytes: maxSize,
          actualBytes: size,
          hint: 'Use pagination or upload to S3'
        });
      }
      return size;
    }
    

    4. Enable Compression Middleware

    // Express
    import compression from 'compression';
    app.use(compression({ threshold: 1024 }));
    
    # AWS API Gateway - add to serverless.yml
    provider:
      apiGateway:
        minimumCompressionSize: 1024
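
    If you run Fastify instead of Express, `@fastify/compress` (installed in step 1) covers the same ground; a minimal sketch mirroring the 1 KB threshold and preferring Brotli:

    // Fastify: register the compression plugin globally.
    import Fastify from 'fastify';
    import compress from '@fastify/compress';
    
    const app = Fastify();
    
    app.register(compress, {
      global: true,              // compress every eligible route
      threshold: 1024,           // skip tiny payloads, same cutoff as the Express setup
      encodings: ['br', 'gzip'],
    });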
    

    5. Add Size Limit CI Checks

    // package.json
    {
      "size-limit": [
        {
          "path": "dist/api-responses/*.json",
          "limit": "10 KB"
        }
      ]
    }
    

    6. Create Field Selection Utilities

    // utils/payload/field-selection.ts
    export type PartialFields<T, K extends keyof T> = Pick<T, K>;
    
    export function selectFields<T, K extends keyof T>(
      obj: T, 
      fields: K[]
    ): PartialFields<T, K> {
      const result = {} as PartialFields<T, K>;
      fields.forEach(field => {
        if (obj[field] !== undefined) {
          result[field] = obj[field];
        }
      });
      return result;
    }
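
    The optimized route handler earlier also leaned on a `parseFields` helper that isn't defined above; a minimal sketch, assuming a comma-separated `?fields=` query parameter validated against a whitelist such as `USER_ALLOWED_FIELDS`:

    // utils/payload/parse-fields.ts (hypothetical companion to selectFields)
    export function parseFields<T>(
      raw: unknown,
      allowed: readonly (keyof T & string)[]
    ): (keyof T & string)[] {
      // No selection requested: fall back to the full whitelist.
      if (typeof raw !== 'string' || raw.trim() === '') {
        return [...allowed];
      }
      // Keep only the requested fields that the whitelist permits.
      const requested = raw.split(',').map((field) => field.trim());
      const selected = requested.filter((field): field is keyof T & string =>
        (allowed as readonly string[]).includes(field)
      );
      return selected.length > 0 ? selected : [...allowed];
    }
    
    // Usage: const fields = parseFields<User>(req.query.fields, USER_ALLOWED_FIELDS);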
    

    Results & Impact

    Immediate Wins:

    • 60-80% reduction in response sizes through compression
    • 90%+ reduction for over-fetched endpoints through field selection
    • 25-50% reduction in Lambda execution time
    • Elimination of mysterious 413 errors and payload-related timeouts

    Long-term Benefits:

    • Consistent sub-200ms API response times
    • 40% reduction in mobile app bandwidth usage
    • Proactive payload size monitoring prevents regressions
    • Binary protocol support for performance-critical endpoints

    CI Integration: The size-limit integration catches payload bloat during development, not in production. Your builds fail when response fixtures exceed thresholds, forcing conscious decisions about payload growth.

    This isn't just about smaller payloads—it's about building APIs that respect your users' time, devices, and network conditions. Every byte you don't send is a byte that loads faster, costs less, and provides a better experience.

    Ready to stop shipping unnecessary data? Your mobile users (and your AWS bill) will thank you.

    JavaScript
    API Development
    Backend Development
    AWS Lambda
    AWS API Gateway
    Google Cloud APIs
    GraphQL
    Payload Size Optimization

    Configuration

    You are an expert in JavaScript/TypeScript, Node.js, AWS Lambda, AWS API Gateway, GraphQL, REST, Protocol Buffers, gzip/Brotli, and S3 presigned-URL workflows.
    
    Key Principles
    - Treat every byte as a cost: smaller payloads improve latency, cost, and battery life.
    - Send **only** the data a client needs now; defer or offload the rest.
    - Prefer structured pagination + field selection over bulk transfers.
    - Compress everything that is text; use binary formats when density matters.
    - Enforce size limits defensively at every trust boundary (client ⇄ API ⇄ downstream).
    - Fail fast with clear HTTP status and remediation guidance (e.g., 413 + `Retry-After`).
    - Automate audits: size regressions must fail CI.
    
    JavaScript / TypeScript Rules
    - Default to TypeScript. Use `interface` definitions that map 1-to-1 with response schemas; keep field lists minimal and explicit.
    - Prefer `camelCase` for internal variables but **abbreviate JSON keys** shipped over the wire (e.g., `u` instead of `username`). Maintain a lookup map in code to avoid magic strings (see the sketch at the end of this list).
    - Use generics to create `Paginated<T>` and `PartialFields<T,K>` utility types for pagination/field-selection helpers.
    - Import `zlib` or `@fastify/compress` and apply gzip or Brotli (`br`) to every `application/json` response larger than 1 kB.
    - Use `protobufjs` or `@bufbuild/protobuf` when transferring >10 kB objects repeatedly.
    - Keep payload size checks at file top:
      ```ts
      const MAX_PAYLOAD = 1 * 1024 * 1024; // 1 MiB
      if (Buffer.byteLength(JSON.stringify(body)) > MAX_PAYLOAD) {
        throw createHttpError(413, 'Payload too large. Use pagination or S3 upload.');
      }
      ```
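    - A minimal sketch of such an abbreviation lookup map (key names are illustrative):
      ```ts
      // Single source of truth for wire-key abbreviations; avoids magic strings.
      export const WIRE_KEYS = {
        username: 'u',
        createdAt: 'c',
        avatarUrl: 'a',
      } as const;
      
      // Rename keys just before serialization; unknown keys pass through unchanged.
      export function abbreviateKeys<T extends Record<string, unknown>>(obj: T): Record<string, unknown> {
        const out: Record<string, unknown> = {};
        for (const [key, value] of Object.entries(obj)) {
          const short = (WIRE_KEYS as Record<string, string | undefined>)[key];
          out[short ?? key] = value;
        }
        return out;
      }
      ```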
    
    Error Handling and Validation
    - Hard-limit incoming request bodies (`express.json({ limit: '1mb' })` or API Gateway `maxBodyBytes`).
    - On 413, return JSON:
      `{ code: 'PAYLOAD_TOO_LARGE', maxBytes: 1_048_576, hint: 'Upload to /sign-s3 instead' }`.
    - Validate query parameters for `limit`, `offset`, `fields` against whitelists to prevent unbounded results.
    - Always gzip error responses > 256 B to limit blast radius when stack traces are enabled in lower environments.
    
    AWS Lambda / API Gateway Rules
    - Enable **HTTP compression** in API Gateway by setting `minimumCompressionSize` to 1024 on the REST API.
    - Register `application/x-protobuf` in *Binary Media Types* and set `Content-Encoding: gzip` for JSON.
    - Keep Lambda responses < 6 MB after compression; otherwise return a 302 redirect to an S3 presigned URL.
    - Use `aws-sdk/clients/s3` to generate presigned PUT URLs for large uploads; accept only the key in the follow-up POST body (see the sketch at the end of this list).
    - Stream large server-to-client transfers to avoid memory pressure (e.g., Lambda response streaming via `awslambda.streamifyResponse`, piping through `zlib.createGzip()`).
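    - A minimal sketch of the `/sign-s3` handler (the bucket environment variable, key prefix, and expiry are illustrative):
      ```ts
      // handlers/sign-s3.ts (hypothetical): hands the client a short-lived PUT URL
      // so large bodies never pass through API Gateway or Lambda.
      import S3 from 'aws-sdk/clients/s3';
      import { randomUUID } from 'crypto';
      
      const s3 = new S3();
      
      export async function handler() {
        const key = `uploads/${randomUUID()}.json`;
        const uploadUrl = await s3.getSignedUrlPromise('putObject', {
          Bucket: process.env.UPLOAD_BUCKET ?? '', // bucket name assumed to come from the environment
          Key: key,
          Expires: 300, // five minutes
          ContentType: 'application/json',
        });
        // The client PUTs its payload to uploadUrl, then POSTs only `key` back to the API.
        return { statusCode: 200, body: JSON.stringify({ uploadUrl, key }) };
      }
      ```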
    
    GraphQL Rules
    - Implement **field selection** via `@graphql-tools/resolvers-composition` that strips out unspecified fields before serialization.
    - Batch queries with `DataLoader` to avoid multiple heavy payloads.
    - Provide a `first` argument default of 20 items and hard-cap at 100.
    
    REST / Express Rules
    - Attach the `compression()` middleware as the **first** middleware after the security middleware.
    - Adopt consistent pagination query params: `?limit=<1-100>&offset=<0+>`.
    - Support `Accept-Encoding: br, gzip` negotiation; fall back to identity encoding only when none requested.
    
    Testing
    - Add Jest snapshot tests that fail when response fixture size > 10 kB unless intentionally updated (`EXPECT_PAYLOAD_INCREASE`); see the sketch at the end of this list.
    - Integrate `size-limit` CLI in CI to track static asset and JSON fixture sizes.
    - Run monthly API audits: script fetches top 50 endpoints, logs size, compares historical trend.
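    - A minimal sketch of the fixture-size guard (the fixture path and the plain size assertion are illustrative):
      ```ts
      // tests/payload-size.test.ts: fails CI when a response fixture outgrows its budget.
      import { statSync } from 'fs';
      
      const LIMIT_BYTES = 10 * 1024; // 10 kB budget per fixture
      const increaseExpected = process.env.EXPECT_PAYLOAD_INCREASE === 'true';
      
      test('user-response fixture stays within its payload budget', () => {
        const { size } = statSync('__fixtures__/user-response.json'); // path is an assumption
        if (!increaseExpected) {
          expect(size).toBeLessThanOrEqual(LIMIT_BYTES);
        }
      });
      ```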
    
    Performance
    - Pre-compress common immutable JSON (schemas, dictionaries) at build time; serve with `Content-Encoding: gzip` + `Cache-Control: immutable, max-age=31536000`.
    - Cache frequently requested pages in CloudFront or API Gateway `cache cluster` with TTL 60 s.
    - For mobile clients, aggressively use `if-none-match` with ETags to transfer 304 instead of payload.
    
    Security
    - Strip sensitive PII before compression to avoid BREACH-style attacks.
    - Limit message size before decryption to defend against *zip bombs* in encrypted uploads.
    - Reject unexpected `Content-Encoding` headers (e.g., `compress`, `deflate`) to prevent request smuggling.
    
    Tooling
    - `@fastify/compress` – runtime Brotli/gzip.
    - `protobufjs` / `@bufbuild/protobuf` – binary serialization.
    - `npx size-limit` – CI gate for bundle/payload size.
    - `madge` – visualize dependency graphs (`--circular` flags cycles) that can bloat Lambda bundles.
    
    Directory & File Conventions
    - `/handlers/*` – Lambda handlers (max 200 LOC each).
    - `/schemas/*.proto` – Protobuf definitions.
    - `/utils/payload/` – shared compression, abbreviation maps, size-check helpers.
    
    Common Pitfalls & Anti-Patterns
    - "One endpoint to fetch them all" patterns returning megabytes—always split into granular resources.
    - Overusing GraphQL fragments that re-introduce unused fields.
    - Leaving compression disabled in staging → compression-related defects surface only in production.
    - Forgetting to strip logs before sending JSON to clients (e.g., `debugInfo`).
    
    Quick Reference Cheat-Sheet
    - 1 MiB hard limit on requests/responses.
    - Enable gzip (server) + accept Brotli (client).
    - Use pagination (`limit`) + sorting (`orderBy`).
    - Employ field selection/GraphQL selection sets.
    - Switch to Protobuf for chatty high-frequency traffic.
    - Offload big blobs (images, reports) to S3 via presigned URLs.