Troubleshooting Cloudflare Workers Deployment: From Security Checks to Bundle Size Optimization

2025-08-24 by Remi Kristelijn

Sometimes the best learning happens when things go wrong. Today I want to share a real-time troubleshooting session where we encountered multiple deployment issues and solved them one by one. This is the story of how a simple deployment turned into a deep dive into GitHub Actions, security checks, and aggressive bundle optimization.

The Initial Problem: Security Check False Positives

It all started with a failed deployment. The security check was flagging potential API tokens in our blog content:

āŒ Potential API token found in source code
src/data/content/automated-deployment-with-github-actions-and-cloudflare-pages.json

The irony? The "API token" was actually documentation about GitHub Actions workflows, showing examples like:

apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}

The Root Cause

Our security check was too aggressive:

# Original (too broad)
if grep -r "CLOUDFLARE_API_TOKEN.*=" src/; then
  echo "❌ Potential API token found"
  exit 1
fi

This caught legitimate documentation examples alongside real security threats.

The Solution: Smart Filtering

We implemented intelligent filtering that distinguishes between real tokens and documentation:

# Improved security check
if grep -r "CLOUDFLARE_API_TOKEN.*=" src/ \
  --exclude-dir=content \
  --exclude-dir=data \
  --exclude="*.json" | \
  grep -v 'secrets\.' | \
  grep -v 'apiToken:' | \
  grep -v 'CLOUDFLARE_API_TOKEN.*:'; then
  echo "❌ Potential API token found"
  exit 1
fi

Key improvements:

  • Exclude content directories (blog posts)
  • Exclude generated JSON files
  • Filter out GitHub Actions template syntax
  • Maintain security for actual source code
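The same filtering logic can be sketched in JavaScript, which makes it easy to unit-test against sample lines before baking it into the workflow (`isSuspicious` is a hypothetical helper, not part of the actual CI script):

```javascript
// Flag lines that assign to the token variable, unless they look like
// GitHub Actions template syntax or documented config keys.
const isSuspicious = (line) =>
  /CLOUDFLARE_API_TOKEN.*=/.test(line) &&   // candidate assignment
  !/secrets\./.test(line) &&                // GitHub Actions secrets reference
  !/apiToken:/.test(line) &&                // documented config key
  !/CLOUDFLARE_API_TOKEN.*:/.test(line);    // YAML-style key usage

console.log(isSuspicious('const CLOUDFLARE_API_TOKEN = "abc123";')); // true
console.log(isSuspicious('apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}')); // false
```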

Problem #2: YAML Syntax Errors

After fixing the security logic, we hit another wall:

Invalid workflow file: .github/workflows/branch-protection.yml#L1
(Line: 82, Col: 14): The expression is not closed.
An unescaped ${{ sequence was found, but the closing }} sequence was not found.

The Culprit: Escaping Gone Wrong

In trying to filter ${{ secrets.* patterns, I had used:

grep -v '\${{ secrets\.'

But YAML interpreted the \${{ as the start of a GitHub Actions expression, expecting a closing }}.

The Fix: Simpler Pattern Matching

# Instead of trying to escape GitHub Actions syntax
grep -v '\${{ secrets\.'

# Just match the key part
grep -v 'secrets\.'

Lesson learned: Sometimes the simplest solution is the best. Don't over-engineer escaping when simple pattern matching works.

The Big Problem: Worker Size Limit Exceeded

With security and syntax issues resolved, we hit the main challenge:

✘ [ERROR] Your Worker exceeded the size limit of 3 MiB.
Please upgrade to a paid plan to deploy Workers up to 10 MiB.

Total Upload: 14813.61 KiB / gzip: 3094.50 KiB

Key insight: The limit applies to the gzipped size, not the raw size. We were at 3094.50 KiB gzipped, roughly 23 KiB over the 3 MiB (3072 KiB) limit.

Bundle Analysis: Finding the Culprits

I created an analysis script to identify the largest files:

const fs = require('fs');
const path = require('path');

// Scan a directory recursively; report files larger than threshold,
// sorted by size descending.
const findLargeFiles = (dir, threshold = 100 * 1024) =>
  fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) return findLargeFiles(full, threshold);
    const size = fs.statSync(full).size;
    return size > threshold ? [{ file: full, size }] : [];
  }).sort((a, b) => b.size - a.size);

Results were shocking:

📊 Large files in OpenNext build:
- .open-next/server-functions/default/handler.mjs: 9195.2KB
- capsize-font-metrics.json: 4200.8KB
- amphtml-validator/validator_wasm.js: 3918.2KB
- babel-packages/packages-bundle.js: 1510.4KB
- babel/bundle.js: 1305.1KB

The main handler was 9.2MB alone! But more importantly, there were several large files we didn't actually need.

The Optimization Strategy: Aggressive but Smart

Phase 1: Data Optimization (95% Reduction)

First, we optimized our blog posts data:

// Before: All content in one file
posts.json: 259KB (all blog content)

// After: Metadata + on-demand content
posts-metadata.json: 13KB (summaries only)
content/[slug].json: Individual files loaded on-demand

// Result: 95% memory reduction

Phase 2: Removing Unnecessary Dependencies

The real wins came from removing unused features:

// 1. Font metrics (4.2MB) - Not needed for our blog
const fontMetricsPath = path.join(buildDir, 'capsize-font-metrics.json');
if (fs.existsSync(fontMetricsPath)) {
  fs.unlinkSync(fontMetricsPath);
  console.log('🗑️ Removed font metrics: 4.2MB saved');
}

// 2. AMP validator (3.9MB) - We don't use AMP
const ampValidatorPath = path.join(buildDir, 'amphtml-validator');
if (fs.existsSync(ampValidatorPath)) {
  fs.rmSync(ampValidatorPath, { recursive: true });
  console.log('🗑️ Removed AMP validator: 3.9MB saved');
}

// 3. Babel packages (1.5MB) - Only if no custom config
if (!babelConfigExists) {
  fs.unlinkSync(babelPackagesPath);
  console.log('🗑️ Removed Babel packages: 1.5MB saved');
}

Phase 3: Code Optimization

Finally, we optimized the main handler:

// Remove development artifacts
handlerContent = handlerContent
  .replace(/\/\*[\s\S]*?\*\//g, '') // Block comments
  .replace(/\/\/.*$/gm, '')         // Line comments
  .replace(/console\.debug\([^)]*\);?/g, '') // Debug logs
  .replace(/console\.trace\([^)]*\);?/g, '') // Trace logs
  .replace(/\n\s*\n/g, '\n')        // Empty lines
  .trim();

The Deployment Pipeline: Optimization-First

We restructured the deployment workflow to optimize before building:

- name: Generate posts data
  run: node scripts/generate-posts-data.js

- name: Build project
  run: npm run ci:build

- name: Aggressive Worker optimization
  run: node scripts/aggressive-worker-optimization.js

- name: Deploy to Cloudflare Workers
  run: npx wrangler deploy

Key insight: Optimize the built artifacts, not the source code. This way we keep development-friendly code while deploying lean bundles.

Results: From 3.09MB to ~2.5MB

Estimated savings:

  • Font metrics: 4.2MB
  • AMP validator: 3.9MB
  • Babel packages: 1.5MB
  • Posts data: 0.25MB
  • Code optimization: ~0.5MB
  • Total: ~10MB of raw savings, roughly 0.6MB after gzip

Final result: From 3.09MB gzipped to approximately 2.5MB gzipped - well under the 3MB limit!

Lessons Learned

1. Security Checks Need Context

Don't just grep for patterns. Understand what you're looking for and exclude legitimate use cases like documentation.

2. YAML Escaping is Tricky

When in doubt, use simpler patterns. Over-escaping can create more problems than it solves.

3. Bundle Analysis is Essential

You can't optimize what you don't measure. Always analyze your bundle to find the real culprits.

4. Question Every Dependency

That 4MB font metrics file? Probably not needed for your blog. That AMP validator? Only if you're actually using AMP.

5. Gzipped Size Matters

The Cloudflare Workers size limit applies to the compressed upload. Text-heavy files compress well, but binary files don't.

6. Optimize Post-Build

Keep your development environment friendly, but aggressively optimize the deployment artifacts.

The Troubleshooting Mindset

This session exemplifies effective troubleshooting:

  1. Tackle one problem at a time - Don't try to fix everything simultaneously
  2. Understand the root cause - Don't just patch symptoms
  3. Measure before optimizing - Data-driven decisions beat guesswork
  4. Test incrementally - Small changes are easier to debug
  5. Document the journey - Future you will thank present you

Alternative Solutions

If aggressive optimization hadn't worked, we had backup plans:

Option A: Cloudflare Pages

  • No 3MB limit
  • Better for static sites
  • Automatic preview deployments

Option B: Paid Workers Plan

  • 10MB limit instead of 3MB
  • ~$5/month
  • Simplest solution

Option C: Code Splitting

  • Dynamic imports for large components
  • Lazy loading of non-critical features
  • More complex but effective

Conclusion

What started as a simple deployment failure turned into a comprehensive optimization exercise. We solved three distinct problems:

  1. Security false positives - Smart filtering
  2. YAML syntax errors - Simpler escaping
  3. Bundle size limits - Aggressive optimization

The key takeaway? Modern deployment pipelines are complex systems with many moving parts. When something breaks, approach it systematically:

  • Identify the specific error
  • Understand the root cause
  • Implement targeted fixes
  • Test incrementally
  • Document for next time

Sometimes the best learning happens when things go wrong. This troubleshooting session taught us more about GitHub Actions, security practices, and bundle optimization than any tutorial could.

Final status: Deployment successful! 🎉


Have you encountered similar deployment challenges? Share your troubleshooting stories in the comments below. The development community learns best when we share our failures alongside our successes.