Lesson 10 — Advanced Configuration and Best Practices

⏱ Est. reading time: 3 min · Updated on 5/7/2026

This lesson covers how to optimize Firecrawl's performance through advanced settings and shares best practices for complex production environments.

10.1 Advanced MCP Server Configuration

You can configure the MCP Server at two levels:

  • Global Configuration (~/.claude/settings.json): Shared across all projects.
  • Project Configuration (./.claude/settings.json): Applies only to the current project.

Environment Variable Reference:

  • FIRECRAWL_API_KEY: Required for Cloud services.
  • FIRECRAWL_API_URL: Required for Self-hosted instances (e.g., http://localhost:3002).
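
Putting the two levels together, a project-level entry might look like the sketch below. The server name `firecrawl`, the `npx` launcher, and the package name `firecrawl-mcp` are assumptions here; check your MCP client's documentation for the exact schema, and set only the variable that matches your deployment (API key for Cloud, API URL for self-hosted):

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-YOUR-KEY"
      }
    }
  }
}
```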

10.2 Browser Profile Reuse (Persistent Login)

If you need to scrape the same authenticated website multiple times, use the Profile feature to maintain the session:

"profile": {
  "name": "my-session-name",
  "saveChanges": true
}

Requests that reuse the same name share cookies and LocalStorage, so you avoid repeated logins and avoid re-triggering some security checks.
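For context, the profile block sits alongside the other scrape arguments. A minimal sketch (the URL and formats values are placeholders, not part of the lesson):

```json
{
  "url": "https://example.com/dashboard",
  "formats": ["markdown"],
  "profile": {
    "name": "my-session-name",
    "saveChanges": true
  }
}
```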


10.3 Proxy Strategy Selection Guide

  • basic — Standard sites: lowest cost.
  • stealth — Basic anti-scraping: simulates real browser fingerprints.
  • enhanced — Heavy protection (e.g., Cloudflare): residential IPs plus advanced fingerprints.
  • auto — Unknown protection level: the system tries strategies automatically; the recommended choice.
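Selecting a strategy is typically a single field on the scrape request. A minimal sketch, assuming the parameter is named `proxy` and the URL is a placeholder:

```json
{
  "url": "https://example.com",
  "proxy": "auto"
}
```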

10.4 Production Best Practices

1. Reducing Token Consumption (Cost/Speed)

  • Enable onlyMainContent: true to strip webpage noise.
  • Use includeTags to extract only the target elements instead of the full page.
  • Set removeBase64Images: true to reduce response size.
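Combined, the three options above might appear in a scrape request like this. A sketch under the assumption that these flags sit at the top level of the request; the URL and includeTags selectors are illustrative:

```json
{
  "url": "https://example.com/article",
  "formats": ["markdown"],
  "onlyMainContent": true,
  "includeTags": ["article", "main"],
  "removeBase64Images": true
}
```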

2. Caching Strategy

For pages that don't change frequently (like technical docs or API references), enabling the cache can save a significant amount of Credits:

"storeInCache": true,
"maxAge": 3600 // Cache for 1 hour
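Embedded in a full request, the cache options might look like the sketch below. The URL is a placeholder, and the maxAge unit (seconds) follows the snippet above; verify the unit against your Firecrawl version's API reference:

```json
{
  "url": "https://docs.example.com/reference",
  "formats": ["markdown"],
  "storeInCache": true,
  "maxAge": 3600
}
```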

3. Polite Scraping

  • Maintain an interval of at least 2 seconds between requests to the same domain.
  • Prefer Map + Targeted Scrape over blind site-wide Crawling.
  • Respect the target site's robots.txt (Firecrawl complies by default).