Overview

Rate limits protect the API from abuse and ensure fair usage for all users. Limits are applied per API key and vary by plan.
Plan              Requests/Second   Emails/Month     Burst Limit
Free              10 req/s          5,000 emails     20 requests
Pro ($20/mo)      100 req/s         50,000 emails    200 requests
Scale ($100/mo)   500 req/s         200,000 emails   1,000 requests
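
Burst limits of this shape are commonly modeled as a token bucket: the bucket holds up to the burst size in tokens and refills at the per-second rate. A minimal sketch of that behavior, using the Free plan's numbers (this illustrates how such limits typically behave, not the actual server implementation):

```typescript
// Token-bucket model: refills at `ratePerSec`, holds at most `burst` tokens.
// Illustrative only; not the server's actual implementation.
class TokenBucket {
  private tokens: number;
  private lastRefillMs: number;

  constructor(
    private ratePerSec: number,
    private burst: number,
    nowMs: number = 0
  ) {
    this.tokens = burst;
    this.lastRefillMs = nowMs;
  }

  // Returns true if a request at time `nowMs` would be allowed.
  tryRequest(nowMs: number): boolean {
    const elapsedSec = (nowMs - this.lastRefillMs) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefillMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With 10 tokens/s and a burst of 20, an idle client can fire 20 requests at once, then sustain roughly 10 per second.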

Rate Limit Headers

Every API response includes headers to help you track your rate limit status:
Header                  Description
X-RateLimit-Limit       Maximum requests allowed per window
X-RateLimit-Remaining   Requests remaining in the current window
X-RateLimit-Reset       Unix timestamp when the window resets
Retry-After             Seconds to wait before retrying (only on 429 responses)

Example Response Headers

response-headers.txt
HTTP/1.1 200 OK
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 150
X-RateLimit-Reset: 1705320000
Content-Type: application/json
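
You can read these headers from any response and throttle proactively before hitting a 429. A small sketch (`parseRateLimit` is a hypothetical helper, not part of any SDK):

```typescript
// Shape of the rate-limit state carried on every response.
interface RateLimitInfo {
  limit: number;      // X-RateLimit-Limit
  remaining: number;  // X-RateLimit-Remaining
  resetAt: Date;      // X-RateLimit-Reset (Unix seconds -> Date)
}

// Hypothetical helper: extract rate-limit state from response headers.
function parseRateLimit(headers: Headers): RateLimitInfo {
  return {
    limit: Number(headers.get('X-RateLimit-Limit')),
    remaining: Number(headers.get('X-RateLimit-Remaining')),
    resetAt: new Date(Number(headers.get('X-RateLimit-Reset')) * 1000),
  };
}
```

A client might pause sending when `remaining` drops below a safety threshold and resume after `resetAt`.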

Check Current Usage

You can check your current rate limit status with any API request, or use the dedicated endpoint:
Terminal
curl -X GET "https://www.unosend.co/api/v1/usage" \
  -H "Authorization: Bearer un_your_api_key" \
  -i

Response

response.json
{
  "plan": "pro",
  "rate_limit": {
    "requests_per_second": 100,
    "burst_limit": 200
  },
  "usage": {
    "emails_sent": 12450,
    "emails_limit": 50000,
    "period_start": "2024-01-01T00:00:00Z",
    "period_end": "2024-01-31T23:59:59Z"
  }
}
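
If you track quota client-side, the remaining monthly allowance follows directly from this response. A trivial sketch (the `Usage` type mirrors the `usage` object above; `emailsRemaining` is a hypothetical helper):

```typescript
// Mirrors the `usage` object returned by GET /v1/usage.
interface Usage {
  emails_sent: number;
  emails_limit: number;
}

// Hypothetical helper: emails left in the current billing period.
function emailsRemaining(u: Usage): number {
  return Math.max(0, u.emails_limit - u.emails_sent);
}
```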

Handling Rate Limits

When you exceed the rate limit, the API returns a 429 Too Many Requests response:
rate-limit-response.json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 60 seconds.",
    "retry_after": 60
  }
}
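
When a 429 arrives, honor the server's hint before retrying: prefer the Retry-After header (or the retry_after field in the body) and fall back to exponential backoff only when it is absent. A minimal sketch (`waitTimeMs` is a hypothetical helper, not part of any SDK):

```typescript
// Hypothetical helper: pick a wait time (ms) after a 429 response.
// Prefers the server's Retry-After header; falls back to exponential backoff.
function waitTimeMs(retryAfterHeader: string | null, attempt: number): number {
  if (retryAfterHeader !== null) {
    const seconds = Number(retryAfterHeader);
    if (!Number.isNaN(seconds)) {
      return seconds * 1000;
    }
  }
  return 1000 * Math.pow(2, attempt); // fallback: 1s, 2s, 4s, ...
}
```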

Implementing Retry Logic

Terminal
# Use --retry flag for automatic retries
curl -X POST "https://www.unosend.co/api/v1/emails" \
  -H "Authorization: Bearer un_your_api_key" \
  -H "Content-Type: application/json" \
  --retry 3 \
  --retry-delay 5 \
  -d '{"from": "[email protected]", "to": "[email protected]", "subject": "Hello", "html": "<p>Hi!</p>"}'

Exponential Backoff

For robust retry logic, use exponential backoff with jitter to avoid the thundering herd problem:
exponential-backoff.ts
// Assumes `apiKey` and an `EmailPayload` type are defined elsewhere.
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function sendWithExponentialBackoff(
  payload: EmailPayload,
  maxRetries: number = 5
): Promise<Response> {
  const baseDelay = 1000; // 1 second
  
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch('https://www.unosend.co/api/v1/emails', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
      });
      
      if (response.status === 429) {
        // Calculate delay with exponential backoff + jitter
        const delay = baseDelay * Math.pow(2, attempt);
        const jitter = Math.random() * 1000;
        const waitTime = delay + jitter;
        
        console.log(`Rate limited. Waiting ${waitTime}ms (attempt ${attempt + 1})`);
        await sleep(waitTime);
        continue;
      }
      
      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }
      
      return response;
      
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      
      const delay = baseDelay * Math.pow(2, attempt);
      await sleep(delay);
    }
  }
  
  throw new Error('Max retries exceeded');
}

Queue Pattern for Bulk Sending

For sending many emails, use a queue with rate limiting to stay within limits:
rate-limited-queue.ts
// Assumes sleep(), sendEmailWithRetry(), and an EmailPayload type are
// defined elsewhere.
class RateLimitedQueue {
  private queue: EmailPayload[] = [];
  private processing = false;
  private requestsPerSecond: number;
  
  constructor(requestsPerSecond: number = 10) {
    this.requestsPerSecond = requestsPerSecond;
  }
  
  add(payload: EmailPayload): void {
    this.queue.push(payload);
    void this.process(); // fire and forget; process() guards against re-entry
  }
  
  private async process(): Promise<void> {
    if (this.processing) return;
    this.processing = true;
    
    const interval = 1000 / this.requestsPerSecond;
    
    while (this.queue.length > 0) {
      const payload = this.queue.shift()!;
      
      try {
        await sendEmailWithRetry(payload);
      } catch (error) {
        console.error('Failed to send email:', error);
      }
      
      await sleep(interval);
    }
    
    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(10); // 10 req/sec

for (const recipient of recipients) {
  queue.add({
    from: '[email protected]',
    to: recipient.email,
    subject: 'Hello!',
    html: '<p>Your email content</p>'
  });
}
For bulk sending, consider using the /v1/emails/batch endpoint which allows up to 100 emails per request, significantly reducing API calls.
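
If you adopt the batch endpoint, large recipient lists need to be split into groups of at most 100 before sending. A generic sketch (`chunk` is a plain helper, not part of any SDK):

```typescript
// Split a list into consecutive chunks of at most `size` items,
// e.g. batches of up to 100 payloads for the batch endpoint.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Each chunk then becomes one request to the batch endpoint instead of up to 100 individual calls.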

Best Practices

1. Monitor rate limit headers
   Check X-RateLimit-Remaining and slow down before hitting limits.

2. Use batch endpoints
   Send multiple emails in one request using POST /v1/emails/batch to reduce API calls.

3. Implement queuing
   Queue emails during high-traffic periods and process them at a controlled rate.

4. Use webhooks instead of polling
   Instead of polling for status, use webhooks to receive delivery updates asynchronously.

5. Cache responses when possible
   Cache responses from endpoints like GET /v1/domains to reduce unnecessary requests.
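
For the caching practice, even a tiny in-memory TTL cache avoids re-fetching rarely-changing resources such as GET /v1/domains. A sketch (`TtlCache` is a hypothetical helper; the clock is passed in explicitly to keep it easy to test):

```typescript
// Minimal single-value TTL cache for rarely-changing API responses.
class TtlCache<T> {
  private value: T | undefined;
  private expiresAt = 0;

  constructor(private ttlMs: number) {}

  // Returns the cached value, or undefined once the TTL has elapsed.
  get(now: number = Date.now()): T | undefined {
    return now < this.expiresAt ? this.value : undefined;
  }

  set(value: T, now: number = Date.now()): void {
    this.value = value;
    this.expiresAt = now + this.ttlMs;
  }
}
```

A caller would check `cache.get()` first and only hit the API on a miss, then `cache.set()` the fresh response.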

Need Higher Limits?

If you need higher rate limits for your use case, upgrade your plan or contact us for Enterprise options with custom limits.