Overview

Rate limits protect the API from abuse and ensure fair usage for all users. Limits are applied per organization using an in-memory sliding window.
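
To illustrate how a sliding window behaves, here is a minimal client-side sketch of the idea (illustrative only; this is not the server's actual implementation):

```typescript
// Illustrative sliding-window limiter: a request is allowed only if fewer than
// `limit` requests occurred in the trailing `windowMs` interval.
class SlidingWindow {
  private timestamps: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  allow(now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because old timestamps continuously age out, a burst that exhausts the limit recovers gradually rather than all at once at a fixed boundary.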

API Request Limits

Rate limits scale with your plan to support high-throughput sending:
| Plan | Requests/Minute | Requests/Hour | Requests/Day |
| --- | --- | --- | --- |
| Free | 10 | 100 | 500 |
| 10K | 200 | 5,000 | 15,000 |
| 25K | 300 | 10,000 | 30,000 |
| 60K | 500 | 20,000 | 60,000 |
| 100K | 800 | 40,000 | 120,000 |
| 200K | 1,000 | 60,000 | 250,000 |
| 300K | 1,500 | 80,000 | 350,000 |
| 400K | 2,000 | 100,000 | 500,000 |
| 600K | 3,000 | 150,000 | 700,000 |
| 800K | 4,000 | 200,000 | 900,000 |
| 1M | 5,000 | 250,000 | 1,200,000 |
| Enterprise | 10,000 | 500,000 | 2,000,000 |

Email Sending Limits

Monthly email limits and daily sending limits vary by plan:
| Plan | Price | Emails/Month | Daily Limit | Contacts |
| --- | --- | --- | --- | --- |
| Free | $0 | 5,000 | 200/day | 1,000 |
| 10K | $8/mo | 10,000 | Unlimited | 2,500 |
| 25K | $15/mo | 25,000 | Unlimited | 5,000 |
| 60K | $20/mo | 60,000 | Unlimited | 10,000 |
| 100K | $50/mo | 100,000 | Unlimited | 25,000 |
| 200K | $100/mo | 200,000 | Unlimited | 50,000 |
| 300K | $165/mo | 300,000 | Unlimited | 75,000 |
| 400K | $220/mo | 400,000 | Unlimited | 100,000 |
| 600K | $340/mo | 600,000 | Unlimited | 150,000 |
| 800K | $470/mo | 800,000 | Unlimited | 200,000 |
| 1M | $600/mo | 1,000,000 | Unlimited | 500,000 |
| Enterprise | Custom | Custom | Unlimited | Unlimited |
The Free plan has a 200 emails/day sending limit per sender domain. All paid plans have unlimited daily sending within their monthly quota.

Domain Sending Limit

There is a global daily limit of 200,000 emails per sender domain across all plans. This protects shared infrastructure and ensures deliverability.

Domain Creation Rate Limit

All plans include unlimited domains. To prevent abuse, domain creation is rate limited to 10 domains per hour per organization. This is a velocity cap — there is no limit on the total number of domains you can add.
| Limit | Value |
| --- | --- |
| Domains per hour | 10 |
| Total domains | Unlimited (all plans) |
If you exceed this limit, the API returns 429 Too Many Requests. Wait and retry after a few minutes.

Rate Limit Headers

Every API response includes headers to help you track your rate limit status:
| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed per window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
| Retry-After | Seconds to wait (only on 429 responses) |

Example Response Headers

response-headers.txt
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 485
X-RateLimit-Reset: 1705320060
Content-Type: application/json
```
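
A small helper for reading these headers in application code might look like this; a sketch assuming the standard fetch `Headers` API and the header names shown above:

```typescript
// Rate limit state parsed from a response's headers.
interface RateLimitState {
  limit: number;      // X-RateLimit-Limit
  remaining: number;  // X-RateLimit-Remaining
  resetAt: Date;      // X-RateLimit-Reset (Unix seconds converted to a Date)
}

function parseRateLimit(headers: Headers): RateLimitState | null {
  const limit = headers.get('X-RateLimit-Limit');
  const remaining = headers.get('X-RateLimit-Remaining');
  const reset = headers.get('X-RateLimit-Reset');
  if (limit === null || remaining === null || reset === null) return null;
  return {
    limit: Number(limit),
    remaining: Number(remaining),
    resetAt: new Date(Number(reset) * 1000),
  };
}
```

Checking `remaining` after each response lets a client throttle itself before the server ever returns a 429.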

Check Current Usage

You can check your current rate limit status by inspecting the response headers of any API request, or by calling the usage endpoint:
Terminal
```bash
curl -X GET "https://api.unosend.co/usage" \
  -H "Authorization: Bearer un_your_api_key" \
  -i
```

Response

response.json
```json
{
  "emails_sent": 12450,
  "email_limit": 50000,
  "period_start": "2026-01-01T00:00:00Z",
  "period_end": "2026-01-31T23:59:59Z"
}
```
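
With the field names from this example, computing how much of the monthly quota remains is a one-liner (a sketch; the response shape is taken from the example above):

```typescript
// Shape of the /usage response, taken from the example above.
interface Usage {
  emails_sent: number;
  email_limit: number;
  period_start: string;
  period_end: string;
}

// Emails still available in the current billing period (never negative).
function remainingQuota(usage: Usage): number {
  return Math.max(0, usage.email_limit - usage.emails_sent);
}
```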

Handling Rate Limits

When you exceed the rate limit, the API returns a 429 Too Many Requests response:
rate-limit-response.json
```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 60 seconds.",
    "retry_after": 60
  }
}
```
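
A small helper can pull `retry_after` out of this error body and convert it to a wait in milliseconds (a sketch based on the error shape above; the 60-second fallback is an assumption, not part of the API contract):

```typescript
// Error body shape, taken from the example above.
interface RateLimitError {
  error: { code: string; message: string; retry_after: number };
}

// Milliseconds to wait before retrying; falls back if the field is absent or invalid.
function waitMsFor429(body: RateLimitError, fallbackMs: number = 60_000): number {
  const seconds = body.error?.retry_after;
  return typeof seconds === 'number' && seconds > 0 ? seconds * 1000 : fallbackMs;
}
```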

Implementing Retry Logic

```bash
# Use --retry flag for automatic retries
curl -X POST "https://api.unosend.co/emails" \
  -H "Authorization: Bearer un_your_api_key" \
  -H "Content-Type: application/json" \
  --retry 3 \
  --retry-delay 5 \
  -d '{"from": "hello@yourdomain.com", "to": "user@example.com", "subject": "Hello", "html": "<p>Hi!</p>"}'
```

Exponential Backoff

For robust retry logic, use exponential backoff with jitter to avoid a thundering herd of synchronized retries:
exponential-backoff.ts
```typescript
// Payload shape and helpers (not defined elsewhere in this snippet).
interface EmailPayload {
  from: string;
  to: string;
  subject: string;
  html: string;
}

const apiKey = process.env.UNOSEND_API_KEY ?? ''; // load your API key from the environment
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function sendWithExponentialBackoff(
  payload: EmailPayload,
  maxRetries: number = 5
): Promise<Response> {
  const baseDelay = 1000; // 1 second

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch('https://api.unosend.co/emails', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
      });

      if (response.status === 429) {
        // Exponential backoff plus random jitter so clients don't retry in lockstep
        const delay = baseDelay * Math.pow(2, attempt);
        const jitter = Math.random() * 1000;
        const waitTime = delay + jitter;

        console.log(`Rate limited. Waiting ${waitTime}ms (attempt ${attempt + 1})`);
        await sleep(waitTime);
        continue;
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }

      return response;

    } catch (error) {
      if (attempt === maxRetries - 1) throw error;

      const delay = baseDelay * Math.pow(2, attempt);
      await sleep(delay);
    }
  }

  throw new Error('Max retries exceeded');
}
```

Queue Pattern for Bulk Sending

For sending many emails, use a rate-limited queue to stay within your plan's limits:
rate-limited-queue.ts
```typescript
// Reuses EmailPayload, sleep, and sendWithExponentialBackoff from exponential-backoff.ts.
class RateLimitedQueue {
  private queue: EmailPayload[] = [];
  private processing = false;
  private requestsPerSecond: number;

  constructor(requestsPerSecond: number = 10) {
    this.requestsPerSecond = requestsPerSecond;
  }

  async add(payload: EmailPayload): Promise<void> {
    this.queue.push(payload);
    this.process();
  }

  private async process(): Promise<void> {
    if (this.processing) return;
    this.processing = true;

    // Spacing between sends that keeps throughput at requestsPerSecond
    const interval = 1000 / this.requestsPerSecond;

    while (this.queue.length > 0) {
      const payload = this.queue.shift()!;

      try {
        await sendWithExponentialBackoff(payload);
      } catch (error) {
        console.error('Failed to send email:', error);
      }

      await sleep(interval);
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(10); // 10 req/sec

for (const recipient of recipients) {
  queue.add({
    from: 'hello@yourdomain.com',
    to: recipient.email,
    subject: 'Hello!',
    html: '<p>Your email content</p>'
  });
}
```
For bulk sending, consider using the /v1/emails/batch endpoint which allows up to 100 emails per request, significantly reducing API calls.
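
Splitting a large send into batch requests is a simple chunking step; the 100-email cap comes from the note above, while the surrounding send logic is left out:

```typescript
// Split a list into chunks of at most `size` items (100 matches the batch cap above).
function chunk<T>(items: T[], size: number = 100): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```

Each chunk then becomes one request to the batch endpoint instead of up to 100 individual sends.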

Best Practices

1. **Monitor rate limit headers.** Check X-RateLimit-Remaining and slow down before hitting limits.
2. **Use batch endpoints.** Send multiple emails in one request using POST /v1/emails/batch to reduce API calls.
3. **Implement queuing.** Queue emails during high-traffic periods and process them at a controlled rate.
4. **Use webhooks instead of polling.** Receive delivery updates asynchronously via webhooks rather than polling for status.
5. **Cache responses when possible.** Cache responses from endpoints like GET /v1/domains to reduce unnecessary requests.

Need Higher Limits?

If you need higher rate limits for your use case, upgrade your plan or contact us for Enterprise options with custom limits.