Overview
Rate limits protect the API from abuse and ensure fair usage for all users. Limits are applied per API key and vary by plan.
| Plan | Requests/Second | Emails/Month | Burst Limit |
|---|---|---|---|
| Free | 10 req/s | 5,000 emails | 20 requests |
| Pro ($20/mo) | 100 req/s | 50,000 emails | 200 requests |
| Scale ($100/mo) | 500 req/s | 200,000 emails | 1,000 requests |
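A per-second limit with a separate burst allowance is the behavior of a token bucket: the bucket holds up to the burst limit in tokens and refills at the per-second rate. The server's exact algorithm isn't specified here, but a minimal client-side sketch of that model looks like this:

```typescript
// Token-bucket model: capacity = burst limit, refill rate = requests/second.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private ratePerSecond: number, // steady-state requests/second
    private capacity: number,      // burst limit
    now: number = Date.now()
  ) {
    this.tokens = capacity; // start full: a fresh key can burst immediately
    this.lastRefill = now;
  }

  // Returns true if a request may proceed at time `now` (ms).
  tryAcquire(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.ratePerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With the Free plan's numbers (10 req/s, burst 20), a fresh bucket allows 20 back-to-back requests, then roughly one every 100 ms.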
Every API response includes headers to help you track your rate limit status:
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the window resets |
| `Retry-After` | Seconds to wait before retrying (sent only on 429 responses) |
Example response:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 150
X-RateLimit-Reset: 1705320000
Content-Type: application/json
```
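These headers can be read from any response so you can slow down proactively rather than waiting for a 429. A small helper (header names taken from the table above):

```typescript
interface RateLimitInfo {
  limit: number;     // X-RateLimit-Limit
  remaining: number; // X-RateLimit-Remaining
  resetAt: Date;     // X-RateLimit-Reset (Unix seconds)
}

// Extract rate-limit state from a fetch Response's headers.
// Returns null if any of the headers is missing.
function parseRateLimit(headers: Headers): RateLimitInfo | null {
  const limit = headers.get('X-RateLimit-Limit');
  const remaining = headers.get('X-RateLimit-Remaining');
  const reset = headers.get('X-RateLimit-Reset');
  if (limit === null || remaining === null || reset === null) return null;
  return {
    limit: Number(limit),
    remaining: Number(remaining),
    resetAt: new Date(Number(reset) * 1000), // header is in seconds
  };
}
```

For the sample response above this yields `limit: 200` and `remaining: 150`, so 75% of the window is still available.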
Check Current Usage
You can check your current rate limit status with any API request, or use the dedicated endpoint:
```bash
curl -X GET "https://www.unosend.co/api/v1/usage" \
  -H "Authorization: Bearer un_your_api_key" \
  -i
```
Response
```json
{
  "plan": "pro",
  "rate_limit": {
    "requests_per_second": 100,
    "burst_limit": 200
  },
  "usage": {
    "emails_sent": 12450,
    "emails_limit": 50000,
    "period_start": "2024-01-01T00:00:00Z",
    "period_end": "2024-01-31T23:59:59Z"
  }
}
```
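The monthly quota fields make it easy to alert before you run out of emails. A small helper over the response shape above (the 80% threshold is just an example value):

```typescript
interface UsageResponse {
  plan: string;
  rate_limit: { requests_per_second: number; burst_limit: number };
  usage: {
    emails_sent: number;
    emails_limit: number;
    period_start: string;
    period_end: string;
  };
}

// Fraction of the monthly email quota already consumed (0..1).
function quotaUsed(u: UsageResponse): number {
  return u.usage.emails_sent / u.usage.emails_limit;
}

// True once usage crosses the given threshold (default 80%).
function nearQuota(u: UsageResponse, threshold = 0.8): boolean {
  return quotaUsed(u) >= threshold;
}
```

For the sample response (12,450 of 50,000 emails), `quotaUsed` is 0.249, well under the 80% threshold.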
Handling Rate Limits
When you exceed the rate limit, the API returns a 429 Too Many Requests response:
```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 60 seconds.",
    "retry_after": 60
  }
}
```
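The `Retry-After` header (mirrored by the `retry_after` field in the body) tells you exactly how long to wait, so prefer it over guessing. A helper that uses the server's hint when present and falls back to exponential backoff (the base delay is an example value):

```typescript
// Milliseconds to wait before retry `attempt` (0-based).
// Prefers the server's Retry-After hint (in seconds); otherwise
// falls back to exponential backoff from `baseMs`.
function retryDelayMs(
  retryAfterHeader: string | null,
  attempt: number,
  baseMs = 1000
): number {
  if (retryAfterHeader !== null) {
    const seconds = Number(retryAfterHeader);
    if (Number.isFinite(seconds) && seconds >= 0) return seconds * 1000;
  }
  return baseMs * Math.pow(2, attempt);
}
```

With the 429 response above (`Retry-After: 60`), this returns 60,000 ms regardless of the attempt count; without the header it falls back to 1 s, 2 s, 4 s, and so on.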
Implementing Retry Logic
```bash
# Use curl's --retry flag for automatic retries
curl -X POST "https://www.unosend.co/api/v1/emails" \
  -H "Authorization: Bearer un_your_api_key" \
  -H "Content-Type: application/json" \
  --retry 3 \
  --retry-delay 5 \
  -d '{"from": "[email protected]", "to": "[email protected]", "subject": "Hello", "html": "<p>Hi!</p>"}'
```
Exponential Backoff
For robust retry logic, use exponential backoff with jitter to avoid the thundering herd problem:
```typescript
interface EmailPayload {
  from: string;
  to: string;
  subject: string;
  html: string;
}

const apiKey = process.env.UNOSEND_API_KEY!;

// Resolve after `ms` milliseconds.
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function sendWithExponentialBackoff(
  payload: EmailPayload,
  maxRetries: number = 5
): Promise<Response> {
  const baseDelay = 1000; // 1 second
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch('https://www.unosend.co/api/v1/emails', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
      });
      if (response.status === 429) {
        // Exponential backoff plus jitter so retries don't synchronize
        const delay = baseDelay * Math.pow(2, attempt);
        const jitter = Math.random() * 1000;
        const waitTime = delay + jitter;
        console.log(`Rate limited. Waiting ${waitTime}ms (attempt ${attempt + 1})`);
        await sleep(waitTime);
        continue;
      }
      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }
      return response;
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      const delay = baseDelay * Math.pow(2, attempt);
      await sleep(delay);
    }
  }
  throw new Error('Max retries exceeded');
}
```
Queue Pattern for Bulk Sending
For sending many emails, use a queue with rate limiting to stay within limits:
```typescript
// Uses EmailPayload, sleep, and sendWithExponentialBackoff
// from the previous example.
class RateLimitedQueue {
  private queue: EmailPayload[] = [];
  private processing = false;
  private requestsPerSecond: number;

  constructor(requestsPerSecond: number = 10) {
    this.requestsPerSecond = requestsPerSecond;
  }

  async add(payload: EmailPayload): Promise<void> {
    this.queue.push(payload);
    this.process();
  }

  private async process(): Promise<void> {
    if (this.processing) return;
    this.processing = true;
    const interval = 1000 / this.requestsPerSecond; // ms between requests
    while (this.queue.length > 0) {
      const payload = this.queue.shift()!;
      try {
        await sendWithExponentialBackoff(payload);
      } catch (error) {
        console.error('Failed to send email:', error);
      }
      await sleep(interval);
    }
    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(10); // 10 req/s
for (const recipient of recipients) {
  queue.add({
    from: '[email protected]',
    to: recipient.email,
    subject: 'Hello!',
    html: '<p>Your email content</p>'
  });
}
```
For bulk sending, consider the `POST /v1/emails/batch` endpoint, which accepts up to 100 emails per request and significantly reduces the number of API calls.
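Since the batch endpoint accepts at most 100 emails per request, chunk your recipient list before sending. A sketch of that pattern (the exact request body shape of the batch endpoint is an assumption here; check its reference for the authoritative format):

```typescript
interface EmailPayload {
  from: string;
  to: string;
  subject: string;
  html: string;
}

// Split a list of payloads into batches of at most `size` items.
function chunk<T>(items: T[], size = 100): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Hypothetical usage: POST each batch of up to 100 emails.
async function sendBatches(payloads: EmailPayload[], apiKey: string): Promise<void> {
  for (const batch of chunk(payloads, 100)) {
    await fetch('https://www.unosend.co/api/v1/emails/batch', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(batch), // assumed shape: a JSON array of email objects
    });
  }
}
```

For example, 250 emails become three requests (100 + 100 + 50) instead of 250.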
Best Practices
- **Monitor rate limit headers.** Check `X-RateLimit-Remaining` and slow down before hitting limits.
- **Use batch endpoints.** Send multiple emails in one request with `POST /v1/emails/batch` to reduce API calls.
- **Implement queuing.** Queue emails during high-traffic periods and process them at a controlled rate.
- **Use webhooks instead of polling.** Rather than polling for status, use webhooks to receive delivery updates asynchronously.
- **Cache responses when possible.** Cache responses from endpoints such as `GET /v1/domains` to avoid unnecessary requests.
Need Higher Limits?
If you need higher rate limits for your use case, upgrade your plan or contact us for Enterprise options with custom limits.