10. Design A Rate Limiter
Design a single-machine rate limiter and a distributed rate limiter
Reference: ByteByteGo
In a network system, a rate limiter is used to control the rate of traffic sent by a client or a service. In the HTTP world, a rate limiter limits the number of client requests allowed to be sent over a specified period. If the API request count exceeds the threshold defined by the rate limiter, all the excess calls are blocked. Here are a few examples:
- A user can write no more than 2 posts per second.
- You can create a maximum of 10 accounts per day from the same IP address.
- You can claim rewards no more than 5 times per week from the same device.
In this chapter, you are asked to design a rate limiter. Before starting the design, we first look at the benefits of using an API rate limiter:
- Prevent resource starvation caused by Denial of Service (DoS) attacks [1]. Almost all APIs published by large tech companies enforce some form of rate limiting. For example, Twitter limits the number of tweets to 300 per 3 hours [2]. Google Docs APIs have the following default limit: 300 per user per 60 seconds for read requests [3]. A rate limiter prevents DoS attacks, either intentional or unintentional, by blocking the excess calls.
- Reduce cost. Limiting excess requests means fewer servers are needed and more resources can be allocated to high-priority APIs. Rate limiting is extremely important for companies that use paid third-party APIs. For example, you are charged on a per-call basis for the following external APIs: check credit, make a payment, retrieve health records, etc. Limiting the number of calls is essential to reduce costs.
- Prevent servers from being overloaded. To reduce server load, a rate limiter is used to filter out excess requests caused by bots or users’ misbehavior.
Step 1 - Understand the problem and establish design scope
Rate limiting can be implemented using different algorithms, each with its pros and cons. The interactions between an interviewer and a candidate help to clarify the type of rate limiters we are trying to build.
Requirements
Here is a summary of the requirements for the system:
- Accurately limit excessive requests.
- Low latency. The rate limiter should not slow down HTTP response time.
- Use as little memory as possible.
- Distributed rate limiting. The rate limiter can be shared across multiple servers or processes.
- Exception handling. Show clear exceptions to users when their requests are throttled.
- High fault tolerance. If there are any problems with the rate limiter (for example, a cache server goes offline), it does not affect the entire system.
Step 2 - Propose high-level design
Where to put the rate limiter?
Instead of putting a rate limiter at the API servers, we create a rate limiter middleware, which throttles requests to your APIs as shown in the figure below.
- Assume our API allows 2 requests per second, and a client sends 3 requests to the server within a second.
- The first two requests are routed to API servers.
- However, the rate limiter middleware throttles the third request and returns HTTP status code 429. The HTTP 429 response status code indicates a user has sent too many requests.
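To make this flow concrete, here is a minimal, self-contained Python sketch of the middleware decision. The `TwoPerSecondLimiter` class and `middleware` function are illustrative names of our own, not part of any framework:

```python
import time

class TwoPerSecondLimiter:
    """Toy limiter for the example above: at most 2 requests per second per client."""

    def __init__(self):
        self.counts = {}  # client_id -> (current_second, request_count)

    def allow(self, client_id: str) -> bool:
        second = int(time.time())
        last_second, count = self.counts.get(client_id, (second, 0))
        if last_second != second:
            count = 0  # a new one-second window has started
        if count >= 2:
            return False  # the third request in the same second is throttled
        self.counts[client_id] = (second, count + 1)
        return True

def middleware(limiter, client_id: str) -> int:
    """Return the HTTP status code the client would receive."""
    return 200 if limiter.allow(client_id) else 429  # 429 = Too Many Requests

limiter = TwoPerSecondLimiter()
# Assuming all three calls land within the same second:
print([middleware(limiter, "client-1") for _ in range(3)])  # [200, 200, 429]
```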
While designing a rate limiter, an important question to ask ourselves is: where should the rate limiter be implemented, on the server-side or in a gateway? There is no absolute answer. It depends on your company’s current technology stack, engineering resources, priorities, goals, etc. Here are a few general guidelines:
- Identify the rate limiting algorithm that fits your business needs. When you implement everything on the server-side, you have full control of the algorithm. However, your choice might be limited if you use a third-party gateway.
- If you have already used microservice architecture and included an API gateway in the design to perform authentication, IP whitelisting, etc., you may add a rate limiter to the API gateway.
Algorithms for rate limiting
Rate limiting can be implemented using different algorithms, and each of them has distinct pros and cons. Even though this chapter does not focus on algorithms, understanding them at high-level helps to choose the right algorithm or combination of algorithms to fit our use cases. Here is a list of popular algorithms:
- Token bucket
- Leaking bucket
- Fixed window counter
- Sliding window log
- Sliding window counter
Token bucket algorithm
The token bucket algorithm is widely used for rate limiting. It is simple, well understood and commonly used by internet companies. Both Amazon [5] and Stripe [6] use this algorithm to throttle their API requests.
The token bucket algorithm works as follows:
- A token bucket is a container that has a pre-defined capacity. Tokens are put in the bucket at preset rates periodically. Once the bucket is full, no more tokens are added.
- Each request consumes one token. When a request arrives, we check if there are enough tokens in the bucket. Figure 5 explains how it works.
- If there are enough tokens, we take one token out for each request, and the request goes through.
- If there are not enough tokens, the request is dropped.
Figure 6 illustrates how token consumption, refill, and rate limiting logic work. In this example, the token bucket size is 4, and the refill rate is 4 per 1 minute.
The token bucket algorithm takes two parameters:
- Bucket size: the maximum number of tokens allowed in the bucket.
- Refill rate: number of tokens put into the bucket every second.
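As a rough single-process illustration, here is a minimal Python sketch of the token bucket. The class and parameter names are our own, and a production limiter would also need locking or atomic operations for concurrent access:

```python
import time

class TokenBucket:
    """Minimal token bucket: holds at most `capacity` tokens,
    refilled at `refill_rate` tokens per second."""

    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)       # start with a full bucket
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill lazily based on elapsed time; never exceed capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1                # each request consumes one token
            return True
        return False                        # not enough tokens: drop the request

# Bucket size 4, refill rate 4 tokens per minute, as in Figure 6.
bucket = TokenBucket(capacity=4, refill_rate=4 / 60)
print([bucket.allow() for _ in range(5)])  # [True, True, True, True, False]
```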
How many buckets do we need? This varies, and it depends on the rate-limiting rules. Here are a few examples.
- It is usually necessary to have different buckets for different API endpoints. For instance, if a user is allowed to make 1 post per second, add 150 friends per day, and like 5 posts per second, 3 buckets are required for each user.
- If we need to throttle requests based on IP addresses, each IP address requires a bucket.
- If the system allows a maximum of 10,000 requests per second, it makes sense to have a global bucket shared by all requests.
- Pros
- The algorithm is easy to implement.
- Memory efficient.
- Token bucket allows a burst of traffic for short periods. A request can go through as long as there are tokens left.
- Cons
- Two parameters in the algorithm are bucket size and token refill rate. However, it might be challenging to tune them properly.
- Challenges
- How do we minimize the number of legitimate requests that get rejected when throttling occurs?
- Adaptive load control (an adaptive rate limiter): analyze the load on downstream services and use feedback metrics to adjust the average rate at which tokens are put into the bucket, so that rate limiting still protects the system while reducing the false-throttling rate.
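The note above only names the idea. As one possible interpretation, here is a hypothetical sketch of the feedback step that tunes the refill rate; the function name, the latency-based signal, and the 0.8/1.05 adjustment factors are all assumptions, not a standard algorithm:

```python
def adjust_refill_rate(current_rate: float, downstream_latency_ms: float,
                       target_latency_ms: float = 100.0,
                       min_rate: float = 1.0, max_rate: float = 1000.0) -> float:
    """Hypothetical feedback rule: shrink the token refill rate when the
    downstream service is slower than the target, grow it when there is headroom."""
    if downstream_latency_ms > target_latency_ms:
        current_rate *= 0.8    # back off under load
    else:
        current_rate *= 1.05   # cautiously recover
    return max(min_rate, min(max_rate, current_rate))

# Example: latency of 250 ms against a 100 ms target lowers the refill rate.
print(adjust_refill_rate(current_rate=100.0, downstream_latency_ms=250.0))  # 80.0
```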
Leaking bucket algorithm
The leaking bucket algorithm is similar to the token bucket except that requests are processed at a fixed rate. It is usually implemented with a first-in-first-out (FIFO) queue. The algorithm works as follows:
- When a request arrives, the system checks if the queue is full. If it is not full, the request is added to the queue.
- Otherwise, the request is dropped.
- Requests are pulled from the queue and processed at regular intervals.
The leaking bucket algorithm takes the following two parameters:
- Bucket size: it is equal to the queue size. The queue holds the requests to be processed at a fixed rate.
- Outflow rate: it defines how many requests can be processed per unit of time, usually per second.
Shopify, an ecommerce company, uses leaky buckets for rate limiting [7].
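Here is a minimal Python sketch of the leaking bucket, under the assumption that a scheduler calls `leak()` at a fixed interval; all names are our own:

```python
from collections import deque

def process(request):
    print("processing", request)

class LeakingBucket:
    """Bounded FIFO queue; requests leak out at a fixed rate."""

    def __init__(self, bucket_size: int, outflow_per_tick: int):
        self.queue = deque()
        self.bucket_size = bucket_size            # maximum queued requests
        self.outflow_per_tick = outflow_per_tick  # requests processed per interval

    def enqueue(self, request) -> bool:
        if len(self.queue) >= self.bucket_size:
            return False                 # bucket full: drop the request
        self.queue.append(request)
        return True

    def leak(self):
        """Meant to be called at regular intervals, e.g. once per second by a scheduler."""
        for _ in range(min(self.outflow_per_tick, len(self.queue))):
            process(self.queue.popleft())

bucket = LeakingBucket(bucket_size=4, outflow_per_tick=2)
print([bucket.enqueue(i) for i in range(6)])  # requests 4 and 5 are dropped
bucket.leak()                                 # processes requests 0 and 1 this tick
```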
- Pros
- Memory efficient given the limited queue size.
- Requests are processed at a fixed rate, so it is suitable for use cases where a stable outflow rate is needed.
- Cons
- A burst of traffic fills up the queue with old requests, and if they are not processed in time, recent requests will be rate limited.
- There are two parameters in the algorithm. It might not be easy to tune them properly.
- Challenges
- In the leaking bucket algorithm, the outflow rate is constant, which makes sudden bursts hard to handle: within a single millisecond, QPS can spike far beyond the remaining capacity, so a large number of requests are rejected or left spinning, even though traffic returns to normal in the next second. The result is that more requests than necessary are rejected within that one second.
- This gives us a new technical constraint: when facing burst traffic, minimize the proportion of requests that are rejected once throttling kicks in.
Fixed window counter algorithm
The fixed window counter algorithm works as follows:
- The algorithm divides the timeline into fixed-sized time windows and assigns a counter to each window.
- Each request increments the counter by one.
- Once the counter reaches the pre-defined threshold, new requests are dropped until a new time window starts.
Let us use a concrete example to see how it works. In Figure 8, the time unit is 1 second and the system allows a maximum of 3 requests per second. In each second window, if more than 3 requests are received, extra requests are dropped as shown in Figure 8.
A major problem with this algorithm is that a burst of traffic at the edges of time windows could cause more requests than the allowed quota to go through.
In Figure 9, the system allows a maximum of 5 requests per minute, and the available quota resets at the human-friendly round minute. As seen, there are five requests between 2:00:00 and 2:01:00 and five more requests between 2:01:00 and 2:02:00. For the one-minute window between 2:00:30 and 2:01:30, 10 requests go through. That is twice as many as allowed requests.
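Here is a minimal single-process Python sketch of the fixed window counter; the naming is our own, and per-client keying and atomic increments are omitted:

```python
import time

class FixedWindowCounter:
    """Allow at most `limit` requests per fixed window of `window_seconds`."""

    def __init__(self, limit: int, window_seconds: int = 1):
        self.limit = limit
        self.window_seconds = window_seconds
        self.current_window = -1
        self.count = 0

    def allow(self) -> bool:
        window = int(time.time()) // self.window_seconds
        if window != self.current_window:
            self.current_window = window  # a new window started: reset the counter
            self.count = 0
        if self.count >= self.limit:
            return False                  # threshold reached: drop until the next window
        self.count += 1
        return True

# A maximum of 3 requests per second, as in Figure 8.
limiter = FixedWindowCounter(limit=3, window_seconds=1)
print([limiter.allow() for _ in range(4)])  # [True, True, True, False] within one second
```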
- Pros
- Memory efficient.
- Easy to understand.
- Resetting available quota at the end of a unit time window fits certain use cases.
- Cons
- Spike in traffic at the edges of a window could cause more requests than the allowed quota to go through.
- For example, requests within one second may arrive unevenly; the rate-limiting state is lost at the boundary where one window switches to the next, so if traffic surges at exactly that moment, the service can be overwhelmed and cascade into an avalanche.