August 21, 2022 6:00 PM PDT
This document summarizes a mock system design interview focused on creating a rate limiter for public services in an e-commerce company. The discussion included functional and scaling requirements, system design considerations, algorithm choices, and feedback on the interviewee's performance.
Functional Requirements
- Rate Limiter Purpose: To manage request rates for various public services.
- Service Differentiation: Different rate limits for different services.
- Request Limiting: Limit requests based on user ID and service.
- Example: Post API for creating an item.
Scaling Requirements
- User Base: Assume 1 million active users.
- Request Rate: Each user makes 20 requests per day.
- Scaling Estimate:
- Total requests: 20 million requests per day.
- Queries Per Second (QPS): 20 million requests / ~100,000 seconds per day (a day is 86,400 seconds, rounded up for estimation) ≈ 200 QPS.
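The back-of-envelope numbers above can be checked in a few lines (the user base and per-user rate are the assumptions stated in the interview):

```python
# Back-of-envelope capacity estimate for the rate limiter.
active_users = 1_000_000            # assumed active user base
requests_per_user_per_day = 20

total_per_day = active_users * requests_per_user_per_day
print(total_per_day)                # 20000000 requests/day

# A day has 86,400 seconds; 100,000 is a common round-up for estimates.
qps_estimate = total_per_day / 100_000
print(qps_estimate)                 # 200.0 QPS
```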
System Design
Rate Limiter Placement
- Location: Place the rate limiter in both the service proxy and the application proxy.
- IP-based Throttling: Initially considered but deemed unnecessary.
Algorithm Choices
- Token Bucket: Allows for bursty traffic.
- Example: 100 requests to service A per minute.
- Fixed Window Algorithm: Simpler but can allow bursts near time boundaries.
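A minimal single-process sketch of the token-bucket idea discussed above (the class and parameter names are illustrative, not from the interview):

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity`, refilling at `rate` tokens per second."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity        # maximum burst size
        self.rate = rate                # refill rate, tokens per second
        self.tokens = float(capacity)   # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example from the discussion: 100 requests per minute to service A.
bucket = TokenBucket(capacity=100, rate=100 / 60)
```

The fixed-window alternative only needs a counter per window, which is why it is simpler; the trade-off is that a burst straddling a window boundary can briefly see up to twice the limit.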
Configuration Setup
- Configuration Files: Use YAML format for configuration.
- Configuration Example:
```yaml
domain: service
descriptors:
  - key: service_name
    value: service_A
    rate_limit:
      unit: m            # m = minute
      rate_limit_value: 100
```
Handling Requests
- Request Allowance Logic:
- For most users: limit to 100 requests/second.
- For important users: limit to 1000 requests/second.
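The tiered allowance logic above can be sketched as a simple limit lookup (the tier names and function are illustrative):

```python
# Per-tier request limits discussed above (requests per second).
TIER_LIMITS = {
    "standard": 100,     # most users
    "important": 1000,   # important users
}

def limit_for(user_tier: str) -> int:
    # Fall back to the standard limit for unknown tiers.
    return TIER_LIMITS.get(user_tier, TIER_LIMITS["standard"])
```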
Redis Integration
- Redis Usage: Implement fixed window rate limit algorithm using Redis.
- Key Management:
- Use keys like `service_b+service_A` to track requests.
- Set expiration on keys to manage time windows.
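The fixed-window scheme above maps naturally onto Redis's `INCR` and `EXPIRE` commands. Below is a sketch using a small in-memory stub in place of a real Redis client; the stub, key format, and function names are illustrative:

```python
import time

class FakeRedis:
    """In-memory stand-in exposing the two commands the algorithm needs."""

    def __init__(self):
        self.counts = {}

    def incr(self, key: str) -> int:
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]

    def expire(self, key: str, seconds: int) -> None:
        # Real Redis would delete the key after `seconds`; omitted in this stub.
        pass

def allow_request(r, user_id: str, service: str,
                  limit: int, window_seconds: int) -> bool:
    # One key per (user, service, window); the window index rotates the key.
    window = int(time.time() // window_seconds)
    key = f"{user_id}+{service}:{window}"
    count = r.incr(key)
    if count == 1:
        # First hit in this window: let the key expire with the window.
        r.expire(key, window_seconds)
    return count <= limit
```

With a real client, `INCR` and `EXPIRE` give atomic counting per window, and key expiration keeps memory bounded without a cleanup job.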
Cache Loading
- Configuration Service: An independent service that updates the rate limiter configuration.
- Delay in Configuration Push: Acknowledge potential delays in configuration updates across nodes.
Interview Feedback
Soft Skills
- The interviewee demonstrated good timing and the ability to ask key questions.
Hard Skills
- Preparedness was noted, but algorithm choices could be improved.
- The interviewer suggested focusing on trade-offs between different algorithms rather than pseudocode.
Self-Review
- The interviewee was surprised by the discussion of the rate limiter's placement and reflected on whether they should have insisted on a better algorithm.
Audience Insights
- Sidecar Pattern: Suggested for deploying rate limiter daemons.
- Token Bucket vs. Leaky Bucket: Discussed the advantages of each based on traffic patterns.
- Configuration Management: Highlighted the importance of centralized control for rate limiting policies.
Additional Considerations
- Fallback Strategies: Discussed default configurations to ensure minimum request guarantees.
- Redis Clustering: Considerations for scaling Redis and handling concurrent users.
- Local Token Bucket: Proposed as a fallback if Redis is down.
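One way to realize the local fallback proposed above is to wrap the shared Redis check and degrade to a per-node bucket on connection errors (all names here are illustrative, not from the interview):

```python
def check_with_fallback(redis_check, local_bucket) -> bool:
    """Try the shared Redis-backed check; fall back to a node-local bucket."""
    try:
        return redis_check()
    except ConnectionError:
        # Redis is unreachable: degrade to the local, less accurate bucket
        # rather than rejecting (or admitting) all traffic.
        return local_bucket()

# Example: Redis down, the local bucket still decides admission.
def failing_redis_check():
    raise ConnectionError("redis unavailable")

print(check_with_fallback(failing_redis_check, lambda: True))  # True
```

The local bucket only sees one node's traffic, so global limits are approximate during the outage; that is the trade-off for staying available.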
This summary captures the key points discussed during the interview, focusing on the technical aspects of designing a rate limiter system.