Architecture
Understand how SuperAPI fits into your infrastructure and how it works
Architectural Components
SuperAPI's architecture consists of these key components working together as an integrated gateway:
The SuperAPI Gateway serves as the entry point for all API requests, handling multiple functions within a single integrated service. It routes traffic to either its internal cache or your origin API, stores and retrieves cached API responses, monitors your database's replication stream to detect data changes, and processes these changes to keep the cache fresh.
SuperAPI connects to your VPC environment where your load balancer, API servers, and database reside, integrating seamlessly with your existing infrastructure.
This integrated approach means you don't need to maintain or configure any additional components - SuperAPI handles all caching functionality within a single managed service.
If the SuperAPI Gateway experiences any issues or failures, it automatically switches to a failover mode that allows all requests to pass through directly to your origin API. This ensures your services remain available even in the unlikely event of a gateway issue.
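SuperAPI's internals are not published, but the failover behavior described above follows a common pass-through pattern. The sketch below is illustrative only; the function and parameter names (`handle_request`, `fetch_from_cache`, `forward_to_origin`) are hypothetical, not part of any SuperAPI API:

```python
def handle_request(request, cache_healthy, fetch_from_cache, forward_to_origin):
    """Route a request, degrading to direct origin pass-through if the
    cache layer is unhealthy or errors out. Illustrative sketch only."""
    if not cache_healthy:
        # Failover mode: bypass the cache entirely so the origin
        # continues to serve all traffic.
        return forward_to_origin(request)
    try:
        cached = fetch_from_cache(request)
    except Exception:
        # Any cache-layer error also falls back to pass-through,
        # keeping the service available.
        return forward_to_origin(request)
    return cached if cached is not None else forward_to_origin(request)
```

The key property is that a cache failure never surfaces to the client: every failure path ends at the origin.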
This architecture enables SuperAPI to:
- Serve cached responses when possible for maximum performance
- Forward requests to your origin when necessary
- Monitor database changes to keep the cache fresh
- Intelligently invalidate only affected cache entries
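To make the last two points concrete: a gateway watching a replication stream receives row-level change events and must map each one to the cache entries that depend on it. The sketch below shows that general technique with hypothetical names (`invalidate_affected`, `key_index`); it is not SuperAPI's actual implementation:

```python
def invalidate_affected(cache, change, key_index):
    """Evict only the cache entries that depend on a changed row.

    change:    a replication event, e.g. {"table": "users", "id": 42}
    key_index: maps (table, row id) -> set of dependent cache keys
    Returns the set of keys that were evicted. Illustrative sketch only.
    """
    affected = key_index.get((change["table"], change["id"]), set())
    for key in affected:
        cache.pop(key, None)  # unrelated entries stay warm
    return affected
```

Because invalidation is keyed to the specific rows that changed, a write to one record does not flush unrelated cached responses.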
The result is significantly improved API performance without sacrificing data freshness or requiring changes to your existing API implementation.
Deployment and Integration
Proxy Architecture
SuperAPI functions as a proxy between your clients and your origin API. When a request arrives, SuperAPI first checks for a valid cached response. For cache hits, it immediately returns the cached response without contacting your origin. For cache misses, it forwards the request to your origin, caches the appropriate response, and returns it to the client.
This proxy design allows SuperAPI to seamlessly integrate with your existing infrastructure without requiring changes to your API implementation.
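The hit/miss flow above can be pictured as a small caching proxy. This is a minimal sketch of the pattern, not SuperAPI's code; the `CachingProxy` class and its request shape are assumptions for illustration:

```python
class CachingProxy:
    """Minimal sketch of a read-through caching proxy (hypothetical API)."""

    def __init__(self, origin):
        self.origin = origin  # callable: request dict -> response
        self.cache = {}

    def handle(self, request):
        key = (request["method"], request["path"])
        if key in self.cache:
            # Cache hit: return immediately, origin is never contacted.
            return self.cache[key]
        # Cache miss: forward to the origin API.
        response = self.origin(request)
        if request["method"] == "GET":
            # Store only cacheable responses for future requests.
            self.cache[key] = response
        return response
```

Clients talk to the proxy exactly as they would to the origin, which is why no changes to the API implementation are needed.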
Managed Deployment
When you create and deploy a gateway through the SuperAPI platform website, the entire service is deployed and maintained by the SuperAPI team. This managed approach eliminates the need for you to handle any infrastructure management, updates, or scaling concerns.
You simply configure your gateway, connect your database, and define your endpoints through the SuperAPI dashboard - the SuperAPI team handles everything else, with no additional infrastructure or maintenance required from your side.
Hosting Options
SuperAPI is cloud-based and can be deployed in two ways:
- SuperAPI Cloud (default): Your gateway is hosted on SuperAPI's cloud infrastructure, managed entirely by the SuperAPI team. The SuperAPI instance is deployed in the exact region and cloud provider you select when creating your gateway, ensuring optimal proximity to your application and users.
- Enterprise Plan: For customers on the enterprise plan, SuperAPI can be deployed directly to your company's cloud environment, giving you additional control and keeping traffic within your network perimeter.
In either case, the deployment and maintenance are fully managed by SuperAPI, with no infrastructure management required from your team.
Network Placement
By default, SuperAPI is deployed in front of your load balancer, acting as the first point of contact for API requests before they reach your infrastructure. For enterprise customers, SuperAPI can also be deployed behind your load balancer if required by your network architecture or security policies.
Infrastructure Requirements
SuperAPI is a complete, fully-managed solution that includes all necessary caching infrastructure. You don't need to provision Redis or any other cache databases. When you deploy a gateway, SuperAPI automatically provisions and configures all required caching infrastructure, with no additional resources needed from your end.
This one-click approach ensures you can start caching your APIs immediately without worrying about infrastructure setup or maintenance.
Cloud Provider Support
SuperAPI can be deployed to the following cloud providers and regions:
AWS Regions
- US East (N. Virginia) - us-east-1
- US East (Ohio) - us-east-2
- US West (Oregon) - us-west-2
- Europe (Ireland) - eu-west-1
- Europe (Frankfurt) - eu-central-1
- Europe (London) - eu-west-2
- Asia Pacific (Tokyo) - ap-northeast-1
- Asia Pacific (Singapore) - ap-southeast-1
- Asia Pacific (Sydney) - ap-southeast-2
- Asia Pacific (Mumbai) - ap-south-1
- South America (São Paulo) - sa-east-1
Google Cloud Platform (GCP) Regions
- Iowa - us-central1
- Oregon - us-west1
- South Carolina - us-east1
- Northern Virginia - us-east4
- Belgium - europe-west1
- London - europe-west2
- Frankfurt - europe-west3
- Tokyo - asia-northeast1
- Singapore - asia-southeast1
- Sydney - australia-southeast1
- Mumbai - asia-south1
Additional regions can be supported upon request for enterprise customers. Contact the SuperAPI team if you need deployment in a region not listed above.
Security Considerations
SuperAPI is designed with security in mind:
- Database Access: The database connection uses a read-only user with minimal permissions
- Data Isolation: Each customer's data is completely isolated
- Enterprise Options: For enterprise customers, SuperAPI can be deployed within your own cloud environment
- Authentication Support: SuperAPI supports JWT authentication and can maintain proper cache isolation based on identity
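One common way to achieve identity-based cache isolation is to fold a claim from the verified JWT into the cache key, so responses cached for one user are never served to another. The sketch below illustrates that idea only; `cache_key` is a hypothetical helper, and a real gateway must verify the token's signature before trusting any claim:

```python
import base64
import json

def cache_key(path, jwt_token):
    """Derive a per-identity cache key from a JWT's 'sub' claim.

    Decoding only - signature verification is assumed to have already
    happened upstream. Illustrative sketch, not SuperAPI's implementation.
    """
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    # Two users requesting the same path get distinct cache entries.
    return (claims["sub"], path)
```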
For more details on authentication and security, see our Authentication documentation.
Next Steps
Now that you understand how SuperAPI fits into your architecture, you're ready to: