Reference Cloud Architecture for an Enterprise Back End

Nowadays, microservices is the architecture widely chosen for modern web apps, irrespective of whether it is suitable, worthwhile, or quick to market for the problem at hand. Instead of discussing microservices architecture, I would like to present a simple service-based cloud architecture for small and medium enterprise applications.

Nowadays, front ends are very much decoupled from back ends. They are typically SPAs, hybrid apps, or native apps. We are not going to discuss front ends in this article; my focus is only on the back end. Typically, an enterprise back end consists of one or more services/modules, and each module can have three types of responsibilities.

  1. To serve API traffic: (i) external, from either the application's own front ends or third-party apps; (ii) internal, originating from other services. These services can be configured as auto-scaling groups if the traffic is variable.
  2. To process asynchronous events (via Kafka or any other message queue).
  3. To run scheduled/cron jobs for reports, cleanup, and similar use cases.

Keep only one code repository per module/service. Do not create one per run type (API, cron, async event handlers), which would result in a lot of code redundancy and maintainability overhead. Instead, deploy the same code base with different command-line options to serve each of the above responsibilities.
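As a minimal sketch of this single-code-base approach, assuming a Python service: the run mode is chosen by a subcommand, and each mode dispatches to its own entry function. All function and mode names here are hypothetical.

```python
import argparse

# Hypothetical entry point: one code base, three run modes.
def serve_api():
    # In a real service this would start the HTTP server.
    return "api"

def consume_events():
    # In a real service this would start the SQS/Kafka consumer loop.
    return "events"

def run_cron(job):
    # In a real service this would run one scheduled job and exit.
    return f"cron:{job}"

def main(argv=None):
    parser = argparse.ArgumentParser(description="single code base, multiple run modes")
    sub = parser.add_subparsers(dest="mode", required=True)
    sub.add_parser("api", help="serve external/internal API traffic")
    sub.add_parser("events", help="process asynchronous events")
    cron = sub.add_parser("cron", help="run a scheduled job")
    cron.add_argument("--job", required=True)
    args = parser.parse_args(argv)
    if args.mode == "api":
        return serve_api()
    if args.mode == "events":
        return consume_events()
    return run_cron(args.job)

if __name__ == "__main__":
    print(main())
```

Deployment then becomes a matter of launching the same artifact as `myservice api`, `myservice events`, or `myservice cron --job nightly-report` on the appropriate infrastructure.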

A cloud infrastructure to support an enterprise application would consist of the components listed below. Note that we are not covering edge-server-based deployments, which are meant for high scale.

Cloud Architecture diagram for small and medium enterprise web apps
  1. External ALB (with WAF) and SSL termination configured. This just proxies requests to the API gateway/auth/session layer.
  2. API gateway/auth/session layer configured for high availability.
  3. Internal ALB, configured with path-based routing to the internal (API) services.
  4. RDS and/or DynamoDB/MongoDB Atlas cluster (VPC-peered if using Atlas or any other third-party database service).
  5. A key-value cache store such as Redis or Aerospike.
  6. An AWS Lambda-style scheduled runner, or separate EC2 instances if the scheduled jobs run for long durations at high frequency. Because considerable effort is involved in making an EC2-oriented service code base work seamlessly with Lambda, I prefer cron jobs on EC2.
  7. Amazon SQS or Confluent Kafka for async communication across services. If the volume of async communication is very low, you can choose a shared Confluent Kafka cloud cluster.
  8. S3 for all file storage. Do not store any persistent data on EBS, because when more than one instance of the service runs, the application state becomes inconsistent.
  9. For search functionality, as in e-commerce applications, use Elasticsearch.
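The internal ALB's path-based routing can be thought of as a prefix-to-service lookup, longest prefix first. This sketch illustrates the idea in plain Python; the paths and service names are hypothetical, not a real ALB API.

```python
# Hypothetical routing table mirroring internal ALB path-based rules.
ROUTES = {
    "/orders": "orders-service",
    "/users": "users-service",
    "/search": "search-service",
}

def route(path):
    """Return the target service for a request path, matching the longest prefix first."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    # Unmatched paths fall through to a default target group.
    return "default-service"
```

In an actual deployment these rules live in the ALB listener configuration, with each target mapping to a target group of service instances.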

A typical request flow would consist of the following steps.

  1. Request originates from a Web Browser/Mobile App.
  2. SSL termination happens at the external ALB. The plain HTTP request is passed through the WAF rules; if the request is allowed, it is forwarded to the API gateway/auth/session layer, otherwise it is silently dropped.
  3. After successful authorization, the plain HTTP request is forwarded to the internal ALB.
  4. The internal ALB uses path-based routing rules to forward the request to the respective service.
  5. The services use the database, cache, and S3 to fulfill the incoming requests.
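The last step, where a service combines the cache and database to fulfill a request, typically follows a cache-aside pattern: read from the cache, fall back to the database on a miss, and populate the cache for subsequent requests. This sketch uses in-memory dicts in place of Redis and RDS; all names and keys are hypothetical.

```python
cache = {}                                   # stands in for Redis/Aerospike
database = {"user:1": {"name": "Alice"}}     # stands in for RDS/DynamoDB

def get_user(user_id):
    """Cache-aside read: try the cache first, fall back to the database on a miss."""
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]
    record = database.get(key)
    if record is not None:
        cache[key] = record                  # warm the cache for the next request
    return record
```

With a real cache you would also set a TTL on the cached entry and invalidate it on writes, so stale data does not linger after updates.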

An entrepreneur with a decade of experience in building MEAN stack SaaS platforms.