Architecting using AWS core services
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud, offering over 200 fully featured services from data centers globally.
Among these hundreds of services, we need only a select few, known as “core services,” to host our entire tech stack. As depicted in the figure below, our tech stack is built on top of these core services.
Now that we have discussed core services, let’s delve into designing an architecture for deploying our application on top of them.
Overview
- We start with a VPC (Virtual Private Cloud), the isolated network in which all of our infrastructure lives. It’s important to note that a VPC is region-specific.
- Within this network there are multiple subnets, each dedicated to hosting specific components of our application. These subnets are distributed across multiple availability zones (AZs) for resilience, as sketched in the example below.
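To make this concrete, here is a minimal sketch using boto3 (Python) of creating a region-specific VPC with two subnets in different availability zones. The region, CIDR blocks, and resource IDs printed at the end are illustrative assumptions, not values from our actual setup.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# The VPC is the isolated network boundary for the whole stack.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# One subnet per availability zone, each hosting a slice of the application.
subnet_a = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)
subnet_b = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b"
)

print(vpc_id, subnet_a["Subnet"]["SubnetId"], subnet_b["Subnet"]["SubnetId"])
```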
We will trace the journey from hitting the URL to the full web page loading in our browser.
First and foremost, our browser sends a request to a DNS server to translate the human-readable domain (e.g., www.rawdata.com) into an IP address. In our case we are using Amazon Route 53 as our DNS service; it handles this translation and returns the IP address associated with the domain.
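As a rough illustration of how this DNS step can be configured, the boto3 sketch below upserts an alias record in Route 53 that resolves www.rawdata.com to a load balancer’s DNS name. The hosted zone IDs and the load balancer DNS name are placeholder assumptions.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # assumed: the hosted zone for our domain
    ChangeBatch={
        "Comment": "Point the site at the application load balancer",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.rawdata.com",
                    "Type": "A",
                    # An alias record lets Route 53 answer with the load
                    # balancer's current IP addresses.
                    "AliasTarget": {
                        "HostedZoneId": "Z00000000LBZONE",  # assumed: the ALB's zone ID
                        "DNSName": "webapp-alb-123456.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ],
    },
)
```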
Once the DNS server provides the IP address, the browser knows where to send the request. The request does not go straight to an application server at this point; it goes to the IP address obtained from DNS resolution, which in our setup resolves to the load balancer.
In the subsequent phase, our request reaches AWS, where the load balancer plays a pivotal role: it ensures high availability and fault tolerance by distributing incoming requests across multiple application servers. Within the Virtual Private Cloud (VPC), the request is then processed. This may involve querying databases hosted on Amazon RDS (Relational Database Service), retrieving files from Amazon S3 (Simple Storage Service), or executing application logic on EC2 (Elastic Compute Cloud) instances.
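The sketch below shows, with boto3 and placeholder IDs, how such a load-balancing layer might be wired up: an Application Load Balancer spanning the two subnets, a target group of EC2 instances, and a listener that forwards incoming HTTP requests to them. All IDs (subnets, security group, VPC, instances) are assumptions for illustration.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # assumed region

# The ALB spans multiple AZs, which is what gives us fault tolerance.
alb = elbv2.create_load_balancer(
    Name="webapp-alb",
    Subnets=["subnet-aaa111", "subnet-bbb222"],     # assumed subnet IDs
    SecurityGroups=["sg-0123456789abcdef0"],        # assumed security group
    Scheme="internet-facing",
    Type="application",
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# The target group is the pool of EC2 application servers behind the ALB.
tg = elbv2.create_target_group(
    Name="webapp-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0abc1234",                           # assumed VPC ID
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the EC2 instances that run the application logic.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaa111"}, {"Id": "i-0bbb222"}],  # assumed instance IDs
)

# Listener: every incoming HTTP request is forwarded to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```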
Upon the completion of this journey, AWS promptly sends back the HTML response. This response is referred to as a SERVER-SIDE RESPONSE, and the request initiated by the user is termed a SERVER-SIDE REQUEST.
Lastly, once the HTML response arrives, the browser issues further requests, known as CLIENT-SIDE REQUESTS, typically to retrieve static assets such as JavaScript files, CSS, and images. To expedite delivery of these assets and enhance the user experience, these requests are directed to a content delivery network (CDN), in our case Amazon CloudFront. CloudFront leverages a global network of edge locations to cache and serve static content from locations geographically closer to the user, reducing latency and improving load times. It also provides advanced security features, such as DDoS protection and web application firewall (WAF) integration, to safeguard against online threats.
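As a final illustration, here is a minimal boto3 sketch of creating a CloudFront distribution that serves static assets from an S3 bucket. The bucket name and caller reference are assumptions; a production distribution would typically also attach a WAF web ACL, a TLS certificate, and origin access controls.

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "webapp-static-001",     # assumed unique reference
        "Comment": "Static assets for www.rawdata.com",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "static-assets-s3",
                    # Assumed bucket holding JS, CSS, and image files.
                    "DomainName": "webapp-static-assets.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "static-assets-s3",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Minimal cache settings, kept simple for the sketch.
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)
```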
Conclusion
This journey illustrates the power of AWS in simplifying the complexities of web development and hosting. Whether you’re a seasoned AWS practitioner or just embarking on your cloud journey, understanding these intricacies can empower you to build efficient, reliable, and performant web applications.