Design Lessons from AWS Subsystems

While learning AWS, I stumbled upon some fascinating insights into its design. It may look obvious at first, but if you pay close attention to the way the different subsystems are integrated, you can see that each subsystem is like a building block: you can mix and match subsystems based on your needs.

For example, take load balancers in AWS. Their basic job is to distribute load, but the abstraction is general enough that you can distribute load across EC2 instances, IP addresses, Lambda functions, or even another Application Load Balancer.

Cohesive parts are kept together, while parts with a different design philosophy stand apart. It looks like a top-down approach to design: they drew the boundaries for each subsystem first, and then within each subsystem they drew boundaries again for each component.

Why do I find this fascinating? Because we tend to develop systems with a bottom-up approach. Our view is narrowed to the UI and the backend most of the time, and with this skewed vision it is difficult to comprehend the grand design in front of us.

There is a different school of thought that says do not generalize until you meet the problem. But in reality, the railings built in the initial days cannot be changed: any later modification to the design is either impossible or incurs significant cost. And cost is not just development cost; it is also the ease with which you can add a new feature or delete an unwanted part.

I will take a break now and let you ponder your own design. Until then, have a good day.

Poor Man’s Cloud

I am learning AWS these days. I have exposure to PCF (Pivotal Cloud Foundry), now VMware Tanzu. In my opinion AWS, Tanzu, Azure, and Google Cloud all use similar hardware virtualization in the background, while offering a web console to manage the container instances.

If you know Docker containers and their networking, then imagine handing an outside person control of your Docker setup. Docker actually does offer remote control: if you go to the Docker settings, you will see an option to expose the daemon without TLS.

Enable this setting. Since I am only running a test, I am not exercising any caution here (do not do this on a machine reachable from the internet) 🙂

While Docker is running, you can go to the command prompt and execute the following command, or navigate to http://localhost:2375/v1.38/containers/json in a web browser:

curl localhost:2375/v1.38/containers/json
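The same call can be made from Python's standard library. A minimal sketch, assuming the daemon is exposed without TLS on localhost:2375 as above; the `summarize` helper and its choice of fields are my own:

```python
import json
import urllib.request

# Same endpoint as the curl command above; assumes the daemon is
# exposed without TLS (test setup only -- never do this in production).
DOCKER_API = "http://localhost:2375/v1.38/containers/json"

def list_containers(url=DOCKER_API):
    """GET the container list from the Docker Engine API."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def summarize(containers):
    """Keep just a few fields from the raw /containers/json payload."""
    return [
        {"id": c["Id"][:12], "image": c["Image"], "state": c["State"]}
        for c in containers
    ]

# With the daemon exposed, try: print(summarize(list_containers()))
```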

The daemon responds with JSON describing your containers: the Id, the Image the container was created from, its network details, its current state, and so on.
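The exact payload depends on what you are running; a trimmed, illustrative response for a single container might look like this (all values are made up):

```json
[
  {
    "Id": "8dfafdbc3a40a1b2c3d4...",
    "Names": ["/web"],
    "Image": "nginx:latest",
    "State": "running",
    "Status": "Up 2 hours",
    "Ports": [{"PrivatePort": 80, "PublicPort": 8080, "Type": "tcp"}],
    "NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}
  }
]
```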

If you write a UI to read this data, you will have a web cloud console. Of course you need to go a little further to actually control the containers, but the Docker Engine API covers that too. Thanks for reading. 🙂
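To sketch that idea, here is a bare-bones "console" using only Python's standard library: it fetches the same endpoint and serves the result as an HTML table. The handler, route, and port are my own choices, and it assumes the unprotected daemon from above:

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumes the Docker daemon is exposed without TLS, as enabled earlier.
DOCKER_API = "http://localhost:2375/v1.38/containers/json"

def render_table(containers):
    """Render the /containers/json payload as a bare-bones HTML table."""
    rows = "".join(
        f"<tr><td>{c['Id'][:12]}</td><td>{c['Image']}</td><td>{c['State']}</td></tr>"
        for c in containers
    )
    return ("<table><tr><th>Id</th><th>Image</th><th>State</th></tr>"
            + rows + "</table>")

class ConsoleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the live container list on every page load.
        with urllib.request.urlopen(DOCKER_API) as resp:
            body = render_table(json.load(resp)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    """Serve the console at http://localhost:<port> until interrupted."""
    HTTPServer(("localhost", port), ConsoleHandler).serve_forever()

# With the daemon exposed, run serve() and open http://localhost:8080
```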