It has really been an interesting journey, and I know we've all learned. I personally have grown too, because of the questions you ask and how you contribute to our labs. So today, as promised, we're going to go through ELB, the Elastic Load Balancer, which is one of the labs we did last Saturday. As is the custom with live classes, they always run long, and sometimes we can't complete them because of network issues, so always create time to do the lab offline. Let me summarize what we've done; I'm going to start by sharing my screen now. Okay, so the load balancer, as we all know, is not new. Like other services in Amazon Web Services, the load balancer is familiar to network engineers, application teams, solution architects, DevOps teams, and software engineers. Load balancers have always been part of networking, and they help keep systems secure. So what is a load balancer? A load balancer is a link that receives and shares traffic. It is like a bridge: instead of connecting directly to a system, you use it to distribute the traffic coming from users across the available resources you want them to reach.

For instance, say you have two virtual machines and you want to connect to them, but there is a lot of load. "Load" in IT refers to usage, which means requests. Load means many things in different fields, but in IT, especially in networking and the cloud, load refers to the demand on a resource. If many people are demanding a resource, we call it high load; if only a couple of people are demanding it, we call it low load or average load. It is based on that load that you configure how your resources should be made available: the amount of demand, the bandwidth, everything. To channel the load so that no resource is overused, we can cluster the resources, meaning we create two, three, four, or five copies of a particular resource as redundancy. Then you put something in between that shares the load across those resources, and that thing is called a load balancer. In AWS it is called the Elastic Load Balancer. Why is it called elastic? Because you can design it in such a way that it keeps up even when the load continues to multiply.
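To make that idea concrete, here is a minimal sketch in plain Python (not the AWS API; the server names are hypothetical) of how a load balancer spreads incoming requests across a small cluster of redundant servers:

```python
from itertools import cycle
from collections import Counter

# Hypothetical cluster: three identical copies of the same resource.
servers = ["web-a", "web-b", "web-c"]

def balance(requests, servers):
    """Hand each incoming request to the next server in turn (round robin)."""
    rotation = cycle(servers)
    return [(req, next(rotation)) for req in requests]

# Nine incoming requests: instead of one server carrying all nine,
# each server ends up handling three.
assignments = balance(range(9), servers)
load = Counter(server for _, server in assignments)
print(load)  # Counter({'web-a': 3, 'web-b': 3, 'web-c': 3})
```

The point is simply that the thing in the middle decides where each request goes, so no single copy of the resource is overused.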

Elastic means not static: when we say something is elastic, it can stretch and it can compress. For example, if we use Auto Scaling on those resources, the resources can multiply when the load is high and shrink back when the load is low. That process of expanding and reducing is why it is called an elastic load balancer: it balances the load across resources that can grow and shrink. So what is it used for? First, you use an elastic load balancer to improve availability, meaning you want the resources to be available at all times. Say we have five instances running, all running the same thing. If you are connected directly to a single server and that server goes down, you know what that means: the resource is no longer available. But with the help of a load balancer, you can connect three servers with identical applications to it, so when one server goes down, the remaining two servers take over the responsibility as if nothing happened. That makes the workload more available. Second, it increases fault tolerance, because a fault-tolerant infrastructure is one that keeps working even when part of it has failed.
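The availability point can be sketched the same way: if one of three identical servers drops out, the balancer simply keeps rotating over the survivors. Again this is a plain-Python illustration with made-up names, not the actual ELB logic:

```python
from itertools import cycle

# name -> is the server currently up?
servers = {"web-a": True, "web-b": True, "web-c": True}

def dispatch(n_requests, servers):
    """Send requests only to servers that are currently up."""
    alive = [name for name, healthy in servers.items() if healthy]
    if not alive:
        raise RuntimeError("no healthy targets: the resource is unavailable")
    rotation = cycle(alive)
    return [next(rotation) for _ in range(n_requests)]

servers["web-b"] = False          # one server goes down...
handled = dispatch(6, servers)    # ...and the other two take over
print(sorted(set(handled)))       # ['web-a', 'web-c']
```

From the user's point of view nothing happened: all six requests were still served.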

So, as I mentioned earlier, if three servers are connected and one goes down while two stay up, there is a fault (one of the systems is down), but the system keeps working, which means the system can tolerate the fault. That is increased fault tolerance. Third, it provides security. Some load balancers can listen inside a private VPC. Remember we mentioned bastion hosts, which are part of security; load balancers also increase security. For example, a load balancer can sit in a public subnet and redirect traffic to instances in a private subnet, which you cannot access directly from the internet, much like a bastion host fronting a protected network. Apart from that, you can also protect the load balancer with DNS connected to WAF. Increased security means an attacker cannot just attack one system and have everything go down; another system will take over. There are many things a load balancer does: it also supports HTTPS, which encrypts traffic end to end, so it increases security that way too. Fourth, it improves profitability. You know, if you are in business and your system goes down, it costs a lot.

For a telecom company to go down for one minute is expensive: for every minute of downtime they can lose up to a million, you understand. So if the application your customers need to access is not available, profit goes down; availability increases profitability. That is for the business people. It also improves resilience, making the system more rugged, and it provides flexibility. You can decide to scale it up or bring it down; you can shape it the way you want. You can design your load balancer based on what you want: you can say, "load balancer, listen at the IP level," or you can say, "no, I don't want you to listen at the IP level, I want you to listen for a certain application." I will come back to that. So it gives you flexibility, and it makes internet routing easier. If you have a web server that is handling a heavy load and many people are trying to connect to it directly, latency increases, the system gets dragged down, and it can go out entirely. But with the help of a load balancer you can balance the load, spreading it across different servers so that no single web server is overstressed.

Yes, it makes access easier and it also makes the system more available. So here it is on one slide: the main function of a load balancer is to help manage and control the flow of inbound requests destined for a group of targets by distributing the requests evenly. That stream of inbound requests is the load, and the targets are what you have attached at the back end. As you can see on my screen, we have a load balancer connected to an EC2 instance, a Lambda function, and a container: you can design your load balancer however you want. Targets can be EC2 instances, ranges of IP addresses, containers (like the ones you see here), or Lambda functions. The targets defined within a target group can be situated across different Availability Zones, or all placed within the same single Availability Zone. We have mentioned Availability Zones in our former classes. A Region is a particular geographic location; for example, there is a Region covering Africa, in South Africa. Inside a Region are data centers spread across different locations, and those data centers are grouped into Availability Zones.

A VPC is Region-oriented: it spans the Availability Zones within a Region, so you have to understand this. You can place one instance in Availability Zone 1a, another in 1b, 1c, or 1d, attach the different instances to a load balancer, and spread your load across those Availability Zones without any issue. That is a typical Elastic Load Balancer. Okay, what types do we have? We have three types of Elastic Load Balancer, and some of us are familiar with them: the Application Load Balancer, the IP-configured one (the Network Load Balancer), and the old-school one, which is called the Classic Load Balancer. So basically we will focus on two. What is an Application Load Balancer, and what is an IP-configured load balancer? I will go straight to the point. An Application Load Balancer means you have designed your load balancer around the application: you have configured it so that when a certain application is being called, it sends the request to a particular server. For example, let's assume the EC2 instance here is serving an HTML web application, whether on Apache or on Nginx.

So you can design the load balancer to listen so that any request for index.html is sent to this server, while this other one could be PHP, serving index.php, so PHP requests are sent to it, or some other application, depending on the kind of applications you have. You configure the Application Load Balancer to listen for a particular request relating to a particular application. An IP-configured load balancer, on the other hand, takes requests as they come in and spreads them across all the targets without specifically looking at the application; most times it uses round robin. Many requests come in and it spreads them by IP. That often makes the system more relaxed, and the load balancing is easier and faster, because it is not trying to inspect each request to figure out which application lives on which server; it just spreads the requests. So in plain networking you most often use an IP-configured load balancer, but for our training today I am going to focus on the Application Load Balancer, which operates at a different level. It works at layer 7, while the IP-configured one works at layer 4. For those of us into the layers of networking: at the first layer we are talking about the raw binary.
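Here is a small sketch of that layer-7 idea: routing by the requested path, the way an Application Load Balancer rule would send .html traffic to one target and .php traffic to another. This is plain Python with hypothetical server names, not the actual AWS rule syntax:

```python
# Hypothetical listener rules: path suffix -> target server.
rules = {
    ".html": "apache-server",  # e.g. index.html goes to the Apache box
    ".php":  "php-server",     # e.g. index.php goes to the PHP box
}
DEFAULT = "default-server"     # fallback when no rule matches

def route(path, rules):
    """Layer-7 routing: inspect the request path and pick a target."""
    for suffix, target in rules.items():
        if path.endswith(suffix):
            return target
    return DEFAULT

print(route("/index.html", rules))  # apache-server
print(route("/index.php", rules))   # php-server
print(route("/healthz", rules))     # default-server
```

A layer-4 (IP-configured) balancer skips this inspection entirely and just rotates connections across the targets, which is why it is simpler and often faster.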

At the second layer we are talking about MAC addresses and that kind of thing, and further up you get to TCP and all of that. The Application Load Balancer sits at the highest layer, which is layer 7, while the IP-configured one sits at layer 4. So that is it. Now, this is an example of an elastic load balancer architecture. As I mentioned, this is what makes it elastic: the Auto Scaling. The request comes in, maybe through Route 53, which is the DNS, with a name like satwamy.com.ng or jumboworld.com.ng; that DNS routes the request to the HTTP Apache servers behind the load balancer. And when the load is increasing, you can design a policy such as: when CPU stays above 50 percent for five minutes, launch another instance. This is another interesting function, and one reason people take cloud so seriously: it can increase the number of servers and reduce them again. This is what makes it elastic. So these are the things you need to configure an Elastic Load Balancer: you configure the security settings (the security group), you configure the routing, you register the targets, meaning the instances and resources that the requests should be connected to (these are called the target group), and then you review.
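That "above 50 percent for five minutes" policy can be sketched as a toy decision function. This is a simulation with made-up thresholds of the decision Auto Scaling makes for you; the real service is configured in the console, not coded like this:

```python
def scaling_decision(cpu_samples, threshold=50.0, sustained_minutes=5):
    """Scale out if every one of the last `sustained_minutes` one-minute CPU
    samples is above the threshold; scale in if all are well below it."""
    recent = cpu_samples[-sustained_minutes:]
    if len(recent) < sustained_minutes:
        return "no-change"                 # not enough data yet
    if all(cpu > threshold for cpu in recent):
        return "scale-out"                 # add an instance: the elastic stretch
    if all(cpu < threshold / 2 for cpu in recent):
        return "scale-in"                  # remove an instance: the elastic compress
    return "no-change"

print(scaling_decision([40, 60, 70, 65, 80, 90]))  # scale-out
print(scaling_decision([30, 20, 15, 10, 5, 8]))    # scale-in
```

The load balancer then simply starts including the new instance (or stops including the removed one) in its rotation.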

Then, after reviewing, you launch. So these are the things you need. We are going to make use of a VPC, which I have mentioned before: a virtual private cloud. You need at least two subnets; a load balancer works with at least two subnets. Interestingly, I think there was one configuration I did where I put both instances in one subnet, so there is a way you can also do it, but most times it is advisable to have two different instances in different subnets, because a load balancer must span subnet A and subnet B; it's a must. Then you have the route table; the target group, which I mentioned; the listener, meaning what the load balancer listens for, for example port 80 for HTTP or port 443 for HTTPS, and what it forwards the requests to; the internet gateway; and the Auto Scaling group. Then this is our architecture for today. This architecture diagram was taken from AWS, so it is available on the AWS site; you can take a screenshot of it, or you can also go to the site yourself. Looking at it, it talks about many users; I only drew one user, but there could be many users on their own systems. The user's system is connected to the internet over HTTP/HTTPS, and from there the traffic flows on.

From there it links to the ELB, the Elastic Load Balancer. The Elastic Load Balancer automatically transfers the load coming from the requests to one of the instances. On this side we have ourselves: the students, the engineers, the infrastructure engineers, the DevOps team, the networking team in charge of configuring the load balancer, using an API client; I think today we'll be making use of PuTTY, you understand me. Then there is the IAM role you must have, with IAM configured so that you have the capacity to launch the load balancer; I am going to make use of that IAM role. Then there is our console, which is what we are going to use to connect to the AWS environment; in this case we are using the normal web console with single sign-on. Then the DNS: instead of directly using the IP address or the long URL from the load balancer, we want to convert it to a name, so I want to make use of Route 53. It is optional, but it is part of our training today. Another interesting thing a load balancer does, which I also want to mention, is that it helps you check all the instances at the back end, the targets, and see if they are working well; if they are not working well, the health check will tell you. And there are times you can also do it yourself.

Yes, maybe there are some servers that you want to take out of service so you can work on them. You can manually configure the load balancer and mark a server as out of service, so that the load balancer will not send requests to it. You can configure that; it will be in an advanced class, but it can also be done. So this is our lab. I believe that with this little presentation, before we go to the lab, you are able to understand load balancers. In the next slide I want to focus on the lab itself. I hope it comes in handy.
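That health-check behaviour, including manually taking a server out of rotation for maintenance, can be illustrated like this. Again, this is a plain-Python toy with made-up names, not the ELB implementation:

```python
# name -> state; only "healthy" targets receive traffic.
targets = {"web-a": "healthy", "web-b": "healthy", "web-c": "healthy"}

def health_check(target, responding):
    """Automatic check: mark a target unhealthy if it stops responding."""
    targets[target] = "healthy" if responding else "unhealthy"

def drain(target):
    """Manual override: take a server out of rotation for maintenance."""
    targets[target] = "draining"

def eligible(targets):
    """The set of targets the balancer will actually send requests to."""
    return [t for t, state in targets.items() if state == "healthy"]

health_check("web-b", responding=False)  # web-b failed its health check
drain("web-c")                           # we want to work on web-c ourselves
print(eligible(targets))                 # ['web-a']
```

Either way, automatic or manual, the effect is the same: requests only flow to targets that are fit to serve them.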

https://www.youtube.com/watch?v=5UFGAFfxx78