Fundamentals of Azure Internal Load Balancers (ILBs)

by Bruno Terkaly

  1. Internal load balancing (ILB) enables you to run highly available services behind a private IP address
  2. Internal load balancers are accessible only within a cloud service or Virtual Network (VNet)
    • This provides additional security on that endpoint.

Some questions I am hearing

  1. Why am I able to access the internal load balancer using its IP address, but not via the load balancer or service name?
    • See Accessing the ILB below
  2. Is there any option in the Azure portal to view the load balancer configuration?
    • Internal load balancing cannot be configured through the portal as of today; this will be supported in the future
    • However, it can be configured using PowerShell cmdlets.
      • ILB can be used in a deployment inside a Regional Virtual Network as well as in a new deployment outside a Virtual Network
  3. How do I monitor the traffic and see which server it is being directed to?
  4. How do I set up the probing and rules/alerts for it?
    • See the links below
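
The PowerShell configuration mentioned above can be sketched roughly as follows. This assumes the classic (service management) Azure PowerShell module of that era; the cloud service, VM, subnet, and IP address names are all hypothetical placeholders:

```powershell
# Sketch only: classic Azure PowerShell module assumed; all names are placeholders.

# Create an internal load balancer on an existing cloud service, giving it a
# static IP address from a subnet in the Regional Virtual Network:
Add-AzureInternalLoadBalancer -ServiceName "MyCloudService" `
    -InternalLoadBalancerName "MyILB" `
    -SubnetName "Subnet-1" -StaticVNetIPAddress "10.0.0.100"

# Add a load-balanced endpoint (with an HTTP probe) served through the ILB:
Get-AzureVM -ServiceName "MyCloudService" -Name "WebVM1" |
    Add-AzureEndpoint -Name "web" -LBSetName "web-lb" `
        -Protocol tcp -LocalPort 80 -PublicPort 80 `
        -ProbePort 80 -ProbeProtocol http -ProbePath "/" `
        -InternalLoadBalancerName "MyILB" |
    Update-AzureVM
```

The probe settings here also answer question 4 above: the ILB uses the probe to decide which VMs in the load-balanced set are healthy enough to receive traffic.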

ILB ENABLES THE FOLLOWING NEW TYPES OF LOAD BALANCING:

  1. Between virtual machines within a cloud service.
  2. Between virtual machines in different cloud services that are themselves contained within a virtual network.
  3. Between on-premises computers and virtual machines in a cross-premises virtual network.

Some diagrams

EXAMPLE OF A MULTI-TIER APPLICATION USING WEB SERVERS AS THE FRONT END AND DATABASE SERVERS AS THE BACK END IN A CLOUD SERVICE.

  1. Multi-Tier Web App

    Figure 1: Architecture for a Multi-Tier Web App

ILB CAN PERFORM LOAD BALANCING FOR TRAFFIC FROM INTRANET CLIENTS

  1. Traffic from clients on the on-premises network gets load-balanced across the set of LOB servers running in a cross-premises virtual network
  2. You don’t need a separate load balancer in the on-premises network or in the virtual network

    Figure 2: Architecture for an Intranet Network

LOAD BALANCING ON-PREMISES SERVER TRAFFIC

  1. ILB also allows traffic from servers on the on-premises network to be load-balanced across virtual machines running in a cross-premises virtual network.

    Figure 3: Architecture for an On-Premises Network

FROM ON PREMISES

  1. When used within a Virtual Network, the ILB endpoint is also accessible from on-premises networks and other interconnected VNets, enabling some powerful hybrid scenarios

ACCESSING THE ILB

FROM INSIDE A CLOUD SERVICE

  1. VMs inside a cloud service have private IP address spaces
  2. You can talk to the ILB using this private IP address

FROM WITHIN A VIRTUAL NETWORK

  1. A customer can specify a static VNet IP address
  2. Alternatively, the load-balanced IP is acquired from a virtual subnet
  3. This allows you to connect VNets through a secure IPsec tunnel
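
Once the ILB's private IP address is known, reaching it from a client inside the VNet (or over the IPsec tunnel) is ordinary TCP. A minimal reachability-probe sketch, using a hypothetical ILB address:

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical ILB private IP inside the VNet; replace with your own address.
if tcp_probe("10.0.0.100", 80):
    print("ILB endpoint reachable")
else:
    print("ILB endpoint unreachable")
```

Note that the client sees only the single private IP; which back-end VM actually answers is decided by the load balancer.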

Some useful links

Regional Virtual Networks: http://azure.microsoft.com/blog/2014/05/14/regional-virtual-networks/
Internal Load Balancing: http://azure.microsoft.com/blog/2014/05/20/internal-load-balancing/
Configure an internal load-balanced set: http://msdn.microsoft.com/en-us/library/azure/dn690125.aspx
Azure Load Balancer: http://msdn.microsoft.com/en-us/library/azure/dn655058.aspx
Configure a load-balanced set: http://msdn.microsoft.com/en-us/library/azure/dn655055.aspx

Comparing Cloud Compute Services

Jason Read at CloudHarmony does a great job sorting through all of the different cloud compute offerings from vendors and then comparing their performance in a like-to-like benchmark.  This is no small task, and most reports I have read comparing cloud compute do not do a good enough job of comparing similar services when conducting their performance tests.  You can obtain a copy of the full report, which is 100+ pages long, from here.

Below are some of the test results, along with my own comments.

Web Server Comparison

  Provider               Small Web Server       Medium Web Server    Large Web Server
  Amazon EC2             c3.large + t2.medium   c3.xlarge            c3.2xlarge
  DigitalOcean           4 GB / 2 Cores         8 GB / 4 Cores       16 GB / 8 Cores
  Google Compute Engine  n1-highcpu-2           n1-highcpu-4         n1-highcpu-8
  Microsoft Azure        Medium (A2)            Large (A3)           Extra Large (A4)
  Rackspace              Performance 1 2GB      Performance 1 4GB    Performance 1 8GB
  SoftLayer              2 GB / 2 Cores         4 GB / 4 Cores       8 GB / 8 Cores

Database Server Comparison

  Provider               Small Database Server  Medium Database Server  Large Database Server
  Amazon EC2             c3.xlarge              c3.2xlarge              c3.4xlarge
  DigitalOcean           8 GB / 4 Cores         16 GB / 8 Cores         48 GB / 16 Cores
  Google Compute Engine  n1-standard-4          n1-standard-8           n1-standard-16
  Microsoft Azure        Large (A3)             Extra Large (A4)        A9
  Rackspace              Performance 2 15GB     Performance 2 30GB      Performance 2 60GB
  SoftLayer              8 GB / 4 Cores         16 GB / 8 Cores         32 GB / 16 Cores

Test Results

CPU Performance Results

In the web server tests, Amazon EC2 performed a little better than the rest of the competition.  In the database server tests, Rackspace was slightly better until the large database server test, where Azure’s new A9 server won out.

Also included in this test was CPU variability.  In these multi-tenant, shared-resource environments, performance variability over time can be a risk to customers; if that matters to you, this test should be taken into consideration, along with the other variability tests included in the report.  For this test, the lower the score the better; ideally you want the same performance over the life of the service.

In both tests, Amazon had the best overall score across all server types.  The changing CPU types across the different server sizes tested should also be considered when looking at these two scores.

Disk Performance Testing

In these tests, Amazon EC2 and Rackspace were consistently faster and more reliable.  Since they use SSD-based storage, they should be faster and more consistent than the other services.  DigitalOcean is also SSD-based, but its performance was not on par with the other SSD-based services, and it also had the highest rate of variability.  SoftLayer is not SSD-based, but its overall disk performance and consistency were very good.  This could be from using SSD caching, but no matter how they are doing it, the performance speaks for itself.  Microsoft Azure and Google do not offer SSD storage, which is reflected in their testing.

Figure: Web Server Test Results

Memory Performance

These tests were conducted only on the database servers.  Because Azure’s older hardware is based on an AMD CPU and motherboard, Azure did not perform well until the large database server test, where it uses the newer Intel Sandy Bridge-based platform.  Amazon and Google both performed well throughout the testing and outperformed the others.  Newer hardware is always going to win in these types of tests, and that is reflected here.

External and Internal Network Performance

Network testing is always subjective, and measuring cloud service providers’ networking throughput adds to the complexity of the configuration, as some providers offer additional ways to improve performance through different setups.  Amazon, Google, and Rackspace seem to provide higher throughput than Microsoft Azure, DigitalOcean, and SoftLayer.  Also, some vendors limit throughput depending on the size of the compute instance you purchase.

They also cover the value of each provider in their report, which ultimately should be the deciding factor in choosing a cloud computing provider for your solution. After all, there is no need to overspend on a service if you can accomplish everything you need with a cheaper solution.

You can download the full report here.