Internet Load Balancing Technology and its Applications

Internet load balancing is a clustering technology that enhances the scalability and availability of critical TCP/IP services, such as Web, virtual private networking, terminal, and media servers. To scale performance, the load balancer distributes IP traffic across multiple cluster hosts. Availability is maintained by detecting host failures and redistributing traffic to the remaining hosts.
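The distribute-and-redistribute behaviour described above can be shown in a minimal sketch. The host names and the round-robin policy are illustrative assumptions, not a description of any particular product.

```python
# Sketch: round-robin distribution across cluster hosts, with failed
# hosts removed so traffic flows to the remaining ones. Host names
# are hypothetical.

class Cluster:
    def __init__(self, hosts):
        self.hosts = list(hosts)   # hosts currently in service
        self.next = 0              # round-robin cursor

    def pick_host(self):
        """Return the next in-service host in round-robin order."""
        host = self.hosts[self.next % len(self.hosts)]
        self.next += 1
        return host

    def mark_failed(self, host):
        """A detected failure removes the host; subsequent traffic is
        redistributed over the hosts that remain."""
        self.hosts.remove(host)
        self.next = 0

cluster = Cluster(["web1", "web2", "web3"])
first_four = [cluster.pick_host() for _ in range(4)]      # cycles all three
cluster.mark_failed("web2")
after_failure = [cluster.pick_host() for _ in range(4)]   # only web1/web3
```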

Among the most important users of the technology are small office and home office users. Internet load balancing applies to communication channels; it is particularly well suited to the broadband market, and fits situations where bandwidth must be increased to support Virtual Private Networks (VPNs) or Voice over IP.

Internet load balancing is essential in networks where it is difficult, or in fact impossible, to predict the number of requests that will be directed to a server. Most busy websites therefore employ more than one web server together with a load-balancing scheme.

Internet Load Balancing – Business Benefits and Scope

Internet connectivity (the ability to communicate with other systems or with Internet sites) sits at the nerve centre of any business operation. Organizations depend on it to run the business applications that are critical to productivity and profits. Internet access is no longer a luxury; it is critical to the functioning of the business. Internet load balancing provides an efficient, cost-effective solution for maximizing the utilization and availability of Internet access.

To improve the reliability of Internet access, many organizations lease two ISP links connecting the internal network to the Internet, using one as the primary and keeping the other as a backup. This enhances reliability by providing spare capacity during ISP link failures, but leaves the backup link idle. One way to avoid an idle backup link is the Border Gateway Protocol (BGP), which supports multi-homing across multiple ISP links; despite its complications and challenges, however, BGP does not offer an efficient solution. For customers who would like to avoid the challenges of BGP routing without wasting an idle backup ISP link, link load balancing provides a solution with a valuable return on investment: link load balancers distribute inbound and outbound traffic efficiently across all ISP links using intelligent traffic management.

Links are chosen by load-balancing methods based on critical performance metrics such as link weight, bandwidth cost, bandwidth limit, and ISP pricing, and these choices have a direct, positive impact on the business.
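One way to picture metric-based link selection is a simple scoring function. The link names, numbers, and the weight-over-cost score below are illustrative assumptions; real balancers combine these metrics in vendor-specific ways.

```python
# Sketch: pick a link by weight, per-GB cost, and bandwidth headroom.
# All figures and field names are hypothetical.

links = [
    {"name": "ISP-A", "weight": 3, "cost_per_gb": 0.10, "limit_mbps": 100, "used_mbps": 60},
    {"name": "ISP-B", "weight": 1, "cost_per_gb": 0.02, "limit_mbps": 50,  "used_mbps": 10},
    {"name": "ISP-C", "weight": 2, "cost_per_gb": 0.05, "limit_mbps": 200, "used_mbps": 195},
]

def choose_link(links, needed_mbps):
    """Among links with enough headroom, prefer high weight and low cost."""
    eligible = [l for l in links
                if l["limit_mbps"] - l["used_mbps"] >= needed_mbps]
    return max(eligible, key=lambda l: l["weight"] / l["cost_per_gb"])

best = choose_link(links, needed_mbps=20)   # ISP-C is near its bandwidth limit
```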

Bandwidth optimization techniques reduce bandwidth costs considerably. Owners of high-volume web services use such techniques to save a few bytes per page without compromising on quality.

Customers no longer have to rely on high-cost ISP services for reliability. They can aggregate the bandwidth of multiple links from different ISPs, which not only reduces cost but also improves the reliability and availability of access. Using all links simultaneously eliminates the failure risk associated with any single link: losing one link reduces the available bandwidth, but does not cause a loss of availability or performance.

Link load balancers use careful health checks to monitor the health and performance of ISP links, moving traffic to healthier, better-performing links. More advanced link load balancers offer sophisticated health checks that look beyond the next-hop link, using end-to-end measurements and service response times to determine the best link for any given application transaction.
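A minimal sketch of such a response-time-based check follows. It assumes each link's probe yields a measured service response time in seconds, with unreachable links reporting `None`; the function and link names are hypothetical.

```python
# Sketch: choose the link whose end-to-end probe shows the lowest
# service response time; links that failed their probe report None.

def best_link(probes):
    """Return the name of the healthy link with the lowest response time,
    or None if every link failed its health check."""
    healthy = {name: rtt for name, rtt in probes.items() if rtt is not None}
    if not healthy:
        return None
    return min(healthy, key=healthy.get)

probes = {"ISP-A": 0.120, "ISP-B": None, "ISP-C": 0.045}  # ISP-B failed its check
chosen = best_link(probes)
```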

Network and application security is the need of the hour for businesses and organizations, and link load balancers, sited at the junction of the internal and external networks, are well placed to provide it. Source Network Address Translation (NAT) forces return traffic to use the same ISP link as the forward traffic, achieving persistence and consistent performance. NAT also contributes to security: some link load balancers use their layer 4-7 network and application intelligence to prevent Denial of Service attacks, blocking traffic from malicious clients without affecting performance for legitimate customers.
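The persistence effect of source NAT can be sketched as a flow table: the balancer records which link an outbound flow used, so the matching return traffic comes back over the same link. The addresses, ports, and function names below are illustrative assumptions.

```python
# Sketch: pin each flow to the ISP link chosen on the way out, so
# return traffic follows the same link. Addresses are illustrative.

nat_table = {}   # (client_ip, client_port, dest_ip, dest_port) -> link name

def outbound(flow, link):
    """Record the link chosen for a new outbound flow."""
    nat_table[flow] = link
    return link

def inbound(flow):
    """Return traffic is matched back to the link recorded for the flow."""
    return nat_table.get(flow)

flow = ("10.0.0.5", 51000, "203.0.113.9", 443)
outbound(flow, "ISP-A")
```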

Products supporting a high-availability (HA) configuration offer a fault-tolerant solution well suited to an organization’s critical business needs. In HA mode, two link load balancers operate as an active/standby pair with session synchronization and sub-second failover. When one device fails, existing connections are unaffected: the other device is aware of them and continues servicing application traffic.
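Why session synchronization preserves connections across a failover can be sketched as follows. The class and session names are hypothetical, and the replication shown is deliberately simplistic.

```python
# Sketch: the active unit replicates its session table to the standby,
# so a failover does not drop established connections.

class Balancer:
    def __init__(self, name):
        self.name = name
        self.sessions = {}   # session id -> state

    def open_session(self, sid, peer=None):
        """Establish a session; if a peer is given, synchronize it."""
        self.sessions[sid] = "established"
        if peer is not None:               # session synchronization
            peer.sessions[sid] = "established"

active = Balancer("lb-active")
standby = Balancer("lb-standby")
active.open_session("client-42", peer=standby)

# The active unit fails; the standby already knows the session and
# can keep servicing it.
surviving = standby.sessions.get("client-42")
```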

However, the question arises: what would happen if the Internet itself failed? Companies have begun to consider this question and to weigh options such as a mixture of wireline and wireless ISPs.

Let us now look at the market players, and at the load-balancing products and technology offered by FatPipe.

Load Balancing products from FatPipe Networks

About FatPipe

FatPipe Networks is the inventor, and holder of multiple patents, of technology that provides the highest levels of optimization, reliability, security, and acceleration for Wide Area Networks (WANs). FatPipe is the world’s most innovative creator of WAN redundancy technology, router clustering, which gives companies automatic, dynamic failover when a data line connection goes down due to a WAN component or service failure.

FatPipe’s line of products covers an array of features and benefits for companies that run mission-critical applications over any type of WAN infrastructure. Customer benefits include up to seven nines of WAN redundancy, reliability, and speed. Dynamic load balancing, Quality of Service (QoS), additional security, compression, and VPN encryption capability are available as add-on features.

How does FatPipe’s load balancing and failover technology differ in its operation from its competitors’?

Here are the products FatPipe offers, and how each serves the load-balancing function.

Site Load Balancing from FatPipe:

WARP units can be configured to automatically load balance site traffic across one or more remote sites where inbound connectivity to Internet-accessible servers is critical. Site Load Balancing, an add-on feature, also enables site failover.

IPsec, Quality of Service (QoS), and Site Load Balancing are available as add-on features.

Other product features include SmartDNS, Policy Routing, and a new load-balancing method, Fastest Route.

The technology behind these add-ons makes FatPipe a leader in the market.

Products from FatPipe Networks for Inbound Load Balancing

SmartDNS –

WARP’s patent-pending SmartDNS technology provides DNS load balancing as well as inbound line failover. Customers also benefit from WARP’s bandwidth management and WAN optimization tools, which increase the speed of data transmission.

SmartDNS accomplishes load balancing through round-robin DNS: clients on the Internet connect to internal servers through different WAN connections at different times, in round-robin fashion. SmartDNS provides redundancy by making internal servers on the LAN accessible through multiple connections.
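The round-robin mechanic itself is simple to sketch: the same hostname resolves to a rotated address list on successive queries, spreading clients across the WAN links. The addresses below are illustrative, and this is a generic round-robin DNS sketch rather than FatPipe's implementation.

```python
# Sketch: generic round-robin DNS. Each query gets the address list in
# a different order, so clients spread across WAN links over time.

from collections import deque

records = deque(["198.51.100.10", "203.0.113.10"])  # one public IP per WAN link

def resolve(_hostname):
    """Answer with the current rotation, then rotate for the next query."""
    answer = list(records)
    records.rotate(-1)
    return answer

first = resolve("www.example.com")
second = resolve("www.example.com")   # leads with the other link's address
```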

When a connection goes down, SmartDNS™ adjusts its DNS replies so that clients on the Internet connect to internal servers over a route that is open, rather than trying to reach a host at an IP address that is not accessible.

The key aspects of FatPipe’s SmartDNS technology are:

  • Load balancing: SmartDNS balances load by advertising different paths to a host on the LAN. The host appears at a different IP address at different times, thus using all available lines. The IP addresses are resolved based on the selected interface-to-network mappings.

  • Speed: Through load balancing, FatPipe SmartDNS speeds up the delivery of inbound traffic according to the interface-to-network mappings selected by the administrator.
  • Failover: SmartDNS dynamically senses when a failure occurs and adjusts its DNS replies, so it does not hand out IP addresses associated with connections that are down. SmartDNS allows hosts on a network to have multiple IP addresses from different providers, and hands out those addresses according to the interface-to-network mappings. SmartDNS uses the Line Status, determined by the Route Test function, to detect when a WAN interface loses connectivity. If the Line Status for an interface is marked "Down", SmartDNS changes the advertised paths to compensate for the unavailable WAN interface, advertising only the pathways whose interface is "Up".
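The failover behaviour above amounts to filtering the interface-to-network mappings by line status before answering. The mapping structure, field names, and addresses in this sketch are assumptions for illustration, not FatPipe's actual data model.

```python
# Sketch: hand out only the addresses whose WAN interface passed its
# route test. Mappings and statuses are illustrative.

mappings = [
    {"ip": "198.51.100.10", "interface": "wan1"},
    {"ip": "203.0.113.10",  "interface": "wan2"},
]
line_status = {"wan1": "Up", "wan2": "Down"}   # wan2 failed its route test

def advertised_ips(mappings, line_status):
    """Advertise only pathways whose interface is marked 'Up'."""
    return [m["ip"] for m in mappings
            if line_status.get(m["interface"]) == "Up"]

answers = advertised_ips(mappings, line_status)   # wan2's address is withheld
```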

Before moving DNS services to WARP, it is recommended to configure SmartDNS first and test resolution locally by querying the WARP directly.

a) Policy Routing gives administrators more control over their networks, letting them define how data is transmitted based on protocol, source IP address, destination IP address, and destination port.
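A rule table matched in order captures the idea. The rules, subnets, and default link below are hypothetical examples of such a policy, not FatPipe configuration syntax.

```python
# Sketch: first-match policy routing on protocol, source subnet, and
# destination port. Rules and addresses are illustrative.

import ipaddress

rules = [
    {"proto": "tcp", "src": "10.0.1.0/24", "dst_port": 443,  "link": "ISP-A"},
    {"proto": "udp", "src": "10.0.0.0/8",  "dst_port": 5060, "link": "ISP-B"},
]

def route(proto, src_ip, dst_port, default="ISP-A"):
    """Return the link named by the first matching rule, else the default."""
    for r in rules:
        if (r["proto"] == proto
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(r["src"])
                and r["dst_port"] == dst_port):
            return r["link"]
    return default

voip_link = route("udp", "10.0.2.7", 5060)   # matches the SIP/VoIP rule
```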

b) QoS, when enabled, allows WAN traffic to be prioritized. This is especially useful for ensuring that real-time traffic, such as voice and video, gets priority over other types of traffic. The primary purpose of QoS is to assure that packets are transported from source to destination with characteristics matching the requirements of the service that the packet flow supports. This becomes a challenge when multiple streams compete for limited resources. One such resource is link transmission capacity, which is divided among the throughputs of the individual streams; another important resource is buffer memory, which affects packet loss.
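The two resources named above can be sketched together: a priority queue serves real-time classes first, and a bounded buffer shows how memory pressure causes packet loss. The class names, priorities, and buffer size are illustrative assumptions.

```python
# Sketch: strict-priority queuing with a bounded buffer. Voice beats
# video beats bulk; a full buffer drops the arriving packet.

import heapq

BUFFER_LIMIT = 3                                  # buffer memory, in packets
PRIORITY = {"voice": 0, "video": 1, "bulk": 2}    # lower number = served first

queue, dropped, seq = [], [], 0

def enqueue(pkt_class, payload):
    global seq
    if len(queue) >= BUFFER_LIMIT:    # buffer exhausted -> packet loss
        dropped.append(payload)
        return
    heapq.heappush(queue, (PRIORITY[pkt_class], seq, payload))
    seq += 1                          # seq keeps FIFO order within a class

def dequeue():
    return heapq.heappop(queue)[2]

enqueue("bulk", "b1")
enqueue("voice", "v1")
enqueue("video", "vid1")
enqueue("bulk", "b2")                 # buffer already full: dropped

order = [dequeue() for _ in range(len(queue))]   # real-time classes first
```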