Load Balancing


Load Balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, Central Processing Units, or Data Stores.

Load Balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource.
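The simplest strategy for spreading work evenly is round-robin. A minimal sketch (the backend names are hypothetical placeholders for real host:port addresses):

```python
from itertools import cycle

# Hypothetical backend names; a real pool would hold host:port addresses.
backends = ["server-a", "server-b", "server-c"]

# Round-robin: hand each incoming request to the next backend in turn,
# so no single resource is overloaded while the others sit idle.
next_backend = cycle(backends)

assignments = [next(next_backend) for _ in range(6)]
print(assignments)
```

Each backend receives every third request, which keeps the pool evenly loaded as long as requests cost roughly the same to serve.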

Using multiple components with Load Balancing instead of a single component may increase reliability and availability through redundancy.

Load Balancing usually involves dedicated software and/or hardware, such as a multilayer switch or a Domain Name System server process.

Load Balancing differs from channel bonding: Load Balancing divides traffic between network interfaces at the Transport Layer (OSI-Model Layer 4, i.e. per socket), while channel bonding divides traffic between physical interfaces at a lower level, either per packet (OSI-Model Network Layer) or at the Data-link Layer (OSI-Model Layer 2) with a protocol such as shortest path bridging.

Load Balancing enables Horizontal Scaling, which means scaling by adding more machines (Entities) to your pool of resources.
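Because the balancer routes against whatever is currently in the pool, capacity can be added without touching the routing logic. A minimal sketch, assuming a plain list models the resource pool:

```python
# Minimal sketch, assuming a plain list models the resource pool.
pool = ["server-a", "server-b"]

def route(request_id: int, pool: list) -> str:
    # Spread requests evenly across whatever machines are currently in the pool.
    return pool[request_id % len(pool)]

# Horizontal scaling: add another machine and the balancer uses it immediately.
pool.append("server-c")
print(route(5, pool))  # request 5 -> pool[5 % 3] == "server-c"
```

The design choice here is that the pool is the only thing that changes when you scale out; clients and routing code stay the same.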

Load Balancing usually can be set to honor State (e.g. sticky sessions, where a given client is always routed to the same backend) or to be Stateless.
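One common way to honor State without the balancer storing any session table is to hash a client identifier, so the same client deterministically lands on the same backend. A hedged sketch (the function name and client IDs are illustrative, not from any particular product):

```python
import hashlib

backends = ["server-a", "server-b", "server-c"]

def sticky_pick(client_id: str) -> str:
    # "Sticky" routing: hash the client identifier so repeated requests
    # from the same client always map to the same backend.
    digest = int(hashlib.sha256(client_id.encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

# Deterministic: the same client sticks to one backend across requests.
assert sticky_pick("alice") == sticky_pick("alice")
```

The trade-off versus a Stateless setup is that hash-based stickiness can distribute load unevenly, and changing the pool size remaps most clients unless consistent hashing is used.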

More Information#

There might be more information for this subject on one of the following: