Network infrastructure design is the discipline of designing the systems that transfer data across a network. The elements required to create a network include routing and arbitration, packets, nodes, switching techniques, and links. There are two approaches to network design: the top-down approach and the bottom-up approach. The top-down approach is generally preferred over the bottom-up approach.
Table of Contents
Introduction
Network Infrastructure
Wide area networking
Server load balancing
Media multicasting
Redundancy and high availability
Conclusion
Network Infrastructure Design
Introduction
Information technology (IT) and enterprise networks have become the core of many organizations. Critical business functions often depend on a fully functioning IT infrastructure: no network means no ability to generate revenue. To this end, an organization's growth and evolution should be reflected in the growth and evolution of its network. Organizational changes can include new or expanded missions, new factors such as mobile workers, and growth or downsizing in response to purely external factors. Infrastructure changes that stem from these factors can include additional network components (of a type already present), new types of components, and additional subnets or Internet connections. This paper describes the fundamental components of infrastructure design.
Network Infrastructure
Despite improvements in equipment performance and media capabilities, network design is becoming more difficult. The trend toward increasingly complex environments involves multiple media, multiple protocols, and interconnection to networks outside any single organization's domain of control. Careful network design can reduce the hardships associated with growth as a networking environment evolves.
Wide area networking
The design of Wide Area Networks (WANs) should balance cost against performance. An ideal design optimizes packet services, but service optimization does not necessarily mean picking the service mix with the lowest possible tariffs. Successful packet-service implementations result from adhering to two basic rules (a brief sketch of this trade-off follows the list):
1. Balancing the cost savings derived from instituting WAN interconnections against the computing community's performance requirements
2. Building an environment that is manageable and scalable as more WAN links are required
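To make the cost-versus-performance balance concrete, the following sketch scores a few WAN service options by weighing monthly tariff against expected latency. The option names, prices, latencies, and weights are illustrative assumptions rather than real carrier figures, and the scoring formula is only one simple way to express the trade-off.

# Illustrative sketch: ranking hypothetical WAN service options by a
# weighted blend of normalized tariff and latency (lower score is better).
from dataclasses import dataclass

@dataclass
class WanOption:
    name: str
    monthly_cost: float  # assumed tariff in dollars per month
    latency_ms: float    # assumed round-trip latency in milliseconds

def score(option, cost_weight=0.5, perf_weight=0.5):
    # Normalize against rough reference values (assumed for illustration).
    cost_norm = option.monthly_cost / 1000.0
    latency_norm = option.latency_ms / 100.0
    return cost_weight * cost_norm + perf_weight * latency_norm

options = [
    WanOption("Frame service A", monthly_cost=400.0, latency_ms=80.0),
    WanOption("Leased line B", monthly_cost=900.0, latency_ms=20.0),
    WanOption("Broadband VPN C", monthly_cost=150.0, latency_ms=120.0),
]

# Rank the options from best to worst under the chosen weights.
for opt in sorted(options, key=lambda o: score(o, cost_weight=0.4, perf_weight=0.6)):
    print(f"{opt.name}: score = {score(opt, 0.4, 0.6):.2f}")

Shifting the weights toward performance models a computing community with stricter response-time requirements; shifting them toward cost models a community willing to tolerate higher latency in exchange for lower tariffs.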
Server load balancing
The explosive growth of Internet business, along with the proliferation of new IP-based enterprise applications, is creating a heightened requirement for continuous availability of mission-critical data. Spreading users across multiple independent systems can result in wasted capacity on some systems while others are overloaded. By employing server load balancing within a cluster of systems, users are distributed across available systems based on the load on each system. When workloads grow beyond the capacity of a single machine, the traditional approach is to replace it with a larger machine; this can be costly and requires downtime for the hardware upgrade. Server load balancing allows a machine to be added to the cluster without disrupting work that is executing on the other machines. When the new machine comes online, work can start to migrate to it, reducing the load on the existing machines. Individual application instances or machines can fail (or be taken down for maintenance) without shutting down service to end users. Users on the failed system automatically reconnect to an alternative server without being aware of the interruption.
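As a concrete illustration of the mechanism described above, the sketch below dispatches each request to the healthy server with the lightest current load, lets a new machine join the cluster without downtime, and shifts traffic away from a failed server. The server names, load counters, and helper methods are hypothetical and simplified; a production load balancer would track health and load through probes and real connection state.

# Minimal sketch of least-loaded dispatch with failover.
# Server names, capacities, and the dispatch() helper are hypothetical.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    healthy: bool = True
    active_requests: int = 0

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)

    def add_server(self, server):
        # New machines join the cluster without disrupting existing work.
        self.servers.append(server)

    def dispatch(self):
        # Send the next request to the healthy server with the lightest load.
        candidates = [s for s in self.servers if s.healthy]
        if not candidates:
            raise RuntimeError("no healthy servers available")
        target = min(candidates, key=lambda s: s.active_requests)
        target.active_requests += 1
        return target

# Usage: requests spread across the cluster; when one server fails,
# traffic automatically shifts to the remaining healthy machines.
cluster = LoadBalancer([Server("app1"), Server("app2")])
for _ in range(4):
    cluster.dispatch()
cluster.servers[0].healthy = False   # simulate a failure or maintenance window
cluster.add_server(Server("app3"))   # scale out without downtime
print([(s.name, s.active_requests, s.healthy) for s in cluster.servers])
print("next request goes to:", cluster.dispatch().name)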