Understanding Internetworking Infrastructure

Chapter focus: the challenges associated with changing infrastructure.
Internetworking infrastructures = the totality of existing client and server systems, new externally provided services, and older legacy systems.

Internetworking technologies add to, improve, and interconnect older systems to yield infrastructures with complex operational characteristics. Benefits include:

  • Older services can be delivered in new, customer-responsive ways.
  • Cost structures underlying new service delivery methods are superior to older methods.
  • New business models enabled by the new service possibilities have emerged.
  • Industries can restructure to realize greater efficiencies/capabilities as part of a long-term trend that will continue and accelerate regardless of occasional technology market slumps.

Challenges of changing infrastructure:

  • New business models/systems cannot succeed unless they can be relied on to operate at key moments.
  • New technologies provide less value if they cannot interoperate effectively with older technologies still present in many companies.
  • IT infrastructure determines a company’s differentiating capabilities.
  • If a company deploys a technology that is a loser in the marketplace, the company can be left with poor (or no) vendor support, inferior business capabilities, and costly-to-maintain infrastructure that cannot easily be shut down or replaced.
  • Infrastructure decisions bridge technology and business domains. As such, infrastructure management requires understanding and appropriate assignment of responsibilities for these technology-business domains.

A. The Drivers of Change: Better Chips, Bigger Pipes

Moore’s Law: In 1965, Gordon Moore, cofounder of Intel, noticed that the performance of memory chips doubled every 18 to 24 months, whereas their size and cost remained roughly constant (see Figure 5.1, p. 280, and Figure 5.2, p. 282).
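
As a rough illustration of the compounding Moore’s Law implies, the Python sketch below computes how much performance grows over a ten-year horizon; the horizon and the exact doubling periods are illustrative assumptions, not figures from the text:

```python
# Back-of-the-envelope Moore's Law calculation (illustrative assumptions only):
# if performance doubles every 18 to 24 months, how much does it grow in 10 years?

def growth_factor(years: float, doubling_months: float) -> float:
    """Multiple by which performance grows over `years` at the given doubling period."""
    doublings = (years * 12) / doubling_months
    return 2 ** doublings

for doubling_months in (18, 24):
    factor = growth_factor(10, doubling_months)
    print(f"Doubling every {doubling_months} months -> roughly {factor:,.0f}x in 10 years")
```

Even at the slower 24-month pace this works out to roughly a 32x improvement per decade, which is why the cost structures of successive infrastructure generations differ so sharply.
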
Evolution of Corporate IT Infrastructure:

    • Mainframe-Based, Centralized Computing Architecture—1960s and 1970s
      • Specialized data processing staffs manned large mainframe computers accessed via awkward punch card, Teletype, and terminal machines.
      • Dealings between humans and computers were not very interactive.
      • Programs ran infrequently, in batches, often only once a day.
      • Access devices were “dumb”: with little inherent capability, they served as murky windows into complex mainframes.
      • Mainframes provided all computational and storage capabilities.
      • Early networks were simple; all they did was handle traffic between a few large mainframe computers.
    • PC-Based Distributed Computing—1980s
      • Computing spread throughout the organization and into the hands of business users.
      • Reliance on data processing staff became a distant memory.
      • Local area networks (LANs) allowed business people to share spreadsheets, word processing, and other documents, and to use common printers.
    • Client-Server Movement—late 1980s and early 1990s
      • High-powered yet distributed computers (servers) combined with more elaborate networks and desktop PCs (clients) to provide IT services formerly delivered by the mainframe, such as payroll, order management, and sales support.
    • Internetwork-Based Computing—mid-1990s to present
      • The commercial Internet, the Web, and their underlying protocols led to a new stage of evolution.
      • Transmission Control Protocol and Internet Protocol (TCP/IP) provided a robust standard for routing messages between LANs and created the potential to connect all computers on an ever-larger wide area network (WAN).
      • Internetworking technologies were a legacy of U.S. Department of Defense efforts to develop communication networks that could keep operating even if individual communication lines or nodes were attacked by an enemy.
      • TCP/IP and other Internet protocols and technologies were open standards, not owned by any person or company.
      • Self-service computer hookup facilitated rapid growth of the worldwide Internet.
      • At first, the Internet was used primarily for exchanging e-mail and large data files.
      • The Web, with its graphical user interfaces, made Internet communication valuable to non-technical users.
      • The Web also made network resources (like distant databases) and capabilities (like over-the-Net collaboration) accessible.
      • Metcalfe’s Law—the usefulness of a network increases with the square of the number of users connected to it (a small numerical sketch follows this list).
      • Reductions in the cost of computing power and in the cost of exchanging information between computers have driven ongoing changes in the business landscape.
      • However, most companies own a mix of technologies from different computing eras. Some still rely heavily on mainframes and have redefined them as enterprise servers.
      • Understanding how shifting technology can combine with “legacy” systems to change business capabilities is a prerequisite for understanding how to manage IT infrastructures.
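
As a minimal, purely illustrative Python sketch of Metcalfe’s Law (the user counts below are arbitrary assumptions, not figures from the text), one proxy for a network’s usefulness is the number of possible pairwise connections:

```python
# Illustrative only: Metcalfe's Law says a network's usefulness grows roughly
# with the square of its number of users; one proxy is the count of possible
# pairwise links, n * (n - 1) / 2.

def pairwise_links(n_users: int) -> int:
    """Number of distinct user-to-user connections in a network of n users."""
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} users -> {pairwise_links(n):>7} possible connections")
```

A tenfold growth in users yields roughly a hundredfold growth in possible connections, which is the sense in which a network’s usefulness compounds as it grows.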

B. The Basic Components of Internetworking Infrastructures
IT infrastructure can be divided into three categories:

  • Network—the technologies (hardware and software) that permit exchange of info between processing units and organizations.
  • Processing systems—the hardware and software that together provide an organization’s ability to handle business transactions.
  • Facilities—the physical systems that house and protect computing and network devices.

Table 5.1 lists some of the supporting core technologies and identifies some key management issues that arise for each of the three components.
Internetworking creates many more degrees of freedom in how components can be arranged and managed. More degrees of freedom create possibilities for cost reduction, new capabilities, and new business models, but also pose challenges in understanding the implications of possible infrastructure designs and management actions.

1. The Technological Elements of Networks

  • Local Area Networks
    • Provide a way for computers that are physically close together to communicate.
    • LAN technologies define the physical features of solutions to local communication problems and needs, and the protocols (rules) for conversations between computers.
    • Hubs, Switches, Wireless Access Points, and Network Adapters
      • Allow computers to be connected in LANs.
      • Hubs and switches serve as central junctions into which cables from the computers on a LAN are connected.
      • Wireless access points connect wireless devices into hubs and switches.
      • Hubs are simple connection devices, but switches vary in complexity and capability and are used to connect LANs and larger networks to each other.
      • Network adapters fitted into a LAN’s computers translate the computer’s communications into a language that can be broadcast over the LAN and understood by listening computers. They also listen for communications from other computers and translate them into terms the connected computer can understand.
  • Wide Area Networks
    • Provide a way for computers physically distant from each other to communicate.
    • Networks of networks that enable LANs to connect and communicate.
    • WANs define the physical features of solutions and the standards for conducting conversations between computers and communication devices over long distances.
    • A WAN inside a company’s physical premises is called an intranet.
    • A WAN that extends outward from a company’s physical premises to business partners is an extranet.
    • Routers
      • Enable internetworking, the relaying of messages across large distances (a toy next-hop sketch follows this list).
      • Listen in on LAN conversations and recognize messages intended for computers that are not on that LAN, then relay those messages to other routers. Eventually, a message reaches a router that knows the destination machine’s LAN and can complete delivery.
  • Firewalls and Other Security Systems and Devices
    • Firewalls—act as security sentries at the boundaries of an organization’s internal network to protect it from intrusion from the outside.
    • Intrusion detection systems (IDSs)—software tools such as network monitoring software and hardware devices such as sensors and probes.
    • Other devices enable the opening of virtual tunnels across public and private networks to create virtual private networks (VPNs).
  • Caching, Content Acceleration, and Other Specialized Network Devices
    • Caching—accelerates the delivery of info across the network by storing it in a location close to the destination machine. Used for info that does not change often.
    • Other devices assure the efficient transmission of time-dependent info, such as the sound and image data that accompany internetwork-based video delivery or video teleconferencing.
    • Quality of service (QoS) features help guarantee that such time-sensitive traffic arrives intact and on time.
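
The router bullets above describe next-hop forwarding in prose. Here is a minimal, hypothetical Python sketch of that idea; the networks, addresses, and actions are invented for illustration and are not from the text or any real configuration:

```python
# Toy longest-prefix-match lookup, the core decision a router makes when it
# relays a packet toward a LAN it does not serve. Illustrative only.
import ipaddress

ROUTES = [
    (ipaddress.ip_network("10.1.0.0/16"), "deliver on the local LAN"),
    (ipaddress.ip_network("10.2.0.0/16"), "forward to router B"),
    (ipaddress.ip_network("0.0.0.0/0"),   "forward to the default gateway"),
]

def next_hop(destination: str) -> str:
    """Return the action for a destination, preferring the most specific matching route."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, action) for net, action in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

for dest in ("10.1.4.7", "10.2.9.1", "172.16.0.5"):
    print(dest, "->", next_hop(dest))
```

A real router makes this decision for every packet it sees and also exchanges routing information with neighboring routers so its table stays current.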

2. The Technological Elements of Processing Systems

  • Client Devices and Systems
    • Devices—PCs, handheld devices, cell phones, and automotive components.
    • Systems—the software that runs on these devices to perform business functions, manage interaction with other computers, and handle low-level client machine operations.
    • Modern clients can perform significant business functions even when separated from a network. Mobile users can use clients in both network-connected and unconnected modes.
  • Server Devices and Systems
    • Devices—handle the heavy processing required for high-volume business transactions and permit sharing of info across a large number of computer users.
    • Systems—software to carry out mainline business functions, manage transactions from other computers, and handle low-level machine operations.
    • While clients perform front-end processing (interaction with users), servers perform back-end processing (heavy computation or interaction with other back-end computers).
    • Servers are often physically located in data centers and managed by central staff, as their mainframe ancestors were.
    • Increasingly designed as specialized appliances targeted at specific functions: database servers, Web servers, application servers.
  • Mainframe Devices and Systems
    • Still do the vast majority of business-critical transaction processing.
    • Some are modern, high-performance machines that interoperate well with internetworks.
    • Others are relics of an earlier era that still perform vital business functions.
    • However, as computing infrastructures become more interconnected, legacy mainframe systems pose complications:
      • Many older mainframes cannot use the open protocols of the internetworking world.
      • Some mainframes process jobs in batches, while modern internetworking systems are designed to operate in real time, processing new orders at the time they occur.
      • Interfaces between legacy mainframes and internetworks cannot always overcome the problems associated with the interaction of such different technologies.
  • Middleware
    • Enabling utilities, message handling and queuing systems, protocols, standards, software toolkits, and other systems that help clients, servers, mainframes, and their systems coordinate activities in time and across networks (a minimal queuing sketch follows this list).
    • Often runs on servers.
    • Is key to utility, on-demand, or grid computing.
    • Essential for improving the flexibility, capacity utilization, efficiency, and effectiveness of modern IT infrastructure.
  • Infrastructure Management Systems
    • Monitor the performance of systems, devices, and networks.
    • Make sure that too many transactions do not flow to one computer while another is underused.
  • Business Applications
    • Some are custom-built in-house by IT staff.
    • Others are off-the-shelf, ranging from small client applications, such as a spreadsheet program, to huge ERP systems.
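
The middleware bullets above emphasize coordination "in time and across networks." A minimal Python sketch of the queuing idea follows; it is a local, in-process illustration only (real middleware products span machines and vendors), and all names are invented:

```python
# Minimal message-queuing sketch: a client enqueues business transactions and a
# server worker processes them later, so the two sides are decoupled in time.
import queue
import threading

order_queue: "queue.Queue[dict]" = queue.Queue()

def client_submit(order_id: int) -> None:
    """Client side: drop a transaction on the queue and move on."""
    order_queue.put({"order_id": order_id, "item": "widget", "qty": 2})

def server_worker() -> None:
    """Server side: drain the queue and process each transaction."""
    while True:
        order = order_queue.get()
        if order is None:          # sentinel value tells the worker to stop
            break
        print(f"processed order {order['order_id']}")
        order_queue.task_done()

worker = threading.Thread(target=server_worker)
worker.start()
for i in range(3):
    client_submit(i)
order_queue.join()                 # wait until every submitted order is handled
order_queue.put(None)              # shut the worker down
worker.join()
```

The same decoupling lets a modern front end keep accepting orders even when a batch-oriented legacy system only picks them up later, which is one reason middleware matters for the legacy complications listed above.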

3. The Technological Elements of Facilities
Facilities are now an important aspect of infrastructure management because of demands for always-on, 24-hours-a-day, 7-days-a-week operations.

        • Buildings and Physical Spaces—facility size, physical features, how readily it can be reconfigured, and how well it protects its contents from external disruptions are important factors.
        • Network Conduits and Connections—the amount of redundancy in physical network connections, number/selection of partners who will provide “backbone” connectivity to external networks, and the capacity of the data lines leased from service providers.
        • Power—supplied from multiple power grids, uninterruptible power supplies, backup generators, or privately owned power plants.
        • Environmental Controls—shielding computers from temperature and humidity changes.
        • Security—protecting computers from physical and network-based attacks through controlled access to machines, security guards, cages, and locks.

4. Operational Characteristics of Internetworks
Internetworking technologies are different from other information technologies, in terms of how they perform and should be managed:

  • Internetworking technologies are based on open standards
    • TCP/IP, the primary common language of internetworking technologies, defines how computers send and receive data packets.
    • Because these standards were developed using public funds, they are public and open, not proprietary.
    • The open standards make systems from different vendors more interoperable.
    • Competition is thus increased, prices are lower, and performance is better.
    • The commitment to open standards has led to the development of other standards, such as the Hypertext Transfer Protocol (HTTP), used to deliver Web content.
  • Internetworking technologies operate asynchronously
    • Information sent over an internetwork does not employ dedicated, bidirectional connections between sender and receiver.
    • Packets of info with accompanying address info are sent toward a destination, sometimes without any prior coordination between sender and receiver.
    • For Web information, sender and receiver must be connected to the internetwork at the same time.
    • For other services, like e-mail, the receiver’s computer need not even be switched on at the time the message is sent.
  • Internetwork communications have inherent latency
    • Latency—the variable wait time between the sending of a message and the arrival at the destination of the last packet in the message (a small reassembly sketch follows this list).
    • This occurs because internetworks are connected by links of varying capacity.
    • Some packets flow quickly through wide links while others move more slowly through narrow links, so the packets that together constitute a single message do not arrive at the destination at the same moment.
    • Latency can be unpredictable.
    • However, managers can take actions to ensure that latency remains within certain tolerable limits.
    • New routing technologies make it possible to move high-priority packets to the top of the queues that form at narrow network links.
  • Internetworking technologies are naturally decentralized
    • Because the Defense Department dictated that computer networks have no single points of failure, internetworks have no central traffic control point.
    • Computers connected to the network do not need to be defined to a central control authority.
    • No central authority oversees or governs the development or administration of the public Internet, aside from the bodies that assign TCP/IP addresses.
  • Internetworking technologies are scalable
    • Because communication is routed along multiple paths, an internetwork as a whole is not affected significantly when a path is removed; traffic is simply routed a different way.
    • Additional paths can be added in parallel with overworked paths.
    • If a network segment becomes overloaded, it can be split into more manageable subnetworks.
    • These technologies allow more flexible expansion than most other network technologies do.
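
The asynchrony and latency bullets above note that the packets making up one message can arrive out of order and at different times. Here is a small illustrative Python sketch in which random shuffling stands in for packets taking paths of different capacity; it is not an implementation of TCP itself:

```python
# Packets of a single message arrive out of order; the receiver buffers them
# and reassembles the message by sequence number once the last packet is in.
import random

message = "INTERNETWORKING"
packets = [(seq, ch) for seq, ch in enumerate(message)]

random.shuffle(packets)              # simulate out-of-order, variable-latency arrival

received = {}
for seq, payload in packets:         # receiver buffers whatever shows up, whenever
    received[seq] = payload

reassembled = "".join(received[seq] for seq in sorted(received))
assert reassembled == message
print(reassembled)
```

The wait for that final packet is exactly the latency the summary describes, and priority queuing at narrow links is one way managers keep it within tolerable limits.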

C. The Rise of Internetworking: Business Implications

Google CEO Dr. Eric Schmidt: high-capacity networks enable a computer to interact just as well with another physically distant computer as with one that is inches away.

  • The communication pathways inside a computer become indistinguishable from the pathways that connect computers. The network itself becomes a computer.
  • Improved connections between machines, departments, companies, and customers mean quicker realization of economic value when parties interact.
  • Because the physical location of processing is less important, new possibilities for outsourcing, partnerships, and industry restructuring emerge.
  • Challenges: rising complexity, unpredictable interactions, and new types of threats to businesses and consumers.

1. The Emergence of Real-Time Infrastructures

With real-time internetworking infrastructures, customers are serviced and economic value is realized immediately, rather than over hours, days, or weeks.
The potential benefits of real-time infrastructures are:

  • Better data, better decisions
    • Until recently organizations had to keep copies of the same data in many places.
    • Keeping the data synchronized was difficult, and discrepancies between copies of data led to errors, inefficiencies, and poor decision making.
    • Now it is becoming possible to run a large business based on a set of financial and operational numbers that are consistent throughout the enterprise.
  • Improved process visibility
    • New technologies based on open standards and compatible back-office transaction systems let users instantaneously view transactions at each step in procurement and fulfillment processes.
  • Improved process efficiency
    • Efficiency improvements result directly from enhanced process visibility. Manufacturing workers who can see what supplies and orders are coming their way tend to hold less buffer stock to guard against uncertainty, which reduces working capital, shortens cycle times, and improves return on investment (ROI).
  • From make-and-sell to sense-and-respond
    • Real-time infrastructures are necessary for achieving highly responsive operations.
    • If operating infrastructures come close enough to real time, value-adding activities can be performed in response to actual customer demand rather than to forecasted customer demand.
    • Dell Computer’s make-to-order process avoids losses caused by demand-forecasting errors.
    • Large enterprise systems such as SAP or Oracle have been used to renew transaction infrastructures, or products from a variety of vendors can be used.

2. Broader Exposure to Operational Threats

On Oct. 19, 1987, the Dow Jones Industrial Average fell more than 500 points, the 20th century’s largest single-day percentage decrease.
The cause:

  • Computerized program trading by large institutional investors: computers initiated transactions automatically, without human intervention, when certain triggering conditions appeared in the markets.
  • The automatic trades themselves created a chain reaction: they set off more automatic trades, which created conditions that set off still more automatic trades, and so on (a toy simulation follows below).
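
To make the chain-reaction idea concrete, here is a toy Python simulation; the prices, trigger levels, and one-point selling impact are invented assumptions and have nothing to do with the actual 1987 trading rules:

```python
# Toy cascade: each automated program sells once the price reaches its trigger
# level, and every sale pushes the price lower, tripping the next program.
price = 100.0
triggers = [99.0 - i for i in range(15)]   # staggered trigger prices: 99, 98, ..., 85

price -= 1.5                               # a modest external shock starts the slide
sales = 0
for trigger in triggers:                   # programs watch the market continuously
    if price <= trigger:                   # triggering condition appears...
        price -= 1.0                       # ...the program sells, pushing the price lower
        sales += 1

print(f"{sales} automated sell programs fired; price fell from 100.0 to {price:.1f}")
```

In this sketch a 1.5-point shock ends up as a 16.5-point decline because each automated response creates the conditions for the next one, which is the pattern the summary warns about for any real-time transaction chain that runs without human intervention.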

The issue:

  • As batch-processing delays are eliminated and more transactions move from initiation to completion without intervention by human operators, the potential grows for computerized chain reactions that produce unanticipated effects.
  • Real-time operations demand 24x7 availability, and effective disaster recovery procedures must be in place to respond to incidents.
  • Technologies of the past denied access to systems unless someone intervened to authorize access. Because internetworking systems evolved to support communities of researchers, access is allowed unless someone intervenes to disallow it.
  • Security measures to support commercial relationships, therefore, must be retrofitted onto the base technologies.

3. New Models of Service Delivery

  • As increasingly reliable networks make the physical location of computers less important, services traditionally provided by internal IT departments can be acquired externally, across internetworks, from service providers.
  • Standardization and technology advances permit specialization by individual firms in value chains, resulting in economies of scale and higher service levels.
  • As IT service models proliferate, service delivery depends on a growing number of service providers and other partners. Selecting strong partners and managing relationships are vital to reliable service delivery.

4. Managing Legacies

  • Fitting new infrastructure into complex legacy infrastructure is a huge challenge, since legacy systems are often based on outdated, obsolete, and proprietary technologies.
  • Legacy processes, organizations, and cultures must also be managed, as IT infrastructures are changed.

D. The Future of Internetworking Infrastructure

Challenges that internetworking infrastructure does not yet fully meet:

  • Markets do not tolerate the uncertainties of unreliable or unavailable infrastructure.
  • Business transactions cannot flourish when infrastructure is not highly secure.
  • Biggest challenge of all: Internetworking technologies must support all or nearly all the elements of business transactions that can occur in face-to-face transactions.

Questions for Class Discussion:

  1. Why is changing IT infrastructure such a challenge?
  2. What are the four stages in the evolution of corporate IT infrastructure?
  3. What are the three main categories of internetworking infrastructures?
  4. How are internetworking technologies different from other information technologies?
  5. What are some potential benefits of real-time infrastructures?
  6. What are the potential dangers of using computerized systems like automatic trading systems?
  7. Why are selecting strong partners and managing relations with them increasingly important?
  8. What kinds of problems do legacy systems pose?
  9. What is the biggest challenge identified by the text for the future of internetworking infrastructure?

Source: http://www.usi.edu/business/aforough/Fall2006/cis601f2006/SummaryChapter%205.doc
