Tuesday, October 19, 2010

Installing Twisted-Pair Cable - Installation guidelines

The hardest part about installing network cable is the physical task of pulling the cable through ceilings, walls, and floors. This job is just tricky enough that I recommend that you don’t attempt it yourself except for small offices. For large jobs, hire a professional cable installer. You may even want to hire a professional for small jobs if the ceiling and wall spaces are difficult to access. Here are some general pointers to keep in mind if you decide to install cable yourself:
  1. When running cable, avoid sources of interference, such as fluorescent lights, big motors, X-ray machines, and so on. The most common source of interference for cables that are run behind fake ceiling panels is fluorescent lights; be sure to give light fixtures a wide berth as you run your cable. Three feet should do it.
  2. The maximum allowable cable length between the hub and the computer is 100 meters (about 328 feet).
  3. When you run cable above suspended ceiling panels, use cable ties, hooks, or clamps to secure the cable to the actual ceiling or to the metal frame that supports the ceiling tiles. Don’t just lay the cable on top of the tiles.
  4. When running cables through walls, label each cable at both ends.
Getting the tools that you need

Wire cutters: You need big ones for thinnet cable; smaller ones are okay for 10baseT cable. If you’re using yellow cable, you need the Jaws of Life.

A crimp tool: You need the crimp tool to attach the connectors to the cable. Don’t use a cheap $10 crimp tool. A good one will cost $100 and will save you many headaches in the long run. Remember this adage: When you crimp, you mustn’t scrimp.

Wire stripper: You need this only if the crimp tool doesn’t include a wire stripper.

7 Habits for Effectively Leading Healthcare Interoperability Initiatives

Habit 1: Be Proactive

The proactive habit can be applied in multiple ways to foster healthcare interoperability.

Flexibility in Data Transformation

First, multiple applications or healthcare providers require patient information to be communicated in a specific data format, and each vendor or provider, of course, believes that their format should be the one followed. One could be reactive and simply wait for the other vendor or provider to change the way they accept or send patient data; however, this results in a stalemate. The better approach is to be flexible and transform the data in the middle to meet the differing specifications. An added benefit of this approach is the ability to implement a best-of-breed application strategy, since the differing data formats can be transformed easily in the middle.
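To make the idea of transforming data "in the middle" concrete, here is a minimal sketch. The field names and the mapping are purely illustrative assumptions, not any real vendor's specification:

```python
# Hypothetical sketch: renaming patient-data fields from one vendor's
# format to another's. Field names here are made up for illustration.

# How "vendor A" labels its fields, mapped to the labels "vendor B" expects.
FIELD_MAP = {
    "pt_id": "patient_id",
    "pt_name": "patient_name",
    "dob": "date_of_birth",
}

def transform(record_a: dict) -> dict:
    """Rename vendor A's fields to vendor B's names, dropping extras."""
    return {b_key: record_a[a_key]
            for a_key, b_key in FIELD_MAP.items()
            if a_key in record_a}

record = {"pt_id": "12345", "pt_name": "DOE^JANE", "dob": "19700101"}
print(transform(record))
# {'patient_id': '12345', 'patient_name': 'DOE^JANE', 'date_of_birth': '19700101'}
```

A real interface engine does far more (parsing HL7 segments, validation, routing), but the core idea is the same: neither side has to change, because the mapping lives in the middle.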

Leveraging Engine Technology

Second, working with other application vendors or medical device manufacturers can be a restraining experience. Waiting for point-to-point interfaces to be developed, delivered, and tied to their queues can be frustrating. Being proactive can be liberating. By leveraging interface engine technology, independence from various vendors can be gained while delivering healthcare interfaces to your customers in a more timely fashion.

Regional and Community Initiatives

Third, there are several regional or community-based initiatives which are driving RHIOs or other healthcare interoperability efforts. Similarly, the Federal government has dedicated resources and issued directives around a more integrated healthcare system. Why do anything? Let the agencies and communities drive it. Although that is a possible approach to take, it is clearly a reactive one and may result in more pain later.

Organizations that take the initiative and are proactive in connecting with their departments or referring physician communities are realizing benefits today. From saving dollars with more efficient processes to increasing revenues by offering a better way to interact, the proactive approach can have a positive impact today while also offering a direction for struggling community initiatives.

IT Service

Finally, another proactive approach to healthcare interfacing is the way IT service levels are delivered. The reactive approach is to claim ignorance because the monitoring capabilities are not available. The proactive approach is to be alerted the moment an interfacing parameter misses a defined threshold, receiving a page or email with the change in status. Essentially, with this approach, you are the first to know and the first to respond. By being proactive in healthcare integration, the end result is:
  • Adaptability – being flexible to adapt to the various data requirements
  • Independence – removing total reliance on others to achieve your objectives
  • Satisfaction – delivering responsive customer service
With a proactive mindset and approach, the move from being dependent to being interdependent begins.
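The proactive alerting described above can be sketched in a few lines. The metric (queue depth), the threshold, and the alert callback are all assumptions standing in for whatever a real monitoring tool would page or email:

```python
# Illustrative sketch of proactive interface monitoring: check a metric
# against a defined threshold and fire an alert the moment it is breached.
# The metric name and threshold are assumptions, not a real product's API.

def check_interface(queue_depth: int, threshold: int, alert) -> str:
    """Return the interface status, calling alert() on a threshold breach."""
    if queue_depth > threshold:
        alert(f"Interface queue depth {queue_depth} exceeds threshold {threshold}")
        return "ALERT"
    return "OK"

alerts = []
status = check_interface(queue_depth=150, threshold=100, alert=alerts.append)
print(status, alerts)  # a breach returns "ALERT" and records one notification
```

In practice the alert callback would send the page or email; the point is that the integration team learns of the problem before the customer does.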

Habit 2: Begin with an End in Mind

What is the end game? Is it streamlined patient data flow? Is it robust, connected healthcare workflows? Is it physician outreach, connecting to practices in an electronic manner? Is it just doing it in a simpler, less costly, and easier to manage way?

Envisioning what healthcare interoperability means for your organization is important in developing and implementing the right strategy and healthcare IT tactics. You need a target. Like the old adage says, “If you aim at nothing, you'll hit it every time.” Direct your aim at the end in mind. If just connecting two applications to each other is the end game, then a point-to-point interface may be the best approach.

If the end game is monitoring a point-to-point interface while extending that leverage to other applications, then a mixed approach—point-to-point and interface engine—may be best. If implementing a best-of-breed application strategy while connecting to referring physicians, laboratories, and imaging centers is the end game, then an integration platform may be the best approach.

Deciding what you want to achieve for your hospital, radiology practice, laboratory, or clinic is important in deciding what integration approach should be taken. Without a visualized end game, organizations usually just muddle through. Muddling through costs more, frustrates more, and results in less. Recently, an executive director at a radiology practice was determining ways to offer better service to their referring physician community. The end game in her mind was delivering better service, and she knew that certain technology investments were necessary to realize that end game.

In her own words, here is how Habits 1 and 2 came into play in the move to a more interdependent approach.

“We knew we needed to integrate more technology across our practice. We needed to increase the efficiency of processes associated with billing and diagnostic reporting. In addition, we are receiving an increasing number of requests from referring physicians for HL7 interfaces. We wanted the control to respond quickly to these requests and the flexibility to accommodate all of the different HIS, PMS and EMR systems they might be using.”

With better service as the vision, the elements that needed to be in place in order to make it a reality came into full view.

Habit 3: Put First Things First

The patient is first. Delivering high quality patient care in a timely and accurate manner is fundamental. What helps facilitate putting patients first? There are many answers to this question. Having the right physicians, nurses, and other personnel is an essential part of the formula. Having the right facilities and equipment is a vital part of the formula. Having the right systems, applications, and ways to connect them is an integral part of the formula.

While the quality of care is largely determined by human hands, an expert mind and caring spirit, the delivery of the care is equally important. Healthcare IT plays a critical role by managing the systems and integrating the data flow. With IT support, the patient care experience becomes seamless through the various workflows.

In healthcare interoperability initiatives, key IT decisions need to be made in order to determine what needs to be put first. Decisions include:

Defining the integration benchmarks and desired results
  • Development cycle time
  • Deployment cycle time
  • Resource requirements
  • Manageability
Defining the desired turn around times
  • Delivering patient reports to referring physicians
  • Response time to correct a connection issue
  • Re-sending an HL7 message from log files
Defining the operational cost structure to the integration platform environment
  • Resource type required (e.g., Java engineer, IT analyst, etc.)
  • Cycle time requirements
  • Manageability requirements
These decisions along with others will drive your healthcare integration approach and aid in identifying which principles should come first.

For example, a large hospital used older technology to facilitate their integration efforts. The platform worked, but several issues arose. First, the existing integration platform required skilled Java developers, and these resources can be expensive. Second, the development and deployment cycle times for new interfaces were long and costly. Third, insight into how the interfaces were performing was challenging. On the surface, everything was working fine. Underneath the surface, challenges and issues were brewing, threatening to undermine the vision of delivering first-class patient experience.

Instead of waiting, the IT department took the initiative, explored new integration platforms, and initiated a migration. The result was better manageability of the integration environment and an exponential improvement in cycle times. In fact, over 30 interfaces were developed and deployed within the first six months after one training class.

Although the change was in the IT infrastructure, patient care was positively impacted. Key comments from the IT department included: “…our patients do not experience delays in the services they receive…” “We are able to deliver high quality patient care… orchestrating the clinical data flow between our healthcare applications.” With patient care coming first, the IT organization aligned itself to deliver.

Getting stuck in the IT issues (e.g., old technology platforms, “we’ve always done it this way,” etc.) keeps organizations in a dependent model. Moving beyond the typical IT issues and focusing on the important mission moves the overall organization beyond dependency.

Habit 4: Think Win/Win

With the first three habits firmly in place, independence is gained, and the transition from dependency to interdependency can begin. What does interdependency mean in healthcare? It means working with external healthcare providers in a seamless, integrated way. It also means facilitating data flow efficiently between different internal applications.

The seamless, productive interaction with external providers in tandem with high quality, effective data flow between internal applications is an interdependent healthcare environment. The end result of an interdependent healthcare community is enhanced patient care, including less frustration because the care experience is connected.

To gain these attributes of an interdependent healthcare environment, the first step is the win/win habit. How can healthcare IT organizations create win/win mindsets with others? Defining mutually beneficial terms is a start, and it needs to happen at three levels – with departments, external providers, and vendors. Oftentimes, the IT mindset is:
  • Departments – “I’ll deal with it later.”
  • External providers – “How are we going to handle all these additional connection points?”
  • Vendors – “They want how much for one interface? What do you mean it will be six months to get that interface?”
All of these may be valid points, but coming in with that thought process will only push the vision back to one of dependency, not move it forward. The right mindset will help to structure the right approach to continue to move the healthcare interoperability initiatives in the right direction.

For example, a laboratory was doing business as usual. Getting interfaces from their LIS vendor was a long and costly process. At the same time, the CIO saw that many of their referring clinics were beginning to install EMR applications. With a win/win mindset, the CIO determined what their referring clinics required, and they explored new technologies to gain independence from their LIS vendor. By deploying an interface engine approach, healthcare interoperability was extended to over 200 different physician offices. Without the mindset to determine a better approach, this would have been a story of lost business and lost opportunity. Instead, it is one of making a difference in their network of care.

Habit 5: Seek First to Understand, Then to Be Understood

Many times, we jump to what we need rather than listening to what our partners are requesting. A simple question to ask to gain greater understanding is “What are you going to do with the information that I give you?” Asking this question provides greater insight into how what you deliver will be used, and it often highlights additional information that will be required to deliver to or above expectations.

Having a conversation without first understanding the other organization’s objectives, drivers, or concerns will be hollow. From one viewpoint, it will seem like one organization is dictating to the other. From another vantage point, it will be one of “they just don’t get it.” It is much easier to be understood when you first understand the other person or organization’s perspective.

As outlined above, there are three primary players in the healthcare interoperability picture, and each has a differing set of requirements. Understanding each, rather than assuming, is imperative.

Departments. In hospitals, there are many different ancillary applications that support critical functions for different departments. For example, the emergency room department has unique characteristics that require unique applications to support their activities. This extends to other departments from radiology to laboratory to dietary.

The departments are working diligently to perform their responsibilities in the most cost-effective, efficient manner possible. Interoperability is essential for departments in order to gain access to patient information quickly.

What are the key drivers for each department? Understanding the answer to this question will lead to a better understanding. Two key areas to explore include:
  • Integration points – What patient information is required? Are all the points of integration internal or are there external points as well?
  • Manageability – What level of involvement does the department want in terms of integration? Do they want the flexibility to build their own interfaces? Do they want the insight to know the status of the interface points? Do they want to troubleshoot or resend patient messages if problems occur?
Listening to and understanding what is needed will help craft the right approach.

External providers. Who the external providers are depends on your perspective. In many cases, it is the physicians who refer patients; the laboratories who conduct the standard or special tests; or the imaging centers who take, read, and analyze detailed images. The key areas to explore include:

  • Capabilities – What level of capability do the external providers have to electronically send or receive patient information? What time schedule are they on to be electronically connected with selected hospitals?
  • Systems – What systems will accept the information (e.g., EMR, RIS, HIS, LIS, etc.), and what data format is acceptable (e.g., HL7, CCR, etc.)?

Vendors. With vendors, the conversations can sometimes be demanding. Granted, vendors create some of the problems in enabling a cost-effective approach to integrating various applications together. It is like a struggle between countries: each country has its own interests and wants to protect its boundaries and its sovereignty.

Understanding the perspective of the vendor may be critical to determining the best approach. This will be the toughest challenge for many providers to do, but a vital one. By understanding the vendor’s approach to integration or interfacing, you will be able to better define your organization’s healthcare interoperability approach.

Habit 6: Synergize

Although the word “synergize” is an overused term in the business world, it is critical to work with people from other departments or organizations with which you are trying to connect. If interdependence is to be achieved, then the sum of all the parts needs to work consistently and effectively with the whole.

What does synergize actually mean? Another term for synergy is alliance. In the healthcare environment, instead of treating each party as a department or vendor, it may be better to treat them as alliances. For alliances to work, everyone involved needs to work together. That is the point of synergy, and it is necessary to make connected healthcare initiatives work.

A few quick points:
  • Do you involve other departments in the process of determining the best way to improve the flow of patient data?
  • Do you work with your vendors to solve problems?
  • Do you work with your referring physician community or reference laboratories or imaging centers to understand their requirements or to solve interoperability issues?
  • Are you viewed as an alliance partner in your connected healthcare community or as an individual part?
  • How much can your organization take on? Is there another approach to gain leverage?
The key point – recognize the individual differences but work to build an alliance with all the individual organizations involved. It is not an easy task, but each of the habits provides a direction to realize this important point.

Examples of building synergy occur within provider organizations as well as vendor organizations. One example of building synergy is offered from a vendor perspective. Many development organizations try to do it all – build the best features for their application, build the best infrastructure platform on which their application is based, build the best way to capture customer requests into new releases, etc. In reality, doing it all internally can stretch resources and can become uneconomical.

One such vendor was in that situation. It was trying to release new features while also attempting to offer a robust integration platform to meet every customer requirement and incorporate every new healthcare standard that came along. Fractures soon began to emerge as the weight of their “do-it-all” approach bore down on the development staff. Consequently, the R&D director began to open up the approach and look at alternatives.

One alternative was to create an alliance with another company that could offer the integration platform to meet any client requirement and any healthcare standard. Through a collaborative partnership, focus returned to offering new features to meet growing customer requirements while offering robust integration through a seamless partnership. Growth in features, growth in revenue, and growth in customer satisfaction were happening in tandem. Although different habits were utilized to get to this point, this story illustrates synergy at its best.

Habit 7: Sharpen the Saw

If there is only one thing that we can do in our life or in our organizations, it should be to look continuously for ways to improve. Whether it is in our client relationships, the way we solve our problems, or the way that we approach solutions, keeping our eyes open to new ways to do things is a must. This process of renewal will keep progress moving forward.

To achieve healthcare interoperability in our communities, continuous improvement is a must, because – if for no other reason – there are so many changes to which we need to adapt. There is a simple choice – adapt and improve or maintain the status quo and keep the paper flowing.

Improvements can be realized in many different areas including:
  • Resources required to build, test, and implement a connected community
  • Mindset in working with various constituencies – departments, providers, and vendors, etc.
  • Processes or workflows – understanding the desired flow and mapping the right technology to support the vision
  • Technology platforms to support healthcare interoperability
The improvements can be realized through many different resources. From workshops and trade shows to case studies, white papers, and blogs, there are many different avenues to continue to grow and adapt. There also is simple interaction. Talking with people from similar or different organizations to gain their perspectives can open the thought process. Setting aside the time to learn and improve is the first step.


The demands for healthcare interoperability are clearly increasing. How the demands are met will determine the success rate. Stephen Covey provided a great framework to work through most issues and realize most visions. Although it is a practical approach, it is challenging to adopt the habits and make the changes necessary to stop the inherent dependencies and move to a more interdependent environment.

Organizations are achieving varying degrees of success in pursuing an integrated healthcare community. It may be through brute force, new ways, or just luck. Leveraging and using the 7 Habits is one way to make a longer-term impact on the goals and will make the process of getting there more rewarding.

Healthcare interoperability and the 7 Habits seem like a match made for success.

Among the VoIP Manufacturers

Avaya (www.avaya.com)

Avaya makes a wide variety of communications systems and software, including voice, converged voice and data, customer relationship management, messaging, multiservice networking, and structured cabling products and services. According to Gartner, Avaya’s “status as a leader is in part based on the architecture of its Avaya MultiVantage Communications Applications suite, which emphasizes an extensive feature set, scalability, consistent user interface, call processing power, and investment protection.”

Cisco Systems (www.cisco.com)

Cisco Systems makes networking solutions and network hardware and software, including converging voice and data products. According to Gartner, Cisco has “leveraged its strength in large-scale LAN infrastructure markets to win mind share among early adopters of converged networks. Its dealers are extremely effective in selling IT organizations, where many traditional telephony vendors are gaining credibility.”

Alcatel (www.alcatel.com)

Alcatel provides communications solutions to telecommunication carriers, Internet service providers, and enterprises. A publicly traded company with fifty-six thousand employees worldwide, Alcatel focuses on the delivery of voice, data, and video applications to customers and employees. Their OmniPCX communications platform enables a company to selectively operate using traditional or IP telephony methods. The platform is capable of supporting hybrid operations as well.

Siemens (www.siemens.com)

Siemens is a publicly traded company that manufactures electronics and equipment for a range of industries, including information and communications, automation and control, power generation, transportation, medical, and lighting. They provide mobile communication and telephone communication systems to businesses and mobile phones and accessories to consumers. Siemens employs approximately seventy thousand people in the United States and four-hundred-thirty thousand worldwide, with global sales of more than $91 billion in 2004.

NEC (www.nec.com)

Founded in 1905, NEC makes products ranging from computer hardware and software to wireless and IP telephony systems. For the fiscal year ending March 2005, NEC recorded more than $624 million in revenue. They employ about one-hundred-fifty thousand people worldwide.

According to Gartner, “NEC’s portfolio offers various levels of converged IP capabilities, a multitude of features, scalability, and investment protection. Their platforms have an excellent reputation in the education, hospitality, and healthcare vertical markets, with attributes that can attract other organizations with distributed campus environments. NEC Unified Solutions strategy offers a menu of services that support the planning, implementation, network readiness and ongoing service needs of IP telephony.”

Yealink (www.yealink.com)

Yealink is a professional designer and manufacturer of IP phones and video phones for the worldwide broadband telephony market. Yealink products are fully compatible with the SIP industry standard and interoperate broadly with the major IP-PBX, softswitch, and IMS platforms on the market today. High quality, ease of use, and affordable pricing are what Yealink continually strives to deliver.

Founded in 2001 in Xiamen, China, Yealink has 9 years of VoIP experience and has focused exclusively on VoIP products. Its core team has 16 years of experience in telephony. More than 60 VoIP R&D engineers demonstrate the company’s innovative strength by constantly developing new VoIP products and technology, enabling Yealink to consistently deliver world-class IP phones and establishing it as one of the leading designers and manufacturers.

Yealink phones are characterized by a large number of functions that simplify business communication with a high standard of security, and they work seamlessly with a large number of compatible IP-PBXs that support Session Initiation Protocol (SIP).

Yealink has also distinguished itself through years of experience tailoring products to the needs of businesses of different sizes, ensuring that its customers benefit from the interoperability and flexibility of the phones and from compatibility with all kinds of SIP-based telephone systems.


ShoreTel (www.shoretel.com)

ShoreTel, founded in 1998, is a privately held company that is all about IP telephony. Their approach is to evaluate your network first, before designing a solution. The idea here is to determine how ready you are before taking the step into VoIP convergence.

According to Gartner, ShoreTel’s “product architecture gives organizations distributed call control across multiple locations through an IP backbone that supports the use of IP and analog telephones. This enables organizations to implement a converged network at their own pace.”


In the 1940s, a consortium of leaders in the telecommunications industry and in government standardized how customers would be assigned telephone numbers. The telephone number identified a specific pair of wires out of millions of pairs of wires, and a specific phone company switch out of thousands of such devices.

The term circuit-switched describes this setup of circuit wiring, switching devices, and telephone number assignment. The PSTN is sometimes referred to as the circuit-switched or switched network. Because today’s public phone system is still circuit switched, it still relies on the same basic system for telephone number assignment.

VoIP introduced dramatic changes in how the network is used and, over time, VoIP could force changes in how numbers are assigned. With VoIP, phone numbers are no longer tied to specific wires and switches. VoIP routes calls based on network addresses, and phone numbers are simply used because that is what people are familiar with. (VoIP takes care of translating a phone number into a network address.) In the future, as more and more people adopt VoIP-based systems, we may see dramatic changes in phone numbering.
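The translation step can be pictured with a toy sketch. A real VoIP system resolves numbers through signaling protocols such as SIP; the registry and addresses below are made-up illustrations:

```python
# Toy illustration of the idea that VoIP routes by network address and
# merely translates the familiar phone number into one. A real system
# would use SIP registration or ENUM lookups, not a hard-coded table.

DIRECTORY = {
    "555-0100": "192.168.1.20:5060",   # dialed number -> IP endpoint
    "555-0101": "192.168.1.21:5060",
}

def resolve(phone_number: str) -> str:
    """Translate a dialed number into the network address the call routes to."""
    try:
        return DIRECTORY[phone_number]
    except KeyError:
        raise LookupError(f"No endpoint registered for {phone_number}")

print(resolve("555-0100"))   # 192.168.1.20:5060
```

The phone number is just a lookup key; if the endpoint moves to a new address, only the registry entry changes, not the number people dial.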

NTFS drives

Windows NT Server introduced a new type of formatting for hard drives, different from the standard FAT system used by MS-DOS since the early 1980s. (FAT stands for File Allocation Table, in case you’re interested.) The new system, called NTFS (for NT File System), offers many advantages over FAT drives:

  1. NTFS is much more efficient at using the space on your hard drive. As a result, NTFS can cram more data onto a given hard drive than FAT.
  2. NTFS drives provide better security features than FAT drives. NTFS stores security information on disk for each file and directory. In contrast, FAT has only rudimentary security features.
  3. NTFS drives are more reliable because NTFS keeps duplicate copies of important information, such as the location of each file on the hard drive. If a problem develops on an NTFS drive, Windows NT Server can probably correct the problem without losing any data. In contrast, FAT drives are prone to losing information.

SAN is NAS spelled backwards

It’s easy to confuse the terms storage area network (SAN) and network attached storage (NAS). Both refer to relatively new network technologies that let you manage the disk storage on your network. However, NAS is a much simpler and less expensive technology. A NAS device is nothing more than an inexpensive self-contained file server. Using NAS devices actually simplifies the task of adding storage to a network because the NAS eliminates the chore of configuring a network operating system for routine file sharing tasks.

A storage area network is designed for managing very large amounts of network storage — sometimes downright huge amounts. A SAN consists of three components: storage devices (perhaps hundreds of them), a separate high-speed network (usually fiber-optic) that directly connects the storage devices to each other, and one or more SAN servers that connect the SAN to the local area network. The SAN server manages the storage devices attached to the SAN and allows users of the LAN to access the storage. Setting up and managing a storage area network is a job for a SAN expert.

For more information about storage area networks, see the home page of the Storage Networking Industry Association at www.snia.org.

Saving space with a KVM switch

If you have more than two or three servers together in one location, you should consider getting a device called a KVM switch to save space. A KVM switch lets you connect several server computers to a single keyboard, monitor, and mouse. (KVM stands for Keyboard, Video, and Mouse.)

Then, you can control any of the servers from a single keyboard, monitor, and mouse by turning a dial or by pressing a button on the KVM switch. Simple KVM switches are mechanical affairs that let you choose from among 2 to 16 or more computers.

More elaborate KVM switches can control more computers, using a pop-up menu or a special keyboard combination to switch among computers. Some advanced KVMs can even control a mix of PCs and Macintosh computers from a single keyboard, monitor, and mouse.

To find more information about KVM switches, go to a Web search engine such as Google and search for “KVM.”

10Base what?

The names of Ethernet cable standards resemble the audible signals a quarterback might shout at the line of scrimmage. In reality, the cable designations consist of three parts:

  1. The first number is the speed of the network in Mbps. So 10BaseT is for 10Mbps networks (Standard Ethernet), 100BaseTX is for 100Mbps networks (Fast Ethernet), and 1000BaseT is for 1,000Mbps networks (Gigabit Ethernet).
  2. The word Base indicates the type of network transmission that the cable uses. Base is short for baseband. Baseband transmissions carry one signal at a time and are relatively simple to implement. The alternative to baseband is broadband, which can carry more than one signal at a time but is more difficult to implement. At one time, broadband incarnations of the 802.x networking standards existed, but they have all but fizzled due to lack of use.
  3. The tail end of the designation indicates the cable type. For coaxial cables, a number is used that roughly indicates the maximum length of the cable in hundreds of meters. 10Base5 cables can run up to 500 meters. 10Base2 cables can run up to 185 meters. (The IEEE rounded 185 up to 200 to come up with the name 10Base2.) If the designation ends with a T, twisted pair cable is used. Other letters are used for other types of cables.
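The three-part naming rule above can be captured in a short sketch. It covers only the designations mentioned in this section; anything else is out of scope:

```python
import re

# Sketch of the three-part Ethernet cable naming rule: speed in Mbps,
# the word "Base" (baseband), and a media suffix. Only the common
# designations discussed in the text are handled.

COAX_LENGTHS = {"5": 500, "2": 185}  # rough maximum cable length in meters

def parse_designation(name: str) -> dict:
    m = re.fullmatch(r"(\d+)Base(\w+)", name)
    if not m:
        raise ValueError(f"Not an Ethernet cable designation: {name}")
    speed, suffix = m.groups()
    info = {"speed_mbps": int(speed), "transmission": "baseband"}
    if suffix.startswith("T"):
        info["media"] = "twisted pair"
    elif suffix in COAX_LENGTHS:
        info["media"] = "coax"
        info["max_length_m"] = COAX_LENGTHS[suffix]
    else:
        info["media"] = "other"
    return info

print(parse_designation("10Base2"))
# {'speed_mbps': 10, 'transmission': 'baseband', 'media': 'coax', 'max_length_m': 185}
```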

Ethernet folklore and mythology

If you’re a history buff, you may be interested in the story of how Ethernet came to be so popular. Here’s how it happened:

The original idea for the Ethernet was hatched in the mind of a graduate computer science student at Harvard University named Robert Metcalfe. Looking for a thesis idea in 1970, he refined a networking technique that was used in Hawaii called the AlohaNet (it was actually a wireless network) and developed a technique that would enable a network to efficiently use as much as 90 percent of its capacity.

By 1973, he had his first Ethernet network up and running at the famous Xerox Palo Alto Research Center (PARC). Bob dubbed his network “Ethernet” in honor of the thick network cable, which he called “the ether.” (Xerox PARC was busy in 1973. In addition to Ethernet, PARC developed the first personal computer that used a graphical user interface complete with icons, windows, and menus, and the world’s first laser printer.)

In 1979, Xerox began working with Intel and DEC (a once popular computer company) to make Ethernet an industry standard networking product. Along the way, they enlisted the help of the IEEE, which formed committee number 802.3 and began the process of standardizing Ethernet in 1981. The 802.3 committee released the first official Ethernet standard in 1983.

Meanwhile, Bob Metcalfe left Xerox, turned down a job offer from Steve Jobs to work at Apple Computer, and started a company called the Computer, Communication, and Compatibility Corporation — now known as 3Com. 3Com has since become one of the largest manufacturers of Ethernet equipment in the world.

How CSMA/CD works

An important function of the Data Link layer is to make sure that two computers don’t try to send packets over the network at the same time. If they do, the signals will collide with each other and the transmission will be garbled. Ethernet accomplishes this feat by using a technique called CSMA/CD, which stands for “carrier sense multiple access with collision detection.” This phrase is a mouthful, but if you take it apart piece by piece, you’ll get an idea of how it works.

Carrier sense means that whenever a device wants to send a packet over the network media, it first listens to the network media to see whether anyone else is already sending a packet. If it doesn’t hear any other signals on the media, the computer assumes that the network is free, so it sends the packet.

Multiple access means that nothing prevents two or more devices from trying to send a message at the same time. Sure, each device listens before sending. However, suppose that two devices listen, hear nothing, and then proceed to send their packets at the same time? Picture what happens when you and someone else arrive at a four-way stop sign at the same time. You wave the other driver on, he or she waves you on, you wave, he or she waves, you both wave, and then you both go at the same time.

Collision detection means that after a device sends a packet, it listens carefully to see whether the packet crashes into another packet. This is kind of like listening for the screeching of brakes at the four-way stop. If the device hears the screeching of brakes, it waits a random period of time and then tries to send the packet again. Because the delay is random, two packets that collide are sent again after different delay periods, so a second collision is unlikely.

CSMA/CD works pretty well for smaller networks. After a network hits about 30 computers, however, packets start to collide like crazy, and the network slows to a crawl. When that happens, the network should be divided into two or more separate sections that are sometimes called collision domains.
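The listen, send, collide, and back-off cycle described above can be sketched in a few lines of Python. This is a toy simulation under big simplifying assumptions (discrete time slots, perfect carrier sensing, no propagation delay), and all the names are made up for this example:

```python
import random

random.seed(42)  # make the random backoffs repeatable

def transmit_all(stations, max_slots=1000):
    """Let every station send one packet; on a collision, each
    colliding station waits a *random* delay before retrying,
    so a second collision between them is unlikely."""
    pending = {s: 0 for s in stations}   # station -> earliest retry slot
    sent = []
    slot = 0
    while pending and slot < max_slots:
        # Multiple access: every station whose backoff has expired
        # senses the carrier, hears nothing, and transmits now.
        senders = [s for s, ready in pending.items() if ready <= slot]
        if len(senders) == 1:
            sent.append(senders[0])      # channel was free: success
            del pending[senders[0]]
        elif len(senders) > 1:
            # Collision detection: everyone backs off a random time.
            for s in senders:
                pending[s] = slot + random.randint(1, 8)
        slot += 1
    return sent

print(transmit_all(["A", "B", "C", "D"]))
```

Run it and all four stations eventually get their packets through, each after a different random delay — which is exactly why the four-way-stop standoff resolves itself.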

The Seven Layers of the OSI Reference Model

OSI sounds like the name of a top-secret government agency you hear about only in Tom Clancy novels. What it really stands for in the networking world is Open Systems Interconnection, as in the Open Systems Interconnection Reference Model, affectionately known as the OSI model.

The OSI model breaks the various aspects of a computer network into seven distinct layers. These layers are kind of like the layers of an onion: Each successive layer envelops the layer beneath it, hiding its details from the levels above. The OSI model is also like an onion in that if you start to peel it apart to have a look inside, you’re bound to shed a few tears.

The OSI model is not a networking standard in the same sense that Ethernet and Token Ring are networking standards. Rather, the OSI model is a framework into which the various networking standards can fit. The OSI model specifies what aspects of a network’s operation can be addressed by various network standards. So, in a sense, the OSI model is sort of a standard of standards.

The first three layers are sometimes called the lower layers. They deal with the mechanics of how information is sent from one computer to another over a network. Layers 4 through 7 are sometimes called the upper layers. They deal with how application programs relate to the network through application programming interfaces.
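To keep the seven layers and the lower/upper grouping straight, here's a quick Python sketch. (The layer names are the standard OSI ones; the variable and function names are just for this example.)

```python
# The seven OSI layers, numbered from the bottom up.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

def layer_group(n):
    """Layers 1-3 (the lower layers) move bits between computers;
    layers 4-7 (the upper layers) connect application programs
    to the network."""
    return "lower" if n <= 3 else "upper"

for n, name in OSI_LAYERS.items():
    print(n, name, layer_group(n))
```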