MIS 564 DQ1 Ch1 Answer Guidelines
1. What are the advantages of dividing communication networks into layers?
Communication networks are often broken into a series of layers, each of which can be
defined separately, to enable vendors to develop software and hardware that can work
together in the overall network. These layers enable simplicity in development and also in
the comprehension of complex networks. In the end, the strategy of using more simplistic
network layers allows vastly different kinds of equipment to be able to have connectivity
over a common platform or network, using protocols and standards that are applicable to
each narrow slice of the network.
The advantage of dividing the communication networks into layers is vendors can develop
software and hardware to provide the function of each layer separately. This way a vendor
can update their specific layer without impacting another layer. For example,
Cisco could update a network driver and not impact the application layer. This layering
gives a big advantage by allowing each vendor to improve its own layer, providing a
better end-user experience. The vendor only needs to be concerned, when making changes
to its layer, that the change does not impact the layer's ability to hand information to
another layer in the OSI or Internet model.
2. What are the five layers in the Internet network model? Explain them briefly.
The application layer is the application software used by the network user. The transport
layer is responsible for obtaining the address of the end user (if needed), breaking a large
data transmission into smaller packets (if needed), ensuring that all the packets have been
received, eliminating duplicate packets, and performing flow control to ensure that no
computer is overwhelmed by the number of messages it receives. The network layer takes the
packets generated by the transportation layer and if necessary, breaks it into several smaller
packets. It then addresses the message(s) and determines their route through the network,
and records packet accounting information before passing it to the data link layer. The data
link layer formats the message to indicate where it starts and ends, decides when to transmit
it over the physical media, and detects and corrects any errors that occur in transmission.
The physical layer is the physical connection between the sender and receiver, including the
hardware devices (e.g., computers, terminals, and modems) and physical media (e.g., cables).
The five layers in the Internet Network Model are:
1. Physical Layer: This layer deals with the transmission of raw bits over the communication
channel. The main design issue here is ensuring that bits are not altered during
transmission. This layer includes hardware devices like computer modems and hubs, and
physical communication media like satellites and cables.
2. Data Link Layer: This layer deals with the transmission of data from one node to the next node in
the network. In this layer, messages are formatted by indicating their start and end points. This layer
also manages the physical layer by determining when to transmit the data over the communication
medium. Transmission errors are also detected and corrected here.
3. Network Layer: This layer is concerned with routing and controlling the operation of the subnet. It
controls the congestion caused when there are too many packets in the subnet at the same time.
The network layer also finds the address of the next node in the network if it does not already have it.
4. Transport Layer: This layer is responsible for linking the network layer and application layer to
establish end-to-end connections between the sender and receiver. It also breaks long messages
into smaller messages so that they can be transmitted easily.
5. Application Layer: This is the application or the software which is used by the end-user. The
end-user uses this layer to define the messages which are sent over the network. Application Layer
contains high-level protocols like FTP, SMTP, TELNET, DNS. Common examples of Application
Layer include Web-Browsers and Web-Pages.
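The way each layer wraps the data handed down from the layer above can be made concrete with a toy sketch. This is an illustration only: the bracketed header strings are invented stand-ins, not real protocol formats, and the addresses are made up.

```python
# Toy illustration of layered encapsulation in the Internet model.
# Each layer adds its own header around the data handed down from above;
# the receiving side strips the headers in reverse order.

def encapsulate(message: str) -> str:
    app = message                          # application layer: the user's data
    tcp = "[TCP seq=1]" + app              # transport: segmenting, sequencing
    ip = "[IP 10.0.0.2]" + tcp             # network: addressing and routing
    eth = "[ETH aa:bb]" + ip + "[FCS]"     # data link: framing + error check
    return eth                             # the physical layer transmits the bits

def decapsulate(frame: str) -> str:
    # Each layer strips only its own header, never touching the others.
    ip = frame.removeprefix("[ETH aa:bb]").removesuffix("[FCS]")
    tcp = ip.removeprefix("[IP 10.0.0.2]")
    return tcp.removeprefix("[TCP seq=1]")

frame = encapsulate("GET /index.html")
print(frame)
print(decapsulate(frame))  # the original application message comes back intact
```

The point of the sketch is the one that matters for question 1: a vendor can change how any single function is implemented without touching the other layers, as long as the hand-off between layers stays the same.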
3. How are Internet standards developed? Explain the process briefly.
The Internet Engineering Task Force (IETF; http://www.ietf.org) sets the standards that govern
how much of the Internet will operate. Developing a standard usually takes 1-2 years.
Usually, a standard begins as a protocol developed by a vendor. When a protocol is
proposed for standardization, IETF forms a working group of technical experts to study
it. The working group examines the protocol to identify potential problems and possible
extensions and improvements, and then issues a report to IETF. If the report is favorable,
the IETF issues a Request for Comment (RFC) that describes the proposed standard and
solicits comments from the entire world. Once no additional changes have been identified,
it becomes a Proposed Standard. Once at least two vendors have developed software based
on it, and it has proven successful in operation, the Proposed Standard is changed to a Draft
Standard. This is usually the final specification, although some protocols have been elevated
to Internet Standards, which usually signifies a mature standard not likely to change. There is
a correlation between IETF RFCs and ISO standards.
Interestingly, many companies choose to implement hardware and software based on the
proposed or draft standards from such standard-determining processes rather than waiting for
final published standards. One recent and rather prevalent example of this was the Wireless-
N (802.11n) standard. Hardware was available from virtually every wireless hardware
manufacturer before IEEE 802.11n was a published standard. They usually indicated on
the box, as well as in the manual, that the hardware was Wireless-N Draft compatible,
meaning that it was not guaranteed to be compatible with the final published standard. They
did this because the process was taking an excessive amount of time, they felt there was a
reasonable certainty that the draft specification would be compatible with the final published
specification, and building such hardware was necessary to remain relevant in the market.
4. According to the textbook, what are the three major trends in communications and networking?
First, pervasive networking will change how and where we work and with whom we do
business. Pervasive networking means that we will have high speed communications
networks everywhere, and that virtually any device will be able to communicate with any
other device in the world. Prices for these networks will drop and the globalization of world
economies will continue to accelerate. Second, the integration of voice, video, and data onto
the same networks will greatly simplify networks and enable anyone to access any media
at any point. Third, the rise in these pervasive, integrated networks will mean a significant
increase in the availability of information and new information services. It is likely that
application service providers will evolve that act as information utilities.
Pervasive networking implies that communication networks will eventually be used in all
aspects of our lives. Due to the exponential growth of networking, any device will be able to
communicate with any other device in the world. New ways of conducting business will emerge
as more common devices take on these capabilities. This trend becomes more likely as the
speed of data transmission continues to increase and competition drives down the cost of this technology.
Convergence, or the integration of voice, video, and data communication, is also likely to occur.
In the past, each form of communication was transmitted on a separate network. It is predicted
that over time all three forms of communication will be transmitted over the same circuits. The
integration of voice and data has already occurred. Eventually, video will also be merged with
voice and data.
The emergence of new information systems is also predicted to occur due to the increasing
expansion of networks. For example, application service providers (ASPs) will offer the use of
specific systems such as reservations systems or payroll systems to their customers. Instead of
the customer developing or installing their own system on their computers, they simply use the
service of the ASP. Information utilities are seen as the future of ASPs. An information utility
would provide a range of information services, similar to the way a telephone company provides telephone services.
5. There were many, many more protocols in common use at the data link, network, and
transport layers in the 1980s than there are today. Why do you think the number of
commonly used protocols at these layers has declined? Do you think this trend will
continue? What are the implications for those who design and operate networks?
Today there is convergence around the non-proprietary use of TCP/IP as the protocol of
choice for all networks. For the most part, network software is designed to interface with
networks using this protocol. Non-proprietary means that TCP/IP is an interoperable
protocol, portable to any manufacturer's hardware. All manufacturers are developing
their products to use TCP/IP as their protocol of choice. This is of great benefit for those
operating networks because they do not have to deal with the incompatibilities of various
proprietary networks. In the past, network equipment such as IBM’s SNA and Novell’s
Netware products had retained proprietary protocols that did not interface with as much
ease as today's more compatible, TCP/IP-based products. The decline in the number of
competing protocols is related to the emergence of TCP/IP as the universal connector, along
with the rise in competition and consequently better pricing from the vendors who build
to this protocol, ensuring the viability of this standard for network managers for a long
time to come.
In my opinion, commoditization of the technology underpinning the layers is the biggest reason
that there are fewer protocols in use today. The data link layer is probably the easiest to discuss,
as for the most part, we’ve gotten down to just a handful of protocols that are in common use,
with Ethernet owning the largest piece of that pie. At the network layer, IPv4 and IPv6 are by
far the most widely used due to the Internet’s taking over everything. But some other protocols,
such as ICMP are still widely used, so there’s still a bunch of variety there. At the transport layer,
TCP and UDP do the lion’s share of the work, and again, I think that’s because of the world
standardizing on the Internet as the preferred medium of communication. Since it uses TCP/IP,
everyone is going to try to make their stuff work over those protocols so that they can reach a larger audience.
I’m not sure this trend will continue, or even that it could continue all that much further. There
are things that each of the protocols in use does better than the others, which is why they’re still
around. I guess someone could try to create a new protocol that closes some of that performance
gap, but they’ve got a steep hill to climb to get it adopted.
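The dominance of TCP/IP discussed above shows up in practice in how little code a basic exchange takes, since every mainstream OS and language speaks it natively. A minimal sketch using Python's standard socket module, sending one UDP datagram over the loopback interface (the message text and the use of port 0 auto-assignment are just illustration choices):

```python
import socket

# One UDP datagram over loopback: the OS and the standard library speak
# TCP/IP natively, which is a big part of why these protocols won out.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", addr)     # fire-and-forget: no handshake

data, _ = receiver.recvfrom(1024)
print(data.decode())
sender.close()
receiver.close()
```

The same program, unchanged, works on any vendor's hardware that implements the standard socket API, which is exactly the interoperability argument made above.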
As I said a little earlier, the biggest thing that people who design and operate networks need to
take in to account is what the dominant protocols are, what they’re going to be used for, and how
best to tune the network for them. Even though different services are using the same protocols,
they make different demands on the network and so have different design characteristics and requirements.
6. Do you have any work experiences in computer networking? If yes, what have you
learned from that experience?
Usually my work with computer networking does not go beyond home wireless setups between my router
and computer and printer. However, last month I had to run the wiring for the portable scanners used to
do inventory in a warehouse at the company I work for.
The scanners lost their connections if they were too far from their base, so I used cable connectors to
extend the length of the cables in order to keep the base near all of the inventory areas. I later learned
that some of the cables I extended were too long, which resulted in network connectivity issues
between the scanners and the computer. Once I used shorter cables and did not
use more than one cable connector, all of the scanners were able to work and transmit data without so
much as one network hiccup.
This was my first-hand experience learning not just about computer networking, but also how
important it is to know your cables and distances, and how they affect data transmission.
MIS 564 DQ2 Ch2 Answer Guidelines
What are the advantages and disadvantages of traditional host-based networks versus client-server networks?
Host-based advantages:
- Integrated architecture from a single vendor
- Simpler, centralized installation
Host-based disadvantages:
- Having all processing on the host may lead to overload
- Cost of software and upgrades
- Terminal totally dependent on the server
Client-server advantages:
- Balanced processing demands
- Lower cost; inexpensive infrastructure
- Can use software and hardware from different vendors
Client-server disadvantages:
- Problems with using software and/or hardware from different vendors
- More complex installation or updating (although automated installation software helps greatly in this area)
Host-Based Network:
Application software is stored on one server along with all data.
Provides a point of control, since all the messages come through the host computer.
It has a very simple architecture.
Resolution of problems is easier.
Response time of the server degrades as the demand for network processing increases.
Upgrading is an expensive process.
More likely to suffer from server bottlenecks.
Client-Server Network:
There is no single point of failure as there is in the case of host-based architecture.
Servers can be easily upgraded or added to the network, at low cost, as demand for processing and storage on the network increases.
It allows software and hardware from different manufacturers to work together on the same network.
Updating network software is more complex than in other architectures, since the upgrade needs to be done on all clients and all servers that run the application.
Client-server networks are complex; applications need to be written so that the work is split between the client and the server.
High cost of maintenance.
Some experts argue that thin-client client-server architectures are really host-based architectures in disguise and suffer from the same old problems. Do you agree? Explain.
While thin clients have substantially less application logic than thick clients, they have sufficient application logic (for example, a Web browser, possibly with Java applets) to participate in a client-server relationship. The older host-based terminals did not have even this much application logic. While the thin clients in use today reflect some level of return to a more centralized approach, the client is likely served by multiple servers (and even multiple tiers), rather than a single large host server as in the past. Thus, the two approaches are similar, but not identical, from a technological design perspective.
Some thin-client, client-server architectures today are very similar to the common host-based architectures of the past, only with improved input and output quality or quantity. Remote Desktop Protocol (RDP) and NoMachine (NX) are two of the common protocols used by such architectures today. While they are designed to transfer not just text, but images and sounds between the host and the client, they still suffer from some of the same drawbacks, such as placing immense load on the host and often requiring expensive host upgrades.
Other thin-client, client-server architectures in use today are very different from host-based architectures, in that the client does do some, though limited, processing of the data, reducing the load on the host and allowing presentation to ultimately be generated on the client. One example of this would be browser-based thin clients. By parsing and displaying HTML-based documents, the client reduces some of the load on the server, and also reduces the amount of data that must be transferred between the thin client and the server. These clients typically have no local storage, so all persistent data must still be transmitted back to the server in order to be saved.
Describe how a Web browser and Web server work together to send a web page to a user.
In order to get a page from the Web, the user must type the Internet Uniform Resource Locator (URL) for the page he or she wants, or click on a link that provides the URL. The URL specifies the Internet address of the Web server and the directory and name of the specific page wanted. In order for the requests from the Web browser to be understood by the Web server, they must use the same standard protocol. The standard protocol for communication between a Web browser and a Web server is Hypertext Transfer Protocol (HTTP).
A Web browser sends request to a web server and the web server will reply back with a response to that request. For example – A user will request the web browser to get a page from a web server by typing the URL for that page. The Web browser will get the IP address of that web server that serves that page and send a request with the name of the page and the directory where the file is located on that server. The Web server will send the response back to the browser with the page that was requested or an error message saying that page was not found. The request-response process will happen for every page that is requested by the client from the server.
- Web browser can use any protocol that is supported by the Web server for this request-response dialogue (example – HTTP, FTP)
- Every request that goes from the web browser will have the request line, request header, and request body.
- Every response that comes back from the web server will have a response status message, response header, and the response body.
You have explained the concept of communication between the Web browser and Web server very well. I would like to add that when the browser sends the request to the Web server, it does so by sending a command like 'GET'. This GET command is then processed by the Web server, which sends the response back to the browser. The browser then processes this response and presents the web page to the user.
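The request line and headers described above can be assembled by hand to see exactly what a browser sends. A minimal sketch, in which the host, path, and User-Agent string are made-up illustration values and the format follows HTTP/1.1:

```python
# Building the raw HTTP GET request a browser sends, piece by piece.
# The host and path are illustrative; the layout follows HTTP/1.1.

host = "www.example.com"
path = "/index.html"

request_line = f"GET {path} HTTP/1.1"      # method, resource, protocol version
headers = [
    f"Host: {host}",                       # required header in HTTP/1.1
    "User-Agent: toy-browser/0.1",
    "Connection: close",
]
# A GET request has no body, so the request ends with a blank line.
raw_request = "\r\n".join([request_line, *headers, "", ""])
print(raw_request)
```

Sending these bytes over a TCP connection to port 80 of the server would produce the response status line, response headers, and response body mentioned above.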
Describe how mail user agents and message transfer agents work together to transfer mail messages.
The sender of an e-mail uses a user agent (an application layer software package) to write the e-mail message. The user agent sends the message to a mail server that runs a special application layer software package called a message transfer agent. These agents read the envelope and then send the message through the network (possibly through dozens of mail transfer agents) until the message arrives at the receiver’s mail server. The mail transfer agent on this server then stores the message in the receiver’s mailbox on the server. When the receiver next accesses his or her e-mail, the user agent on his or her client computer contacts the mail transfer agent on the mail server and asks for the contents of the user’s mailbox. The mail transfer agent sends the e-mail message to the client computer, which the user reads with the user agent.
The most commonly used email protocol is Simple Mail Transfer Protocol (SMTP). SMTP email is usually implemented as a two-tier thick-client architecture, though this is not mandatory. With this architecture, each client computer has an "application layer software package" called a Mail User Agent, also known as an email client. Eudora and Microsoft's Outlook are common email clients.
The user creates an email message using one of these email clients. Internally, the email client formats the message into an SMTP packet that contains the sender's address, destination address, and main content. When the user clicks the "send" button, the user agent software sends these SMTP packets to a mail server. This mail server runs an "application layer software package" called a Mail Transfer Agent, also known as mail server software.
This Mail Transfer Agent reads the destination address found in the SMTP packet and then sends the packet through the network (or Internet) until it reaches the mail server for the destination address. At the receiver's side, the Mail Transfer Agent receives the mail and stores it in the receiver's mailbox on that server. The message sits in the mailbox assigned to the receiving user until he or she checks for new mail.
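The formatting step the mail user agent performs can be sketched with Python's standard email library. The addresses and subject below are invented, and the actual handoff to a mail transfer agent (e.g., via `smtplib.SMTP(...).send_message(msg)`) is omitted because it requires a live mail server:

```python
from email.message import EmailMessage

# What a mail user agent builds before handing the message to the mail
# server. The addresses here are made up for the illustration.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Inventory report"
msg.set_content("The warehouse scan finished without errors.")

# The message transfer agents along the way route on these addresses.
print(msg["To"])
print(len(msg.as_string()))   # the full formatted message, headers + body
```

The `as_string()` output is what travels between mail transfer agents until it reaches the receiver's mailbox.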
What roles do SMTP, POP, and IMAP play in sending and receiving e-mail on the Internet?
SMTP covers the message sending process (from the sender’s email client all the way to the receiver’s email server); while POP (Post Office Protocol) and IMAP (Internet Mail Access Protocol) cover the message retrieving process (from receiver’s email server to receiver’s email client).
Assume the following environment: 1. Sender’s email program (such as Outlook), 2. sender’s email server, 3. Internet, 4. receiver’s email server, and 5. receiver’s email program. When the sender sends an email to the receiver, the message is formatted using SMTP from 1 => 2 => 3 => 4. When the receiver checks her email, her email program (5) interacts with her email server (4) using POP or IMAP.
POP is gradually being replaced by IMAP. In POP, messages are downloaded onto the client; while in IMAP messages are synchronized between the server and the client. In POP, messages can be deleted or left on the server after they are downloaded by the user, depending on the setting. In IMAP, messages always remain on the server and the user can manage mailboxes/folders on the server.
Simple Mail Transfer Protocol (SMTP) is the protocol used for transferring email from one destination to another over the Internet. The destination address is defined before the email is sent, and every SMTP server has its own unique identifier. SMTP uses port 25 to send messages.
Post Office Protocol 3 (POP3) is a protocol used for the retrieval of emails from an email server. It makes it possible to download an email message from the mail server. The POP3 protocol assumes that only one email client is connected to the mailbox, and it works in both online and offline environments.
Internet Message Access Protocol (IMAP) is a protocol used to access email stored on a remote mail server from a local client.
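The behavioral difference between POP and IMAP can be modeled in a few lines. This is a toy in-memory sketch, not the real protocols; real clients would use Python's poplib and imaplib against a live mail server:

```python
# Toy contrast of POP vs IMAP behavior with an in-memory "server" mailbox.
# The message strings are placeholders for real mail.

server_mailbox = ["msg1", "msg2", "msg3"]

def pop_fetch(mailbox, leave_on_server=False):
    # POP downloads everything; by default the server copy is then deleted.
    downloaded = list(mailbox)
    if not leave_on_server:
        mailbox.clear()
    return downloaded

def imap_fetch(mailbox):
    # IMAP synchronizes: the client sees the messages, the server keeps them.
    return list(mailbox)

local = pop_fetch(server_mailbox)
print(local)            # the client now holds all three messages
print(server_mailbox)   # the server copy is gone, the defining POP behavior
```

Calling `imap_fetch` instead would have left `server_mailbox` untouched, which is why IMAP suits users who read mail from several devices.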
- What kind of application architecture (host-based, client-server, 2-tiered, 3-tiered, n-tiered?) is used in your organization’s computer networks? How do you think of it?
I have been implementing SAP ERP software at different organizations for over 17 years.
This is a 3 tier client server architecture with clients, application servers and database servers.
The client software runs presentation software called SAP GUI. This is usually installed for MS Windows, although Java and HTML versions exist.
The application software is database neutral. It can be implemented using many different DBMSs. Most common are Oracle and MS SQL Server, although DB2 was often used earlier in my career.
The application software can run on servers with various operating systems. My current client (customer) has just migrated the application servers from HP-UX to Linux, for example. This required no changes to either of the other two tiers in the architecture.
Most production SAP environments use multiple application servers for load balancing with the system determining which server to use each time a user logs on based on current load.
MIS 564 DQ3 Ch3 Answer Guidelines
- What are the three types of data flows? Give some examples.
The three types of data flows are simplex, half-duplex and full duplex. Simplex is one-way transmission, such as that in radio or broadcast TV transmission. Half duplex is two-way transmission, but you can transmit in only one direction at a time. A half duplex communication link is similar to a walkie-talkie link; only one computer can transmit at a time. With full duplex transmission, you can transmit in both directions simultaneously, with no turnaround time.
Data can be transmitted through the circuit (physical layer) from one end to the other. This movement of data is called data flow. There are three ways in which data can flow from one end to the other:
Simplex Transmission: This is one-way transmission of data from sender to receiver. The receiver cannot send data back to the sender. Ex: radio, television. In this case, the analog and/or digital data is sent to the devices and not vice versa.
Half-Duplex Transmission: This is two-way communication of data between sender and receiver. With this type, both sender and receiver can send data to each other, but at any point in time only one can transmit data to the other. Ex: walkie-talkies, radio phones used in police cars. In this case, the data is transmitted in only one direction at a time. Control signals are used to determine who will transmit and who will receive data. The amount of time taken to switch between receiving and sending data is called turnaround time.
Full-Duplex Transmission: This is two-way communication of data between sender and receiver. With this type, both sender and receiver can send data to each other simultaneously, with no turnaround time. In full-duplex, the available capacity in the circuit is divided: half in one direction and half in the other, to facilitate simultaneous data transfer. Ex: telephone and mobile phone conversations.
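The cost of half-duplex turnaround can be made concrete with a toy timing calculation. The time units and the turnaround value below are arbitrary illustration numbers, not measurements:

```python
# Toy comparison of half-duplex vs full-duplex transfer time.
# Units are arbitrary; the point is the extra turnaround delay.

MESSAGE_TIME = 10      # time to send one message in one direction
TURNAROUND = 2         # half-duplex line-reversal delay (illustrative)

def half_duplex_exchange(n_messages_each_way):
    # The line reverses direction between every message, so each
    # reversal adds a turnaround delay.
    turns = 2 * n_messages_each_way
    return turns * MESSAGE_TIME + (turns - 1) * TURNAROUND

def full_duplex_exchange(n_messages_each_way):
    # Both directions run simultaneously, so the two flows overlap.
    return n_messages_each_way * MESSAGE_TIME

print(half_duplex_exchange(3))   # 70 time units
print(full_duplex_exchange(3))   # 30 time units
```

Even with a small turnaround delay, the half-duplex exchange takes more than twice as long here, because nothing can flow in the reverse direction while one side is transmitting.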
- What are the three types of commonly used guided media in data communications? Describe them briefly.
Guided media are those in which the message flows through a physical media such as a twisted pair wire, coaxial cable, or fiber optic cable; the media “guides” the signal.
One of the most commonly used types of guided media is twisted-pair wire: insulated pairs of wires, such as unshielded twisted pair (UTP), that can be packed quite close together. Bundles of several thousand wire pairs are placed under city streets and in large buildings. Twisted-pair wire is twisted to minimize the electromagnetic interference between one pair and any other pair in the bundle.
Coaxial cable is another type of commonly used guided media. Coaxial cable has a copper core (the inner conductor) with an outer cylindrical shell for insulation. The outer shield, just under the shell, is the second conductor. Because coaxial cables have very little distortion and are less prone to interference, they tend to have low error rates.
Fiber optics is becoming much more widely used for many applications, and its use is continuing to expand. Instead of carrying telecommunication signals in the traditional electrical form, this technology uses high-speed streams of light pulses from lasers or LEDs (light-emitting diodes) that carry information inside hair-thin strands of glass or plastic called optical fibers.
A medium is the physical matter or substance that carries the voice or data transmission. Media can be classified as guided media and wireless media. There are three commonly used types of guided media through which data is transmitted:
Twisted-Pair Cable: This medium contains insulated pairs of wires that can be packed quite close together. The wires are twisted to minimize the electromagnetic interference between one pair and another pair in the bundle. These cables are available as sets of pairs packaged together. Home telephone cables usually contain two sets of pairs packaged together, whereas LAN cables are packaged as four sets of pairs.
Coaxial Cable: This medium has four layers, with conductors and insulators arranged alternately. The inner core is a conductor made of copper. It is covered with a cylindrical shell acting as an insulator. This cylindrical shell is wrapped with another conducting material that acts as a second conductor. This second conductor is covered with another outer shield that again acts as an insulator. Because of the heavy insulation between the two conductors, coaxial cables are less prone to electromagnetic interference and errors. But this medium is quickly disappearing, as its cost is almost three times that of twisted-pair cable.
Fiber-Optic Cable: This type of medium is widely used in the industry for its salient features. Instead of carrying telecommunication signals in electrical form, this technology uses high-speed streams of light pulses from lasers or LEDs that carry information inside hair-thin strands of glass called optical fibers. Fiber optics can carry huge amounts of information at extremely fast data rates, which makes it ideal for the simultaneous transmission of voice, data, and image signals. It is not fragile or brittle, it is not bulky, and it is more resistant to corrosion. An optical fiber can also withstand higher temperatures than copper wire. These features make it popular and widely accepted in the industry.
- What is multiplexing? Explain it briefly.
A multiplexer puts two or more simultaneous transmissions on a single communication circuit. Multiplexing a voice telephone circuit means that two or more separate conversations are sent simultaneously over one communication circuit between two different cities. Multiplexing a data communication circuit means that two or more messages are sent simultaneously over one communication circuit. In general, no person or device is aware of the multiplexer; it is “transparent.”
Multiplexing involves the transmission of multiple streams of data on the same communication channel as one complex stream of data and then retrieving it as individual signals at the receiving ends. In other words, we can say that the bandwidth of the communication channel is shared by many different streams of data.
Multiplexing requires a multiplexer at the source to combine the different streams of data into one single stream. At the destination, a demultiplexer separates this single stream back into the individual streams of information. Multiplexing is usually done in multiples of four (e.g., 4, 8, or 16 streams).
The two types of Multiplexing techniques are:
1. Frequency division multiplexing (FDM): This is used in analog transmission of data, wherein the channel bandwidth is divided into subchannels. Each subchannel carries a stream of data simultaneously.
2. Time division multiplexing (TDM): This is used in digital transmission of data, wherein each stream of data is allotted a time slot in which to transmit.
- What are frequency division multiplexing (FDM) and time division multiplexing (TDM)? Explain them briefly.
Frequency division multiplexing can be described as dividing the circuit “horizontally” so that many signals can travel a single communication circuit simultaneously. The circuit is divided into a series of separate channels, each transmitting on a different frequency range, much like series of different radio or TV stations. All signals exist in the media at the same time, but because they are on different frequencies, they do not interfere with each other.
Time division multiplexing shares a communication circuit among two or more terminals by having them take turns, dividing the circuit "vertically." In TDM, one character is taken from each terminal in turn, transmitted down the circuit, and delivered to the appropriate device at the far end. Time on the circuit is allocated even when data are not being transmitted, so some capacity is wasted when terminals are idle.
Frequency Division Multiplexing (FDM) works by transmitting all of the signals simultaneously along the same high-speed link, with each signal set at a different frequency. For FDM to work efficiently, frequency overlap must be avoided. The demultiplexer at the receiving end divides out the signals by tuning into the appropriate frequency. FDM requires a total bandwidth greater than the combined bandwidth of the signals to be transmitted, because strips of frequency, called guard bands, separate the signals to prevent overlap.
Time-division multiplexing (TDM) is a method of putting multiple data streams in a single signal by separating the signal into many segments, each having a very short duration. Each individual data stream is reassembled at the receiving end based on the timing. The circuit that combines signals at the source (transmitting) end of a communications link is known as a multiplexer. The individual signals are separated out by means of a circuit called a demultiplexer, and routed to the proper end users.
An easy way to remember the difference between FDM and TDM is to think about AM or FM radio. Each station in your area is assigned a frequency and thus, all stations are able to communicate in the same broad spectrum of radio frequencies. All of the FM stations in your area broadcast at the same time, but on different frequencies (FDM). The songs or talk shows on an individual radio station are like TDM – the available broadcast time is split into time slots and each song or show is provided a slot in turn, allowing all of them to be broadcast over the course of a day.
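The round-robin character interleaving of TDM can be sketched in a few lines of Python. This is a minimal illustration, not a real multiplexer: the stream contents and the four-terminal setup are hypothetical, and each "time slot" carries one character.

```python
# Sketch of time division multiplexing (TDM): hypothetical terminal
# streams are interleaved one character at a time onto a single circuit,
# then demultiplexed by position at the far end.

def tdm_multiplex(streams):
    """Interleave equal-length streams, one character per time slot."""
    return "".join(ch for slot in zip(*streams) for ch in slot)

def tdm_demultiplex(circuit, n_streams):
    """Recover each stream by taking every n-th character."""
    return [circuit[i::n_streams] for i in range(n_streams)]

terminals = ["AAAA", "BBBB", "CCCC", "DDDD"]   # four streams (a multiple of 4)
line = tdm_multiplex(terminals)
print(line)                      # ABCDABCDABCDABCD
print(tdm_demultiplex(line, 4))  # ['AAAA', 'BBBB', 'CCCC', 'DDDD']
```

The same skeleton also conveys why idle terminals waste capacity in TDM: an empty stream still occupies its slot in every round.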
- What is a modem? How does it work?
Modem is an acronym for MOdulator/DEModulator. At one end of the communication, a modem converts digital signals into analog signals. It takes the digital electrical pulses received from a computer, terminal, or microcomputer and converts them into a continuous analog signal that is needed for transmission over an analog circuit. At the other end of the communication, a modem converts the analog signals back to digital signals.
Modem is an acronym for modulator/demodulator; using a dial-up modem to access the Internet was common before the rise of broadband service. A modem installed on the sender’s computer translates digital data into analog signals that can be transmitted over voice/telephone circuits. A second modem at the receiver’s end translates the analog transmission back into digital data for computer use. Both modems must comply with the same standards in order to communicate. The analog signal behaves like a sound wave, which has three characteristics: the height of the wave, called amplitude; the number of waves per second, called frequency (measured in hertz, Hz); and the phase, measured in degrees, which refers to the point in its cycle at which the wave begins.
By modulating or changing the carrier sound wave’s amplitude (height), frequency (length), or phase (shape), binary 1 or 0 can be expressed and understood. In a typical amplitude modulation, one amplitude is defined to be a 1 and another amplitude is defined to be a 0. Therefore, it may be possible to send more than 1 bit on every symbol (or wave) with a different set of amplitude level definition, such as defining four amplitude levels (11 / 10 / 01 / 00) to send 2 bits on one wave.
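The four-level amplitude scheme described above can be sketched as a simple lookup. This is a hedged illustration, not real signal processing: the amplitude values (1.0–4.0) and the bit-pattern assignments are assumptions chosen only to show that each symbol (wave) carries two bits.

```python
# Sketch of multi-level amplitude modulation: four hypothetical amplitude
# levels encode two bits per symbol (wave), doubling the bits carried per
# wave compared with simple two-level amplitude modulation.

LEVELS = {"00": 1.0, "01": 2.0, "10": 3.0, "11": 4.0}   # assumed amplitudes
DECODE = {v: k for k, v in LEVELS.items()}

def modulate(bits):
    """Split the bit string into 2-bit symbols, map each to an amplitude."""
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def demodulate(amplitudes):
    """Map each received amplitude back to its 2-bit pattern."""
    return "".join(DECODE[a] for a in amplitudes)

signal = modulate("11100100")
print(signal)              # [4.0, 3.0, 2.0, 1.0]
print(demodulate(signal))  # 11100100
```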
A modem’s data transmission rate is the primary factor that determines throughput. However, throughput can be improved through data compression, such as ITU-T V.44, which uses Lempel-Ziv encoding: it builds a dictionary of two-, three-, and four-character combinations and sends only the index of the dictionary entry rather than the actual repeating patterns, reducing data by about 6:1 on average.
MIS 564 DQ4 Ch4 Answer Guidelines
1. What are the main functions of the data link layer?
The data link layer controls the way messages are sent on the physical media. The data link layer handles three functions: media access control, message delineation, and error control. The data link layer accepts messages from the network layer and controls the hardware that actually transmits them. The data link layer is responsible for getting a message from one computer to another without errors. The data link layer also accepts streams of bits from the physical layer and organizes them into coherent messages that it passes to the network layer.
The main functions of Data Link layer are:
1. Media Access Control (MAC): It is responsible for controlling when devices transmit data over a shared circuit. It monitors and controls the data during transmission. It has two types of operation:
A. Contention: Devices transmit whenever they sense that the network is free.
B. Controlled Access: The right to transmit is granted explicitly (e.g., by polling); this approach has historically been common in mainframe environments.
2. Error Control: It is responsible for handling errors that occur during the transmission of data. There are three approaches to error control:
A. Error Prevention
B. Error Detection
C. Error Correction
3. Message Delineation: It indicates the starting and ending of a message, typically with the help of start and stop flags or bits. Two types of protocols are:
A. Synchronous Transmission protocol: It groups the data into blocks, with flags marking the start and end of each message.
B. Asynchronous Transmission protocol: It uses start and stop bits to mark the start and end of each transmitted character.
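Asynchronous delineation can be sketched concretely: each 7-bit character is wrapped in a start bit and a stop bit, and the receiver strips them to recover the character. This is a minimal sketch; the parity bit is deliberately omitted for brevity.

```python
# Sketch of asynchronous message delineation: a start bit (0) and stop
# bit (1) frame each 7-bit ASCII character so the receiver can find
# character boundaries. Parity is omitted to keep the example short.

def frame(char):
    bits = format(ord(char), "07b")     # 7-bit ASCII representation
    return "0" + bits + "1"             # start bit + data bits + stop bit

def deframe(frame_bits):
    assert frame_bits[0] == "0" and frame_bits[-1] == "1", "bad framing"
    return chr(int(frame_bits[1:-1], 2))

f = frame("A")        # ord('A') = 65 = 1000001 in binary
print(f)              # 010000011
print(deframe(f))     # A
```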
2. What is media access control? Under what conditions is media access control unimportant?
Media access control handles when the message gets sent. Media access control becomes important when several computers share the same communication circuit, such as a point-to-point configuration with a half duplex line that requires computers to take turns, or a multipoint configuration in which several computers share the same circuit. Here, it is critical to ensure that no two computers attempt to transmit data at the same time — or if they do, there must be a way to recover from the problem. Media access control is critical in local area networks.
With point-to-point full duplex configurations, media access control is unnecessary because there are only two computers on the circuit and full duplex permits either computer to transmit at any time.
Media access control (MAC) is one of two sublayers of the data link layer. It controls the physical hardware. Media Access Control deals with the need to control when computers transmit their messages over a shared communication circuit. MAC is necessary with point-to-point half-duplex configuration or multipoint configuration where several computers share the same circuit. Controls lessen the chance that two computers are attempting to transmit data at the same time. Contention and controlled access are two approaches that might be implemented to control transmission.
Media Access Control is not important or is unnecessary with point-to-point full-duplex configuration because there are only two computers on the circuit and full duplex allows both to transmit at the same time.
MAC refers to controlling when a computer transmits.
Three approaches to this are:
1. Roll-call polling: The server polls client computers to see if they have any data to send. Computers can transmit only when they have been polled.
2. Hub-polling or Token passing: The computers themselves manage when they can transmit by passing a token to one another. Computers cannot transmit unless they have a token.
3. Contention: Computers listen and transmit only when no other computer is transmitting. In general, contention approaches work better for small networks that have low levels of usage, whereas polling approaches work better for networks with high usage.
3. Compare and contrast roll call polling, hub polling (or token passing), and contention.
With roll call polling, the front end processor works consecutively through a list of clients, first polling terminal 1, then terminal 2, and so on, until all are polled. Roll call polling can be modified to select clients in priority so that some get polled more often than others. For example, one could increase the priority of terminal 1 by using a polling sequence such as 1, 2, 3, 1, 4, 5, 1, 6, 7, 1, 8, 9.
Hub polling is often used in LAN multipoint configurations (i.e., token ring) that do not have a central host computer. One computer starts the poll and passes it to the next computer on the multipoint circuit, which sends its message and passes the poll to the next. That computer then passes the poll to the next, and so on, until it reaches the first computer, which restarts the process again.
Contention is the opposite of controlled access. Computers wait until the circuit is free (i.e., no other computers are transmitting), and then transmit whenever they have data to send. Contention is commonly used in Ethernet local area networks.
Roll-call polling involves a controller that delegates access to clients in a consecutive manner. For example, computer 1 is polled first, then when it is finished, computer 2 is polled, then computer 3, etc. Priorities can be set so that certain clients are polled more (or less) often than others.
Hub polling (a.k.a. token passing), somewhat similar to roll-call, sequentially grants control. However, hub polling involves the passing of control from one computer to the next, to the next, and so on until the process starts over.
Contention, compared to hub and roll-call polling, is much more random. Computers wait until the circuit is free in order to send a message. In a sense, transmission occurs by one computer when no other computers are transmitting. As one can imagine, collisions between two simultaneous transmissions must be avoided somehow.
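The roll-call polling behavior above, including priority via a repeated entry in the polling sequence, can be sketched in Python. The clients, message queues, and polling sequence here are hypothetical, chosen only to mirror the "1, 2, 1, 3, 1, ..." priority example.

```python
# Sketch of roll-call polling: a controller polls clients in a fixed
# sequence; only the polled client may transmit. Repeating a client in
# the sequence (client 1 below) gives it higher priority.

queues = {1: ["msg-1a"], 2: [], 3: ["msg-3a", "msg-3b"]}  # pending messages

def roll_call(sequence, queues):
    """Poll clients in order; return the messages in transmission order."""
    sent = []
    for client in sequence:
        if queues[client]:                  # client has data when polled
            sent.append(queues[client].pop(0))
    return sent

print(roll_call([1, 2, 1, 3, 1, 3], queues))
# ['msg-1a', 'msg-3a', 'msg-3b']
```

Contention, by contrast, has no such sequence at all: every station simply transmits when it senses the circuit is idle, which is why collisions become possible.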
4. How do checksum error correction and cyclical redundancy checking (CRC) work?
Checksum error checking adds a checksum (typically 1 byte) to the end of the message. The checksum is calculated by adding the decimal value of each character in the message, dividing the sum by 255, and then using the remainder as the checksum. The same approach is used at the receiving end. If the receiver gets the same result, the block has been received correctly.
Cyclical redundancy check (CRC) adds 8, 16, 24 or 32 bits to the message. With CRC, a message is treated as one long binary number, P. Before transmission, the data link layer (or hardware device) divides P by a fixed binary number, G, resulting in a whole number, Q, and a remainder, R/G. So, P/G = Q + R/G. For example, if P = 58 and G = 8, then Q = 7 and R = 2. G is chosen so that the remainder R will be either 8 bits, 16 bits, 24 bits, or 32 bits.
The remainder, R, is appended to the message as the error checking characters before transmission. The receiving hardware divides the received message by the same G, which generates an R. The receiving hardware checks to ascertain whether the received R agrees with the locally generated R. If it does not, the message is assumed to be in error.
Checksum is an error detection method. A checksum (usually one byte) is added to the end of the message. The checksum is calculated by adding the decimal value of each character in the message, dividing the sum by 255, and using the remainder as the checksum. The receiver then calculates its own checksum and compares the values; if they are the same, the message is assumed to be free of errors. This method detects about 95% of errors.
Cyclical Redundancy Check (CRC), a more accurate error detection method, adds 8, 16, 24, or 32 bits to the message. The message is treated as one long binary number. Before the message is transmitted, it is divided by a fixed number, which generates a remainder. The remainder is attached to the message, and the receiving hardware checks whether the remainder it receives matches the locally generated remainder. If it matches, the message is considered error free.
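Both calculations can be sketched directly from the descriptions above. Note that the CRC function below uses the textbook's simplified decimal illustration (P divided by G leaves remainder R); a real CRC uses modulo-2 polynomial division, usually in hardware.

```python
# Sketch of the two error-detection calculations described above.
# Checksum: sum the character values, divide by 255, keep the remainder.
# simple_crc: the textbook's simplified decimal form of CRC, not a real
# polynomial CRC.

def checksum(message):
    """One-byte checksum: remainder of the character-value sum mod 255."""
    return sum(message.encode()) % 255

def simple_crc(p, g):
    """Illustration only: remainder R of P divided by G."""
    return p % g

msg = "HELLO"
print(checksum(msg))       # 117 -- appended to the message before sending
print(simple_crc(58, 8))   # 2   (58 = 7 * 8 + 2, as in the example above)
```

The receiver repeats the same calculation on the arriving message and compares remainders; a mismatch signals an error.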
5. Compare and contrast stop-and-wait ARQ and continuous ARQ.
With stop-and-wait ARQ, the sender stops and waits for a response from the receiver after each message or data packet. After receiving a packet, the receiver sends either an acknowledgment (ACK) if the message was received without error, or a negative acknowledgment (NAK) if the message contained an error. If it is an NAK, the sender resends the previous message. If it is an ACK, the sender continues with the next message. Stop-and-wait ARQ is, by definition, a half-duplex transmission technique.
With continuous ARQ, the sender does not wait for an acknowledgment after sending a message; it immediately sends the next one. While the messages are being transmitted, the sender examines the stream of returning acknowledgments. If it receives an NAK, the sender retransmits the needed messages. Continuous ARQ is by definition a full duplex transmission technique, because both the sender and the receiver are transmitting simultaneously (the sender is sending messages, and the receiver is sending ACKs and NAKs).
Stop-and-wait ARQ (Automatic Repeat reQuest): By definition, stop-and-wait ARQ is a half-duplex transmission technique. With this type, the sender stops and waits for a response from the receiver after each data packet. After receiving a packet, the receiver sends an acknowledgement (ACK) if the packet was received without error, or a negative acknowledgement (NAK) if the message contained an error. If it is an NAK, the sender resends the previous message. If it is an ACK, the sender continues with the next message.
Continuous ARQ (Automatic Repeat reQuest): By definition, continuous ARQ is a full-duplex transmission technique, because both sender and receiver transmit simultaneously. With this type, the sender does not wait for an acknowledgement after sending a message; it immediately sends the next one. However, the sender examines the stream of returning acknowledgements. If it receives an NAK, the sender retransmits the needed message. Depending on the protocol, the retransmitted packets may be only those containing an error, or the first packet with an error plus all those that followed it.
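The stop-and-wait loop can be sketched as a small simulation. The channel here is hypothetical: a seeded random number generator decides whether each transmission draws an ACK or a NAK, and the sender simply resends until it gets the ACK.

```python
# Sketch of stop-and-wait ARQ over an unreliable link. The simulated
# channel corrupts packets at an assumed error rate; the sender resends
# on each NAK and moves to the next packet only after an ACK.

import random

def send_stop_and_wait(packets, error_rate=0.3, seed=1):
    rng = random.Random(seed)                # deterministic for the demo
    delivered, transmissions = [], 0
    for pkt in packets:
        while True:
            transmissions += 1
            if rng.random() > error_rate:    # receiver replies ACK
                delivered.append(pkt)
                break                        # proceed to the next packet
            # otherwise the receiver replies NAK; resend the same packet

    return delivered, transmissions

data = ["pkt0", "pkt1", "pkt2"]
delivered, tries = send_stop_and_wait(data)
print(delivered)   # every packet arrives, in order
print(tries)       # >= len(data), because NAKs force retransmissions
```

Continuous ARQ would instead keep sending new packets while the ACK/NAK stream comes back, which is why it requires a full-duplex circuit.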
6. What is transmission efficiency? Calculate transmission efficiency for asynchronous transmission (assuming 7 bit ASCII data) and Ethernet (using the frame layout in figure 4.10a on page 134, assuming 1500 bytes of data).
Transmission efficiency is defined as the number of information bits (actual data generated by the user) divided by the total number of bits in transmission (information bits plus overhead bits).
In asynchronous transmission with 7 bit ASCII data, there are 3 overhead bits, start bit, stop bit, and parity bit, for each character sent. So the transmission efficiency = (number of information bits) / (number of information bits + overhead bits) = 7 / (7 + 3) = 70%.
In Ethernet, the number of overhead bits in each frame = 7 + 1 + 6 + 6 + 4 + 2 + 1 + 1 + 1 + 4 = 33 bytes (see figure 4.10a on page 134). Assuming 1500 bytes of data, the transmission efficiency = 1500 / (1500 + 33) = 0.978 = 97.8%.
Transmission Efficiency is defined as the total number of information bits divided by the total number of bits in transmission.
Efficiency of Asynchronous Transmission:
In asynchronous transmission, each character is transmitted independently. To separate the characters and synchronize transmission, a start bit and a stop bit are put on the front and back of each individual character. For 7-bit ASCII with even parity, three overhead bits are added: the start bit, the stop bit, and the parity bit, so the total number of bits in transmission is 10. The efficiency for 7-bit ASCII data is therefore 7 (information bits) / 10 (total transmitted bits) = 70%.
Efficiency of Ethernet:
The Ethernet 802.3 frame has a 7-byte preamble, a 1-byte start-of-frame delimiter, 6 bytes each for the destination and source addresses, and a 2-byte length field giving the size of the message portion of the frame. The VLAN tag is an optional 4-byte field. The DSAP and SSAP fields (1 byte each) pass control information between the sender and receiver. The control field is 1–2 bytes, depending on the type of information being passed, and the frame ends with a 4-byte frame check sequence. In total, the frame carries 33–34 bytes (264–272 bits) of overhead. Since the message is 1500 bytes (12,000 bits), the total transmission is 12,264–12,272 bits, so the efficiency of Ethernet is 97.78%–97.85%, depending on the control field.
MIS 564 DQ5 Ch5 Answer Guidelines
Using TCP/IP as an example, explain what main functions of transport layer and network layer are.
TCP performs packetizing: breaking a message into a set of smaller packets, numbering them, ensuring each packet is reliably delivered, and putting them in the proper order at the destination. Packetizing means taking one outgoing message from the application layer and breaking it into a set of smaller packets for transmission through the network. Conversely, it also means taking the incoming set of smaller packets from the network layer and reassembling them into one message for the application layer.
IP performs routing and addressing. IP software is used at each of the intervening computers/routers through which the message passes; it is IP that routes the message to the final destination. The TCP software only needs to be active at the sender and the receiver, because TCP is only involved when data comes from or goes to the application layer.
The main function of the transport layer is to manage the flow of data between nodes across the network. The transport layer accepts data from the application layer, splits it into smaller units, and passes these units to the network layer. It also ensures that all the data arrives correctly at the other end. Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the two main transport layer protocols. TCP is a connection-oriented protocol; it establishes a connection between two nodes through sockets determined by the IP address and port numbers, whereas UDP is a connectionless protocol. UDP is usually used when the sender needs to send only a single packet to the receiver.
The main function of the network layer is to control the operation of the subnet. The network layer receives data from the transport layer and, if necessary, breaks it into smaller units to fit the maximum transmission unit (MTU) of the link. The network layer determines how packets are routed from the source node to the destination node; static or dynamic routing tables are used to route packets from one network to another. IP, RIP, and OSPF are some of the popular network layer protocols.
Compare and contrast the three types of addresses used in a network.
When users work with application software, they typically use the application layer address (e.g., entering a Web address into a Web browser, such as http://www.cba.uga.edu). When a user types a Web address into a Web browser, the request is passed to the network layer as part of an application layer packet formatted using the HTTP standard.
The network layer software translates this application layer address into a network layer address. The network layer protocol used on the Internet is TCP/IP, so this Web address (www.cba.uga.edu) is translated into a TCP/IP address (usually just called an IP address for short) which is four bytes long when using IPv4 (e.g., 184.108.40.206).
The network layer then determines the best route through the network to the final destination. Based on this routing, the network layer identifies the data link layer address of the next computer to which the message should be sent. If the data link layer is running Ethernet, then the network layer IP address would be translated into an Ethernet address (e.g., 00-0F-00-81-14-00).
Application Layer address (URL)
This address is the name of a server on the network. It is used by application software to make requests of the server so that the IP address is transparent to both the application layer and the end user.
There are obvious advantages to this:
1. If for any reason the IP address of the server changes, the application layer is unaffected
2. The same server (IP address) can have multiple application layer addresses – thus you will find many companies will register the common misspellings of their main URL and “point” them all to the same server as the correct URL
3. It allows for much more memorable server addresses. Imagine if we had to use the IP address of every server on the internet instead of URLs. Personally speaking, I find google.com easier to remember than 220.127.116.11
Network Layer Address (IP address)
This address is the unique address of an individual server on the network, in the case of the Internet Protocol this is the IP address.
It is used by the network layer software to identify the correct destination for messages.
Servers need to have a permanently assigned IP address, however clients on a network do not need to be found by other clients and thus their IP addresses may be assigned dynamically (e.g. using DHCP).
Anyone with a Smartphone will see this in action when registering on a wireless network; the message “obtaining IP address” will appear while the network server determines the next available address in its assigned range and allocates it to the device.
Data Link Layer Address (MAC address)
This address is the unique identifier of a physical network device; it is the address used by the data link layer.
It is assigned by the device manufacturer and is fixed in hardware (although many operating systems allow it to be overridden in software).
MAC addresses are typically written in hexadecimal format and can be found either labeled on the device itself or via a software utility.
For example, if I want to know the MAC address of the wireless network card on my laptop, I can go to the command prompt and type “ipconfig /all”.
Knowing the MAC addresses of your wireless devices can provide an additional level of security on your home network. Wireless router software allows you to specify a list of MAC addresses that may connect to the network.
How does TCP/IP perform address resolution for network layer address, i.e., translating application layer addresses into network layer addresses?
Server name resolution is the translation of application layer addresses into network layer addresses (e.g., translating an Internet address such as http://www.cba.uga.edu into an IP address such as 18.104.22.168). This is done using the Domain Name Service (DNS). Throughout the Internet there are a series of computers called name servers that provide DNS services. These name servers run special address databases that store thousands of Internet addresses and their corresponding IP addresses. These name servers are in effect the “directory assistance” computers for the Internet. Any time a computer does not know the IP number for a computer, it sends a message to the name server requesting the IP number.
When TCP/IP needs to translate an application layer address into an IP address, it sends a special TCP-level packet to the nearest DNS server. This packet asks the DNS server to send the requesting computer the IP address that matches the Internet address provided. If the DNS server has a matching name in its database, it sends back a special TCP packet with the correct IP address. If that DNS server does not have that Internet address in its database, it will issue the same request to another DNS server elsewhere on the Internet.
Once your computer receives an IP address it is stored in a server address table. This way, if you ever need to access the same computer again, your computer does not need to contact a DNS server. Most server address tables are routinely deleted whenever you turn off your computer.
The process of translating an application layer address to a network layer address, and a network layer address to a data link layer address, is called address resolution. Server name resolution converts an application layer address into a network layer address, e.g., converting the web address www.facebook.com into its corresponding IP address (22.214.171.124 – 126.96.36.199). This is done with the help of the Domain Name Service (DNS). A request is sent from the client to a DNS server asking for the IP address of the particular web site. If the DNS server finds the IP address, it sends a response to the client machine with that address; otherwise, it forwards the request to another DNS server to fetch it. (The second half of address resolution — network layer to data link layer — is carried out by the Address Resolution Protocol, discussed in the next question.)
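The resolution-plus-caching flow described above can be sketched as a lookup with a local server address table. Everything here is hypothetical: the host name, the documentation-range IP address, and the dict standing in for a real DNS server's database.

```python
# Sketch of server name resolution with caching: consult the local
# server address table first, and only query the (simulated) DNS server
# on a cache miss. The name and address below are made up.

dns_server = {"www.example.edu": "203.0.113.10"}   # assumed DNS database
local_cache = {}                                    # server address table

def resolve(name):
    if name in local_cache:            # cached: no DNS query needed
        return local_cache[name]
    ip = dns_server[name]              # simulated request to the name server
    local_cache[name] = ip             # store the answer for future use
    return ip

print(resolve("www.example.edu"))  # 203.0.113.10 (fetched from DNS)
print(resolve("www.example.edu"))  # 203.0.113.10 (served from the cache)
```

A real resolver adds recursion (asking other name servers on a miss) and cache expiry, which this sketch omits.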
How does TCP/IP perform address resolution for data link layer addresses, i.e., translating network layer addresses into data link layer addresses?
To send a message to a computer in its network, a computer must know the correct data link layer address. In this case, a broadcast message is sent to all computers in the subnet. A broadcast message, as the name suggests, is received and processed by all computers in the same LAN (which is usually designed to match the IP subnet). The message is a specially formatted request using Address Resolution Protocol (ARP) that says “Whoever is IP address xxx.xxx.xxx.xxx, please send me your data link layer address.” The TCP software in the computer with that IP address then responds with its data link layer address. The sender transmits its message using that data link layer address. The sender also stores the data link layer address in its address table for future use.
TCP/IP resolves data link layer addresses, the physical address or MAC address, from network layer addresses by transmitting a message asking which device has a particular IP address. The device with the sought-after IP address responds with a message that it is at a particular MAC address.
As an example, Wireshark captured the following conversation between my Dell laptop and my HP OfficeJet printer (edited for brevity and content). The first line is my laptop transmitting a message asking who has 192.168.1.128 and requesting that they respond to 192.168.1.6. The second line is my printer responding that 192.168.1.128 is at physical address 00:22:64:ec:0a:92.
Source Destination Protocol Length Info
Dell_59:25:e7 Hewlett-_ec:0a:92 ARP 42 Who has 192.168.1.128? Tell 192.168.1.6
Hewlett-_ec:0a:92 Dell_59:25:e7 ARP 60 192.168.1.128 is at 00:22:64:ec:0a:92
The ARP table on my laptop also stores known links between IP addresses and their corresponding physical addresses.
Explain how the client computers in Building A in Figure 5.13 (128.192.98.xx) would obtain the data link layer address of their subnet gateway.
When a computer is installed on a TCP/IP network (or dials into a TCP/IP network), it knows the IP address of its subnet gateway. This information can be provided by a configuration file or via a bootp or DHCP server. However, the computer does not know the subnet gateway’s Ethernet address (data link layer address). Therefore, TCP would broadcast an ARP request to all computers on its subnet, requesting that the computer whose IP address is 188.8.131.52 respond with its Ethernet address.
All computers on the subnet would process this request, but only the subnet gateway would respond with an ARP packet giving its Ethernet address. The network layer software on the client would then store this address in its data link layer address table.
In a TCP/IP network, each host is configured with an IP address, a subnet mask, and a gateway address. Hosts in the same network communicate using the Layer 2 MAC address or Ethernet address. The MAC address of other hosts on the same network is discovered using the Address Resolution Protocol (ARP).
If a client in the 128.192.98.x network wants to establish a connection with a host in 128.192.95.x or any other network, the client needs the help of the gateway (router). The gateway routes messages from one network to another. The client sends an ARP request to find the MAC address of the gateway device; once it receives the MAC address, the client forwards the message to the gateway to route it to the destination network.
What is a subnet? What roles do subnets play in networking?
Each organization must assign the IP addresses it has received to specific computers on its networks. In general, IP addresses are assigned so that all computers on the same local area network have similar addresses. For example, suppose a university has just received a set of Class B addresses starting with 128.184.x.x. It is customary to assign all the computers in the same LAN numbers that start with the same first three bytes, so the Business School LAN might be assigned 128.184.56.x while the Computer Science LAN might be assigned 128.184.55.x. Likewise, all the other LANs at the university and the backbone network that connects them, would have a different set of numbers. Each of these LANs is called a TCP/IP subnet because they are logically grouped together by IP numbers. Knowing whether a computer is on your subnet or not is very important for message routing.
Subnets or subnetworks are used to subdivide a network into logical pieces. They use address hierarchy to make IP addresses more functional: one part of the address identifies the network, another part identifies the subnet within that network, and the final part identifies the particular computer (host) on that subnet. On each subnet, the host address of all zeros (e.g., xxx.xxx.xx.0) is always reserved for the network address and the host address of all ones (e.g., xxx.xxx.xx.255) is always reserved for the broadcast address.
Subnets can play an important role with large networks like those at Universities. These would allow computer labs in separate buildings to be on separate LANs so the IT department can keep track of what software is in what lab by the LAN number in the IP address. It is also easier to work with if you would need to replace a computer in a certain area. The updates of labs would be easier to keep track of as well.
Another function of a subnet is the ability to designate a portion of an IP address as a subnet, enabling a computer to determine which computers are on its subnet and which are outside of its subnet. This information is important for message routing.
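The routing decision that subnets enable — deliver locally or hand off to the gateway — can be sketched with Python's standard `ipaddress` module. The subnet and gateway addresses below are hypothetical, reusing the university example above.

```python
# Sketch of why subnet membership matters for routing: a sender checks
# whether the destination IP is on its own subnet (deliver directly on
# the LAN) or not (send the packet to the gateway instead).

import ipaddress

subnet = ipaddress.ip_network("128.184.56.0/24")   # e.g., Business School LAN

def next_hop(dest_ip, gateway="128.184.56.1"):     # assumed gateway address
    if ipaddress.ip_address(dest_ip) in subnet:
        return dest_ip        # same subnet: deliver directly
    return gateway            # different subnet: hand off to the gateway

print(next_hop("128.184.56.20"))  # 128.184.56.20 (local delivery)
print(next_hop("128.184.55.7"))   # 128.184.56.1  (via the gateway)
```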
What is routing? How does static routing differ from dynamic routing?
Routing is the process of determining the route or path through the network that a message will travel from the sending computer to the receiving computer. Every computer that performs routing has a routing table that specifies how messages will travel through the network.
With static routing, the routing table is developed by the network manager, and changes only when computers are added to or removed from the network. For example, if the computer recognizes that a circuit is broken or unusable (e.g., after the data link layer retry limit has been exceeded without receiving an acknowledgment), the computer will update the routing table to indicate the failed circuit. If an alternate route is available, it will be used for all subsequent messages. Otherwise, messages will be stored until the circuit is repaired. When new computers are added to the network, they announce their presence to the other computers, who automatically add them into their routing tables. Static routing is commonly used in networks that have few routing options. For example, most LANs are connected to the backbone network in only one place. There is only one route from the LAN to the backbone, so static routing is used.
Dynamic routing (or adaptive routing) is used when there are multiple routes through a network and it is important to select the best route. Dynamic routing attempts to improve network performance by routing messages over the fastest possible route, away from busy circuits and busy computers. An initial routing table is developed by the network manager, but is continuously updated by the computers themselves to reflect changing network conditions, such as network traffic. Routers can monitor outgoing messages to see how long it takes to transmit them and how long it takes for the receiving computer to acknowledge them. Based on this monitoring the router can effectuate table updating.
Routing is the process of moving data packets from source to destination. A router is a dedicated device that forwards packets from the source computer to the destination computer. For traffic crossing the Internet, the router finds the best available route for sending each packet.
There are two types of routing:
1. Static Routing: In static (or decentralized) routing, the routing path is predefined. The network administrator manually configures the router, so the routing path is fixed. Static routing is mainly used when a small number of computers are connected to the network.
2. Dynamic Routing: In dynamic routing, routing decisions are made by the individual computers. This method is used when there are multiple routes to the destination. The best available route, with the least packet loss, is chosen, which improves network performance: packets are delivered in a shorter span of time because busy transmission routes are avoided.
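The routing-table lookup both kinds of routing rely on can be sketched as a longest-prefix match. This is a minimal illustration; the prefixes and next-hop addresses below are made up:

```python
import ipaddress

# Hypothetical static routing table: destination prefix -> next hop.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"):  "10.0.0.1",
    ipaddress.ip_network("10.1.0.0/16"): "10.1.0.1",
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.254",  # default route
}

def next_hop(dest: str) -> str:
    """Pick the most specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))  # 10.1.0.1 (most specific match wins)
print(next_hop("8.8.8.8"))   # 192.168.1.254 (falls through to default)
```

With static routing this table only changes when the manager edits it; with dynamic routing the entries would be rewritten continuously as conditions change.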
MIS 564 DQ6 Ch6 Answer Guidelines
Ethernet uses CSMA/CD as the media access control method. Explain the method.
CSMA/CD, like all contention-based techniques, is very simple in concept: wait until the bus is free (sense for carrier) and then transmit. Computers wait until no other devices are transmitting, and then transmit their data. As long as no other computer attempts to transmit at the same time, everything is fine. However, it is possible that two computers located some distance from one another can listen to the circuit, find it empty, and begin to transmit simultaneously. This simultaneous transmission is called a collision. The two messages collide and destroy each other.
The solution to this is to listen while transmitting, better known as collision detection (CD). If the NIC detects any signal other than its own, it presumes that a collision has occurred, and sends a jamming signal. All computers stop transmitting and wait for the circuit to become free before trying to retransmit. The problem is that the computers which caused the collision could attempt to retransmit at the same time. To prevent this, each computer waits a random amount of time after the colliding message disappears before attempting to retransmit.
CSMA/CD is a contention-based media access control method. CSMA/CD stands for Carrier Sense Multiple Access with Collision Detection. Simply put, this method is to wait for the circuit to be free before transmitting. Collisions still occur because two or more attached computers can detect that the circuit is available and start to transmit simultaneously. Listening to the circuit while transmitting solves this problem; this is called collision detection. When a computer detects a collision, by hearing a signal other than its own, it sends out a jamming signal. After a jamming signal is heard, all attached devices stop transmitting and wait before starting to transmit again. Because each computer's wait time is chosen randomly, the chance that any two computers will start to transmit at the same moment is reduced.
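The random wait after a collision is classically a binary exponential backoff: after the n-th collision a station picks a random number of slot times from a range that doubles each attempt. A minimal sketch (the slot-count cap of 2^10 follows classic Ethernet; the rest is illustrative):

```python
import random

def backoff_slots(attempt: int, max_exponent: int = 10) -> int:
    """After the n-th collision, wait a random number of slot times
    drawn from 0 .. 2**min(n, 10) - 1, so colliding stations are
    unlikely to retry in lockstep."""
    k = min(attempt, max_exponent)
    return random.randrange(2 ** k)

# Two stations that just collided pick independent waits in {0, 1};
# after repeated collisions the range widens, spreading retries out.
print(backoff_slots(1), backoff_slots(1))
print(backoff_slots(5))  # somewhere in 0..31
```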
The two approaches are the Physical Carrier Sense Method (PCSM) and the Virtual Carrier Sense Method (VCSM). PCSM is based on the ability of the computers to physically listen before they transmit. After a transmission is sent, the receiving computer acknowledges (ACK) the transmission by sending an ACK packet in reply; on receipt of the ACK packet, the source computer knows the transmission succeeded and can continue sending to the destination computer. VCSM does not rely on physically sensing the medium. A computer using this approach must first send a Request to Send (RTS) packet to the AP. If the medium is clear, the AP responds with a Clear to Send (CTS) packet back to the source computer, which may then begin transmission.
Explain the terms 100BaseT, 1000BaseT, 1000BaseF, 10GbE, and 10/100 Ethernet.
Historically, the original Ethernet specification was a 10 Mbps data rate using baseband signaling on thick coaxial cable, called 10Base5 (or "Thicknet"), capable of running 500 meters between hubs. Following 10Base5 was 10Base2, or "Thinnet" as we used to say. Thinnet used RG-58 coaxial cable, similar to what is used for cable TV; it was considerably cheaper and easier to work with, although it was limited to 185 meters between hubs. The 10Base-2 standard was often called "Cheapnet."
When twisted pair cabling was standardized for supporting Ethernet the T replaced the 2 to represent “twisted-pair”. Twisted pair is the most commonly used cable type for Ethernet. 10BaseT breaks down as 10 Mbps, baseband, and the “T” means it uses twisted pair wiring (actually unshielded twisted pair). It was the 10Base-T standard that revolutionized Ethernet, and made it the most popular type of LAN in the world.
Eventually the 10BaseT standard was improved to support Fast Ethernet, or 100BaseT, which breaks down as 100 Mbps baseband over twisted-pair cable. This was improved even further to 1000BaseT (1000 Mbps over twisted-pair wires) and 1000BaseF (1000 Mbps over fiber optic). A still faster revised standard continues to evolve: 10GbE, or 10 Gbps (10 billion bits per second) Ethernet.
Finally, 10/100Mbps Ethernet refers to the standard that can autosense which speed it needs to run at between the two speeds of 10 Mbps or 100 Mbps. It comes down to the type of NIC running at the individual node and the type of switch port that the node connects into. It is commonplace to run 10/100Mbps switches in LAN operating environments where there are older NICs already operating and no real business case requirements for upgrading these nodes.
- 100BaseT – This is a standard for fast ethernet communication over twisted pair cables. The “100” part of the name indicates the speed, in this case, 100 Mbit/s.
- 1000BaseT – This is also a standard for fast ethernet communication over twisted pair cables, although this one is faster than 100BaseT. The “1000” part of the name indicates the speed, in this case, 1000 Mbit/s.
- 1000BaseF – This is actually a standard for ethernet over optical cables (the F is for “Fiber”, at least that’s how I’ve always remembered it). It also communicates at gigabit speeds, 1000 Mbit/s.
- 10/100 Ethernet – This is the ethernet standard that most local area networks have used for probably 15 years. It denotes equipment that autosenses the link speed and runs at either 10 Mbit/s or 100 Mbit/s, depending on what the device at the other end supports. Most network equipment is compatible with 10/100, and it is the cheapest because it has been around a long time. Recently, however, with the rise of high-definition programming, networks are being upgraded to gigabit connections, which results in a 10/100/1000 Ethernet network.
I have been upgrading my equipment to gigabit so I can stream the rips of my blu-ray discs to my computers and entertainment centers. Although 100 Mbit/s is sufficient, the gigabit speed gives the connection a nice cushion in case some other transfer is occurring at the same time.
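The naming convention described above (speed, signaling method, medium) can be captured in a small parser. This is illustrative only; real IEEE names have many more variants than the four media letters assumed here:

```python
def parse_ethernet_name(name: str) -> dict:
    """Split an Ethernet standard name into its conventional parts:
    <speed><signaling><medium>, e.g. 100BaseT -> 100 Mbps, baseband,
    twisted pair."""
    media = {"T": "twisted pair", "F": "fiber optic",
             "2": "thin coax (~185 m)", "5": "thick coax (500 m)"}
    speed, _, medium = name.partition("Base")
    return {"speed_mbps": int(speed),
            "signaling": "baseband",
            "medium": media.get(medium, medium)}

print(parse_ethernet_name("1000BaseT"))
# {'speed_mbps': 1000, 'signaling': 'baseband', 'medium': 'twisted pair'}
```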
How does switched Ethernet differ from traditional Ethernet?
Traditional Ethernet uses a hub and has a bus logical topology. All nodes on a traditional Ethernet share the same circuit. Only one node can transmit at a time, and the data are delivered to all nodes. Only the intended recipient processes the data; the other nodes discard it.
A switched Ethernet uses a switch and has a star logical topology. Each node has a dedicated circuit connected to the switch. Packets are delivered to the intended recipient only. Multiple communications can take place simultaneously. For example, while computer A communicates with computer B, computer C can communicate with computer D at the same time.
Traditional Ethernet is logically configured in a bus topology, in which all devices effectively share one common circuit on which they can all transmit and from which they receive transmissions from all other connected devices. Each device processes only those transmissions for which it is the intended recipient and ignores all others. Traditional Ethernet is typically physically configured in a star topology, where one circuit physically runs from each connected device to a central hub. The hub passes any transmission it receives to all connected circuits, effectively merging all circuits into one circuit.
Switched Ethernet is logically and physically configured in a star topology, in which each device has a dedicated circuit back to the central switch. The switch receives an Ethernet frame from one circuit as input, processes the frame to determine the destination address, compares it against an internal table to determine the destination switch port, and outputs the frame on that port. The only two circuits involved in a unicast message are the one running between the source device and the switch and the one between the switch and the destination device.
I would also add that in switched Ethernet, each port has the full bandwidth (e.g., 10 Mbps) available to it, rather than sharing that bandwidth across every attached device as a hub does. There is a significant increase in throughput when traditional Ethernet is migrated to switched Ethernet, and the migration is simple: only the older hubs are replaced with switches, while the NICs in the computers remain the same.
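The switch's internal address table works by learning: a sketch of that forwarding logic, assuming a hypothetical 4-port switch and simplified frames carrying only source and destination MAC addresses:

```python
class LearningSwitch:
    """Minimal sketch of a learning switch's forwarding decision."""

    def __init__(self, num_ports: int = 4):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def receive(self, src_mac: str, dst_mac: str, in_port: int):
        # Learn: remember which port the source address lives on.
        self.mac_table[src_mac] = in_port
        # Forward: known destination -> one port; unknown -> flood
        # out every port except the one the frame arrived on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch()
print(sw.receive("aa:aa", "bb:bb", 0))  # [1, 2, 3] -- bb:bb unknown, flood
print(sw.receive("bb:bb", "aa:aa", 1))  # [0]       -- aa:aa was learned
```

Flooding unknown destinations is what makes a brand-new switch behave briefly like a hub until its table fills in.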
Describe the basic components of a wireless LAN and how they work.
WLANs use the same basic structure as LANs. There is a wireless network interface card (NIC) that is built into a desktop or laptop computer, a wireless access point performing the same functions as a hub or switch. Finally, instead of cable, there is a set of radio frequencies that are used to transport data.
The NIC is a radio transceiver that sends and receives radio signals through a short range, usually about 100 meters or 300 feet. A central wireless Access Point, or AP, is a radio transceiver that plays the same role as a hub or switch in wired Ethernet LANs. All NICs in the LAN send their packets to the AP and then the AP retransmits the packet over the wired network to its destination.
The Basic components of WLAN are as follows:
1. Access Point (AP): Clients connect to the network through the access point. The AP uses an 802.11-standard modulation technique and operates within a specific frequency spectrum. Signals can be both sent and received at the access point.
2. Network Interface Card (NIC): A computer or workstation uses a NIC to connect to the network. The NIC is inserted into an expansion slot on the computer's motherboard and provides the hardware interface between the computer and the network. A NIC supports either an Ethernet connection or a Wi-Fi connection.
3. Bridge: Bridges connect multiple wired and wireless LANs at the MAC layer. Wireless bridges can cover longer distances than an access point.
4. Workgroup Bridge (WGB): A workgroup bridge connects a limited number of wired clients to the wireless network.
5. Antenna: The antenna radiates the modulated signals into the air. Propagation pattern, gain, and transmission power are the important characteristics of an antenna.
6. Switches & Routers: Computers connect to switches and routers to reach the rest of the network.
One of the most common wireless devices in most home LANs is the wireless router, which combines many of the components you described. Here in my house, I have two Dlink DIR 655 wireless routers configured as access points. These have two antennas, an access point, a router, and a switch, all built into the same housing.
How does wireless Ethernet perform media access control?
Media access control in Wi-Fi uses Carrier Sense Multiple Access with Collision Avoidance, or CSMA/CA, which is similar to the media access control used in Ethernet LANs. The computers “listen” before and when they transmit, and if there is not a collision, all is well. Wi-Fi does attempt to avoid a collision more than regular Ethernet LANs do, however, by using two techniques called Distributed Coordination Function and Point Coordination Function.
Distributed Coordination Function (DCF) relies on the ability of computers to physically listen before they transmit. With DCF, each frame in CSMA/CA is sent using stop-and-wait ARQ, designed so that no other computer begins transmitting during the waiting period. CSMA/CA with DCF can suffer from the "hidden node problem": computers at the edge of the network may not sense every transmission, increasing the likelihood of collisions.
With Point Coordination Function, the node that wants to transmit first sends a request to the AP to reserve the circuit (radio frequency). The AP then sends back a “clear to transmit” message to the node, which can then start to transmit for the amount of time reserved. PCF solves the “hidden node problem”.
Wireless Ethernet performs media access control using Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), which is similar in approach to the contention-based CSMA/CD used by wired Ethernet. A node on a wireless network using CSMA/CA listens before it transmits, and if no one else is transmitting, it proceeds to send its data packets. Because determining whether a collision has occurred is more difficult in radio transmission than on a wired network, Wi-Fi attempts to avoid collisions to a greater extent than wired Ethernet does. CSMA/CA has two media access control approaches: distributed coordination function (DCF), also referred to as physical carrier sense, and point coordination function (PCF). With DCF, the node transmits a packet and waits for an acknowledgment packet before continuing to send data. With PCF, the wireless access point receives a request-to-transmit packet from the node and responds with a clear-to-transmit message that tells the node how long it may send data.
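The PCF reservation exchange described above can be sketched as a toy model. All names and the tick-based clock are illustrative; real 802.11 frames carry durations in microseconds and much more state:

```python
class AccessPoint:
    """Toy sketch of an AP granting the medium via an RTS/CTS-style
    exchange: one reservation at a time, measured in clock ticks."""

    def __init__(self):
        self.reserved_until = 0  # simulated clock tick

    def request_to_send(self, station: str, now: int, duration: int):
        """Grant a CTS only if the medium is not already reserved."""
        if now >= self.reserved_until:
            self.reserved_until = now + duration
            return ("CTS", station, duration)
        return None  # station must back off and retry later

ap = AccessPoint()
print(ap.request_to_send("laptop", now=0, duration=5))  # ('CTS', 'laptop', 5)
print(ap.request_to_send("phone", now=3, duration=5))   # None (medium busy)
print(ap.request_to_send("phone", now=6, duration=5))   # ('CTS', 'phone', 5)
```

Because every station hears the CTS and its duration, even hidden nodes know how long to stay quiet, which is why PCF avoids the hidden node problem.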
What kinds of wired or wireless LANs are used in your organization? Describe them briefly.
In my organization we have several different types of wired LANs. This is because our company grew by M&A activity. It is also because our buildings were built at different times. In my building we have the standard switched 100Base-T over category 6 wiring. My building is relatively new having been built in the last year and a half. As far as wireless LANs I believe we are running 802.11n. Both the wired and wireless LANs in my building are adequate.
1) Copy the Request Line and explain it.
Ans) Request Line: GET /ctl/ HTTP/1.1
This method requests a representation of the specified resource. It is a safe method intended only for information retrieval and should not change the state of the server.
Path to file: The path is the part of the URL after the host name; it is also called the request URI.
HTTP version: The request line ends with the HTTP version number that the browser understands. The version number ensures that the web server does not attempt to use a more advanced or newer version of the HTTP standard than the browser can understand.
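The three-part structure of the request line can be shown with a minimal parser (a sketch; a real parser must also validate each field):

```python
def parse_request_line(line: str) -> dict:
    """Split a request line like 'GET /ctl/ HTTP/1.1' into its
    method, path (request URI), and HTTP version."""
    method, path, version = line.strip().split(" ")
    return {"method": method, "path": path, "version": version}

print(parse_request_line("GET /ctl/ HTTP/1.1"))
# {'method': 'GET', 'path': '/ctl/', 'version': 'HTTP/1.1'}
```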
2) Copy the Request Header and explain it.
Ans) Request Header :
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.77 Safari/535.7
It includes the name of the web browser used and contains information about the user agent originating the request. This is used for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user-agent limitations.
The Referer request-header field allows the client to specify, for the server’s benefit, the address (URI) of the resource from which the Request-URI was obtained. This allows a server to generate lists of back-links to resources for interest, logging, optimized caching, etc. It also allows obsolete or mistyped links to be traced for maintenance. The Referer field must not be sent if the Request-URI was obtained from a source that does not have its own URI, such as input from the user keyboard.
3. Copy the Response Status and explain it.
The Response Status contains the HTTP version number the server has used and a status code. Here the code is 200, which means "okay"; 404 would mean "not found".
The response status line ends with CR and LF, which mean carriage return and line feed.
4) Copy the Response Header and explain these fields: Last-Modified, Server, Date, Content-Length.
1) Last-Modified: The Last-Modified entity-header field indicates the date and time at which the sender believes the resource was last modified. The exact meaning of this field is defined in terms of how the recipient should interpret it; if the recipient has a copy of this resource which is older than the date given by the Last-Modified field, that copy should be considered old.
2) Server: The Server response-header field contains information about the software used by the origin server to handle the request.
3) Date: The Date field gives the date and time at which the response message was generated and sent.
4) Content Length: This entity-header field indicates the size of the Entity-Body, in decimal number of octets, sent to the recipient or, in the case of the HEAD method, the size of the Entity-Body that would have been sent had the request been a GET.
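The four response-header fields above can be pulled out of a raw HTTP reply with a few lines of parsing. The sample values below are illustrative, not taken from a real server:

```python
# A raw HTTP response: status line, header block, blank line.
raw = (
    "HTTP/1.1 200 OK\r\n"
    "Date: Mon, 06 Feb 2012 18:32:15 GMT\r\n"
    "Server: Apache\r\n"
    "Last-Modified: Wed, 01 Feb 2012 11:20:01 GMT\r\n"
    "Content-Length: 1846\r\n"
    "\r\n"
)
# Split off the status line, then turn "Name: value" lines into a dict.
status_line, _, header_block = raw.partition("\r\n")
headers = dict(line.split(": ", 1)
               for line in header_block.strip().split("\r\n"))

print(status_line)                # HTTP/1.1 200 OK
print(headers["Server"])          # Apache
print(headers["Content-Length"])  # 1846
```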
An IP address is a unique address that computing devices use to identify themselves and communicate with other devices on an Internet Protocol network. An IP address serves two principal functions: host or network interface identification and location addressing. Any device connected to the IP network must have a unique IP address within its network.
A subnet mask separates the IP address into the network and host addresses. Subnetting further divides the host part of an IP address into a subnet and host address. It is called a subnet mask because it is used to identify the network address of an IP address: a bitwise AND operation is performed between the address and the netmask.
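That bitwise AND can be shown directly on the raw 32-bit values (the address and mask below are illustrative):

```python
def network_address(ip: str, mask: str) -> str:
    """AND an IPv4 address with its subnet mask, bit by bit,
    to recover the network address."""
    ip_int = int.from_bytes(bytes(map(int, ip.split("."))), "big")
    mask_int = int.from_bytes(bytes(map(int, mask.split("."))), "big")
    net = ip_int & mask_int  # the mask zeroes out the host bits
    return ".".join(str(b) for b in net.to_bytes(4, "big"))

print(network_address("192.168.1.130", "255.255.255.0"))  # 192.168.1.0
```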
The Physical or MAC (Media Access Control) address is a computer’s unique hardware number. When you’re connected to the Internet from your computer, a correspondence table relates your IP address to your computer’s physical (MAC) address on the LAN.
The Address Resolution protocol (ARP) allows a host to find the media access control address of a host on the same physical network, given the IP address of the host. To make ARP efficient, each computer caches IP-to-media access control address mappings to eliminate repetitive ARP broadcast requests.
You can use the arp command to view and modify the ARP table entries on the local computer. The arp command is useful for viewing the ARP cache and resolving address resolution problems.
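The caching behavior described above can be sketched as a toy ARP cache: answer from the cache when possible, otherwise simulate a broadcast and remember the reply. The IP-to-MAC mapping below is made up:

```python
arp_cache = {}  # IP address -> MAC address, filled in on demand

def resolve(ip: str, broadcast_lookup) -> str:
    """Return the MAC for ip, broadcasting (via the supplied lookup
    function) only on a cache miss."""
    if ip not in arp_cache:               # miss -> ARP broadcast request
        arp_cache[ip] = broadcast_lookup(ip)
    return arp_cache[ip]                  # hit -> no broadcast needed

# Stand-in for the LAN answering a broadcast (hypothetical mapping):
lan = {"192.168.1.20": "00:1a:2b:3c:4d:5e"}
print(resolve("192.168.1.20", lan.__getitem__))  # broadcast, then cached
print(resolve("192.168.1.20", lan.__getitem__))  # served from the cache
```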
Time to live (TTL) governs how long a DNS server may cache a DNS record. It represents the amount of time that a DNS record for a certain host remains in the cache memory of a DNS server after the server has located the host's matching IP address.
A DNS cache contains entries that translate Internet domain names (such as "google.com") to IP addresses. The Internet's Domain Name System (DNS) involves caching both on Internet DNS servers and on the client computers that contact DNS servers. These caches provide an efficient way for DNS to keep the Internet synchronized as the IP addresses of some servers change and as new servers come online.
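TTL-based expiry can be sketched with a small cache class. The name, address, and TTLs below are made up for illustration:

```python
import time

class DnsCache:
    """Toy DNS cache: entries disappear once their TTL elapses."""

    def __init__(self):
        self._entries = {}  # name -> (ip, absolute expiry time)

    def put(self, name: str, ip: str, ttl_seconds: float):
        self._entries[name] = (ip, time.monotonic() + ttl_seconds)

    def get(self, name: str):
        entry = self._entries.get(name)
        if entry is None:
            return None
        ip, expires_at = entry
        if time.monotonic() >= expires_at:  # TTL expired: must re-query
            del self._entries[name]
            return None
        return ip

cache = DnsCache()
cache.put("example.com", "93.184.216.34", ttl_seconds=0.05)
print(cache.get("example.com"))  # 93.184.216.34 (still fresh)
time.sleep(0.1)
print(cache.get("example.com"))  # None (TTL expired, re-query needed)
```

A longer TTL means fewer queries to authoritative servers but slower propagation when a host's address changes, which is exactly the synchronization trade-off described above.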