Drag and Drop Questions 1

September 10th, 2011 in CCDA 640-864

Here you will find answers to Drag and Drop Questions – Part 1

Question 1

Drag the data center property on the left to the design aspect on the right that it is most apt to affect.



Space: number of racks, equipment, cabling, and people
Weight load: rack servers vs blade servers
Power: variability of computing load, computing power and memory requirements
Cooling: arranging equipment racks face-to-face or back-to-back
Cabling: abundant, variable, well organized and easy to maintain
Security: disasters, fire suppression and alarm systems


The data center space includes the number of racks for the equipment that will be installed. Another factor to consider is the number of employees who will work in that data center.

Rack servers are low cost and provide high performance, but they take up space and consume a lot of energy to operate. Blade servers provide similar computing power compared to rack-mount servers, but require less space, power, and cabling. The chassis in most blade servers allows for shared power, Ethernet LAN, and Fibre Channel SAN connections, which reduces the number of cables needed.

The power in the data center facility is used to power cooling devices, servers, storage equipment, the network, and some lighting equipment. In server environments, the power usage depends on the computing load placed on the server. For example, if the server needs to work harder by processing more data, it has to draw more AC power from the power supply, which in turn creates more heat that must be removed by the cooling system.

Cooling is used to control the temperature and humidity around the devices. The cabinets and racks should be arranged in the data center with an alternating pattern of “cold” and “hot” aisles. The cold aisle should have equipment arranged face to face, and the hot aisle should have equipment arranged back to back. In the cold aisle, there should be perforated floor tiles drawing cold air from the floor into the face of the equipment. This cold air passes through the equipment and flushes out the back into the hot aisle. The hot aisle does not have any perforated tiles, and this design prevents the hot air from mixing with the cold air.

The cabling in the data center is known as the passive infrastructure. Data center teams rely on a structured and well-organized cabling plant. It is important for cabling to be easy to maintain, abundant and capable of supporting various media types and requirements for proper data center operations.

Fire suppression and alarm systems are considered physical security and should be in place to protect equipment and data from natural disasters and theft.

(Reference: CCDA 640-864 Official Cert Guide)


Drag and Drop Questions 2

September 10th, 2011 in CCDA 640-864

Here you will find answers to Drag and Drop Questions – Part 2

Question 1

Drag the network characteristic on the left to the design method on the right which will best ensure redundancy at the building distribution layer.



Layer 2 between distribution and access layers, with a Layer 3 link between the distribution switches:
FHRP for convergence, no VLANs span between access layer switches across the distribution switches

Layer 2 between distribution and access layers, with a Layer 2 link between the distribution switches:
Support Layer 2 VLANs spanning multiple access layer switches across the distribution switches

VSS: Convergence (FHRP) is not an issue

Question 2

Click and drag the QoS feature type on the left to the category of QoS mechanism on the right.



+ classification and marking: ACLs
+ congestion avoidance: WRED
+ traffic conditioners: CAR
+ congestion management: LLQ
+ link efficiency: LFI


Classification is the process of partitioning traffic into multiple priority levels or classes of service. Information in the frame or packet header is inspected, and the frame’s priority is determined. Marking is the process of changing the priority or class of service (CoS) setting within a frame or packet to indicate its classification. Classification is usually performed with access control lists (ACL), QoS class maps, or route maps, using various match criteria.
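As a rough illustration of classification and marking (not IOS syntax; the match predicates, port numbers, and field names below are invented for the example), traffic can be partitioned by ACL-like match criteria and then marked with an IP Precedence value:

```python
# Sketch of classification and marking: inspect header fields against
# ACL-like match criteria, then rewrite the packet's precedence field.
# Rules and packet fields are illustrative assumptions, not an IOS API.

def classify_and_mark(pkt):
    # ACL-style match criteria: (predicate, precedence value to mark)
    rules = [
        # RTP voice range (hypothetical): mark as high precedence
        (lambda p: p["proto"] == "udp" and 16384 <= p["dport"] < 32768, 5),
        # FTP control traffic: mark as low precedence
        (lambda p: p["proto"] == "tcp" and p["dport"] == 21, 1),
    ]
    for match, prec in rules:
        if match(pkt):
            pkt["precedence"] = prec      # marking: change the priority setting
            return pkt
    pkt["precedence"] = 0                 # unmatched traffic: best effort
    return pkt
```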

Congestion-avoidance techniques monitor network traffic loads so that congestion can be anticipated and avoided before it becomes problematic. Congestion-avoidance techniques allow packets from streams identified as being eligible for early discard (those with lower priority) to be dropped when the queue is getting full. Congestion avoidance techniques provide preferential treatment for high priority traffic under congestion situations while maximizing network throughput and capacity utilization and minimizing packet loss and delay.

Weighted random early detection (WRED) is the Cisco implementation of the random early detection (RED) mechanism. WRED extends RED by using the IP Precedence bits in the IP packet header to determine which traffic should be dropped; the drop-selection process is weighted by the IP precedence.
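A minimal sketch of the WRED drop decision, assuming illustrative per-precedence thresholds (Cisco's actual implementation uses an exponentially weighted average queue depth and configurable drop profiles):

```python
# Illustrative sketch of a WRED-style drop decision, not Cisco's exact algorithm.
# Assumption: each IP Precedence value has its own (min, max, max-drop-probability)
# profile; higher precedence gets a higher min threshold, so it is dropped later.

import random

WRED_PROFILES = {
    0: (10, 40, 0.5),   # best effort: eligible for early discard soonest
    5: (30, 40, 0.1),   # high precedence (e.g. voice): dropped last
}

def wred_drop(avg_queue_depth, precedence):
    """Return True if the packet should be dropped early."""
    min_th, max_th, max_p = WRED_PROFILES[precedence]
    if avg_queue_depth < min_th:
        return False                      # below min threshold: never drop
    if avg_queue_depth >= max_th:
        return True                       # at/above max threshold: tail drop
    # Between thresholds: drop probability rises linearly toward max_p
    p = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < p
```

Note how the precedence "weight" shows up: at an average depth of 25, precedence-0 packets are already drop-eligible while precedence-5 packets are still always forwarded.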

Traffic conditioning consists of policing and shaping. A policer either discards the packet or modifies some aspect of it, such as its IP Precedence or CoS bits, when the policing agent determines that the packet meets a given criterion. In comparison, traffic shaping attempts to adjust the transmission rate of packets that match a certain criterion. A shaper typically delays excess traffic by using a buffer or queuing mechanism to hold packets and shape the flow when the source’s data rate is higher than expected. For example, generic traffic shaping uses a weighted fair queue to delay packets to shape the flow. Cisco’s Committed Access Rate (CAR) is a traffic-conditioning mechanism that implements rate limiting (policing).
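The policing-versus-shaping distinction can be sketched with a simple token bucket; the class, rates, and burst sizes below are illustrative assumptions, not Cisco's exact algorithms:

```python
# Sketch contrasting a policer (drops excess) with a shaper (delays excess),
# both metering traffic with the same token bucket. Illustrative only.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes          # bucket depth (allowed burst)
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True                   # packet conforms to the contract
        return False                      # packet exceeds the contract

def police(bucket, size, now):
    """Policer: a nonconforming packet is dropped (or could be remarked)."""
    return "forward" if bucket.conforms(size, now) else "drop"

def shape(bucket, size, now):
    """Shaper: a nonconforming packet is buffered and sent later."""
    if bucket.conforms(size, now):
        return ("send", now)
    wait = (size - bucket.tokens) / bucket.rate   # time to accumulate the deficit
    bucket.tokens = 0.0
    return ("send", now + wait)
```

With an 8 kbps contract and a 1500-byte burst, a second back-to-back 1500-byte packet is dropped by the policer but merely delayed by the shaper.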

Congestion management includes two separate processes: queuing, which separates traffic into various queues or buffers, and scheduling, which decides from which queue traffic is to be sent next. There are two types of queues: the hardware queue (also called the transmit queue or TxQ) and software queues. Software queues schedule packets into the hardware queue based on the QoS requirements and include the following types: weighted fair queuing (WFQ), priority queuing (PQ), custom queuing (CQ), class-based WFQ (CBWFQ), and low latency queuing (LLQ).

LLQ is also known as Priority Queuing–Class-Based Weighted Fair Queuing (PQ-CBWFQ). LLQ provides a single strict-priority queue and is preferred for VoIP networks because it can also guarantee bandwidth for other classes of traffic. For example, all voice call traffic would be assigned to the priority queue, VoIP signaling and video would be assigned to a traffic class, FTP traffic would be assigned to a low-priority traffic class, and all other traffic would be assigned to a regular class.
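The LLQ behavior described above can be sketched as a strict-priority queue in front of weighted classes; the class names are illustrative, and a plain round robin stands in for CBWFQ's bandwidth-weighted scheduling:

```python
# Minimal LLQ-style scheduling sketch: one strict-priority queue for voice,
# plus non-priority classes for everything else. Illustrative, not IOS behavior.

from collections import deque

class LLQScheduler:
    def __init__(self):
        self.priority = deque()                    # strict-priority queue (voice)
        self.classes = {"video": deque(), "ftp": deque(), "default": deque()}
        self.order = ["video", "ftp", "default"]   # round robin stands in for CBWFQ
        self.next_idx = 0

    def enqueue(self, pkt, cls):
        if cls == "voice":
            self.priority.append(pkt)              # voice goes to the LLQ
        else:
            self.classes[cls].append(pkt)

    def dequeue(self):
        # The priority queue is always served first; real LLQ polices it so it
        # cannot starve the other classes.
        if self.priority:
            return self.priority.popleft()
        # Otherwise serve the remaining classes in turn.
        for _ in range(len(self.order)):
            cls = self.order[self.next_idx]
            self.next_idx = (self.next_idx + 1) % len(self.order)
            if self.classes[cls]:
                return self.classes[cls].popleft()
        return None
```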

Link efficiency techniques include link fragmentation and interleaving (LFI) and compression. LFI prevents small voice packets from being queued behind large data packets, which could lead to unacceptable delays on low-speed links. With LFI, the voice gateway fragments large packets into smaller, equal-sized frames and interleaves them with the small voice packets so that a voice packet does not have to wait until an entire large data packet is sent. LFI reduces voice delay and makes it more predictable.
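A quick serialization-delay calculation shows why LFI matters on low-speed links (the 64-kbps link speed and 80-byte fragment size below are illustrative numbers):

```python
# Worked example: serialization delay of a packet on a slow link, i.e. how
# long the link is tied up while the packet is clocked out.

def serialization_delay_ms(packet_bytes, link_bps):
    return packet_bytes * 8 / link_bps * 1000

# A 1500-byte data packet on a 64 kbps link occupies the link for 187.5 ms,
# far beyond typical one-way voice delay budgets (~150 ms end to end).
print(serialization_delay_ms(1500, 64000))   # 187.5

# Fragmented into 80-byte frames, a voice packet waits at most one fragment
# time, 10 ms, before being interleaved onto the link.
print(serialization_delay_ms(80, 64000))     # 10.0
```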

(Reference: Cisco Press Designing for Cisco Internetwork Solutions)