A Single TCP Sender

For our first script we demonstrate a single sender sending through a router. Here is the topology we will build, with the delays and bandwidths:

A ---- 10 Mbps, 10 ms ---- R ---- 800 kbps, 50 ms ---- B

The smaller bandwidth on the R–B link makes it the bottleneck. The default TCP packet size in ns-2 is 1000 bytes, so the bottleneck bandwidth is nominally 100 packets/sec or 0.1 packets/ms.


The propagation delays give an RTTnoLoad of 2×(10+50) = 120 ms, so the bandwidth×RTTnoLoad product is 0.1 packets/ms × 120 ms = 12 packets. Actually, the default size of 1000 bytes refers to the data segment, and there are an additional 40 bytes of TCP and IP header. We therefore set packetSize_ to 960 so the actual transmitted size is 1000 bytes; this makes the bottleneck bandwidth exactly 100 packets/sec.

We want the router R to have a queue capacity of 6 packets, plus the one currently being transmitted; we set queue-limit = 7 for this. We create a TCP connection between A and B, create an ftp sender on top of that, and run the simulation for 10 seconds. The nodes A, B and R are named; the links are not.

The ns-2 default maximum window size is 20; we increase that to 100 with $tcp0 set window_ 100; otherwise we will see an artificial cap on the cwnd growth (in the next section we will increase this to 65000).

The script itself is in a file basic1.tcl, with the 1 here signifying a single sender.

# basic1.tcl simulation: A---R---B

#Create a simulator object
set ns [new Simulator]

#Open the nam file basic1.nam and the variable-trace file basic1.tr
set namfile [open basic1.nam w]
$ns namtrace-all $namfile
set tracefile [open basic1.tr w]
$ns trace-all $tracefile

#Define a 'finish' procedure

proc finish {} {
        global ns namfile tracefile
        $ns flush-trace
        close $namfile
        close $tracefile
        exit 0
}

#Create the network nodes
set A [$ns node]

set R [$ns node]

set B [$ns node]

#Create a duplex link between the nodes

$ns duplex-link $A $R 10Mb 10ms DropTail

$ns duplex-link $R $B 800Kb 50ms DropTail

# The queue size at $R is to be 7, including the packet being sent
$ns queue-limit $R $B 7

# some hints for nam

# color packets of flow 0 red

$ns color 0 Red

$ns duplex-link-op $A $R orient right

$ns duplex-link-op $R $B orient right

$ns duplex-link-op $R $B queuePos 0.5

 


# Create a TCP sending agent and attach it to A
set tcp0 [new Agent/TCP/Reno]

# We make our one-and-only flow be flow 0

$tcp0 set class_ 0
$tcp0 set window_ 100
$tcp0 set packetSize_ 960
$ns attach-agent $A $tcp0
# Let's trace some variables
$tcp0 attach $tracefile
$tcp0 tracevar cwnd_
$tcp0 tracevar ssthresh_
$tcp0 tracevar ack_
$tcp0 tracevar maxseq_
#Create a TCP receive agent (a traffic sink) and attach it to B
set end0 [new Agent/TCPSink]
$ns attach-agent $B $end0

#Connect the traffic source with the traffic sink
$ns connect $tcp0 $end0

#Schedule the connection data flow; start sending data at T=0, stop at T=10.0
set myftp [new Application/FTP]
$myftp attach-agent $tcp0
$ns at 0.0 "$myftp start"
$ns at 10.0 "finish"

#Run the simulation
$ns run

 

After running this script, there is no command-line output (because we did not ask for any); however, the files basic1.tr and basic1.nam are created. Perhaps the simplest thing to do at this point is to view the animation with nam, using the command nam basic1.nam.

In the animation we can see slow start at the beginning, as first one, then two, then four and then eight packets are sent. A little past T=0.7, we can see a string of packet losses. This is visible in the animation as a tumbling series of red squares from the top of R’s queue. After that, the TCP sawtooth takes over; we alternate between the cwnd linear-increase phase (congestion avoidance), packet loss, and threshold slow start. During the linear-increase phase the bottleneck link is at first incompletely utilized; once the bottleneck link is saturated the router queue begins to build.

Graph of cwnd v time

Here is a graph of cwnd versus time, prepared (see below) from data in the trace file basic1.tr:

[Graph: cwnd versus time for the basic1.tcl simulation, plotted from basic1.tr]

Slow start is at the left edge. Unbounded slow start runs until about T=0.75, when a timeout occurs; bounded slow start runs from about T=1.2 to T=1.8. After that, all losses have been handled with fast recovery (we can tell this because cwnd does not drop below half its previous peak). The first three teeth following slow start have heights (cwnd peak values) of 20.931, 20.934 and 20.934 respectively; when the simulation is extended to 1000 seconds all subsequent peaks have exactly the same height, cwnd = 20.935. The spacing between the peaks is also constant, 1.946 seconds.

Because cwnd is incremented by ns-2 after each arriving ACK as described in 19.2.1 TCP Reno Per-ACK Responses, during the linear-increase phase there are a great many data points jammed together; the bunching effect is made stronger by the choice here of a dot size large enough to make the slow-start points clearly visible. This gives the appearance of continuous line segments. Upon close examination, these line segments are slightly concave, as discussed in 22.5 Highspeed TCP, due to the increase in RTT as the queue fills. Individual flights of packets can just be made out at the lower-left end of each tooth, especially the first.

The Trace File

To examine the simulation (or, for that matter, the animation) more quantitatively, we turn to a more detailed analysis of the trace file, which contains records for all packet events plus (because it was requested) variable-trace information for cwnd_, ssthresh_, ack_ and maxseq_; these were the variables for which we requested traces in the basic1.tcl file above.

The bulk of the trace-file lines are event records; three sample records are below. (These are in the default event-record format for point-to-point links; ns-2 has other event-record formats for wireless. See use-newtrace in 31.6 Wireless Simulation below.)

r 0.58616 0 1 tcp 1000 ------- 0 0.0 2.0 28 43
+ 0.58616 1 2 tcp 1000 ------- 0 0.0 2.0 28 43
d 0.58616 1 2 tcp 1000 ------- 0 0.0 2.0 28 43


The twelve event-record fields are as follows:

1. r for received, d for dropped, + for enqueued, - for dequeued. Every arriving packet is enqueued, even if it is immediately dequeued. The third packet above was the first dropped packet in the entire simulation.

2. the time, in seconds.

3. the number of the sending node, in the order of node definition and starting at 0. If the first field was “+”, “-” or “d”, this is the number of the node doing the enqueuing, dequeuing or dropping. Events beginning with “-” represent this node sending the packet.

4. the number of the destination node. If the first field was “r”, this record represents the packet’s arrival at this node.

5. the protocol.

6. the packet size, 960 bytes of data (as we requested) plus 20 of TCP header and 20 of IP header.

7. some TCP flags, here represented as “-------” because none of the flags are set. Flags include E and N for ECN and A for reduction in the advertised winsize.

8. the flow ID. Here we have only one: flow 0. This value can be set via the fid_ variable in the Tcl source file; an example appears in the two-sender version below. The same flow ID is used for both directions of a TCP connection.

9. the source node (0.0), in form (node . connectionID). ConnectionID numbers here are simply an abstraction for connection endpoints; while they superficially resemble port numbers, the node in question need not even simulate IP, and each connection has a unique connectionID at each end. ConnectionID numbers start at 0.

10. the destination node (2.0), again with connectionID.

11. the packet sequence number as a TCP packet, starting from 0.

12. a packet identifier uniquely identifying this packet throughout the simulation; when a packet is forwarded on a new link it keeps its old sequence number but gets a new packet identifier.

The three trace lines above represent the arrival of packet 28 at R, the enqueuing of packet 28, and then the dropping of the packet. All these happen at the same instant.
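As a quick illustration of this layout (a minimal Python sketch, not part of the simulation or of the nstrace module described below), the dropped-packet record can be split into its twelve whitespace-separated fields and the interesting ones named directly:

# split one event record into its twelve fields; names follow the list above
record = "d 0.58616 1 2 tcp 1000 ------- 0 0.0 2.0 28 43"
(event, time, fromnode, tonode, proto, size,
 flags, fid, src, dst, seqno, pktid) = record.split()
print(event, float(time), int(seqno), int(pktid))   # d 0.58616 28 43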

Mixed in with the event records are variable-trace records; here are two examples, for the ack_ and cwnd_ variables:

0.38333 0 0 2 0 ack_ 3
0.38333 0 0 2 0 cwnd_ 5.000

 

The format of these lines is

1. time

2. source node of the flow

3. source port (as above, an abstract connection endpoint, not a simulated TCP port)

4. destination node of the flow

5. destination port


6. name of the traced variable

7. value of the traced variable

The two variable-trace records above are from the instant when the variable cwnd_ was set to 5. It was initially 1 at the start of the simulation, and was incremented upon arrival of each of ack0, ack1, ack2 and ack3. The first line shows the ack counter reaching 3 (that is, the arrival of ack3); the second line shows the resultant change in cwnd_.

The graph above of cwnd v time was made by selecting out these cwnd_ lines and plotting the first field (time) and the last. (Recall that during the linear-increase phase cwnd is incremented by 1.0/cwnd with each arriving new ACK.)
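For example, a short Python sketch along the following lines will extract the (time, cwnd) pairs and plot them; this assumes matplotlib is available and is not part of the book's toolset:

#!/usr/bin/python3
# extract (time, cwnd) pairs from the variable-trace records in basic1.tr
import matplotlib.pyplot as plt

times = []
cwnds = []
with open("basic1.tr") as f:
    for line in f:
        fields = line.split()
        # variable-trace records have seven fields; the sixth is the variable name
        if len(fields) == 7 and fields[5] == "cwnd_":
            times.append(float(fields[0]))
            cwnds.append(float(fields[6]))

plt.plot(times, cwnds, ".", markersize=2)
plt.xlabel("time (seconds)")
plt.ylabel("cwnd")
plt.show()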

The last ack in the tracefile is

      9.98029 0 0 2 0 ack_ 808

Since ACKs started with number 0, this means we sent 809 packets successfully. The theoretical bandwidth was 100 packets/sec × 10 sec = 1000 packets, so this is about an 81% goodput. Use of the ack_ value this way tells us how much data was actually delivered. An alternative statistic is the final value of maxseq_ which represents the number of distinct packets sent; the last maxseq_ line is

       9.99029 0 0 2 0 maxseq_ 829

As can be seen from the cwnd-v-time graph above, slow start ends around T=2.0. If we measure goodput from then until the end, we do a little better than 81%. The first data packet sent after T=2.0 is at 2.043184; it is data packet 72. 737 packets are sent from packet 72 until packet 808 at the end; 737 packets in 8.0 seconds is a goodput of 92%.
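The same final values can be pulled out of the trace file with a short Python sketch like the following (a hypothetical helper, not part of nstrace):

#!/usr/bin/python3
# find the last ack_ and maxseq_ variable-trace records in basic1.tr and
# report goodput relative to the theoretical 1000 packets (100 packets/sec for 10 sec)
last = {}
with open("basic1.tr") as f:
    for line in f:
        fields = line.split()
        if len(fields) == 7 and fields[5] in ("ack_", "maxseq_"):
            last[fields[5]] = int(float(fields[6]))

delivered = last["ack_"] + 1          # ACK numbering starts at 0
print("packets delivered:", delivered, " goodput:", delivered/1000.0)
print("distinct data packets sent (maxseq_):", last["maxseq_"])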

It is not necessary to use the tracefile to get the final values of TCP variables such as ack_ and maxseq_; they can be printed from within the Tcl script’s finish() procedure. The following example illustrates this, where ack_ and maxseq_ come from the connection tcp0. The global line lists global variables that are to be made available within the body of the procedure; tcp0 must be among these.

proc finish {} {
        global ns namfile tracefile tcp0
        $ns flush-trace
        close $namfile
        close $tracefile
        set lastACK [$tcp0 set ack_]
        set lastSEQ [$tcp0 set maxseq_]
        puts stdout "final ack: $lastACK, final seq num: $lastSEQ"
        exit 0
}

 

For TCP sending agents, useful member variables to set include:

• class_: the identifying number of a flow

• window_: the maximum window size; the default is much too small.

• packetSize_: we set this to 960 above so the total packet size would be 1000.

Useful member variables either to trace or to print at the simulation’s end include:

• maxseq_: the number of the last packet sent, starting at 1 for data packets


• ack_: the number of the last ACK received

• cwnd_: the current value of the congestion window

• nrexmitpack_: the number of retransmitted packets

To get a count of the data actually received, we need to look at the TCPSink object, $end0 above. There is no packet counter here, but we can retrieve the value bytes_ representing the total number of bytes received. This will include 40 bytes from the three-way handshake, which can either be ignored or subtracted:

set ACKed [expr round ( [$end0 set bytes_] / 1000.0)]

 

This is a slightly better estimate of goodput. In very long simulations, however, this (or any other) byte count will wrap around long before any of the packet counters wrap around.

In the example above every packet event was traced, a consequence of the line

$ns trace-all $tracefile

 

We could instead have asked only to trace particular links. For example, the following line would request tracing for the bottleneck (R→B) link:

$ns trace-queue $R $B $tracefile

This is often useful when the overall volume of tracing is large, and we are interested in the bottleneck link only. In long simulations, full tracing can increase the runtime 10-fold; limiting tracing only to what is actually needed can be quite important.

Single Losses

By examining the basic1.tr file above for packet-drop records, we can confirm that only a single drop occurs at the end of each tooth, as was argued in 19.8 Single Packet Losses. After slow start finishes at around T=2, the next few drops are at T=3.963408, T=5.909568 and T=7.855728. The first of these drops is of Data[254], as is shown by the following record:

d 3.963408 1 2 tcp 1000 ------- 0 0.0 2.0 254 531

 

Like most “real” implementations, the ns-2 implementation of TCP increments cwnd (cwnd_ in the tracefile) by 1/cwnd on each new ACK (19.2.1 TCP Reno Per-ACK Responses). An additional packet is sent by A whenever cwnd is increased this way past another whole number; that is, whenever floor(cwnd) increases. At T=3.95181, cwnd_ was incremented to 20.001, triggering the double transmission of Data[253] and the doomed Data[254]. At this point the RTT is around 190 ms.

The loss of Data[254] is discovered by Fast Retransmit when the third dupACK[253] arrives. The first ACK[253] arrives at A at T=4.141808, and the dupACKs arrive every 10 ms, clocked by the 10 ms/packet transmission rate of R. Thus, A detects the loss at T=4.171808; at this time we see cwnd_ reduced by half to 10.465; the tracefile times for variables are only to 5 decimal places, so this is recorded as

4.17181 0 0 2 0 cwnd_ 10.465


That represents an elapsed time from when Data[254] was dropped of 207.7 ms, more than one RTT. As described in 19.8 Single Packet Losses, however, A stopped incrementing cwnd_ when the first ACK[253] arrived at T=4.141808. The value of cwnd_ at that point is only 20.931, not quite large enough to trigger transmission of another back-to-back pair and thus eventually a second packet drop.

Reading the Tracefile in Python

Deeper analysis of ns-2 data typically involves running some sort of script on the tracefiles; we will mostly use the Python (python3) language for this, although the awk language is also traditional. The following is the programmer interface to a simple module (library) nstrace.py:

• nsopen(filename): opens the tracefile

• isEvent(): returns true if the current line is a normal ns-2 event record

• isVar(): returns true if the current line is an ns-2 variable-trace record

• isEOF(): returns true if there are no more tracefile lines

• getEvent(): returns a twelve-element tuple of the ns-2 event-trace values, each cast to the correct type. The ninth and tenth values, which are node.port pairs in the tracefile, are returned as (node port) sub-tuples.

• getVar(): returns a seven-element tuple of ns-2 variable-trace values

• skipline(): skips the current line (useful if we are interested only in event records, or only in variable-trace records, and want to ignore the other type of record)

We will first make use of this in 31.2.6.1 Link utilization measurement; see also 31.4 TCP Loss Events and Synchronized Losses. The nstrace.py file above includes regular-expression checks to verify that each tracefile line has the correct format, but, as these are slow, they are disabled by default. Enabling these checks is potentially useful, however, if some wireless trace records are also included.
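All of the analysis scripts below follow the same basic dispatch pattern when reading a tracefile; here is the bare skeleton, with the per-record work left as comments (a sketch only):

#!/usr/bin/python3
# skeleton of the usual nstrace processing loop
import nstrace
import sys

def process(filename):
    nstrace.nsopen(filename)
    while not nstrace.isEOF():
        if nstrace.isEvent():
            fields = nstrace.getEvent()     # twelve-element event tuple
            # ... examine the event record here ...
        else:
            nstrace.skipline()              # skip variable-trace and other records
            # (isVar()/getVar() can be used instead if those records are wanted)

process(sys.argv[1])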

The nam Animation

Let us now re-examine the nam animation, in light of what can be found in the trace file.

At T=0.120864, the first 1000-byte data packet is sent (at T=0 a 40-byte SYN packet is sent); the actual packet identification number is 1 so we will refer to it as Data[1]. At this point cwnd_ = 2, so Data[2] is enqueued at this same time, and sent at T=0.121664 (the delay exactly matches the A–R link’s bandwidth of 8000 bits in 0.0008 sec). The first loss occurs at T=0.58616, of Data[28]; at T=0.59616 Data[30] is lost. (Data[29] was not lost because R was able to send a packet and thus make room).

From T=.707392 to T=.777392 we begin a string of losses: packets 42, 44, 46, 48, 50, 52, 54 and 56.

At T=0.76579 the first ACK[27] makes it back to A. The first dupACK[27] arrives at T=0.77576; another arrives at T=0.78576 (10 ms later, exactly the bottleneck per-packet time!) and the third dupACK arrives at T=0.79576. At this point, Data[28] is retransmitted and cwnd is halved from 29 to 14.5.

At T=0.985792, the sender receives ACK[29]. DupACK[29]’s are received at T=1.077024, T=1.087024, T=1.097024 and T=1.107024. Alas, this is TCP Reno, in Fast Recovery mode, and it is not implementing


Fast Retransmit while Fast Recovery is going on (TCP NewReno in effect fixes this). Therefore, the connection experiences a hard timeout at T=1.22579; the last previous event was at T=1.107024. At this point ssthresh is set to 7 and cwnd drops to 1. Slow start is used up to ssthresh = 7, at which point the sender switches to the linear-increase phase.

Single-sender Throughput Experiments

According to the theoretical analysis in 19.7 TCP and Bottleneck Link Utilization, a queue size of close to zero should yield about a 75% bottleneck utilization, a queue size such that the mean cwnd equals the transit capacity should yield about 87.5%, and a queue size equal to the transit capacity should yield close to 100%. We now test this.

$ns duplex-link $A $R 10Mb 50ms DropTail

$ns duplex-link $R $B 800Kb 100ms DropTail

 

The bottleneck link here is 800 kbps, or 100 kBps, or 10 ms/packet, so these propagation-delay changes mean a round-trip transit capacity of 30 packets (31 if we include the bandwidth delay at R). In the table below, we run the simulation while varying the queue-limit parameter from 3 to 30. The simulations run for 1000 seconds, to minimize the effect of slow start. Tracing is disabled to reduce runtimes. The “received” column gives the number of distinct packets received by B; if the link utilization were 100% then in 1,000 seconds B would receive 100,000 packets.

[Table: packets received and resulting link utilization as queue-limit is varied from 3 to 30]

In ns-2, every arriving packet is first enqueued, even if it is immediately dequeued, and so queue-limit cannot actually be zero. A queue-limit of 1 or 2 gives very poor results, probably because of problems with slow start. The run here with queue-limit = 3 is not too far out of line with the 75% predicted by theory for a queue-limit close to zero. When queue-limit is 10, then cwnd will range from 20 to 40, and the link-unsaturated and queue-filling phases should be of equal length. This leads to a theoretical link utilization of about (75%+100%)/2 = 87.5%; our measurement here of 89.3% is in good agreement. As queue-limit continues to increase, the link utilization rapidly approaches 100%, again as expected.


Link utilization measurement

In the experiment above we estimated the utilization of the R→B link by the number of distinct packets arriving at B. But packet duplicate transmissions sometimes occur as well (see 31.2.6.4 Packets that are delivered twice); these are part of the R→B link utilization but are hard to estimate (nominally, most packets retransmitted by A are dropped by R, but not all).

If desired, we can get an exact value of the R→B link utilization through analysis of the ns-2 trace file. In this file R is node 1 and B is node 2 and our flow is flow 0; we look for all lines of the form

- time 1 2 tcp 1000 ------- 0 0.0 2.0 x y

that is, with field1 = '-', field3 = 1, field4 = 2, field6 = 1000 and field8 = 0 (if we do not check field6=1000 then we count one extra packet, a simulated SYN). We then simply count these lines; here is a simple script to do this in Python using the nstrace module above:

#!/usr/bin/python3
import nstrace
import sys

def link_count(filename):
    SEND_NODE = 1
    DEST_NODE = 2
    FLOW = 0
    count = 0
    nstrace.nsopen(filename)
    while not nstrace.isEOF():
        if nstrace.isEvent():
            (event, time, sendnode, dest, dummy, size, dummy, flow,
             dummy, dummy, dummy, dummy) = nstrace.getEvent()
            if (event == "-" and sendnode == SEND_NODE and dest == DEST_NODE
                    and size >= 1000 and flow == FLOW):
                count += 1
        else:
            nstrace.skipline()
    print("packet count:", count)

link_count(sys.argv[1])

For completeness, here is the same program implemented in the Awk scripting language.

BEGIN {count=0; SEND_NODE=1; DEST_NODE=2; FLOW=0}
$1 == "-" {
    if ($3 == SEND_NODE && $4 == DEST_NODE && $6 >= 1000 && $8 == FLOW) {
        count++;
    }
}
END {print count;}


ns-2 command-line parameters

Experiments where we vary one parameter, e.g. queue-limit, are facilitated by passing in the parameter value on the command line. For example, in the basic1.tcl file we can include the following:

set queuesize $argv

$ns queue-limit $R $B $queuesize

Then, from the command line, we can run this as follows:

ns basic1.tcl 5

If we want to run this simulation with parameters ranging from 0 to 10, a simple shell script is

queue=0

while [ $queue -le 10 ]

do

ns basic1.tcl $queue

queue=$(expr $queue + 1)

done

If we want to pass multiple parameters on the command line, we use lindex to separate out arguments from the $argv string; the first argument is at position 0 (in bash and awk scripts, by comparison, the first argument is $1). For two optional parameters, the first representing queuesize and the second representing endtime, we would use

if { $argc >= 1 } {
    set queuesize [expr [lindex $argv 0]]
}

if { $argc >= 2 } {
    set endtime [expr [lindex $argv 1]]
}
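With both of these in place, a run with (for example) a queue size of 5 and an end time of 1000 seconds would be started as

ns basic1.tcl 5 1000

(the specific values here are illustrative only).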

Queue utilization

In our previous experiment we examined link utilization when queue-limit was smaller than the bandwidth×delay product. Now suppose queue-limit is greater than the bandwidth×delay product, so the bottleneck link is essentially never idle. We calculated in 19.7 TCP and Bottleneck Link Utilization what we might expect as an average queue utilization. If the transit capacity is 30 packets and the queue capacity is 40 then cwndmax would be 70 and cwndmin would be 35; the queue utilization would vary from 70-30 = 40 down to 35-30 = 5, averaging around (40+5)/2 = 22.5.

Let us run this as an ns-2 experiment. As before, we set the A–R and R–B propagation delays to 50 ms and 100 ms respectively, making the RTTnoLoad 300 ms, for about 30 packets in transit. We also set the queue-limit value to 40. The simulation runs for 1000 seconds, enough, as it turns out, for about 50 TCP sawteeth.

After the run, the following Python script computes a time-weighted running average of the queue size from the trace file. Because the queue capacity exceeds the total transit capacity, the queue is seldom empty.


#!/usr/bin/python3
import nstrace
import sys

def queuesize(filename):
    QUEUE_NODE = 1
    nstrace.nsopen(filename)
    sum = 0.0
    size = 0
    prevtime = 0
    while not nstrace.isEOF():
        if nstrace.isEvent():       # counting regular trace lines
            (event, time, sendnode, dnode, proto, dummy, dummy, flow,
             dummy, dummy, seqno, pktid) = nstrace.getEvent()
            if (sendnode != QUEUE_NODE): continue
            if (event == "r"): continue
            sum += size * (time - prevtime)
            prevtime = time
            if (event == 'd'): size -= 1
            elif (event == "-"): size -= 1
            elif (event == "+"): size += 1
        else:
            nstrace.skipline()
    print("avg queue=", sum/time)

queuesize(sys.argv[1])

The answer we get for the average queue size is about 23.76, which is in good agreement with our theoretical value of 22.5.

Packets that are delivered twice

Every dropped TCP packet is ultimately transmitted twice, but classical TCP theory suggests that relatively few packets are actually delivered twice. This is pretty much true once the TCP sawtooth phase is reached, but can fail rather badly during slow start.

The following Python script will count packets delivered two or more times. It uses a dictionary, COUNTS, which is indexed by sequence numbers.

#!/usr/bin/python3
import nstrace
import sys

def dup_counter(filename):
    SEND_NODE = 1
    DEST_NODE = 2
    FLOW = 0
    COUNTS = {}
    nstrace.nsopen(filename)
    while not nstrace.isEOF():
        if nstrace.isEvent():
            (event, time, sendnode, dest, dummy, size, dummy, flow,
             dummy, dummy, seqno, dummy) = nstrace.getEvent()
            if (event == "r" and dest == DEST_NODE and size >= 1000 and flow == FLOW):
                if (seqno in COUNTS):
                    COUNTS[seqno] += 1
                else:
                    COUNTS[seqno] = 1
        else:
            nstrace.skipline()
    for seqno in sorted(COUNTS.keys()):
        if (COUNTS[seqno] > 1): print(seqno, COUNTS[seqno])

dup_counter(sys.argv[1])

 

When run on the basic1.tr file above, it finds 13 packets delivered twice, with TCP sequence numbers 43, 45, 47, 49, 51, 53, 55, 57, 58, 59, 60, 61 and 62. These are sent the second time between T=1.437824 and T=1.952752; the first transmissions are at times between T=0.83536 and T=1.046592. If we look at our cwnd-v-time graph above, we see that these first transmissions occurred during the gap between the end of the unbounded slow-start phase and the beginning of threshold-slow-start leading up to the TCP sawtooth. Slow start, in other words, is messy.

Loss rate versus cwnd: part 1

If we run the basic1.tcl simulation above until time 1000, there are 94915 packets acknowledged and 512 loss events. This yields a loss rate of p = 512/94915 = 0.00539, and so by the formula of 21.2 TCP Reno loss rate versus cwnd we should expect the average cwnd to be about 1.225/√p ≈ 16.7. The true average cwnd is the number of packets sent divided by the elapsed time in RTTs, but as RTTs are not constant here (they get significantly longer as the queue fills), we turn to an approximation. From 31.2.1 Graph of cwnd v time we saw that the peak cwnd was 20.935; the mean cwnd should thus be about 3/4 of this, or 15.7. While not perfect, agreement here is quite reasonable.
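As a quick check of this arithmetic (a sketch only, using the numbers quoted above):

#!/usr/bin/python3
# verify the loss-rate prediction: p = 512/94915, predicted mean cwnd = 1.225/sqrt(p)
from math import sqrt

p = 512 / 94915
print("loss rate p:", p)                            # about 0.00539
print("predicted mean cwnd:", 1.225 / sqrt(p))      # about 16.7
print("3/4 of observed peak cwnd:", 0.75 * 20.935)  # about 15.7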
