Use the latest version of Mininet
refs: #2851
Change-Id: Ica4936cb98245a0b1a22d8ceada4e7a0f6dbddc8
diff --git a/.gitignore b/.gitignore
index 35d7ca8..be51b16 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,21 +1,14 @@
-*.pyc
-mnexec
-*.pyc
-*~
-*.1
-*.xcodeproj
-*.xcworkspace
-\#*\#
-mininet.egg-info
+Mini_NDN.egg-info
build
dist
doc/html
doc/latex
-trunk
-data
-trash
-measures
-debian
-.fuse*
-.project
-.pydev*
+NFD
+NLSR
+crypto
+mininet
+ndn-cxx
+ndn-tools
+openflow
+examples
+util
diff --git a/INSTALL.md b/INSTALL.md
index 9f76b26..22f330d 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -10,7 +10,7 @@
### Installing NDN
-Each node in **Mini-NDN** will run the official implementation of NDN. Let's get it.
+Each node in **Mini-NDN** will run the official implementation of NDN. The following dependencies are needed:
Mini-NDN uses NFD, NLSR, and ndn-tlv-ping.
@@ -20,45 +20,33 @@
To install NLSR:
http://named-data.net/doc/NLSR/current/INSTALL.html
-To install ndn-tlv-ping:
-https://github.com/named-data/ndn-tlv-ping
+To install ndn-tools:
+https://github.com/named-data/ndn-tools
-### Downloading and installing **Mini-NDN**
+### Installing Mininet
-If you don't have it yet, you'll need to have _git_ installed first. In Ubuntu, that would be:
+**Mini-NDN** is based on Mininet. To install Mininet:
+https://github.com/mininet/mininet/INSTALL
- sudo apt-get install git
+### Installing **Mini-NDN**
-Now, let's get the source code of **Mini-NDN**.
-Go to your home directory and use the following command:
+If you have all the dependencies installed, simply clone this repository and run:
- git clone https://github.com/named-data/mini-ndn
+ sudo ./install.sh -i
-As a result, there will be a directory named _mini-ndn_ in your home directory, containing all the source code.
+Otherwise, if you don't have the dependencies installed, run:
-Still in your home directory, use the utility install script with _-fnv_ options:
-
- sudo ./mini-ndn/util/install.sh -fnv
-
-where
--f: install open(F)low
--n: install mini(N)et dependencies + core files
--v: install open (V)switch
-
-Prerequisite packages will be downloaded and installed during the process.
+ sudo ./install.sh -mrfti
### Verification
-Once everything is installed, the following command can be issued for verification from the home folder:
+Once everything is installed, the following command can be issued for verification:
- sudo minindn --pingall 50 --ctime 180 mini-ndn/ndn_utils/hyperbolic_conf_file/minindn.caida.conf
-
-where:
---pingall: Will schedule and collect the specified number of pings from each node to every other node
---ctime: Convergence time for NLSR, provide according to the size of the topology
-
-Note: The configuration file contains hyperbolic coordinates but hyperbolic routing will only be
-activated if --hr is provided
+ sudo minindn --pingall 50 --ctime 180 ndn_utils/hyperbolic_conf_file/minindn.caida.conf
All the ping logs will be stored under /tmp/node-name/ping-data and the command will provide a
command line interface at the end.
+
+When the "mininet>" CLI prompt appears, press CTRL+D to terminate the experiment.
+Then, execute `ls /tmp/*/ping-data/*.txt | wc -l`, and expect to see "90" (one ping log per source-destination node pair).
+Execute `cat /tmp/*/ping-data/*.txt | grep loss`, and expect to see "0% packet loss" on every line.
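
A minimal sketch of the same verification in Python, assuming only the log layout described above (ping summaries under /tmp/<node>/ping-data/):

    import glob
    import re

    # Count the ping logs and flag any summary line that reports non-zero loss.
    logs = glob.glob('/tmp/*/ping-data/*.txt')
    print('ping logs found: %d' % len(logs))  # "90" is expected for the caida topology

    for path in logs:
        with open(path) as f:
            for line in f:
                match = re.search(r'([\d.]+)% packet loss', line)
                if match and float(match.group(1)) != 0:
                    print('packet loss in %s: %s' % (path, line.strip()))
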
diff --git a/INSTALL_mininet b/INSTALL_mininet
deleted file mode 100644
index d3e4695..0000000
--- a/INSTALL_mininet
+++ /dev/null
@@ -1,123 +0,0 @@
-
-Mininet Installation/Configuration Notes
-----------------------------------------
-
-Mininet 2.0.0
----
-
-The supported installation methods for Mininet are 1) using a
-pre-built VM image, and 2) native installation on Ubuntu. You can also
-easily create your own Mininet VM image (4).
-
-(Other distributions may be supported in the future - if you would
-like to contribute an installation script, we would welcome it!)
-
-1. Easiest "installation" - use our pre-built VM image!
-
- The easiest way to get Mininet running is to start with one of our
- pre-built virtual machine images from <http://openflow.org/mininet>
-
- Boot up the VM image, log in, and follow the instructions on the
- Mininet web site.
-
- One advantage of using the VM image is that it doesn't mess with
- your native OS installation or damage it in any way.
-
- Although a single Mininet instance can simulate multiple networks
- with multiple controllers, only one Mininet instance may currently
- be run at a time, and Mininet requires root access in the machine
- it's running on. Therefore, if you have a multiuser system, you
- may wish to consider running Mininet in a VM.
-
-2. Next-easiest option: use our Ubuntu package!
-
- To install Mininet itself (i.e. `mn` and the Python API) on Ubuntu
- 12.10+:
-
- sudo apt-get install mininet
-
- Note: if you are upgrading from an older version of Mininet, make
- sure you remove the old OVS from `/usr/local`:
-
- sudo rm /usr/local/bin/ovs*
- sudo rm /usr/local/sbin/ovs*
-
-3. Native installation from source on Ubuntu 11.10+
-
- If you're reading this, you've probably already done so, but the
- command to download the Mininet source code is:
-
- git clone git://github.com/mininet/mininet.git
-
- If you are running Ubuntu, you may be able to use our handy
- `install.sh` script, which is in `mininet/util`.
-
- *WARNING: USE AT YOUR OWN RISK!*
-
- `install.sh` is a bit intrusive and may possibly damage your OS
- and/or home directory, by creating/modifying several directories
- such as `mininet`, `openflow`, `oftest`, `pox`, or `noxcosre`.
- Although we hope it won't do anything completely terrible, you may
- want to look at the script before you run it, and you should make
- sure your system and home directory are backed up just in case!
-
- To install Mininet itself, the OpenFlow reference implementation, and
- Open vSwitch, you may use:
-
- mininet/util/install.sh -fnv
-
- This should be reasonably quick, and the following command should
- work after the installation:
-
- sudo mn --test pingall
-
- To install ALL of the software which we use for OpenFlow tutorials,
- including POX, the OpenFlow WireShark dissector, the `oftest`
- framework, and other potentially useful software (and to add some
- stuff to `/etc/sysctl.conf` which may or may not be useful) you may
- use:
-
- mininet/util/install.sh -a
-
- This takes about 4 minutes on our test system.
-
-4. Creating your own Mininet/OpenFlow tutorial VM
-
- Creating your own Ubuntu Mininet VM for use with the OpenFlow tutorial
- is easy! First, create a new Ubuntu VM. Next, run two commands in it:
-
- wget https://raw.github.com/mininet/mininet/master/util/vm/install-mininet-vm.sh
- time install-mininet-vm.sh
-
- Finally, verify that Mininet is installed and working in the VM:
-
- sudo mn --test pingall
-
-5. Installation on other Linux distributions
-
- Although we don't support other Linux distributions directly, it
- should be possible to install and run Mininet with some degree of
- manual effort.
-
- In general, you must have:
-
- * A Linux kernel compiled with network namespace support enabled
-
- * An OpenFlow implementation (either the reference user or kernel
- space implementations, or Open vSwitch.) Appropriate kernel
- modules (e.g. tun and ofdatapath for the reference kernel
- implementation) must be loaded.
-
- * Python, `bash`, `ping`, `iperf`, etc.`
-
- * Root privileges (required for network device access)
-
- We encourage contribution of patches to the `install.sh` script to
- support other Linux distributions.
-
-
-Good luck!
-
-Mininet Team
-
----
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index 704d157..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,33 +0,0 @@
-Mininet 2.0.0 License
-
-Copyright (c) 2012 Open Networking Laboratory
-Copyright (c) 2009-2012 Bob Lantz and The Board of Trustees of
-The Leland Stanford Junior University
-
-Original authors: Bob Lantz and Brandon Heller
-
-We are making Mininet available for public use and benefit with the
-expectation that others will use, modify and enhance the Software and
-contribute those enhancements back to the community. However, since we
-would like to make the Software available for broadest use, with as few
-restrictions as possible permission is hereby granted, free of charge, to
-any person obtaining a copy of this Software to deal in the Software
-under the copyrights without restriction, including without limitation
-the rights to use, copy, modify, merge, publish, distribute, sublicense,
-and/or sell copies of the Software, and to permit persons to whom the
-Software is furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included
-in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
-OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-The name and trademarks of copyright holder(s) may NOT be used in
-advertising or publicity pertaining to the Software or any derivatives
-without specific, written prior permission.
diff --git a/Makefile b/Makefile
deleted file mode 100644
index 9aea747..0000000
--- a/Makefile
+++ /dev/null
@@ -1,66 +0,0 @@
-MININET = mininet/*.py
-TEST = mininet/test/*.py
-EXAMPLES = examples/*.py
-MN = bin/mn
-BIN = $(MN)
-NDN = ndn/**/*.py
-PYSRC = $(MININET) $(TEST) $(EXAMPLES) $(BIN) $(NDN)
-MNEXEC = mnexec
-MANPAGES = mn.1 mnexec.1
-P8IGN = E251,E201,E302,E202
-BINDIR = /usr/bin
-MANDIR = /usr/share/man/man1
-DOCDIRS = doc/html doc/latex
-PDF = doc/latex/refman.pdf
-
-all: codecheck test
-
-clean:
- rm -rf build dist *.egg-info *.pyc $(MNEXEC) $(MANPAGES) $(DOCDIRS)
-
-codecheck: $(PYSRC)
- -echo "Running code check"
- util/versioncheck.py
- pyflakes $(PYSRC)
- pylint --rcfile=.pylint $(PYSRC)
- pep8 --repeat --ignore=$(P8IGN) $(PYSRC)
-
-errcheck: $(PYSRC)
- -echo "Running check for errors only"
- pyflakes $(PYSRC)
- pylint -E --rcfile=.pylint $(PYSRC)
-
-test: $(MININET) $(TEST)
- -echo "Running tests"
- mininet/test/test_nets.py
- mininet/test/test_hifi.py
-
-mnexec: mnexec.c $(MN) mininet/net.py
- cc $(CFLAGS) $(LDFLAGS) -DVERSION=\"`PYTHONPATH=. $(MN) --version`\" $< -o $@
-
-install: $(MNEXEC) $(MANPAGES)
- install $(MNEXEC) $(BINDIR)
- install $(MANPAGES) $(MANDIR)
- python setup.py install
-
-develop: $(MNEXEC) $(MANPAGES)
- # Perhaps we should link these as well
- install $(MNEXEC) $(BINDIR)
- install $(MANPAGES) $(MANDIR)
- python setup.py develop
-
-man: $(MANPAGES)
-
-mn.1: $(MN)
- PYTHONPATH=. help2man -N -n "create a Mininet network." \
- --no-discard-stderr $< -o $@
-
-mnexec.1: mnexec
- help2man -N -n "execution utility for Mininet." \
- -h "-h" -v "-v" --no-discard-stderr ./$< -o $@
-
-.PHONY: doc
-
-doc: man
- doxygen doc/doxygen.cfg
- make -C doc/latex
diff --git a/README_mininet.md b/README_mininet.md
deleted file mode 100644
index cf0e8b8..0000000
--- a/README_mininet.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-Mininet: Rapid Prototyping for Software Defined Networks
-========================================================
-
-*The best way to emulate almost any network on your laptop!*
-
-Version 2.0.0
-
-### What is Mininet?
-
-Mininet emulates a complete network of hosts, links, and switches
-on a single machine. To create a sample two-host, one-switch network,
-just run:
-
- `sudo mn`
-
-Mininet is useful for interactive development, testing, and demos,
-especially those using OpenFlow and SDN. OpenFlow-based network
-controllers prototyped in Mininet can usually be transferred to
-hardware with minimal changes for full line-rate execution.
-
-### How does it work?
-
-Mininet creates virtual networks using process-based virtualization
-and network namespaces - features that are available in recent Linux
-kernels. In Mininet, hosts are emulated as `bash` processes running in
-a network namespace, so any code that would normally run on a Linux
-server (like a web server or client program) should run just fine
-within a Mininet "Host". The Mininet "Host" will have its own private
-network interface and can only see its own processes. Switches in
-Mininet are software-based switches like Open vSwitch or the OpenFlow
-reference switch. Links are virtual ethernet pairs, which live in the
-Linux kernel and connect our emulated switches to emulated hosts
-(processes).
-
-### Features
-
-Mininet includes:
-
-* A command-line launcher (`mn`) to instantiate networks.
-
-* A handy Python API for creating networks of varying sizes and
- topologies.
-
-* Examples (in the `examples/` directory) to help you get started.
-
-* Full API documentation via Python `help()` docstrings, as well as
- the ability to generate PDF/HTML documentation with `make doc`.
-
-* Parametrized topologies (`Topo` subclasses) using the Mininet
- object. For example, a tree network may be created with the
- command:
-
- `mn --topo tree,depth=2,fanout=3`
-
-* A command-line interface (`CLI` class) which provides useful
- diagnostic commands (like `iperf` and `ping`), as well as the
- ability to run a command to a node. For example,
-
- `mininet> h11 ifconfig -a`
-
- tells host h11 to run the command `ifconfig -a`
-
-* A "cleanup" command to get rid of junk (interfaces, processes, files
- in /tmp, etc.) which might be left around by Mininet or Linux. Try
- this if things stop working!
-
- `mn -c`
-
-### New features in 2.0.0
-
-Mininet 2.0.0 is a major upgrade and provides
-a number of enhancements and new features, including:
-
-* "Mininet-HiFi" functionality:
-
- * Link bandwidth limits using `tc` (`TCIntf` and `TCLink` classes)
-
- * CPU isolation and bandwidth limits (`CPULimitedHost` class)
-
-* Support for Open vSwitch 1.4+ (including Ubuntu OVS packages)
-
-* Debian packaging (and `apt-get install mininet` in Ubuntu 12.10)
-
-* First-class Interface (`Intf`) and Link (`Link`) classes for easier
- extensibility
-
-* An upgraded Topology (`Topo`) class which supports node and link
- customization
-
-* Man pages for the `mn` and `mnexec` utilities.
-
-[Since the API (most notably the topology) has changed, existing code
-that runs in Mininet 1.0 will need to be changed to run with Mininet
-2.0. This is the primary reason for the major version number change.]
-
-### Installation
-
-See `INSTALL` for installation instructions and details.
-
-### Documentation
-
-In addition to the API documentation (`make doc`), much useful
-information, including a Mininet walkthrough and an introduction
-to the Python API, is available on the
-[Mininet Web Site](http://openflow.org/mininet).
-There is also a wiki which you are encouraged to read and to
-contribute to, particularly the Frequently Asked Questions (FAQ.)
-
-### Support
-
-Mininet is community-supported. We encourage you to join the
-Mininet mailing list, `mininet-discuss` at:
-
-<https://mailman.stanford.edu/mailman/listinfo/mininet-discuss>
-
-### Contributing
-
-Mininet is an open-source project and is currently hosted at
-<https://github.com/mininet>. You are encouraged to download the code,
-examine it, modify it, and submit bug reports, bug fixes, feature
-requests, and enhancements!
-
-Best wishes, and we look forward to seeing what you can do with
-Mininet to change the networking world!
-
-### Credits
-
-The Mininet Team:
-
-* Bob Lantz
-* Brandon Heller
-* Nikhil Handigol
-* Vimal Jeyakumar
diff --git a/bin/minindn b/bin/minindn
index 1c92097..952c568 100755
--- a/bin/minindn
+++ b/bin/minindn
@@ -5,12 +5,13 @@
from mininet.log import setLogLevel, output, info
from mininet.cli import CLI
from mininet.link import TCLink
-from mininet.conf_parser import parse_hosts, parse_links
+from mininet.util import ipStr, ipParse
from ndn.experiments.multiple_failure_experiment import MultipleFailureExperiment
from ndn.experiments.pingall_experiment import PingallExperiment
from ndn.experiments.failure_experiment import FailureExperiment
from ndn.ndn_host import NdnHost, CpuLimitedNdnHost
+from ndn.conf_parser import parse_hosts, parse_links
import os.path, time
import optparse
@@ -114,15 +115,12 @@
def execute(template_file='minindn.conf', testbed=False, pingall=None, ctime=None, hr=False, faces=3, failure=False, isMultipleFailure=False, isCliEnabled=True):
"Create a network based on template_file"
- home = expanduser("~")
-
if template_file == '':
template_file='minindn.conf'
if os.path.exists(template_file) == False:
info('No template file given and default template file minindn.conf not found. Exiting...\n')
quit()
-
topo = NdnTopo(template_file)
t = datetime.datetime.now()
@@ -144,6 +142,22 @@
net.start()
+ # Giving proper IPs to intf so neighbor nodes can communicate
+ # This is one way of giving connectivity, another way could be
+ # to insert a switch between each pair of neighbors
+ ndnNetBase = "1.0.0.0"
+ interfaces = []
+ for host in net.hosts:
+ for intf in host.intfList():
+ link = intf.link
+ node1, node2 = link.intf1.node, link.intf2.node
+ if link.intf1 not in interfaces and link.intf2 not in interfaces:
+ interfaces.append(link.intf1)
+ interfaces.append(link.intf2)
+ node1.setIP(ipStr(ipParse(ndnNetBase) + 1) + '/30', intf=link.intf1)
+ node2.setIP(ipStr(ipParse(ndnNetBase) + 2) + '/30', intf=link.intf2)
+ ndnNetBase = ipStr(ipParse(ndnNetBase) + 4)
+
nodes = "" # Used later to check prefix name in checkFIB
# NLSR initialization
@@ -160,7 +174,7 @@
host.nlsrParameters["hyperbolic-state"] = "on"
# Generate NLSR configuration file
- configGenerator = NlsrConfigGenerator(host, home)
+ configGenerator = NlsrConfigGenerator(host)
configGenerator.createConfigFile()
# Start NLSR
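
The /30 allocation added to bin/minindn above gives every point-to-point link its own subnet: the two endpoints take the .1 and .2 host addresses of the current block, and the base address then advances by 4. A standalone sketch of that arithmetic (plain Python; `ip_parse` and `ip_str` are stand-ins for the `ipParse`/`ipStr` helpers imported from mininet.util in the patch):

    import socket
    import struct

    def ip_parse(dotted):
        "Dotted quad -> unsigned int (stand-in for mininet.util.ipParse)."
        return struct.unpack('!I', socket.inet_aton(dotted))[0]

    def ip_str(value):
        "Unsigned int -> dotted quad (stand-in for mininet.util.ipStr)."
        return socket.inet_ntoa(struct.pack('!I', value))

    base = ip_parse('1.0.0.0')          # ndnNetBase in the patch
    for link in range(3):               # first three links of a topology
        ip1 = ip_str(base + 1) + '/30'  # one endpoint of the link
        ip2 = ip_str(base + 2) + '/30'  # the other endpoint
        print('link %d: %s <-> %s' % (link, ip1, ip2))
        base += 4                       # advance to the next /30 block
    # link 0: 1.0.0.1/30 <-> 1.0.0.2/30
    # link 1: 1.0.0.5/30 <-> 1.0.0.6/30
    # link 2: 1.0.0.9/30 <-> 1.0.0.10/30
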
diff --git a/bin/minindnedit b/bin/minindnedit
index 70e0dce..8c56cfc 100755
--- a/bin/minindnedit
+++ b/bin/minindnedit
@@ -42,7 +42,7 @@
from mininet.net import Mininet, VERSION
from mininet.util import ipStr, netParse, ipAdd, quietRun
from mininet.util import buildTopo
-from mininet.util import custom, customConstructor
+from mininet.util import custom
from mininet.term import makeTerm, cleanUpScreens
from mininet.node import Controller, RemoteController, NOX, OVSController
from mininet.node import CPULimitedHost, Host, Node
diff --git a/examples/README b/examples/README
deleted file mode 100644
index dd71bea..0000000
--- a/examples/README
+++ /dev/null
@@ -1,102 +0,0 @@
-
-Mininet Examples
-
-These examples are intended to help you get started using
-Mininet's Python API.
-
----
-
-baresshd.py:
-
-This example uses Mininet's medium-level API to create an sshd
-process running in a namespace. Doesn't use OpenFlow.
-
-consoles.py:
-
-This example creates a grid of console windows, one for each node,
-and allows interaction with and monitoring of each console, including
-graphical monitoring.
-
-controllers.py:
-
-This example creates a network and adds multiple controllers to it.
-
-cpu.py:
-
-This example tests iperf bandwidth for varying CPU limits.
-
-emptynet.py:
-
-This example demonstrates creating an empty network (i.e. with no
-topology object) and adding nodes to it.
-
-hwintf.py:
-
-This example shows how to add an interface (for example a real
-hardware interface) to a network after the network is created.
-
-limit.py:
-
-This example shows how to use link and CPU limits.
-
-linearbandwidth.py:
-
-This example shows how to create a custom topology programatically
-by subclassing Topo, and how to run a series of tests on it.
-
-miniedit.py:
-
-This example demonstrates creating a network via a graphical editor.
-
-multiping.py:
-
-This example demonstrates one method for
-monitoring output from multiple hosts, using node.monitor().
-
-multipoll.py:
-
-This example demonstrates monitoring output files from multiple hosts.
-
-multitest.py:
-
-This example creates a network and runs multiple tests on it.
-
-popen.py:
-
-This example monitors a number of hosts using host.popen() and
-pmonitor().
-
-popenpoll.py:
-
-This example demonstrates monitoring output from multiple hosts using
-the node.popen() interface (which returns Popen objects) and pmonitor().
-
-scratchnet.py, scratchnetuser.py:
-
-These two examples demonstrate how to create a network by using the lowest-
-level Mininet functions. Generally the higher-level API is easier to use,
-but scratchnet shows what is going on behind the scenes.
-
-simpleperf.py:
-
-A simple example of configuring network and CPU bandwidth limits.
-
-sshd.py:
-
-This example shows how to run an sshd process in each host, allowing
-you to log in via ssh. This requires connecting the Mininet data network
-to an interface in the root namespace (generaly the control network
-already lives in the root namespace, so it does not need to be explicitly
-connected.)
-
-treeping64.py:
-
-This example creates a 64-host tree network, and attempts to check full
-connectivity using ping, for different switch/datapath types.
-
-tree1024.py:
-
-This example attempts to create a 1024-host network, and then runs the
-CLI on it. It may run into scalability limits, depending on available
-memory and sysctl configuration (see INSTALL.)
-
diff --git a/examples/baresshd.py b/examples/baresshd.py
deleted file mode 100644
index a714edb..0000000
--- a/examples/baresshd.py
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/usr/bin/python
-
-"This example doesn't use OpenFlow, but attempts to run sshd in a namespace."
-
-from mininet.node import Host
-from mininet.util import ensureRoot
-
-ensureRoot()
-
-print "*** Creating nodes"
-h1 = Host( 'h1' )
-
-root = Host( 'root', inNamespace=False )
-
-print "*** Creating links"
-h1.linkTo( root )
-
-print h1
-
-print "*** Configuring nodes"
-h1.setIP( '10.0.0.1', 8 )
-root.setIP( '10.0.0.2', 8 )
-
-print "*** Creating banner file"
-f = open( '/tmp/%s.banner' % h1.name, 'w' )
-f.write( 'Welcome to %s at %s\n' % ( h1.name, h1.IP() ) )
-f.close()
-
-print "*** Running sshd"
-h1.cmd( '/usr/sbin/sshd -o "Banner /tmp/%s.banner"' % h1.name )
-
-print "*** You may now ssh into", h1.name, "at", h1.IP()
diff --git a/examples/consoles.py b/examples/consoles.py
deleted file mode 100644
index ea2e28d..0000000
--- a/examples/consoles.py
+++ /dev/null
@@ -1,456 +0,0 @@
-#!/usr/bin/python
-
-"""
-consoles.py: bring up a bunch of miniature consoles on a virtual network
-
-This demo shows how to monitor a set of nodes by using
-Node's monitor() and Tkinter's createfilehandler().
-
-We monitor nodes in a couple of ways:
-
-- First, each individual node is monitored, and its output is added
- to its console window
-
-- Second, each time a console window gets iperf output, it is parsed
- and accumulated. Once we have output for all consoles, a bar is
- added to the bandwidth graph.
-
-The consoles also support limited interaction:
-
-- Pressing "return" in a console will send a command to it
-
-- Pressing the console's title button will open up an xterm
-
-Bob Lantz, April 2010
-
-"""
-
-import re
-
-from Tkinter import Frame, Button, Label, Text, Scrollbar, Canvas, Wm, READABLE
-
-from mininet.log import setLogLevel
-from mininet.topolib import TreeNet
-from mininet.term import makeTerms, cleanUpScreens
-from mininet.util import quietRun
-
-class Console( Frame ):
- "A simple console on a host."
-
- def __init__( self, parent, net, node, height=10, width=32, title='Node' ):
- Frame.__init__( self, parent )
-
- self.net = net
- self.node = node
- self.prompt = node.name + '# '
- self.height, self.width, self.title = height, width, title
-
- # Initialize widget styles
- self.buttonStyle = { 'font': 'Monaco 7' }
- self.textStyle = {
- 'font': 'Monaco 7',
- 'bg': 'black',
- 'fg': 'green',
- 'width': self.width,
- 'height': self.height,
- 'relief': 'sunken',
- 'insertbackground': 'green',
- 'highlightcolor': 'green',
- 'selectforeground': 'black',
- 'selectbackground': 'green'
- }
-
- # Set up widgets
- self.text = self.makeWidgets( )
- self.bindEvents()
- self.sendCmd( 'export TERM=dumb' )
-
- self.outputHook = None
-
- def makeWidgets( self ):
- "Make a label, a text area, and a scroll bar."
-
- def newTerm( net=self.net, node=self.node, title=self.title ):
- "Pop up a new terminal window for a node."
- net.terms += makeTerms( [ node ], title )
- label = Button( self, text=self.node.name, command=newTerm,
- **self.buttonStyle )
- label.pack( side='top', fill='x' )
- text = Text( self, wrap='word', **self.textStyle )
- ybar = Scrollbar( self, orient='vertical', width=7,
- command=text.yview )
- text.configure( yscrollcommand=ybar.set )
- text.pack( side='left', expand=True, fill='both' )
- ybar.pack( side='right', fill='y' )
- return text
-
- def bindEvents( self ):
- "Bind keyboard and file events."
- # The text widget handles regular key presses, but we
- # use special handlers for the following:
- self.text.bind( '<Return>', self.handleReturn )
- self.text.bind( '<Control-c>', self.handleInt )
- self.text.bind( '<KeyPress>', self.handleKey )
- # This is not well-documented, but it is the correct
- # way to trigger a file event handler from Tk's
- # event loop!
- self.tk.createfilehandler( self.node.stdout, READABLE,
- self.handleReadable )
-
- # We're not a terminal (yet?), so we ignore the following
- # control characters other than [\b\n\r]
- ignoreChars = re.compile( r'[\x00-\x07\x09\x0b\x0c\x0e-\x1f]+' )
-
- def append( self, text ):
- "Append something to our text frame."
- text = self.ignoreChars.sub( '', text )
- self.text.insert( 'end', text )
- self.text.mark_set( 'insert', 'end' )
- self.text.see( 'insert' )
- outputHook = lambda x, y: True # make pylint happier
- if self.outputHook:
- outputHook = self.outputHook
- outputHook( self, text )
-
- def handleKey( self, event ):
- "If it's an interactive command, send it to the node."
- char = event.char
- if self.node.waiting:
- self.node.write( char )
-
- def handleReturn( self, event ):
- "Handle a carriage return."
- cmd = self.text.get( 'insert linestart', 'insert lineend' )
- # Send it immediately, if "interactive" command
- if self.node.waiting:
- self.node.write( event.char )
- return
- # Otherwise send the whole line to the shell
- pos = cmd.find( self.prompt )
- if pos >= 0:
- cmd = cmd[ pos + len( self.prompt ): ]
- self.sendCmd( cmd )
-
- # Callback ignores event
- def handleInt( self, _event=None ):
- "Handle control-c."
- self.node.sendInt()
-
- def sendCmd( self, cmd ):
- "Send a command to our node."
- if not self.node.waiting:
- self.node.sendCmd( cmd )
-
- def handleReadable( self, _fds, timeoutms=None ):
- "Handle file readable event."
- data = self.node.monitor( timeoutms )
- self.append( data )
- if not self.node.waiting:
- # Print prompt
- self.append( self.prompt )
-
- def waiting( self ):
- "Are we waiting for output?"
- return self.node.waiting
-
- def waitOutput( self ):
- "Wait for any remaining output."
- while self.node.waiting:
- # A bit of a trade-off here...
- self.handleReadable( self, timeoutms=1000)
- self.update()
-
- def clear( self ):
- "Clear all of our text."
- self.text.delete( '1.0', 'end' )
-
-
-class Graph( Frame ):
-
- "Graph that we can add bars to over time."
-
- def __init__( self, parent=None, bg = 'white', gheight=200, gwidth=500,
- barwidth=10, ymax=3.5,):
-
- Frame.__init__( self, parent )
-
- self.bg = bg
- self.gheight = gheight
- self.gwidth = gwidth
- self.barwidth = barwidth
- self.ymax = float( ymax )
- self.xpos = 0
-
- # Create everything
- self.title, self.scale, self.graph = self.createWidgets()
- self.updateScrollRegions()
- self.yview( 'moveto', '1.0' )
-
- def createScale( self ):
- "Create a and return a new canvas with scale markers."
- height = float( self.gheight )
- width = 25
- ymax = self.ymax
- scale = Canvas( self, width=width, height=height,
- background=self.bg )
- opts = { 'fill': 'red' }
- # Draw scale line
- scale.create_line( width - 1, height, width - 1, 0, **opts )
- # Draw ticks and numbers
- for y in range( 0, int( ymax + 1 ) ):
- ypos = height * (1 - float( y ) / ymax )
- scale.create_line( width, ypos, width - 10, ypos, **opts )
- scale.create_text( 10, ypos, text=str( y ), **opts )
- return scale
-
- def updateScrollRegions( self ):
- "Update graph and scale scroll regions."
- ofs = 20
- height = self.gheight + ofs
- self.graph.configure( scrollregion=( 0, -ofs,
- self.xpos * self.barwidth, height ) )
- self.scale.configure( scrollregion=( 0, -ofs, 0, height ) )
-
- def yview( self, *args ):
- "Scroll both scale and graph."
- self.graph.yview( *args )
- self.scale.yview( *args )
-
- def createWidgets( self ):
- "Create initial widget set."
-
- # Objects
- title = Label( self, text='Bandwidth (Gb/s)', bg=self.bg )
- width = self.gwidth
- height = self.gheight
- scale = self.createScale()
- graph = Canvas( self, width=width, height=height, background=self.bg)
- xbar = Scrollbar( self, orient='horizontal', command=graph.xview )
- ybar = Scrollbar( self, orient='vertical', command=self.yview )
- graph.configure( xscrollcommand=xbar.set, yscrollcommand=ybar.set,
- scrollregion=(0, 0, width, height ) )
- scale.configure( yscrollcommand=ybar.set )
-
- # Layout
- title.grid( row=0, columnspan=3, sticky='new')
- scale.grid( row=1, column=0, sticky='nsew' )
- graph.grid( row=1, column=1, sticky='nsew' )
- ybar.grid( row=1, column=2, sticky='ns' )
- xbar.grid( row=2, column=0, columnspan=2, sticky='ew' )
- self.rowconfigure( 1, weight=1 )
- self.columnconfigure( 1, weight=1 )
- return title, scale, graph
-
- def addBar( self, yval ):
- "Add a new bar to our graph."
- percent = yval / self.ymax
- c = self.graph
- x0 = self.xpos * self.barwidth
- x1 = x0 + self.barwidth
- y0 = self.gheight
- y1 = ( 1 - percent ) * self.gheight
- c.create_rectangle( x0, y0, x1, y1, fill='green' )
- self.xpos += 1
- self.updateScrollRegions()
- self.graph.xview( 'moveto', '1.0' )
-
- def clear( self ):
- "Clear graph contents."
- self.graph.delete( 'all' )
- self.xpos = 0
-
- def test( self ):
- "Add a bar for testing purposes."
- ms = 1000
- if self.xpos < 10:
- self.addBar( self.xpos / 10 * self.ymax )
- self.after( ms, self.test )
-
- def setTitle( self, text ):
- "Set graph title"
- self.title.configure( text=text, font='Helvetica 9 bold' )
-
-
-class ConsoleApp( Frame ):
-
- "Simple Tk consoles for Mininet."
-
- menuStyle = { 'font': 'Geneva 7 bold' }
-
- def __init__( self, net, parent=None, width=4 ):
- Frame.__init__( self, parent )
- self.top = self.winfo_toplevel()
- self.top.title( 'Mininet' )
- self.net = net
- self.menubar = self.createMenuBar()
- cframe = self.cframe = Frame( self )
- self.consoles = {} # consoles themselves
- titles = {
- 'hosts': 'Host',
- 'switches': 'Switch',
- 'controllers': 'Controller'
- }
- for name in titles:
- nodes = getattr( net, name )
- frame, consoles = self.createConsoles(
- cframe, nodes, width, titles[ name ] )
- self.consoles[ name ] = Object( frame=frame, consoles=consoles )
- self.selected = None
- self.select( 'hosts' )
- self.cframe.pack( expand=True, fill='both' )
- cleanUpScreens()
- # Close window gracefully
- Wm.wm_protocol( self.top, name='WM_DELETE_WINDOW', func=self.quit )
-
- # Initialize graph
- graph = Graph( cframe )
- self.consoles[ 'graph' ] = Object( frame=graph, consoles=[ graph ] )
- self.graph = graph
- self.graphVisible = False
- self.updates = 0
- self.hostCount = len( self.consoles[ 'hosts' ].consoles )
- self.bw = 0
-
- self.pack( expand=True, fill='both' )
-
- def updateGraph( self, _console, output ):
- "Update our graph."
- m = re.search( r'(\d+) Mbits/sec', output )
- if not m:
- return
- self.updates += 1
- self.bw += .001 * float( m.group( 1 ) )
- if self.updates >= self.hostCount:
- self.graph.addBar( self.bw )
- self.bw = 0
- self.updates = 0
-
- def setOutputHook( self, fn=None, consoles=None ):
- "Register fn as output hook [on specific consoles.]"
- if consoles is None:
- consoles = self.consoles[ 'hosts' ].consoles
- for console in consoles:
- console.outputHook = fn
-
- def createConsoles( self, parent, nodes, width, title ):
- "Create a grid of consoles in a frame."
- f = Frame( parent )
- # Create consoles
- consoles = []
- index = 0
- for node in nodes:
- console = Console( f, self.net, node, title=title )
- consoles.append( console )
- row = index / width
- column = index % width
- console.grid( row=row, column=column, sticky='nsew' )
- index += 1
- f.rowconfigure( row, weight=1 )
- f.columnconfigure( column, weight=1 )
- return f, consoles
-
- def select( self, groupName ):
- "Select a group of consoles to display."
- if self.selected is not None:
- self.selected.frame.pack_forget()
- self.selected = self.consoles[ groupName ]
- self.selected.frame.pack( expand=True, fill='both' )
-
- def createMenuBar( self ):
- "Create and return a menu (really button) bar."
- f = Frame( self )
- buttons = [
- ( 'Hosts', lambda: self.select( 'hosts' ) ),
- ( 'Switches', lambda: self.select( 'switches' ) ),
- ( 'Controllers', lambda: self.select( 'controllers' ) ),
- ( 'Graph', lambda: self.select( 'graph' ) ),
- ( 'Ping', self.ping ),
- ( 'Iperf', self.iperf ),
- ( 'Interrupt', self.stop ),
- ( 'Clear', self.clear ),
- ( 'Quit', self.quit )
- ]
- for name, cmd in buttons:
- b = Button( f, text=name, command=cmd, **self.menuStyle )
- b.pack( side='left' )
- f.pack( padx=4, pady=4, fill='x' )
- return f
-
- def clear( self ):
- "Clear selection."
- for console in self.selected.consoles:
- console.clear()
-
- def waiting( self, consoles=None ):
- "Are any of our hosts waiting for output?"
- if consoles is None:
- consoles = self.consoles[ 'hosts' ].consoles
- for console in consoles:
- if console.waiting():
- return True
- return False
-
- def ping( self ):
- "Tell each host to ping the next one."
- consoles = self.consoles[ 'hosts' ].consoles
- if self.waiting( consoles ):
- return
- count = len( consoles )
- i = 0
- for console in consoles:
- i = ( i + 1 ) % count
- ip = consoles[ i ].node.IP()
- console.sendCmd( 'ping ' + ip )
-
- def iperf( self ):
- "Tell each host to iperf to the next one."
- consoles = self.consoles[ 'hosts' ].consoles
- if self.waiting( consoles ):
- return
- count = len( consoles )
- self.setOutputHook( self.updateGraph )
- for console in consoles:
- console.node.cmd( 'iperf -sD' )
- i = 0
- for console in consoles:
- i = ( i + 1 ) % count
- ip = consoles[ i ].node.IP()
- console.sendCmd( 'iperf -t 99999 -i 1 -c ' + ip )
-
- def stop( self, wait=True ):
- "Interrupt all hosts."
- consoles = self.consoles[ 'hosts' ].consoles
- for console in consoles:
- console.handleInt()
- if wait:
- for console in consoles:
- console.waitOutput()
- self.setOutputHook( None )
- # Shut down any iperfs that might still be running
- quietRun( 'killall -9 iperf' )
-
- def quit( self ):
- "Stop everything and quit."
- self.stop( wait=False)
- Frame.quit( self )
-
-
-# Make it easier to construct and assign objects
-
-def assign( obj, **kwargs ):
- "Set a bunch of fields in an object."
- obj.__dict__.update( kwargs )
-
-class Object( object ):
- "Generic object you can stuff junk into."
- def __init__( self, **kwargs ):
- assign( self, **kwargs )
-
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- network = TreeNet( depth=2, fanout=4 )
- network.start()
- app = ConsoleApp( network, width=4 )
- app.mainloop()
- network.stop()
diff --git a/examples/controllers.py b/examples/controllers.py
deleted file mode 100644
index 6eeef0e..0000000
--- a/examples/controllers.py
+++ /dev/null
@@ -1,64 +0,0 @@
-#!/usr/bin/python
-
-"""
-This example creates a multi-controller network from
-semi-scratch; note a topo object could also be used and
-would be passed into the Mininet() constructor.
-"""
-
-from mininet.net import Mininet
-from mininet.node import Controller, OVSKernelSwitch
-from mininet.cli import CLI
-from mininet.log import setLogLevel
-
-Switch = OVSKernelSwitch
-
-def addHost( net, N ):
- "Create host hN and add to net."
- name = 'h%d' % N
- ip = '10.0.0.%d' % N
- return net.addHost( name, ip=ip )
-
-def multiControllerNet():
- "Create a network with multiple controllers."
-
- net = Mininet( controller=Controller, switch=Switch)
-
- print "*** Creating controllers"
- c1 = net.addController( 'c1', port=6633 )
- c2 = net.addController( 'c2', port=6634 )
-
- print "*** Creating switches"
- s1 = net.addSwitch( 's1' )
- s2 = net.addSwitch( 's2' )
-
- print "*** Creating hosts"
- hosts1 = [ addHost( net, n ) for n in 3, 4 ]
- hosts2 = [ addHost( net, n ) for n in 5, 6 ]
-
- print "*** Creating links"
- for h in hosts1:
- s1.linkTo( h )
- for h in hosts2:
- s2.linkTo( h )
- s1.linkTo( s2 )
-
- print "*** Starting network"
- net.build()
- c1.start()
- c2.start()
- s1.start( [ c1 ] )
- s2.start( [ c2 ] )
-
- print "*** Testing network"
- net.pingAll()
-
- print "*** Running CLI"
- CLI( net )
-
- print "*** Stopping network"
- net.stop()
-
-if __name__ == '__main__':
- setLogLevel( 'info' ) # for CLI output
- multiControllerNet()
diff --git a/examples/cpu.py b/examples/cpu.py
deleted file mode 100644
index 6dfc936..0000000
--- a/examples/cpu.py
+++ /dev/null
@@ -1,81 +0,0 @@
-#!/usr/bin/python
-
-"""
-cpu.py: test iperf bandwidth for varying cpu limits
-"""
-
-from mininet.net import Mininet
-from mininet.node import CPULimitedHost
-from mininet.topolib import TreeTopo
-from mininet.util import custom
-from mininet.log import setLogLevel, output
-
-from time import sleep
-
-def waitListening(client, server, port):
- "Wait until server is listening on port"
- if not client.cmd('which telnet'):
- raise Exception('Could not find telnet')
- cmd = ('sh -c "echo A | telnet -e A %s %s"' %
- (server.IP(), port))
- while 'Connected' not in client.cmd(cmd):
- output('waiting for', server,
- 'to listen on port', port, '\n')
- sleep(.5)
-
-
-def bwtest( cpuLimits, period_us=100000, seconds=5 ):
- """Example/test of link and CPU bandwidth limits
- cpu: cpu limit as fraction of overall CPU time"""
-
- topo = TreeTopo( depth=1, fanout=2 )
-
- results = {}
-
- for sched in 'rt', 'cfs':
- print '*** Testing with', sched, 'bandwidth limiting'
- for cpu in cpuLimits:
- host = custom( CPULimitedHost, sched=sched,
- period_us=period_us,
- cpu=cpu )
- net = Mininet( topo=topo, host=host )
- net.start()
- net.pingAll()
- hosts = [ net.getNodeByName( h ) for h in topo.hosts() ]
- client, server = hosts[ 0 ], hosts[ -1 ]
- server.cmd( 'iperf -s -p 5001 &' )
- waitListening( client, server, 5001 )
- result = client.cmd( 'iperf -yc -t %s -c %s' % (
- seconds, server.IP() ) ).split( ',' )
- bps = float( result[ -1 ] )
- server.cmdPrint( 'kill %iperf' )
- net.stop()
- updated = results.get( sched, [] )
- updated += [ ( cpu, bps ) ]
- results[ sched ] = updated
-
- return results
-
-
-def dump( results ):
- "Dump results"
-
- fmt = '%s\t%s\t%s'
-
- print
- print fmt % ( 'sched', 'cpu', 'client MB/s' )
- print
-
- for sched in sorted( results.keys() ):
- entries = results[ sched ]
- for cpu, bps in entries:
- pct = '%.2f%%' % ( cpu * 100 )
- mbps = bps / 1e6
- print fmt % ( sched, pct, mbps )
-
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- limits = [ .45, .4, .3, .2, .1 ]
- out = bwtest( limits )
- dump( out )
diff --git a/examples/emptynet.py b/examples/emptynet.py
deleted file mode 100644
index 9f57855..0000000
--- a/examples/emptynet.py
+++ /dev/null
@@ -1,44 +0,0 @@
-#!/usr/bin/python
-
-"""
-This example shows how to create an empty Mininet object
-(without a topology object) and add nodes to it manually.
-"""
-
-from mininet.net import Mininet
-from mininet.node import Controller
-from mininet.cli import CLI
-from mininet.log import setLogLevel, info
-
-def emptyNet():
-
- "Create an empty network and add nodes to it."
-
- net = Mininet( controller=Controller )
-
- info( '*** Adding controller\n' )
- net.addController( 'c0' )
-
- info( '*** Adding hosts\n' )
- h1 = net.addHost( 'h1', ip='10.0.0.1' )
- h2 = net.addHost( 'h2', ip='10.0.0.2' )
-
- info( '*** Adding switch\n' )
- s3 = net.addSwitch( 's3' )
-
- info( '*** Creating links\n' )
- h1.linkTo( s3 )
- h2.linkTo( s3 )
-
- info( '*** Starting network\n')
- net.start()
-
- info( '*** Running CLI\n' )
- CLI( net )
-
- info( '*** Stopping network' )
- net.stop()
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- emptyNet()
diff --git a/examples/hwintf.py b/examples/hwintf.py
deleted file mode 100644
index e5960d9..0000000
--- a/examples/hwintf.py
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/usr/bin/python
-
-"""
-This example shows how to add an interface (for example a real
-hardware interface) to a network after the network is created.
-"""
-
-import re
-
-from mininet.cli import CLI
-from mininet.log import setLogLevel, info, error
-from mininet.net import Mininet
-from mininet.link import Intf
-from mininet.topolib import TreeTopo
-from mininet.util import quietRun
-
-def checkIntf( intf ):
- "Make sure intf exists and is not configured."
- if ( ' %s:' % intf ) not in quietRun( 'ip link show' ):
- error( 'Error:', intf, 'does not exist!\n' )
- exit( 1 )
- ips = re.findall( r'\d+\.\d+\.\d+\.\d+', quietRun( 'ifconfig ' + intf ) )
- if ips:
- error( 'Error:', intf, 'has an IP address,'
- 'and is probably in use!\n' )
- exit( 1 )
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
-
- intfName = 'eth1'
- info( '*** Checking', intfName, '\n' )
- checkIntf( intfName )
-
- info( '*** Creating network\n' )
- net = Mininet( topo=TreeTopo( depth=1, fanout=2 ) )
-
- switch = net.switches[ 0 ]
- info( '*** Adding hardware interface', intfName, 'to switch',
- switch.name, '\n' )
- _intf = Intf( intfName, node=switch )
-
- info( '*** Note: you may need to reconfigure the interfaces for '
- 'the Mininet hosts:\n', net.hosts, '\n' )
-
- net.start()
- CLI( net )
- net.stop()
diff --git a/examples/limit.py b/examples/limit.py
deleted file mode 100644
index 0b23ca1..0000000
--- a/examples/limit.py
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/python
-
-"""
-limit.py: example of using link and CPU limits
-"""
-
-from mininet.net import Mininet
-from mininet.link import TCIntf
-from mininet.node import CPULimitedHost
-from mininet.topolib import TreeTopo
-from mininet.util import custom
-from mininet.log import setLogLevel
-
-
-def testLinkLimit( net, bw ):
- "Run bandwidth limit test"
- print '*** Testing network %.2f Mbps bandwidth limit' % bw
- net.iperf( )
-
-
-def limit( bw=10, cpu=.1 ):
- """Example/test of link and CPU bandwidth limits
- bw: interface bandwidth limit in Mbps
- cpu: cpu limit as fraction of overall CPU time"""
- intf = custom( TCIntf, bw=bw )
- myTopo = TreeTopo( depth=1, fanout=2 )
- for sched in 'rt', 'cfs':
- print '*** Testing with', sched, 'bandwidth limiting'
- host = custom( CPULimitedHost, sched=sched, cpu=cpu )
- net = Mininet( topo=myTopo, intf=intf, host=host )
- net.start()
- testLinkLimit( net, bw=bw )
- net.runCpuLimitTest( cpu=cpu )
- net.stop()
-
-def verySimpleLimit( bw=150 ):
- "Absurdly simple limiting test"
- intf = custom( TCIntf, bw=bw )
- net = Mininet( intf=intf )
- h1, h2 = net.addHost( 'h1' ), net.addHost( 'h2' )
- net.addLink( h1, h2 )
- net.start()
- net.pingAll()
- net.iperf()
- h1.cmdPrint( 'tc -s qdisc ls dev', h1.defaultIntf() )
- h2.cmdPrint( 'tc -d class show dev', h2.defaultIntf() )
- h1.cmdPrint( 'tc -s qdisc ls dev', h1.defaultIntf() )
- h2.cmdPrint( 'tc -d class show dev', h2.defaultIntf() )
- net.stop()
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- limit()
diff --git a/examples/linearbandwidth.py b/examples/linearbandwidth.py
deleted file mode 100644
index 3fd06c7..0000000
--- a/examples/linearbandwidth.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/usr/bin/python
-
-"""
-Test bandwidth (using iperf) on linear networks of varying size,
-using both kernel and user datapaths.
-
-We construct a network of N hosts and N-1 switches, connected as follows:
-
-h1 <-> s1 <-> s2 .. sN-1
- | | |
- h2 h3 hN
-
-WARNING: by default, the reference controller only supports 16
-switches, so this test WILL NOT WORK unless you have recompiled
-your controller to support 100 switches (or more.)
-
-In addition to testing the bandwidth across varying numbers
-of switches, this example demonstrates:
-
-- creating a custom topology, LinearTestTopo
-- using the ping() and iperf() tests from Mininet()
-- testing both the kernel and user switches
-
-"""
-
-from mininet.net import Mininet
-from mininet.node import UserSwitch, OVSKernelSwitch
-from mininet.topo import Topo
-from mininet.log import lg
-from mininet.util import irange
-
-import sys
-flush = sys.stdout.flush
-
-class LinearTestTopo( Topo ):
- "Topology for a string of N hosts and N-1 switches."
-
- def __init__( self, N, **params ):
-
- # Initialize topology
- Topo.__init__( self, **params )
-
- # Create switches and hosts
- hosts = [ self.addHost( 'h%s' % h )
- for h in irange( 1, N ) ]
- switches = [ self.addSwitch( 's%s' % s )
- for s in irange( 1, N - 1 ) ]
-
- # Wire up switches
- last = None
- for switch in switches:
- if last:
- self.addLink( last, switch )
- last = switch
-
- # Wire up hosts
- self.addLink( hosts[ 0 ], switches[ 0 ] )
- for host, switch in zip( hosts[ 1: ], switches ):
- self.addLink( host, switch )
-
-
-def linearBandwidthTest( lengths ):
-
- "Check bandwidth at various lengths along a switch chain."
-
- results = {}
- switchCount = max( lengths )
- hostCount = switchCount + 1
-
- switches = { 'reference user': UserSwitch,
- 'Open vSwitch kernel': OVSKernelSwitch }
-
- topo = LinearTestTopo( hostCount )
-
- for datapath in switches.keys():
- print "*** testing", datapath, "datapath"
- Switch = switches[ datapath ]
- results[ datapath ] = []
- net = Mininet( topo=topo, switch=Switch )
- net.start()
- print "*** testing basic connectivity"
- for n in lengths:
- net.ping( [ net.hosts[ 0 ], net.hosts[ n ] ] )
- print "*** testing bandwidth"
- for n in lengths:
- src, dst = net.hosts[ 0 ], net.hosts[ n ]
- print "testing", src.name, "<->", dst.name,
- bandwidth = net.iperf( [ src, dst ] )
- print bandwidth
- flush()
- results[ datapath ] += [ ( n, bandwidth ) ]
- net.stop()
-
- for datapath in switches.keys():
- print
- print "*** Linear network results for", datapath, "datapath:"
- print
- result = results[ datapath ]
- print "SwitchCount\tiperf Results"
- for switchCount, bandwidth in result:
- print switchCount, '\t\t',
- print bandwidth[ 0 ], 'server, ', bandwidth[ 1 ], 'client'
- print
- print
-
-if __name__ == '__main__':
- lg.setLogLevel( 'info' )
- sizes = [ 1, 10, 20, 40, 60, 80, 100 ]
- print "*** Running linearBandwidthTest", sizes
- linearBandwidthTest( sizes )
diff --git a/examples/miniedit.py b/examples/miniedit.py
deleted file mode 100644
index c17cf3d..0000000
--- a/examples/miniedit.py
+++ /dev/null
@@ -1,771 +0,0 @@
-#!/usr/bin/python
-
-"""
-MiniEdit: a simple network editor for Mininet
-
-This is a simple demonstration of how one might build a
-GUI application using Mininet as the network model.
-
-Development version - not entirely functional!
-
-Bob Lantz, April 2010
-"""
-import optparse
-
-from Tkinter import Frame, Button, Label, Scrollbar, Canvas
-from Tkinter import Menu, BitmapImage, PhotoImage, Wm, Toplevel
-
-# someday: from ttk import *
-
-from mininet.log import setLogLevel
-from mininet.net import Mininet
-from mininet.util import ipStr
-from mininet.term import makeTerm, cleanUpScreens
-
-
-def parse_args():
- usage="""Usage: miniccnxedit [template_file]
- If no template_file is given, generated template will be written
- to the file topo.miniccnx in the current directory.
- """
-
- parser = optparse.OptionParser(usage)
-
- #parser.add_option("-t", "--template", help="template file name")
- _, arg = parser.parse_args()
-
- return arg
-
-class MiniEdit( Frame ):
-
- "A simple network editor for Mininet."
-
- def __init__( self, parent=None, cheight=200, cwidth=500, template_file='topo.miniccnx' ):
-
- Frame.__init__( self, parent )
- self.action = None
- self.appName = 'Mini-CCNx'
- self.template_file = template_file
-
- # Style
- self.font = ( 'Geneva', 9 )
- self.smallFont = ( 'Geneva', 7 )
- self.bg = 'white'
-
- # Title
- self.top = self.winfo_toplevel()
- self.top.title( self.appName )
-
- # Menu bar
- self.createMenubar()
-
- # Editing canvas
- self.cheight, self.cwidth = cheight, cwidth
- self.cframe, self.canvas = self.createCanvas()
-
- # Toolbar
- self.images = miniEditImages()
- self.buttons = {}
- self.active = None
- self.tools = ( 'Select', 'Host', 'Switch', 'Link' )
- self.customColors = { 'Switch': 'darkGreen', 'Host': 'blue' }
- self.toolbar = self.createToolbar()
-
- # Layout
- self.toolbar.grid( column=0, row=0, sticky='nsew')
- self.cframe.grid( column=1, row=0 )
- self.columnconfigure( 1, weight=1 )
- self.rowconfigure( 0, weight=1 )
- self.pack( expand=True, fill='both' )
-
- # About box
- self.aboutBox = None
-
- # Initialize node data
- self.nodeBindings = self.createNodeBindings()
- self.nodePrefixes = { 'Switch': 's', 'Host': 'h' }
- self.widgetToItem = {}
- self.itemToWidget = {}
-
- # Initialize link tool
- self.link = self.linkWidget = None
-
- # Selection support
- self.selection = None
-
- # Keyboard bindings
- self.bind( '<Control-q>', lambda event: self.quit() )
- self.bind( '<KeyPress-Delete>', self.deleteSelection )
- self.bind( '<KeyPress-BackSpace>', self.deleteSelection )
- self.focus()
-
- # Event handling initalization
- self.linkx = self.linky = self.linkItem = None
- self.lastSelection = None
-
- # Model initialization
- self.links = {}
- self.nodeCount = 0
- self.net = None
-
- # Close window gracefully
- Wm.wm_protocol( self.top, name='WM_DELETE_WINDOW', func=self.quit )
-
- def quit( self ):
- "Stop our network, if any, then quit."
- self.stop()
- Frame.quit( self )
-
- def createMenubar( self ):
- "Create our menu bar."
-
- font = self.font
-
- mbar = Menu( self.top, font=font )
- self.top.configure( menu=mbar )
-
- # Application menu
- appMenu = Menu( mbar, tearoff=False )
- mbar.add_cascade( label=self.appName, font=font, menu=appMenu )
- appMenu.add_command( label='About Mini-CCNx', command=self.about,
- font=font)
- appMenu.add_separator()
- appMenu.add_command( label='Quit', command=self.quit, font=font )
-
- #fileMenu = Menu( mbar, tearoff=False )
- #mbar.add_cascade( label="File", font=font, menu=fileMenu )
- #fileMenu.add_command( label="Load...", font=font )
- #fileMenu.add_separator()
- #fileMenu.add_command( label="Save", font=font )
- #fileMenu.add_separator()
- #fileMenu.add_command( label="Print", font=font )
-
- editMenu = Menu( mbar, tearoff=False )
- mbar.add_cascade( label="Edit", font=font, menu=editMenu )
- editMenu.add_command( label="Cut", font=font,
- command=lambda: self.deleteSelection( None ) )
-
- # runMenu = Menu( mbar, tearoff=False )
- # mbar.add_cascade( label="Run", font=font, menu=runMenu )
- # runMenu.add_command( label="Run", font=font, command=self.doRun )
- # runMenu.add_command( label="Stop", font=font, command=self.doStop )
- # runMenu.add_separator()
- # runMenu.add_command( label='Xterm', font=font, command=self.xterm )
-
- # Canvas
-
- def createCanvas( self ):
- "Create and return our scrolling canvas frame."
- f = Frame( self )
-
- canvas = Canvas( f, width=self.cwidth, height=self.cheight,
- bg=self.bg )
-
- # Scroll bars
- xbar = Scrollbar( f, orient='horizontal', command=canvas.xview )
- ybar = Scrollbar( f, orient='vertical', command=canvas.yview )
- canvas.configure( xscrollcommand=xbar.set, yscrollcommand=ybar.set )
-
- # Resize box
- resize = Label( f, bg='white' )
-
- # Layout
- canvas.grid( row=0, column=1, sticky='nsew')
- ybar.grid( row=0, column=2, sticky='ns')
- xbar.grid( row=1, column=1, sticky='ew' )
- resize.grid( row=1, column=2, sticky='nsew' )
-
- # Resize behavior
- f.rowconfigure( 0, weight=1 )
- f.columnconfigure( 1, weight=1 )
- f.grid( row=0, column=0, sticky='nsew' )
- f.bind( '<Configure>', lambda event: self.updateScrollRegion() )
-
- # Mouse bindings
- canvas.bind( '<ButtonPress-1>', self.clickCanvas )
- canvas.bind( '<B1-Motion>', self.dragCanvas )
- canvas.bind( '<ButtonRelease-1>', self.releaseCanvas )
-
- return f, canvas
-
- def updateScrollRegion( self ):
- "Update canvas scroll region to hold everything."
- bbox = self.canvas.bbox( 'all' )
- if bbox is not None:
- self.canvas.configure( scrollregion=( 0, 0, bbox[ 2 ],
- bbox[ 3 ] ) )
-
- def canvasx( self, x_root ):
- "Convert root x coordinate to canvas coordinate."
- c = self.canvas
- return c.canvasx( x_root ) - c.winfo_rootx()
-
- def canvasy( self, y_root ):
- "Convert root y coordinate to canvas coordinate."
- c = self.canvas
- return c.canvasy( y_root ) - c.winfo_rooty()
-
- # Toolbar
-
- def activate( self, toolName ):
- "Activate a tool and press its button."
- # Adjust button appearance
- if self.active:
- self.buttons[ self.active ].configure( relief='raised' )
- self.buttons[ toolName ].configure( relief='sunken' )
- # Activate dynamic bindings
- self.active = toolName
-
- def createToolbar( self ):
- "Create and return our toolbar frame."
-
- toolbar = Frame( self )
-
- # Tools
- for tool in self.tools:
- cmd = ( lambda t=tool: self.activate( t ) )
- b = Button( toolbar, text=tool, font=self.smallFont, command=cmd)
- if tool in self.images:
- b.config( height=50, image=self.images[ tool ] )
- # b.config( compound='top' )
- b.pack( fill='x' )
- self.buttons[ tool ] = b
- self.activate( self.tools[ 0 ] )
-
- # Spacer
- Label( toolbar, text='' ).pack()
-
- # Commands
- #for cmd, color in [ ( 'Stop', 'darkRed' ), ( 'Run', 'darkGreen' ) ]:
- # doCmd = getattr( self, 'do' + cmd )
- # b = Button( toolbar, text=cmd, font=self.smallFont,
- # fg=color, command=doCmd )
- # b.pack( fill='x', side='bottom' )
-
- for cmd, color in [ ( 'Generate', 'darkGreen' ) ]:
- doCmd = getattr( self, 'do' + cmd )
- b = Button( toolbar, text=cmd, font=self.smallFont,
- fg=color, command=doCmd )
- b.pack( fill='x', side='bottom' )
-
-
- return toolbar
-
- def doGenerate( self ):
- "Generate template."
- self.activate( 'Select' )
- for tool in self.tools:
- self.buttons[ tool ].config( state='disabled' )
-
- self.buildTemplate()
-
- for tool in self.tools:
- self.buttons[ tool ].config( state='normal' )
-
- def doStop( self ):
- "Stop command."
- self.stop()
- for tool in self.tools:
- self.buttons[ tool ].config( state='normal' )
-
- def buildTemplate( self ):
- "Generate template"
-
- template = open(self.template_file, 'w')
-
- # hosts
- template.write('[hosts]\n')
- for widget in self.widgetToItem:
- name = widget[ 'text' ]
- tags = self.canvas.gettags( self.widgetToItem[ widget ] )
- if 'Host' in tags:
- template.write(name + ':\n')
-
- # switches/routers
- template.write('[routers]\n')
- for widget in self.widgetToItem:
- name = widget[ 'text' ]
- tags = self.canvas.gettags( self.widgetToItem[ widget ] )
- if 'Switch' in tags:
- template.write(name + ':\n')
-
- # Make links
- template.write('[links]\n')
- for link in self.links.values():
- ( src, dst ) = link
- srcName, dstName = src[ 'text' ], dst[ 'text' ]
- template.write(srcName + ':' + dstName + '\n')
-
- template.close()
-
-
- # Generic canvas handler
- #
- # We could have used bindtags, as in nodeIcon, but
- # the dynamic approach used here
- # may actually require less code. In any case, it's an
- # interesting introspection-based alternative to bindtags.
-
- def canvasHandle( self, eventName, event ):
- "Generic canvas event handler"
- if self.active is None:
- return
- toolName = self.active
- handler = getattr( self, eventName + toolName, None )
- if handler is not None:
- handler( event )
-
- def clickCanvas( self, event ):
- "Canvas click handler."
- self.canvasHandle( 'click', event )
-
- def dragCanvas( self, event ):
- "Canvas drag handler."
- self.canvasHandle( 'drag', event )
-
- def releaseCanvas( self, event ):
- "Canvas mouse up handler."
- self.canvasHandle( 'release', event )
-
- # Currently the only items we can select directly are
- # links. Nodes are handled by bindings in the node icon.
-
- def findItem( self, x, y ):
- "Find items at a location in our canvas."
- items = self.canvas.find_overlapping( x, y, x, y )
- if len( items ) == 0:
- return None
- else:
- return items[ 0 ]
-
- # Canvas bindings for Select, Host, Switch and Link tools
-
- def clickSelect( self, event ):
- "Select an item."
- self.selectItem( self.findItem( event.x, event.y ) )
-
- def deleteItem( self, item ):
- "Delete an item."
- # Don't delete while network is running
- if self.buttons[ 'Select' ][ 'state' ] == 'disabled':
- return
- # Delete from model
- if item in self.links:
- self.deleteLink( item )
- if item in self.itemToWidget:
- self.deleteNode( item )
- # Delete from view
- self.canvas.delete( item )
-
- def deleteSelection( self, _event ):
- "Delete the selected item."
- if self.selection is not None:
- self.deleteItem( self.selection )
- self.selectItem( None )
-
- def nodeIcon( self, node, name ):
- "Create a new node icon."
- icon = Button( self.canvas, image=self.images[ node ],
- text=name, compound='top' )
- # Unfortunately bindtags wants a tuple
- bindtags = [ str( self.nodeBindings ) ]
- bindtags += list( icon.bindtags() )
- icon.bindtags( tuple( bindtags ) )
- return icon
-
- def newNode( self, node, event ):
- "Add a new node to our canvas."
- c = self.canvas
- x, y = c.canvasx( event.x ), c.canvasy( event.y )
- self.nodeCount += 1
- name = self.nodePrefixes[ node ] + str( self.nodeCount )
- icon = self.nodeIcon( node, name )
- item = self.canvas.create_window( x, y, anchor='c', window=icon,
- tags=node )
- self.widgetToItem[ icon ] = item
- self.itemToWidget[ item ] = icon
- self.selectItem( item )
- icon.links = {}
-
- def clickHost( self, event ):
- "Add a new host to our canvas."
- self.newNode( 'Host', event )
-
- def clickSwitch( self, event ):
- "Add a new switch to our canvas."
- self.newNode( 'Switch', event )
-
- def dragLink( self, event ):
- "Drag a link's endpoint to another node."
- if self.link is None:
- return
- # Since drag starts in widget, we use root coords
- x = self.canvasx( event.x_root )
- y = self.canvasy( event.y_root )
- c = self.canvas
- c.coords( self.link, self.linkx, self.linky, x, y )
-
- def releaseLink( self, _event ):
- "Give up on the current link."
- if self.link is not None:
- self.canvas.delete( self.link )
- self.linkWidget = self.linkItem = self.link = None
-
- # Generic node handlers
-
- def createNodeBindings( self ):
- "Create a set of bindings for nodes."
- bindings = {
- '<ButtonPress-1>': self.clickNode,
- '<B1-Motion>': self.dragNode,
- '<ButtonRelease-1>': self.releaseNode,
- '<Enter>': self.enterNode,
- '<Leave>': self.leaveNode,
- '<Double-ButtonPress-1>': self.xterm
- }
- l = Label() # lightweight-ish owner for bindings
- for event, binding in bindings.items():
- l.bind( event, binding )
- return l
-
- def selectItem( self, item ):
- "Select an item and remember old selection."
- self.lastSelection = self.selection
- self.selection = item
-
- def enterNode( self, event ):
- "Select node on entry."
- self.selectNode( event )
-
- def leaveNode( self, _event ):
- "Restore old selection on exit."
- self.selectItem( self.lastSelection )
-
- def clickNode( self, event ):
- "Node click handler."
- if self.active is 'Link':
- self.startLink( event )
- else:
- self.selectNode( event )
- return 'break'
-
- def dragNode( self, event ):
- "Node drag handler."
- if self.active is 'Link':
- self.dragLink( event )
- else:
- self.dragNodeAround( event )
-
- def releaseNode( self, event ):
- "Node release handler."
- if self.active is 'Link':
- self.finishLink( event )
-
- # Specific node handlers
-
- def selectNode( self, event ):
- "Select the node that was clicked on."
- item = self.widgetToItem.get( event.widget, None )
- self.selectItem( item )
-
- def dragNodeAround( self, event ):
- "Drag a node around on the canvas."
- c = self.canvas
- # Convert global to local coordinates;
- # Necessary since x, y are widget-relative
- x = self.canvasx( event.x_root )
- y = self.canvasy( event.y_root )
- w = event.widget
- # Adjust node position
- item = self.widgetToItem[ w ]
- c.coords( item, x, y )
- # Adjust link positions
- for dest in w.links:
- link = w.links[ dest ]
- item = self.widgetToItem[ dest ]
- x1, y1 = c.coords( item )
- c.coords( link, x, y, x1, y1 )
-
- def startLink( self, event ):
- "Start a new link."
- if event.widget not in self.widgetToItem:
- # Didn't click on a node
- return
- w = event.widget
- item = self.widgetToItem[ w ]
- x, y = self.canvas.coords( item )
- self.link = self.canvas.create_line( x, y, x, y, width=4,
- fill='blue', tag='link' )
- self.linkx, self.linky = x, y
- self.linkWidget = w
- self.linkItem = item
-
- # Link bindings
- # Selection still needs a bit of work overall
- # Callbacks ignore event
-
- def select( _event, link=self.link ):
- "Select item on mouse entry."
- self.selectItem( link )
-
- def highlight( _event, link=self.link ):
- "Highlight item on mouse entry."
- # self.selectItem( link )
- self.canvas.itemconfig( link, fill='green' )
-
- def unhighlight( _event, link=self.link ):
- "Unhighlight item on mouse exit."
- self.canvas.itemconfig( link, fill='blue' )
- # self.selectItem( None )
-
- self.canvas.tag_bind( self.link, '<Enter>', highlight )
- self.canvas.tag_bind( self.link, '<Leave>', unhighlight )
- self.canvas.tag_bind( self.link, '<ButtonPress-1>', select )
-
- def finishLink( self, event ):
- "Finish creating a link"
- if self.link is None:
- return
- source = self.linkWidget
- c = self.canvas
- # Since we dragged from the widget, use root coords
- x, y = self.canvasx( event.x_root ), self.canvasy( event.y_root )
- target = self.findItem( x, y )
- dest = self.itemToWidget.get( target, None )
- if ( source is None or dest is None or source == dest
- or dest in source.links or source in dest.links ):
- self.releaseLink( event )
- return
- # For now, don't allow hosts to be directly linked
-# stags = self.canvas.gettags( self.widgetToItem[ source ] )
-# dtags = self.canvas.gettags( target )
-# if 'Host' in stags and 'Host' in dtags:
-# self.releaseLink( event )
-# return
- x, y = c.coords( target )
- c.coords( self.link, self.linkx, self.linky, x, y )
- self.addLink( source, dest )
- # We're done
- self.link = self.linkWidget = None
-
- # Menu handlers
-
- def about( self ):
- "Display about box."
- about = self.aboutBox
- if about is None:
- bg = 'white'
- about = Toplevel( bg='white' )
- about.title( 'About' )
- info = self.appName + ': a simple network editor for Mini-CCNx - based on Miniedit '
- warning = 'Development version - not entirely functional!'
- author = 'Carlos Cabral, Jan 2013'
- author2 = 'Miniedit by Bob Lantz <rlantz@cs>, April 2010'
- line1 = Label( about, text=info, font='Helvetica 10 bold', bg=bg )
- line2 = Label( about, text=warning, font='Helvetica 9', bg=bg )
- line3 = Label( about, text=author, font='Helvetica 9', bg=bg )
- line4 = Label( about, text=author2, font='Helvetica 9', bg=bg )
- line1.pack( padx=20, pady=10 )
- line2.pack(pady=10 )
- line3.pack(pady=10 )
- line4.pack(pady=10 )
- hide = ( lambda about=about: about.withdraw() )
- self.aboutBox = about
- # Hide on close rather than destroying window
- Wm.wm_protocol( about, name='WM_DELETE_WINDOW', func=hide )
- # Show (existing) window
- about.deiconify()
-
- def createToolImages( self ):
- "Create toolbar (and icon) images."
-
- # Model interface
- #
- # Ultimately we will either want to use a topo or
- # mininet object here, probably.
-
- def addLink( self, source, dest ):
- "Add link to model."
- source.links[ dest ] = self.link
- dest.links[ source ] = self.link
- self.links[ self.link ] = ( source, dest )
-
- def deleteLink( self, link ):
- "Delete link from model."
- pair = self.links.get( link, None )
- if pair is not None:
- source, dest = pair
- del source.links[ dest ]
- del dest.links[ source ]
- if link is not None:
- del self.links[ link ]
-
- def deleteNode( self, item ):
- "Delete node (and its links) from model."
- widget = self.itemToWidget[ item ]
- for link in widget.links.values():
- # Delete from view and model
- self.deleteItem( link )
- del self.itemToWidget[ item ]
- del self.widgetToItem[ widget ]
-
- def build( self ):
- "Build network based on our topology."
-
- net = Mininet( topo=None )
-
- # Make controller
- net.addController( 'c0' )
- # Make nodes
- for widget in self.widgetToItem:
- name = widget[ 'text' ]
- tags = self.canvas.gettags( self.widgetToItem[ widget ] )
- nodeNum = int( name[ 1: ] )
- if 'Switch' in tags:
- net.addSwitch( name )
- elif 'Host' in tags:
- net.addHost( name, ip=ipStr( nodeNum ) )
- else:
- raise Exception( "Cannot create mystery node: " + name )
- # Make links
- for link in self.links.values():
- ( src, dst ) = link
- srcName, dstName = src[ 'text' ], dst[ 'text' ]
- src, dst = net.nameToNode[ srcName ], net.nameToNode[ dstName ]
- src.linkTo( dst )
-
- # Build network (we have to do this separately at the moment )
- net.build()
-
- return net
-
- def start( self ):
- "Start network."
- if self.net is None:
- self.net = self.build()
- self.net.start()
-
- def stop( self ):
- "Stop network."
- if self.net is not None:
- self.net.stop()
- cleanUpScreens()
- self.net = None
-
- def xterm( self, _ignore=None ):
- "Make an xterm when a button is pressed."
- if ( self.selection is None or
- self.net is None or
- self.selection not in self.itemToWidget ):
- return
- name = self.itemToWidget[ self.selection ][ 'text' ]
- if name not in self.net.nameToNode:
- return
- term = makeTerm( self.net.nameToNode[ name ], 'Host' )
- self.net.terms.append( term )
-
-
-def miniEditImages():
- "Create and return images for MiniEdit."
-
- # Image data. Git will be unhappy. However, the alternative
- # is to keep track of separate binary files, which is also
- # unappealing.
-
- return {
- 'Select': BitmapImage(
- file='/usr/include/X11/bitmaps/left_ptr' ),
-
- 'Switch' : PhotoImage( data=r"""
- R0lGODlhOgArAMIEAB8aFwB7tQCb343L8P///////////////yH+GlNvZnR3YXJlOiBNaWNyb3NvZnQgT2ZmaWNlACwAAAAAOgArAAAD/ki63P4wykmrvTjr3YYfQigKH7d5Y6qmnjmBayyHg8vAAqDPaUTbowaA13OIahqcyEgEQEbIi7LIGA1FzsaSQK0QfbnH10sMa83VsqX53HLL7sgUTudR5s367F7PEq4CYDJRcngqfgshiAqAMwF3AYdWTCERjSoBjy+ZVItvMg6XIZmaEgOkSmJwlKOkkKSRlaqraaewr7ABhnqBNLmZuL+6vCzCrpvGsB9EH8m5wc7R0sbQ09bT1dOEBLbXwMjeEN7HpuO6Dt3hFObi7Ovj7d7bEOnYD+4V8PfqF/wN/lKsxZPmop6wBwaFzTsRbVvCWzYQmlMW0UKzZCUqatzICLGjx48gKyYAADs=
-"""),
- 'Host' : PhotoImage( data=r"""
- R0lGODlhKQAyAMIHAJyeoK+wsrW2uMHCxM7P0Ozt7fn5+f///yH+EUNyZWF0ZWQgd2l0aCBHSU1QACwAAAAAKQAyAAAD63i63P4wykmrvS4cwLv/IEhxRxGeKGBM3pa+X9QeBmxrT3gMNpyLrt6rgcJgisKXgIFMopaLpjMEVUinn2pQ1ImSrN8uGKCVegHn8bl8CqbV7jFbJ47H650592sX4zl6MX9ocIOBLYNvhkxtiYV8eYx0kJSEi2d7WFmSmZqRmIKeHoddoqOcoaZkqIiqq6CtqqQkrq9jnaKzaLW6Wy8DBMHCp7ClPT+ArMY2t1u9Qs3Et6k+W87KtMfW0r6x1d7P2uDYu+LLtt3nQ9ufxeXM7MkOuCnR7UTe6/jyEOqeWj/SYQEowxXBfgYPJAAAOw==
-"""),
- 'Hosti': PhotoImage( data=r"""
- R0lGODlhIAAYAPcAMf//////zP//mf//Zv//M///AP/M///MzP/M
- mf/MZv/MM//MAP+Z//+ZzP+Zmf+ZZv+ZM/+ZAP9m//9mzP9mmf9m
- Zv9mM/9mAP8z//8zzP8zmf8zZv8zM/8zAP8A//8AzP8Amf8AZv8A
- M/8AAMz//8z/zMz/mcz/Zsz/M8z/AMzM/8zMzMzMmczMZszMM8zM
- AMyZ/8yZzMyZmcyZZsyZM8yZAMxm/8xmzMxmmcxmZsxmM8xmAMwz
- /8wzzMwzmcwzZswzM8wzAMwA/8wAzMwAmcwAZswAM8wAAJn//5n/
- zJn/mZn/Zpn/M5n/AJnM/5nMzJnMmZnMZpnMM5nMAJmZ/5mZzJmZ
- mZmZZpmZM5mZAJlm/5lmzJlmmZlmZplmM5lmAJkz/5kzzJkzmZkz
- ZpkzM5kzAJkA/5kAzJkAmZkAZpkAM5kAAGb//2b/zGb/mWb/Zmb/
- M2b/AGbM/2bMzGbMmWbMZmbMM2bMAGaZ/2aZzGaZmWaZZmaZM2aZ
- AGZm/2ZmzGZmmWZmZmZmM2ZmAGYz/2YzzGYzmWYzZmYzM2YzAGYA
- /2YAzGYAmWYAZmYAM2YAADP//zP/zDP/mTP/ZjP/MzP/ADPM/zPM
- zDPMmTPMZjPMMzPMADOZ/zOZzDOZmTOZZjOZMzOZADNm/zNmzDNm
- mTNmZjNmMzNmADMz/zMzzDMzmTMzZjMzMzMzADMA/zMAzDMAmTMA
- ZjMAMzMAAAD//wD/zAD/mQD/ZgD/MwD/AADM/wDMzADMmQDMZgDM
- MwDMAACZ/wCZzACZmQCZZgCZMwCZAABm/wBmzABmmQBmZgBmMwBm
- AAAz/wAzzAAzmQAzZgAzMwAzAAAA/wAAzAAAmQAAZgAAM+4AAN0A
- ALsAAKoAAIgAAHcAAFUAAEQAACIAABEAAADuAADdAAC7AACqAACI
- AAB3AABVAABEAAAiAAARAAAA7gAA3QAAuwAAqgAAiAAAdwAAVQAA
- RAAAIgAAEe7u7t3d3bu7u6qqqoiIiHd3d1VVVURERCIiIhEREQAA
- ACH5BAEAAAAALAAAAAAgABgAAAiNAAH8G0iwoMGDCAcKTMiw4UBw
- BPXVm0ixosWLFvVBHFjPoUeC9Tb+6/jRY0iQ/8iVbHiS40CVKxG2
- HEkQZsyCM0mmvGkw50uePUV2tEnOZkyfQA8iTYpTKNOgKJ+C3AhO
- p9SWVaVOfWj1KdauTL9q5UgVbFKsEjGqXVtP40NwcBnCjXtw7tx/
- C8cSBBAQADs=
- """ ),
-
- 'Switchi': PhotoImage( data=r"""
- R0lGODlhIAAYAPcAMf//////zP//mf//Zv//M///AP/M///MzP/M
- mf/MZv/MM//MAP+Z//+ZzP+Zmf+ZZv+ZM/+ZAP9m//9mzP9mmf9m
- Zv9mM/9mAP8z//8zzP8zmf8zZv8zM/8zAP8A//8AzP8Amf8AZv8A
- M/8AAMz//8z/zMz/mcz/Zsz/M8z/AMzM/8zMzMzMmczMZszMM8zM
- AMyZ/8yZzMyZmcyZZsyZM8yZAMxm/8xmzMxmmcxmZsxmM8xmAMwz
- /8wzzMwzmcwzZswzM8wzAMwA/8wAzMwAmcwAZswAM8wAAJn//5n/
- zJn/mZn/Zpn/M5n/AJnM/5nMzJnMmZnMZpnMM5nMAJmZ/5mZzJmZ
- mZmZZpmZM5mZAJlm/5lmzJlmmZlmZplmM5lmAJkz/5kzzJkzmZkz
- ZpkzM5kzAJkA/5kAzJkAmZkAZpkAM5kAAGb//2b/zGb/mWb/Zmb/
- M2b/AGbM/2bMzGbMmWbMZmbMM2bMAGaZ/2aZzGaZmWaZZmaZM2aZ
- AGZm/2ZmzGZmmWZmZmZmM2ZmAGYz/2YzzGYzmWYzZmYzM2YzAGYA
- /2YAzGYAmWYAZmYAM2YAADP//zP/zDP/mTP/ZjP/MzP/ADPM/zPM
- zDPMmTPMZjPMMzPMADOZ/zOZzDOZmTOZZjOZMzOZADNm/zNmzDNm
- mTNmZjNmMzNmADMz/zMzzDMzmTMzZjMzMzMzADMA/zMAzDMAmTMA
- ZjMAMzMAAAD//wD/zAD/mQD/ZgD/MwD/AADM/wDMzADMmQDMZgDM
- MwDMAACZ/wCZzACZmQCZZgCZMwCZAABm/wBmzABmmQBmZgBmMwBm
- AAAz/wAzzAAzmQAzZgAzMwAzAAAA/wAAzAAAmQAAZgAAM+4AAN0A
- ALsAAKoAAIgAAHcAAFUAAEQAACIAABEAAADuAADdAAC7AACqAACI
- AAB3AABVAABEAAAiAAARAAAA7gAA3QAAuwAAqgAAiAAAdwAAVQAA
- RAAAIgAAEe7u7t3d3bu7u6qqqoiIiHd3d1VVVURERCIiIhEREQAA
- ACH5BAEAAAAALAAAAAAgABgAAAhwAAEIHEiwoMGDCBMqXMiwocOH
- ECNKnEixosWB3zJq3Mixo0eNAL7xG0mypMmTKPl9Cznyn8uWL/m5
- /AeTpsyYI1eKlBnO5r+eLYHy9Ck0J8ubPmPOrMmUpM6UUKMa/Ui1
- 6saLWLNq3cq1q9evYB0GBAA7
- """ ),
-
- 'Link': PhotoImage( data=r"""
- R0lGODlhFgAWAPcAMf//////zP//mf//Zv//M///AP/M///MzP/M
- mf/MZv/MM//MAP+Z//+ZzP+Zmf+ZZv+ZM/+ZAP9m//9mzP9mmf9m
- Zv9mM/9mAP8z//8zzP8zmf8zZv8zM/8zAP8A//8AzP8Amf8AZv8A
- M/8AAMz//8z/zMz/mcz/Zsz/M8z/AMzM/8zMzMzMmczMZszMM8zM
- AMyZ/8yZzMyZmcyZZsyZM8yZAMxm/8xmzMxmmcxmZsxmM8xmAMwz
- /8wzzMwzmcwzZswzM8wzAMwA/8wAzMwAmcwAZswAM8wAAJn//5n/
- zJn/mZn/Zpn/M5n/AJnM/5nMzJnMmZnMZpnMM5nMAJmZ/5mZzJmZ
- mZmZZpmZM5mZAJlm/5lmzJlmmZlmZplmM5lmAJkz/5kzzJkzmZkz
- ZpkzM5kzAJkA/5kAzJkAmZkAZpkAM5kAAGb//2b/zGb/mWb/Zmb/
- M2b/AGbM/2bMzGbMmWbMZmbMM2bMAGaZ/2aZzGaZmWaZZmaZM2aZ
- AGZm/2ZmzGZmmWZmZmZmM2ZmAGYz/2YzzGYzmWYzZmYzM2YzAGYA
- /2YAzGYAmWYAZmYAM2YAADP//zP/zDP/mTP/ZjP/MzP/ADPM/zPM
- zDPMmTPMZjPMMzPMADOZ/zOZzDOZmTOZZjOZMzOZADNm/zNmzDNm
- mTNmZjNmMzNmADMz/zMzzDMzmTMzZjMzMzMzADMA/zMAzDMAmTMA
- ZjMAMzMAAAD//wD/zAD/mQD/ZgD/MwD/AADM/wDMzADMmQDMZgDM
- MwDMAACZ/wCZzACZmQCZZgCZMwCZAABm/wBmzABmmQBmZgBmMwBm
- AAAz/wAzzAAzmQAzZgAzMwAzAAAA/wAAzAAAmQAAZgAAM+4AAN0A
- ALsAAKoAAIgAAHcAAFUAAEQAACIAABEAAADuAADdAAC7AACqAACI
- AAB3AABVAABEAAAiAAARAAAA7gAA3QAAuwAAqgAAiAAAdwAAVQAA
- RAAAIgAAEe7u7t3d3bu7u6qqqoiIiHd3d1VVVURERCIiIhEREQAA
- ACH5BAEAAAAALAAAAAAWABYAAAhIAAEIHEiwoEGBrhIeXEgwoUKG
- Cx0+hGhQoiuKBy1irChxY0GNHgeCDAlgZEiTHlFuVImRJUWXEGEy
- lBmxI8mSNknm1Dnx5sCAADs=
- """ )
- }
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- temp_file = parse_args()[0]
- app = MiniEdit(template_file=temp_file)
- app.mainloop()
diff --git a/examples/multiping.py b/examples/multiping.py
deleted file mode 100644
index 3bd231c..0000000
--- a/examples/multiping.py
+++ /dev/null
@@ -1,86 +0,0 @@
-#!/usr/bin/python
-
-"""
-multiping.py: monitor multiple sets of hosts using ping
-
-This demonstrates how one may send a simple shell script to
-multiple hosts and monitor their output interactively for a period=
-of time.
-"""
-
-from mininet.net import Mininet
-from mininet.node import Node
-from mininet.topo import SingleSwitchTopo
-from mininet.log import setLogLevel
-
-from select import poll, POLLIN
-from time import time
-
-def chunks( l, n ):
- "Divide list l into chunks of size n - thanks Stackoverflow"
- return [ l[ i: i + n ] for i in range( 0, len( l ), n ) ]
-
-def startpings( host, targetips ):
- "Tell host to repeatedly ping targets"
-
- targetips.append( '10.0.0.200' )
-
- targetips = ' '.join( targetips )
-
- # BL: Not sure why loopback intf isn't up!
- host.cmd( 'ifconfig lo up' )
-
- # Simple ping loop
- cmd = ( 'while true; do '
- ' for ip in %s; do ' % targetips +
- ' echo -n %s "->" $ip ' % host.IP() +
- ' `ping -c1 -w 1 $ip | grep packets` ;'
- ' sleep 1;'
- ' done; '
- 'done &' )
-
- print ( '*** Host %s (%s) will be pinging ips: %s' %
- ( host.name, host.IP(), targetips ) )
-
- host.cmd( cmd )
-
-def multiping( netsize, chunksize, seconds):
- "Ping subsets of size chunksize in net of size netsize"
-
- # Create network and identify subnets
- topo = SingleSwitchTopo( netsize )
- net = Mininet( topo=topo )
- net.start()
- hosts = net.hosts
- subnets = chunks( hosts, chunksize )
-
- # Create polling object
- fds = [ host.stdout.fileno() for host in hosts ]
- poller = poll()
- for fd in fds:
- poller.register( fd, POLLIN )
-
- # Start pings
- for subnet in subnets:
- ips = [ host.IP() for host in subnet ]
- for host in subnet:
- startpings( host, ips )
-
- # Monitor output
- endTime = time() + seconds
- while time() < endTime:
- readable = poller.poll(1000)
- for fd, _mask in readable:
- node = Node.outToNode[ fd ]
- print '%s:' % node.name, node.monitor().strip()
-
- # Stop pings
- for host in hosts:
- host.cmd( 'kill %while' )
-
- net.stop()
-
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- multiping( netsize=20, chunksize=4, seconds=10 )
diff --git a/examples/multipoll.py b/examples/multipoll.py
deleted file mode 100644
index aef1b10..0000000
--- a/examples/multipoll.py
+++ /dev/null
@@ -1,81 +0,0 @@
-#!/usr/bin/python
-
-"""
-Simple example of sending output to multiple files and
-monitoring them
-"""
-
-from mininet.topo import SingleSwitchTopo
-from mininet.net import Mininet
-from mininet.log import setLogLevel
-
-from time import time
-from select import poll, POLLIN
-from subprocess import Popen, PIPE
-
-def monitorFiles( outfiles, seconds, timeoutms ):
- "Monitor set of files and return [(host, line)...]"
- devnull = open( '/dev/null', 'w' )
- tails, fdToFile, fdToHost = {}, {}, {}
- for h, outfile in outfiles.iteritems():
- tail = Popen( [ 'tail', '-f', outfile ],
- stdout=PIPE, stderr=devnull )
- fd = tail.stdout.fileno()
- tails[ h ] = tail
- fdToFile[ fd ] = tail.stdout
- fdToHost[ fd ] = h
- # Prepare to poll output files
- readable = poll()
- for t in tails.values():
- readable.register( t.stdout.fileno(), POLLIN )
- # Run until a set number of seconds have elapsed
- endTime = time() + seconds
- while time() < endTime:
- fdlist = readable.poll(timeoutms)
- if fdlist:
- for fd, _flags in fdlist:
- f = fdToFile[ fd ]
- host = fdToHost[ fd ]
- # Wait for a line of output
- line = f.readline().strip()
- yield host, line
- else:
- # If we timed out, return nothing
- yield None, ''
- for t in tails.values():
- t.terminate()
- devnull.close() # Not really necessary
-
-
-def monitorTest( N=3, seconds=3 ):
- "Run pings and monitor multiple hosts"
- topo = SingleSwitchTopo( N )
- net = Mininet( topo )
- net.start()
- hosts = net.hosts
- print "Starting test..."
- server = hosts[ 0 ]
- outfiles, errfiles = {}, {}
- for h in hosts:
- # Create and/or erase output files
- outfiles[ h ] = '/tmp/%s.out' % h.name
- errfiles[ h ] = '/tmp/%s.err' % h.name
- h.cmd( 'echo >', outfiles[ h ] )
- h.cmd( 'echo >', errfiles[ h ] )
- # Start pings
- h.cmdPrint('ping', server.IP(),
- '>', outfiles[ h ],
- '2>', errfiles[ h ],
- '&' )
- print "Monitoring output for", seconds, "seconds"
- for h, line in monitorFiles( outfiles, seconds, timeoutms=500 ):
- if h:
- print '%s: %s' % ( h.name, line )
- for h in hosts:
- h.cmd('kill %ping')
- net.stop()
-
-
-if __name__ == '__main__':
- setLogLevel('info')
- monitorTest()
diff --git a/examples/multitest.py b/examples/multitest.py
deleted file mode 100644
index bcb40f7..0000000
--- a/examples/multitest.py
+++ /dev/null
@@ -1,35 +0,0 @@
-#!/usr/bin/python
-
-"""
-This example shows how to create a network and run multiple tests.
-For a more complicated test example, see udpbwtest.py.
-"""
-
-from mininet.cli import CLI
-from mininet.log import lg, info
-from mininet.net import Mininet
-from mininet.node import OVSKernelSwitch
-from mininet.topolib import TreeTopo
-
-def ifconfigTest( net ):
- "Run ifconfig on all hosts in net."
- hosts = net.hosts
- for host in hosts:
- info( host.cmd( 'ifconfig' ) )
-
-if __name__ == '__main__':
- lg.setLogLevel( 'info' )
- info( "*** Initializing Mininet and kernel modules\n" )
- OVSKernelSwitch.setup()
- info( "*** Creating network\n" )
- network = Mininet( TreeTopo( depth=2, fanout=2 ), switch=OVSKernelSwitch)
- info( "*** Starting network\n" )
- network.start()
- info( "*** Running ping test\n" )
- network.pingAll()
- info( "*** Running ifconfig test\n" )
- ifconfigTest( network )
- info( "*** Starting CLI (type 'exit' to exit)\n" )
- CLI( network )
- info( "*** Stopping network\n" )
- network.stop()
diff --git a/examples/popen.py b/examples/popen.py
deleted file mode 100644
index 332822f..0000000
--- a/examples/popen.py
+++ /dev/null
@@ -1,36 +0,0 @@
-#!/usr/bin/python
-
-"""
-This example monitors a number of hosts using host.popen() and
-pmonitor()
-"""
-
-from mininet.net import Mininet
-from mininet.node import CPULimitedHost
-from mininet.topo import SingleSwitchTopo
-from mininet.log import setLogLevel
-from mininet.util import custom, pmonitor
-
-def monitorhosts( hosts=5, sched='cfs' ):
- "Start a bunch of pings and monitor them using popen"
- mytopo = SingleSwitchTopo( hosts )
- cpu = .5 / hosts
- myhost = custom( CPULimitedHost, cpu=cpu, sched=sched )
- net = Mininet( topo=mytopo, host=myhost )
- net.start()
- # Start a bunch of pings
- popens = {}
- last = net.hosts[ -1 ]
- for host in net.hosts:
- popens[ host ] = host.popen( "ping -c5 %s" % last.IP() )
- last = host
- # Monitor them and print output
- for host, line in pmonitor( popens ):
- if host:
- print "<%s>: %s" % ( host.name, line.strip() )
- # Done
- net.stop()
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- monitorhosts( hosts=5 )
diff --git a/examples/popenpoll.py b/examples/popenpoll.py
deleted file mode 100644
index c581c27..0000000
--- a/examples/popenpoll.py
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/usr/bin/python
-
-"Monitor multiple hosts using popen()/pmonitor()"
-
-from mininet.net import Mininet
-from mininet.topo import SingleSwitchTopo
-from mininet.util import pmonitor
-from time import time
-from signal import SIGINT
-
-def pmonitorTest( N=3, seconds=10 ):
- "Run pings and monitor multiple hosts using pmonitor"
- topo = SingleSwitchTopo( N )
- net = Mininet( topo )
- net.start()
- hosts = net.hosts
- print "Starting test..."
- server = hosts[ 0 ]
- popens = {}
- for h in hosts:
- popens[ h ] = h.popen('ping', server.IP() )
- print "Monitoring output for", seconds, "seconds"
- endTime = time() + seconds
- for h, line in pmonitor( popens, timeoutms=500 ):
- if h:
- print '%s: %s' % ( h.name, line ),
- if time() >= endTime:
- for p in popens.values():
- p.send_signal( SIGINT )
- net.stop()
-
-if __name__ == '__main__':
- pmonitorTest()
diff --git a/examples/router.gif b/examples/router.gif
deleted file mode 100644
index a670ac7..0000000
--- a/examples/router.gif
+++ /dev/null
Binary files differ
diff --git a/examples/scratchnet.py b/examples/scratchnet.py
deleted file mode 100644
index 966a183..0000000
--- a/examples/scratchnet.py
+++ /dev/null
@@ -1,68 +0,0 @@
-#!/usr/bin/python
-
-"""
-Build a simple network from scratch, using mininet primitives.
-This is more complicated than using the higher-level classes,
-but it exposes the configuration details and allows customization.
-
-For most tasks, the higher-level API will be preferable.
-"""
-
-from mininet.net import Mininet
-from mininet.node import Node
-from mininet.link import Link
-from mininet.log import setLogLevel, info
-from mininet.util import quietRun
-
-from time import sleep
-
-def scratchNet( cname='controller', cargs='-v ptcp:' ):
- "Create network from scratch using Open vSwitch."
-
- info( "*** Creating nodes\n" )
- controller = Node( 'c0', inNamespace=False )
- switch = Node( 's0', inNamespace=False )
- h0 = Node( 'h0' )
- h1 = Node( 'h1' )
-
- info( "*** Creating links\n" )
- Link( h0, switch )
- Link( h1, switch )
-
- info( "*** Configuring hosts\n" )
- h0.setIP( '192.168.123.1/24' )
- h1.setIP( '192.168.123.2/24' )
- info( str( h0 ) + '\n' )
- info( str( h1 ) + '\n' )
-
- info( "*** Starting network using Open vSwitch\n" )
- controller.cmd( cname + ' ' + cargs + '&' )
- switch.cmd( 'ovs-vsctl del-br dp0' )
- switch.cmd( 'ovs-vsctl add-br dp0' )
- for intf in switch.intfs.values():
- print switch.cmd( 'ovs-vsctl add-port dp0 %s' % intf )
-
- # Note: controller and switch are in root namespace, and we
- # can connect via loopback interface
- switch.cmd( 'ovs-vsctl set-controller dp0 tcp:127.0.0.1:6633' )
-
- info( '*** Waiting for switch to connect to controller' )
- while 'is_connected' not in quietRun( 'ovs-vsctl show' ):
- sleep( 1 )
- info( '.' )
- info( '\n' )
-
- info( "*** Running test\n" )
- h0.cmdPrint( 'ping -c1 ' + h1.IP() )
-
- info( "*** Stopping network\n" )
- controller.cmd( 'kill %' + cname )
- switch.cmd( 'ovs-vsctl del-br dp0' )
- switch.deleteIntfs()
- info( '\n' )
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- info( '*** Scratch network demo (kernel datapath)\n' )
- Mininet.init()
- scratchNet()
diff --git a/examples/scratchnetuser.py b/examples/scratchnetuser.py
deleted file mode 100644
index ccd20e9..0000000
--- a/examples/scratchnetuser.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/python
-
-"""
-Build a simple network from scratch, using mininet primitives.
-This is more complicated than using the higher-level classes,
-but it exposes the configuration details and allows customization.
-
-For most tasks, the higher-level API will be preferable.
-
-This version uses the user datapath and an explicit control network.
-"""
-
-from mininet.net import Mininet
-from mininet.node import Node
-from mininet.link import Link
-from mininet.log import setLogLevel, info
-
-def linkIntfs( node1, node2 ):
- "Create link from node1 to node2 and return intfs"
- link = Link( node1, node2 )
- return link.intf1, link.intf2
-
-def scratchNetUser( cname='controller', cargs='ptcp:' ):
- "Create network from scratch using user switch."
-
- # It's not strictly necessary for the controller and switches
- # to be in separate namespaces. For performance, they probably
- # should be in the root namespace. However, it's interesting to
- # see how they could work even if they are in separate namespaces.
-
- info( '*** Creating Network\n' )
- controller = Node( 'c0' )
- switch = Node( 's0')
- h0 = Node( 'h0' )
- h1 = Node( 'h1' )
- cintf, sintf = linkIntfs( controller, switch )
- h0intf, sintf1 = linkIntfs( h0, switch )
- h1intf, sintf2 = linkIntfs( h1, switch )
-
- info( '*** Configuring control network\n' )
- controller.setIP( '10.0.123.1/24', intf=cintf )
- switch.setIP( '10.0.123.2/24', intf=sintf)
-
- info( '*** Configuring hosts\n' )
- h0.setIP( '192.168.123.1/24', intf=h0intf )
- h1.setIP( '192.168.123.2/24', intf=h1intf )
-
- info( '*** Network state:\n' )
- for node in controller, switch, h0, h1:
- info( str( node ) + '\n' )
-
- info( '*** Starting controller and user datapath\n' )
- controller.cmd( cname + ' ' + cargs + '&' )
- switch.cmd( 'ifconfig lo 127.0.0.1' )
- intfs = [ str( i ) for i in sintf1, sintf2 ]
- switch.cmd( 'ofdatapath -i ' + ','.join( intfs ) + ' ptcp: &' )
- switch.cmd( 'ofprotocol tcp:' + controller.IP() + ' tcp:localhost &' )
-
- info( '*** Running test\n' )
- h0.cmdPrint( 'ping -c1 ' + h1.IP() )
-
- info( '*** Stopping network\n' )
- controller.cmd( 'kill %' + cname )
- switch.cmd( 'kill %ofdatapath' )
- switch.cmd( 'kill %ofprotocol' )
- switch.deleteIntfs()
- info( '\n' )
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- info( '*** Scratch network demo (user datapath)\n' )
- Mininet.init()
- scratchNetUser()
diff --git a/examples/server.gif b/examples/server.gif
deleted file mode 100644
index b16b0da..0000000
--- a/examples/server.gif
+++ /dev/null
Binary files differ
diff --git a/examples/simpleperf.py b/examples/simpleperf.py
deleted file mode 100644
index 1da4b66..0000000
--- a/examples/simpleperf.py
+++ /dev/null
@@ -1,49 +0,0 @@
-#!/usr/bin/python
-
-"""
-Simple example of setting network and CPU parameters
-
-NOTE: link params limit BW, add latency, and loss.
-There is a high chance that pings WILL fail and that
-iperf will hang indefinitely if the TCP handshake fails
-to complete.
-"""
-
-from mininet.topo import Topo
-from mininet.net import Mininet
-from mininet.node import CPULimitedHost
-from mininet.link import TCLink
-from mininet.util import dumpNodeConnections
-from mininet.log import setLogLevel
-
-class SingleSwitchTopo(Topo):
- "Single switch connected to n hosts."
- def __init__(self, n=2, **opts):
- Topo.__init__(self, **opts)
- switch = self.addSwitch('s1')
- for h in range(n):
- # Each host gets 50%/n of system CPU
- host = self.addHost('h%s' % (h + 1),
- cpu=.5 / n)
- # 10 Mbps, 5ms delay, 10% loss
- self.addLink(host, switch,
- bw=10, delay='5ms', loss=10, use_htb=True)
-
-def perfTest():
- "Create network and run simple performance test"
- topo = SingleSwitchTopo(n=4)
- net = Mininet(topo=topo,
- host=CPULimitedHost, link=TCLink)
- net.start()
- print "Dumping host connections"
- dumpNodeConnections(net.hosts)
- print "Testing network connectivity"
- net.pingAll()
- print "Testing bandwidth between h1 and h4"
- h1, h4 = net.getNodeByName('h1', 'h4')
- net.iperf((h1, h4))
- net.stop()
-
-if __name__ == '__main__':
- setLogLevel('info')
- perfTest()
diff --git a/examples/sshd.py b/examples/sshd.py
deleted file mode 100644
index 2bedb9c..0000000
--- a/examples/sshd.py
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/python
-
-"""
-Create a network and start sshd(8) on each host.
-
-While something like rshd(8) would be lighter and faster,
-(and perfectly adequate on an in-machine network)
-the advantage of running sshd is that scripts can work
-unchanged on mininet and hardware.
-
-In addition to providing ssh access to hosts, this example
-demonstrates:
-
-- creating a convenience function to construct networks
-- connecting the host network to the root namespace
-- running server processes (sshd in this case) on hosts
-"""
-
-from mininet.net import Mininet
-from mininet.cli import CLI
-from mininet.log import lg
-from mininet.node import Node, OVSKernelSwitch
-from mininet.topolib import TreeTopo
-from mininet.link import Link
-
-def TreeNet( depth=1, fanout=2, **kwargs ):
- "Convenience function for creating tree networks."
- topo = TreeTopo( depth, fanout )
- return Mininet( topo, **kwargs )
-
-def connectToRootNS( network, switch, ip, prefixLen, routes ):
- """Connect hosts to root namespace via switch. Starts network.
- network: Mininet() network object
- switch: switch to connect to root namespace
- ip: IP address for root namespace node
- prefixLen: IP address prefix length (e.g. 8, 16, 24)
- routes: host networks to route to"""
- # Create a node in root namespace and link to switch 0
- root = Node( 'root', inNamespace=False )
- intf = Link( root, switch ).intf1
- root.setIP( ip, prefixLen, intf )
- # Start network that now includes link to root namespace
- network.start()
- # Add routes from root ns to hosts
- for route in routes:
- root.cmd( 'route add -net ' + route + ' dev ' + str( intf ) )
-
-def sshd( network, cmd='/usr/sbin/sshd', opts='-D' ):
- "Start a network, connect it to root ns, and run sshd on all hosts."
- switch = network.switches[ 0 ] # switch to use
- ip = '10.123.123.1' # our IP address on host network
- routes = [ '10.0.0.0/8' ] # host networks to route to
- connectToRootNS( network, switch, ip, 8, routes )
- for host in network.hosts:
- host.cmd( cmd + ' ' + opts + '&' )
- print
- print "*** Hosts are running sshd at the following addresses:"
- print
- for host in network.hosts:
- print host.name, host.IP()
- print
- print "*** Type 'exit' or control-D to shut down network"
- CLI( network )
- for host in network.hosts:
- host.cmd( 'kill %' + cmd )
- network.stop()
-
-if __name__ == '__main__':
- lg.setLogLevel( 'info')
- net = TreeNet( depth=1, fanout=4, switch=OVSKernelSwitch )
- sshd( net )
diff --git a/examples/tree1024.py b/examples/tree1024.py
deleted file mode 100644
index 9397131..0000000
--- a/examples/tree1024.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/python
-
-"""
-Create a 1024-host network, and run the CLI on it.
-If this fails because of kernel limits, you may have
-to adjust them, e.g. by adding entries to /etc/sysctl.conf
-and running sysctl -p. Check util/sysctl_addon.
-"""
-
-from mininet.cli import CLI
-from mininet.log import setLogLevel
-from mininet.node import OVSKernelSwitch
-from mininet.topolib import TreeNet
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- network = TreeNet( depth=2, fanout=32, switch=OVSKernelSwitch )
- network.run( CLI, network )
diff --git a/examples/treeping64.py b/examples/treeping64.py
deleted file mode 100644
index ba60f1b..0000000
--- a/examples/treeping64.py
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/usr/bin/python
-
-"Create a 64-node tree network, and test connectivity using ping."
-
-from mininet.log import setLogLevel
-from mininet.node import UserSwitch, OVSKernelSwitch # , KernelSwitch
-from mininet.topolib import TreeNet
-
-def treePing64():
- "Run ping test on 64-node tree networks."
-
- results = {}
- switches = { # 'reference kernel': KernelSwitch,
- 'reference user': UserSwitch,
- 'Open vSwitch kernel': OVSKernelSwitch }
-
- for name in switches:
- print "*** Testing", name, "datapath"
- switch = switches[ name ]
- network = TreeNet( depth=2, fanout=8, switch=switch )
- result = network.run( network.pingAll )
- results[ name ] = result
-
- print
- print "*** Tree network ping results:"
- for name in switches:
- print "%s: %d%% packet loss" % ( name, results[ name ] )
- print
-
-if __name__ == '__main__':
- setLogLevel( 'info' )
- treePing64()
diff --git a/install.sh b/install.sh
new file mode 100755
index 0000000..ca86513
--- /dev/null
+++ b/install.sh
@@ -0,0 +1,182 @@
+#!/bin/bash
+
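+# Mini-NDN install script: installs ndn-cxx, NFD, NLSR, ndn-tools, Mininet,
+# and Mini-NDN itself, depending on the options given (see usage below).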
+test -e /etc/debian_version && DIST="Debian"
+grep Ubuntu /etc/lsb-release &> /dev/null && DIST="Ubuntu"
+
+if [[ $DIST == Ubuntu || $DIST == Debian ]]; then
+ update='sudo apt-get update'
+ install='sudo apt-get -y install'
+ remove='sudo apt-get -y remove'
+ pkginst='sudo dpkg -i'
+ # Prereqs for this script
+ if ! which lsb_release &> /dev/null; then
+ $install lsb-release
+ fi
+fi
+
+test -e /etc/fedora-release && DIST="Fedora"
+if [[ $DIST == Fedora ]]; then
+ update='sudo yum update'
+ install='sudo yum -y install'
+ remove='sudo yum -y erase'
+ pkginst='sudo rpm -ivh'
+ # Prereqs for this script
+ if ! which lsb_release &> /dev/null; then
+ $install redhat-lsb-core
+ fi
+fi
+
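+# Each install function sets a flag ($cxx, $updated, $pysetup) so that shared
+# steps such as the package-list update and the ndn-cxx build run at most once.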
+function forwarder {
+ if [[ $cxx != true ]]; then
+ ndncxx
+ cxx="true"
+ fi
+
+ if [[ $DIST == Ubuntu || $DIST == Debian ]]; then
+ $install libpcap-dev pkg-config
+ fi
+
+ if [[ $DIST == Fedora ]]; then
+ $install libpcap-devel
+ fi
+
+ git clone --depth 1 https://github.com/named-data/NFD
+ cd NFD
+ ./waf configure --without-websocket
+ ./waf
+ sudo ./waf install
+ cd ../
+}
+
+function routing {
+ if [[ $cxx != true ]]; then
+ ndncxx
+ cxx="true"
+ fi
+
+ if [[ $DIST == Ubuntu ]]; then
+ $install liblog4cxx10-dev libprotobuf-dev protobuf-compiler
+ fi
+
+ if [[ $DIST == Fedora ]]; then
+ $install log4cxx log4cxx-devel openssl-devel protobuf-devel
+ fi
+
+ git clone --depth 1 https://github.com/named-data/NLSR
+ cd NLSR
+ ./waf configure
+ ./waf
+ sudo ./waf install
+ cd ../
+}
+
+function ndncxx {
+ if [[ $updated != true ]]; then
+ $update
+ updated="true"
+ fi
+
+ if [[ $DIST == Ubuntu || $DIST == Debian ]]; then
+ $install git libsqlite3-dev libboost-all-dev make g++
+ crypto
+ fi
+
+ if [[ $DIST == Fedora ]]; then
+ $install gcc-c++ sqlite-devel boost-devel
+ fi
+
+ git clone --depth 1 https://github.com/named-data/ndn-cxx
+ cd ndn-cxx
+ ./waf configure
+ ./waf
+ sudo ./waf install
+ cd ../
+}
+
+function crypto {
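+ # Build and install Crypto++ 5.6.2 from source (required by ndn-cxx).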
+ mkdir crypto
+ cd crypto
+ $install unzip
+ wget http://www.cryptopp.com/cryptopp562.zip
+ unzip cryptopp562.zip
+ make
+ sudo make install
+ cd ../
+}
+
+function tools {
+ if [[ $cxx != true ]]; then
+ ndncxx
+ cxx="true"
+ fi
+
+ git clone --depth 1 https://github.com/named-data/ndn-tools
+ cd ndn-tools
+ ./waf configure
+ ./waf
+ sudo ./waf install
+ cd ../
+}
+
+function mininet {
+ if [[ $updated != true ]]; then
+ $update
+ updated="true"
+ fi
+
+ if [[ $pysetup != true ]]; then
+ pysetup="true"
+ fi
+
+ git clone --depth 1 https://github.com/mininet/mininet
+ cd mininet
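+ # Mininet's own installer: -f installs OpenFlow, -n installs the Mininet
+ # core files and dependencies, -v installs Open vSwitch.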
+ sudo ./util/install.sh -fnv
+ cd ../
+}
+
+function minindn {
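+ # Install Mini-NDN itself along with its default configuration files.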
+ if [[ $updated != true ]]; then
+ $update
+ updated="true"
+ fi
+
+ if [[ $pysetup != true ]]; then
+ $install python-setuptools
+ pysetup="true"
+ fi
+ sudo mkdir -p /usr/local/etc/mini-ndn/
+ sudo cp ndn_utils/client.conf.sample /usr/local/etc/mini-ndn/
+ sudo cp ndn_utils/nfd.conf /usr/local/etc/mini-ndn/
+ sudo cp ndn_utils/nlsr.conf /usr/local/etc/mini-ndn/
+ sudo python setup.py install
+}
+
+
+function usage {
+ printf '\nUsage: %s [-mfrti]\n\n' $(basename $0) >&2
+
+ printf 'options:\n' >&2
+ printf -- ' -f: install NFD\n' >&2
+ printf -- ' -i: install mini-ndn\n' >&2
+ printf -- ' -m: install mininet and dependencies\n' >&2
+ printf -- ' -r: install NLSR\n' >&2
+ printf -- ' -t: install tools\n' >&2
+ exit 2
+}
+
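+# For example, 'sudo ./install.sh -mfrti' installs everything: Mininet, NFD,
+# NLSR, ndn-tools, and Mini-NDN.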
+if [[ $# -eq 0 ]]; then
+ usage
+else
+ while getopts 'mfrti' OPTION
+ do
+ case $OPTION in
+ f) forwarder;;
+ i) minindn;;
+ m) mininet;;
+ r) routing;;
+ t) tools;;
+ ?) usage;;
+ esac
+ done
+ shift $(($OPTIND - 1))
+fi
diff --git a/mininet/__init__.py b/mininet/__init__.py
deleted file mode 100644
index c15ea6a..0000000
--- a/mininet/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"Docstring to silence pylint; ignores --ignore option for __init__.py"
diff --git a/mininet/clean.py b/mininet/clean.py
deleted file mode 100644
index eac8fda..0000000
--- a/mininet/clean.py
+++ /dev/null
@@ -1,60 +0,0 @@
-"""
-Mininet Cleanup
-author: Bob Lantz (rlantz@cs.stanford.edu)
-
-Unfortunately, Mininet and OpenFlow (and the Linux kernel)
-don't always clean up properly after themselves. Until they do
-(or until cleanup functionality is integrated into the Python
-code), this script may be used to get rid of unwanted garbage.
-It may also get rid of 'false positives', but hopefully
-nothing irreplaceable!
-"""
-
-from subprocess import Popen, PIPE
-
-from mininet.log import info
-from mininet.term import cleanUpScreens
-
-def sh( cmd ):
- "Print a command and send it to the shell"
- info( cmd + '\n' )
- return Popen( [ '/bin/sh', '-c', cmd ], stdout=PIPE ).communicate()[ 0 ]
-
-def cleanup():
- """Clean up junk which might be left over from old runs;
- do fast stuff before slow dp and link removal!"""
-
- info("*** Removing excess controllers/ofprotocols/ofdatapaths/pings/noxes"
- "\n")
- zombies = 'controller ofprotocol ofdatapath ping nox_core lt-nox_core '
- zombies += 'ovs-openflowd udpbwtest'
- # Note: real zombie processes can't actually be killed, since they
- # are already (un)dead. Then again,
- # you can't connect to them either, so they're mostly harmless.
- sh( 'killall -9 ' + zombies + ' 2> /dev/null' )
-
- info( "*** Removing junk from /tmp\n" )
- sh( 'rm -f /tmp/vconn* /tmp/vlogs* /tmp/*.out /tmp/*.log' )
-
- info( "*** Removing old screen sessions\n" )
- cleanUpScreens()
-
- info( "*** Removing excess kernel datapaths\n" )
- dps = sh( "ps ax | egrep -o 'dp[0-9]+' | sed 's/dp/nl:/'" ).split( '\n' )
- for dp in dps:
- if dp != '':
- sh( 'dpctl deldp ' + dp )
-
- info( "*** Removing OVS datapaths" )
- dps = sh("ovs-vsctl list-br").split( '\n' )
- for dp in dps:
- if dp:
- sh( 'ovs-vsctl del-br ' + dp )
-
- info( "*** Removing all links of the pattern foo-ethX\n" )
- links = sh( "ip link show | egrep -o '(\w+-eth\w+)'" ).split( '\n' )
- for link in links:
- if link != '':
- sh( "ip link del " + link )
-
- info( "*** Cleanup complete.\n" )
diff --git a/mininet/cli.py b/mininet/cli.py
deleted file mode 100644
index db03787..0000000
--- a/mininet/cli.py
+++ /dev/null
@@ -1,376 +0,0 @@
-"""
-A simple command-line interface for Mininet.
-
-The Mininet CLI provides a simple control console which
-makes it easy to talk to nodes. For example, the command
-
-mininet> h27 ifconfig
-
-runs 'ifconfig' on host h27.
-
-Having a single console rather than, for example, an xterm for each
-node is particularly convenient for networks of any reasonable
-size.
-
-The CLI automatically substitutes IP addresses for node names,
-so commands like
-
-mininet> h2 ping h3
-
-should work correctly and allow host h2 to ping host h3
-
-Several useful commands are provided, including the ability to
-list all nodes ('nodes'), to print out the network topology
-('net') and to check connectivity ('pingall', 'pingpair')
-and bandwidth ('iperf'.)
-"""
-
-from subprocess import call
-from cmd import Cmd
-from os import isatty
-from select import poll, POLLIN
-import sys
-import time
-
-from mininet.log import info, output, error
-from mininet.term import makeTerms
-from mininet.util import quietRun, isShellBuiltin, dumpNodeConnections
-
-class CLI( Cmd ):
- "Simple command-line interface to talk to nodes."
-
- prompt = 'minindn> '
-
- def __init__( self, mininet, stdin=sys.stdin, script=None ):
- self.mn = mininet
- self.nodelist = self.mn.controllers + self.mn.switches + self.mn.hosts
- self.nodemap = {} # map names to Node objects
- for node in self.nodelist:
- self.nodemap[ node.name ] = node
- # Local variable bindings for py command
- self.locals = { 'net': mininet }
- self.locals.update( self.nodemap )
- # Attempt to handle input
- self.stdin = stdin
- self.inPoller = poll()
- self.inPoller.register( stdin )
- self.inputFile = script
- Cmd.__init__( self )
- info( '*** Starting CLI:\n' )
- if self.inputFile:
- self.do_source( self.inputFile )
- return
- while True:
- try:
- # Make sure no nodes are still waiting
- for node in self.nodelist:
- while node.waiting:
- node.sendInt()
- node.monitor()
- if self.isatty():
- quietRun( 'stty sane' )
- self.cmdloop()
- break
- except KeyboardInterrupt:
- output( '\nInterrupt\n' )
-
- def emptyline( self ):
- "Don't repeat last command when you hit return."
- pass
-
- # Disable pylint "Unused argument: 'arg's'" messages, as well as
- # "method could be a function" warning, since each CLI function
- # must have the same interface
- # pylint: disable-msg=R0201
-
- helpStr = (
- 'You may also send a command to a node using:\n'
- ' <node> command {args}\n'
- 'For example:\n'
- ' mininet> h1 ifconfig\n'
- '\n'
- 'The interpreter automatically substitutes IP addresses\n'
- 'for node names when a node is the first arg, so commands\n'
- 'like\n'
- ' mininet> h2 ping h3\n'
- 'should work.\n'
- '\n'
- 'Some character-oriented interactive commands require\n'
- 'noecho:\n'
- ' mininet> noecho h2 vi foo.py\n'
- 'However, starting up an xterm/gterm is generally better:\n'
- ' mininet> xterm h2\n\n'
- )
-
- def do_help( self, line ):
- "Describe available CLI commands."
- Cmd.do_help( self, line )
- if line is '':
- output( self.helpStr )
-
- def do_nodes( self, _line ):
- "List all nodes."
- nodes = ' '.join( [ node.name for node in sorted( self.nodelist ) ] )
- output( 'available nodes are: \n%s\n' % nodes )
-
- def do_net( self, _line ):
- "List network connections."
- dumpNodeConnections( self.nodelist )
-
- def do_sh( self, line ):
- "Run an external shell command"
- call( line, shell=True )
-
- # do_py() needs to catch any exception during eval()
- # pylint: disable-msg=W0703
-
- def do_py( self, line ):
- """Evaluate a Python expression.
- Node names may be used, e.g.: h1.cmd('ls')"""
- try:
- result = eval( line, globals(), self.locals )
- if not result:
- return
- elif isinstance( result, str ):
- output( result + '\n' )
- else:
- output( repr( result ) + '\n' )
- except Exception, e:
- output( str( e ) + '\n' )
-
- # pylint: enable-msg=W0703
-
- def do_pingall( self, _line ):
- "Ping between all hosts."
- self.mn.pingAll()
-
- def do_pingpair( self, _line ):
- "Ping between first two hosts, useful for testing."
- self.mn.pingPair()
-
- def do_pingallfull( self, _line ):
- "Ping between first two hosts, returns all ping results."
- self.mn.pingAllFull()
-
- def do_pingpairfull( self, _line ):
- "Ping between first two hosts, returns all ping results."
- self.mn.pingPairFull()
-
- def do_iperf( self, line ):
- "Simple iperf TCP test between two (optionally specified) hosts."
- args = line.split()
- if not args:
- self.mn.iperf()
- elif len(args) == 2:
- hosts = []
- err = False
- for arg in args:
- if arg not in self.nodemap:
- err = True
- error( "node '%s' not in network\n" % arg )
- else:
- hosts.append( self.nodemap[ arg ] )
- if not err:
- self.mn.iperf( hosts )
- else:
- error( 'invalid number of args: iperf src dst\n' )
-
- def do_iperfudp( self, line ):
- "Simple iperf TCP test between two (optionally specified) hosts."
- args = line.split()
- if not args:
- self.mn.iperf( l4Type='UDP' )
- elif len(args) == 3:
- udpBw = args[ 0 ]
- hosts = []
- err = False
- for arg in args[ 1:3 ]:
- if arg not in self.nodemap:
- err = True
- error( "node '%s' not in network\n" % arg )
- else:
- hosts.append( self.nodemap[ arg ] )
- if not err:
- self.mn.iperf( hosts, l4Type='UDP', udpBw=udpBw )
- else:
- error( 'invalid number of args: iperfudp bw src dst\n' +
- 'bw examples: 10M\n' )
-
- def do_intfs( self, _line ):
- "List interfaces."
- for node in self.nodelist:
- output( '%s: %s\n' %
- ( node.name, ','.join( node.intfNames() ) ) )
-
- def do_ccndump(self, _line):
- "Dump FIB entries"
- for node in self.nodelist:
- if 'fib' in node.params:
- output(node.name + ': ')
- for name in node.params['fib']:
- output(str(name) + ' ')
- output('\n')
-
-
- def do_dump( self, _line ):
- "Dump node info."
- for node in self.nodelist:
- output( '%s\n' % repr( node ) )
-
- def do_link( self, line ):
- "Bring link(s) between two nodes up or down."
- args = line.split()
- if len(args) != 3:
- error( 'invalid number of args: link end1 end2 [up down]\n' )
- elif args[ 2 ] not in [ 'up', 'down' ]:
- error( 'invalid type: link end1 end2 [up down]\n' )
- else:
- self.mn.configLinkStatus( *args )
-
- def do_xterm( self, line, term='xterm' ):
- "Spawn xterm(s) for the given node(s)."
- args = line.split()
- if not args:
- error( 'usage: %s node1 node2 ...\n' % term )
- else:
- for arg in args:
- if arg not in self.nodemap:
- error( "node '%s' not in network\n" % arg )
- else:
- node = self.nodemap[ arg ]
- self.mn.terms += makeTerms( [ node ], term = term )
-
- def do_gterm( self, line ):
- "Spawn gnome-terminal(s) for the given node(s)."
- self.do_xterm( line, term='gterm' )
-
- def do_exit( self, _line ):
- "Exit"
- return 'exited by user command'
-
- def do_quit( self, line ):
- "Exit"
- return self.do_exit( line )
-
- def do_EOF( self, line ):
- "Exit"
- output( '\n' )
- return self.do_exit( line )
-
- def isatty( self ):
- "Is our standard input a tty?"
- return isatty( self.stdin.fileno() )
-
- def do_noecho( self, line ):
- "Run an interactive command with echoing turned off."
- if self.isatty():
- quietRun( 'stty -echo' )
- self.default( line )
- if self.isatty():
- quietRun( 'stty echo' )
-
- def do_source( self, line ):
- "Read commands from an input file."
- args = line.split()
- if len(args) != 1:
- error( 'usage: source <file>\n' )
- return
- try:
- self.inputFile = open( args[ 0 ] )
- while True:
- line = self.inputFile.readline()
- if len( line ) > 0:
- self.onecmd( line )
- else:
- break
- except IOError:
- error( 'error reading file %s\n' % args[ 0 ] )
- self.inputFile = None
-
- def do_dpctl( self, line ):
- "Run dpctl command on all switches."
- args = line.split()
- if len(args) < 1:
- error( 'usage: dpctl command [arg1] [arg2] ...\n' )
- return
- for sw in self.mn.switches:
- output( '*** ' + sw.name + ' ' + ('-' * 72) + '\n' )
- output( sw.dpctl( *args ) )
-
- def do_time( self, line ):
- "Measure time taken for any command in Mininet."
- start = time.time()
- self.onecmd(line)
- elapsed = time.time() - start
- self.stdout.write("*** Elapsed time: %0.6f secs\n" % elapsed)
-
- def default( self, line ):
- """Called on an input line when the command prefix is not recognized.
- Overridden to run shell commands when a node is the first CLI argument.
- Past the first CLI argument, node names are automatically replaced with
- corresponding IP addrs."""
-
- first, args, line = self.parseline( line )
- if not args:
- return
- if args and len(args) > 0 and args[ -1 ] == '\n':
- args = args[ :-1 ]
- rest = args.split( ' ' )
-
- if first in self.nodemap:
- node = self.nodemap[ first ]
- # Substitute IP addresses for node names in command
- rest = [ self.nodemap[ arg ].IP()
- if arg in self.nodemap else arg
- for arg in rest ]
- rest = ' '.join( rest )
- # Run cmd on node:
- builtin = isShellBuiltin( first )
- node.sendCmd( rest, printPid=( not builtin ) )
- self.waitForNode( node )
- else:
- error( '*** Unknown command: %s\n' % first )
-
- # pylint: enable-msg=R0201
-
- def waitForNode( self, node ):
- "Wait for a node to finish, and print its output."
- # Pollers
- nodePoller = poll()
- nodePoller.register( node.stdout )
- bothPoller = poll()
- bothPoller.register( self.stdin, POLLIN )
- bothPoller.register( node.stdout, POLLIN )
- if self.isatty():
- # Buffer by character, so that interactive
- # commands sort of work
- quietRun( 'stty -icanon min 1' )
- while True:
- try:
- bothPoller.poll()
- # XXX BL: this doesn't quite do what we want.
- if False and self.inputFile:
- key = self.inputFile.read( 1 )
- if key is not '':
- node.write(key)
- else:
- self.inputFile = None
- if isReadable( self.inPoller ):
- key = self.stdin.read( 1 )
- node.write( key )
- if isReadable( nodePoller ):
- data = node.monitor()
- output( data )
- if not node.waiting:
- break
- except KeyboardInterrupt:
- node.sendInt()
-
-# Helper functions
-
-def isReadable( poller ):
- "Check whether a Poll object has a readable fd."
- for fdmask in poller.poll( 0 ):
- mask = fdmask[ 1 ]
- if mask & POLLIN:
- return True
diff --git a/mininet/link.py b/mininet/link.py
deleted file mode 100644
index 6c9dd7d..0000000
--- a/mininet/link.py
+++ /dev/null
@@ -1,399 +0,0 @@
-"""
-link.py: interface and link abstractions for mininet
-
-It seems useful to bundle functionality for interfaces into a single
-class.
-
-Also it seems useful to enable the possibility of multiple flavors of
-links, including:
-
-- simple veth pairs
-- tunneled links
-- patchable links (which can be disconnected and reconnected via a patchbay)
-- link simulators (e.g. wireless)
-
-Basic division of labor:
-
- Nodes: know how to execute commands
- Intfs: know how to configure themselves
- Links: know how to connect nodes together
-
-Intf: basic interface object that can configure itself
-TCIntf: interface with bandwidth limiting and delay via tc
-
-Link: basic link class for creating veth pairs
-"""
-
-from mininet.log import info, error, debug
-from mininet.util import makeIntfPair
-from time import sleep
-import re
-
-class Intf( object ):
-
- "Basic interface object that can configure itself."
-
- def __init__( self, name, node=None, port=None, link=None, **params ):
- """name: interface name (e.g. h1-eth0)
- node: owning node (where this intf most likely lives)
- link: parent link if we're part of a link
- other arguments are passed to config()"""
- self.node = node
- self.name = name
- self.link = link
- self.mac, self.ip, self.prefixLen = None, None, None
- # Add to node (and move ourselves if necessary )
- node.addIntf( self, port=port )
- # Save params for future reference
- self.params = params
- self.config( **params )
-
- def cmd( self, *args, **kwargs ):
- "Run a command in our owning node"
- return self.node.cmd( *args, **kwargs )
-
- def ifconfig( self, *args ):
- "Configure ourselves using ifconfig"
- return self.cmd( 'ifconfig', self.name, *args )
-
- def setIP( self, ipstr, prefixLen=None ):
- """Set our IP address"""
- # This is a sign that we should perhaps rethink our prefix
- # mechanism and/or the way we specify IP addresses
- if '/' in ipstr:
- self.ip, self.prefixLen = ipstr.split( '/' )
- return self.ifconfig( ipstr, 'up' )
- else:
- self.ip, self.prefixLen = ipstr, prefixLen
- return self.ifconfig( '%s/%s' % ( ipstr, prefixLen ) )
-
- def setMAC( self, macstr ):
- """Set the MAC address for an interface.
- macstr: MAC address as string"""
- self.mac = macstr
- return ( self.ifconfig( 'down' ) +
- self.ifconfig( 'hw', 'ether', macstr ) +
- self.ifconfig( 'up' ) )
-
- _ipMatchRegex = re.compile( r'\d+\.\d+\.\d+\.\d+' )
- _macMatchRegex = re.compile( r'..:..:..:..:..:..' )
-
- def updateIP( self ):
- "Return updated IP address based on ifconfig"
- ifconfig = self.ifconfig()
- ips = self._ipMatchRegex.findall( ifconfig )
- self.ip = ips[ 0 ] if ips else None
- return self.ip
-
- def updateMAC( self ):
- "Return updated MAC address based on ifconfig"
- ifconfig = self.ifconfig()
- macs = self._macMatchRegex.findall( ifconfig )
- self.mac = macs[ 0 ] if macs else None
- return self.mac
-
- def IP( self ):
- "Return IP address"
- return self.ip
-
- def MAC( self ):
- "Return MAC address"
- return self.mac
-
- def isUp( self, setUp=False ):
- "Return whether interface is up"
- if setUp:
- self.ifconfig( 'up' )
- return "UP" in self.ifconfig()
-
- def rename( self, newname ):
- "Rename interface"
- self.ifconfig( 'down' )
- result = self.cmd( 'ip link set', self.name, 'name', newname )
- self.name = newname
- self.ifconfig( 'up' )
- return result
-
- # The reason why we configure things in this way is so
- # That the parameters can be listed and documented in
- # the config method.
- # Dealing with subclasses and superclasses is slightly
- # annoying, but at least the information is there!
-
- def setParam( self, results, method, **param ):
- """Internal method: configure a *single* parameter
- results: dict of results to update
- method: config method name
- param: arg=value (ignore if value=None)
- value may also be list or dict"""
- name, value = param.items()[ 0 ]
- f = getattr( self, method, None )
- if not f or value is None:
- return
- if type( value ) is list:
- result = f( *value )
- elif type( value ) is dict:
- result = f( **value )
- else:
- result = f( value )
- results[ name ] = result
- return result
-
- def config( self, mac=None, ip=None, ifconfig=None,
- up=True, **_params ):
- """Configure Node according to (optional) parameters:
- mac: MAC address
- ip: IP address
- ifconfig: arbitrary interface configuration
- Subclasses should override this method and call
- the parent class's config(**params)"""
- # If we were overriding this method, we would call
- # the superclass config method here as follows:
- # r = Parent.config( **params )
- r = {}
- self.setParam( r, 'setMAC', mac=mac )
- self.setParam( r, 'setIP', ip=ip )
- self.setParam( r, 'isUp', up=up )
- self.setParam( r, 'ifconfig', ifconfig=ifconfig )
- self.updateIP()
- self.updateMAC()
- return r
-
- def delete( self ):
- "Delete interface"
- self.cmd( 'ip link del ' + self.name )
- # Does it help to sleep to let things run?
- sleep( 0.001 )
-
- def __repr__( self ):
- return '<%s %s>' % ( self.__class__.__name__, self.name )
-
- def __str__( self ):
- return self.name
-
-
-class TCIntf( Intf ):
- """Interface customized by tc (traffic control) utility
- Allows specification of bandwidth limits (various methods)
- as well as delay, loss and max queue length"""
-
- def bwCmds( self, bw=None, speedup=0, use_hfsc=False, use_tbf=False,
- latency_ms=None, enable_ecn=False, enable_red=False ):
- "Return tc commands to set bandwidth"
-
-
- cmds, parent = [], ' root '
-
- if bw and ( bw < 0 or bw > 1000 ):
- error( 'Bandwidth', bw, 'is outside range 0..1000 Mbps\n' )
-
- elif bw is not None:
- # BL: this seems a bit brittle...
- if ( speedup > 0 and
- self.node.name[0:1] == 's' ):
- bw = speedup
- # This may not be correct - we should look more closely
- # at the semantics of burst (and cburst) to make sure we
- # are specifying the correct sizes. For now I have used
- # the same settings we had in the mininet-hifi code.
- if use_hfsc:
- cmds += [ '%s qdisc add dev %s root handle 1:0 hfsc default 1',
- '%s class add dev %s parent 1:0 classid 1:1 hfsc sc '
- + 'rate %fMbit ul rate %fMbit' % ( bw, bw ) ]
- elif use_tbf:
- if latency_ms is None:
- latency_ms = 15 * 8 / bw
- cmds += [ '%s qdisc add dev %s root handle 1: tbf ' +
- 'rate %fMbit burst 15000 latency %fms' %
- ( bw, latency_ms ) ]
- else:
- cmds += [ '%s qdisc add dev %s root handle 1:0 htb default 1',
- '%s class add dev %s parent 1:0 classid 1:1 htb ' +
- 'rate %fMbit burst 15k' % bw ]
- parent = ' parent 1:1 '
-
- # ECN or RED
- if enable_ecn:
- cmds += [ '%s qdisc add dev %s' + parent +
- 'handle 10: red limit 1000000 ' +
- 'min 30000 max 35000 avpkt 1500 ' +
- 'burst 20 ' +
- 'bandwidth %fmbit probability 1 ecn' % bw ]
- parent = ' parent 10: '
- elif enable_red:
- cmds += [ '%s qdisc add dev %s' + parent +
- 'handle 10: red limit 1000000 ' +
- 'min 30000 max 35000 avpkt 1500 ' +
- 'burst 20 ' +
- 'bandwidth %fmbit probability 1' % bw ]
- parent = ' parent 10: '
- return cmds, parent
-
- @staticmethod
- def delayCmds( parent, delay=None, jitter=None,
- loss=None, max_queue_size=None ):
- "Internal method: return tc commands for delay and loss"
-
- cmds = []
- if delay and delay < 0:
- error( 'Negative delay', delay, '\n' )
- elif jitter and jitter < 0:
- error( 'Negative jitter', jitter, '\n' )
- elif loss and ( loss < 0 or loss > 100 ):
- error( 'Bad loss percentage', loss, '%%\n' )
- else:
- # Delay/jitter/loss/max queue size
- netemargs = '%s%s%s%s' % (
- 'delay %s ' % delay if delay is not None else '',
- '%s ' % jitter if jitter is not None else '',
- 'loss %d ' % loss if loss is not None else '',
- 'limit %d' % max_queue_size if max_queue_size is not None
- else '' )
- if netemargs:
- cmds = [ '%s qdisc add dev %s ' + parent +
- ' handle 10: netem ' +
- netemargs ]
- return cmds
-
- def tc( self, cmd, tc='tc' ):
- "Execute tc command for our interface"
- c = cmd % (tc, self) # Add in tc command and our name
- debug(" *** executing command: %s\n" % c)
- return self.cmd( c )
-
- def config( self, bw=None, delay=None, jitter=None, loss=None,
- disable_gro=True, speedup=0, use_hfsc=False, use_tbf=False,
- latency_ms=None, enable_ecn=False, enable_red=False,
- max_queue_size=None, **params ):
- "Configure the port and set its properties."
-
- result = Intf.config( self, **params)
-
- # Disable GRO
- if disable_gro:
- self.cmd( 'ethtool -K %s gro off' % self )
-
- # Optimization: return if nothing else to configure
- # Question: what happens if we want to reset things?
- if ( bw is None and not delay and not loss
- and max_queue_size is None ):
- return
-
- # Clear existing configuration
- cmds = [ '%s qdisc del dev %s root' ]
-
- # Bandwidth limits via various methods
- bwcmds, parent = self.bwCmds( bw=bw, speedup=speedup,
- use_hfsc=use_hfsc, use_tbf=use_tbf,
- latency_ms=latency_ms,
- enable_ecn=enable_ecn,
- enable_red=enable_red )
- cmds += bwcmds
-
- # Delay/jitter/loss/max_queue_size using netem
- cmds += self.delayCmds( delay=delay, jitter=jitter, loss=loss,
- max_queue_size=max_queue_size,
- parent=parent )
-
- # Ugly but functional: display configuration info
- stuff = ( ( [ '%.2fMbit' % bw ] if bw is not None else [] ) +
- ( [ '%s delay' % delay ] if delay is not None else [] ) +
- ( [ '%s jitter' % jitter ] if jitter is not None else [] ) +
- ( ['%d%% loss' % loss ] if loss is not None else [] ) +
- ( [ 'ECN' ] if enable_ecn else [ 'RED' ]
- if enable_red else [] ) )
- info( '(' + ' '.join( stuff ) + ') ' )
-
- # Execute all the commands in our node
- debug("at map stage w/cmds: %s\n" % cmds)
- tcoutputs = [ self.tc(cmd) for cmd in cmds ]
- debug( "cmds:", cmds, '\n' )
- debug( "outputs:", tcoutputs, '\n' )
- result[ 'tcoutputs'] = tcoutputs
-
- return result
-
-
-class Link( object ):
-
- """A basic link is just a veth pair.
- Other types of links could be tunnels, link emulators, etc.."""
-
- def __init__( self, node1, node2, port1=None, port2=None,
- intfName1=None, intfName2=None,
- intf=Intf, cls1=None, cls2=None, params1=None,
- params2=None ):
- """Create veth link to another node, making two new interfaces.
- node1: first node
- node2: second node
- port1: node1 port number (optional)
- port2: node2 port number (optional)
- intf: default interface class/constructor
- cls1, cls2: optional interface-specific constructors
- intfName1: node1 interface name (optional)
- intfName2: node2 interface name (optional)
- params1: parameters for interface 1
- params2: parameters for interface 2"""
- # This is a bit awkward; it seems that having everything in
- # params would be more orthogonal, but being able to specify
- # in-line arguments is more convenient!
- if port1 is None:
- port1 = node1.newPort()
- if port2 is None:
- port2 = node2.newPort()
- if not intfName1:
- intfName1 = self.intfName( node1, port1 )
- if not intfName2:
- intfName2 = self.intfName( node2, port2 )
-
- self.makeIntfPair( intfName1, intfName2 )
-
- if not cls1:
- cls1 = intf
- if not cls2:
- cls2 = intf
- if not params1:
- params1 = {}
- if not params2:
- params2 = {}
-
- intf1 = cls1( name=intfName1, node=node1, port=port1,
- link=self, **params1 )
- intf2 = cls2( name=intfName2, node=node2, port=port2,
- link=self, **params2 )
-
- # All we are is dust in the wind, and our two interfaces
- self.intf1, self.intf2 = intf1, intf2
-
- @classmethod
- def intfName( cls, node, n ):
- "Construct a canonical interface name node-ethN for interface n."
- return node.name + '-eth' + repr( n )
-
- @classmethod
- def makeIntfPair( cls, intf1, intf2 ):
- """Create pair of interfaces
- intf1: name of interface 1
- intf2: name of interface 2
- (override this class method [and possibly delete()]
- to change link type)"""
- makeIntfPair( intf1, intf2 )
-
- def delete( self ):
- "Delete this link"
- self.intf1.delete()
- self.intf2.delete()
-
- def __str__( self ):
- return '%s<->%s' % ( self.intf1, self.intf2 )
-
-class TCLink( Link ):
- "Link with symmetric TC interfaces configured via opts"
- def __init__( self, node1, node2, port1=None, port2=None,
- intfName1=None, intfName2=None, **params ):
- Link.__init__( self, node1, node2, port1=port1, port2=port2,
- intfName1=intfName1, intfName2=intfName2,
- cls1=TCIntf,
- cls2=TCIntf,
- params1=params,
- params2=params)
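
A hedged sketch of how the TCIntf/TCLink pair deleted above is typically driven: shaping keywords passed to Mininet.addLink() with cls=TCLink are forwarded to TCIntf.config() on both interfaces. The net, h1, and s1 objects below are assumed to exist already:

    from mininet.link import TCLink

    # 10 Mbit/s HTB rate limit, 5 ms netem delay, 1% loss, 1000-packet queue
    net.addLink( h1, s1, cls=TCLink,
                 bw=10, delay='5ms', loss=1, max_queue_size=1000 )
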
diff --git a/mininet/log.py b/mininet/log.py
deleted file mode 100644
index cd00821..0000000
--- a/mininet/log.py
+++ /dev/null
@@ -1,178 +0,0 @@
-"Logging functions for Mininet."
-
-import logging
-from logging import Logger
-import types
-
-# Create a new loglevel, 'CLI info', which enables a Mininet user to see only
-# the output of the commands they execute, plus any errors or warnings. This
-# level is in between info and warning. CLI info-level commands should not be
-# printed during regression tests.
-OUTPUT = 25
-
-LEVELS = { 'debug': logging.DEBUG,
- 'info': logging.INFO,
- 'output': OUTPUT,
- 'warning': logging.WARNING,
- 'error': logging.ERROR,
- 'critical': logging.CRITICAL }
-
-# change this to logging.INFO to get printouts when running unit tests
-LOGLEVELDEFAULT = OUTPUT
-
-#default: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
-LOGMSGFORMAT = '%(message)s'
-
-
-# Modified from python2.5/__init__.py
-class StreamHandlerNoNewline( logging.StreamHandler ):
- """StreamHandler that doesn't print newlines by default.
- Since StreamHandler automatically adds newlines, define a mod to more
- easily support interactive mode when we want it, or errors-only logging
- for running unit tests."""
-
- def emit( self, record ):
- """Emit a record.
- If a formatter is specified, it is used to format the record.
- The record is then written to the stream with a trailing newline
- [ N.B. this may be removed depending on feedback ]. If exception
- information is present, it is formatted using
- traceback.print_exception and appended to the stream."""
- try:
- msg = self.format( record )
- fs = '%s' # was '%s\n'
- if not hasattr( types, 'UnicodeType' ): # if no unicode support...
- self.stream.write( fs % msg )
- else:
- try:
- self.stream.write( fs % msg )
- except UnicodeError:
- self.stream.write( fs % msg.encode( 'UTF-8' ) )
- self.flush()
- except ( KeyboardInterrupt, SystemExit ):
- raise
- except:
- self.handleError( record )
-
-
-class Singleton( type ):
- """Singleton pattern from Wikipedia
- See http://en.wikipedia.org/wiki/SingletonPattern#Python
-
- Intended to be used as a __metaclass__ param, as shown for the class
- below.
-
- Changed cls first args to mcs to satisfy pylint."""
-
- def __init__( mcs, name, bases, dict_ ):
- super( Singleton, mcs ).__init__( name, bases, dict_ )
- mcs.instance = None
-
- def __call__( mcs, *args, **kw ):
- if mcs.instance is None:
- mcs.instance = super( Singleton, mcs ).__call__( *args, **kw )
- return mcs.instance
-
-
-class MininetLogger( Logger, object ):
- """Mininet-specific logger
- Enable each mininet .py file, with a single import:
-
- from mininet.log import [lg, info, error]
-
- ...to get a default logger that doesn't require one newline per logging
- call.
-
- Inherit from object to ensure that we have at least one new-style base
- class, and can then use the __metaclass__ directive, to prevent this
- error:
-
- TypeError: Error when calling the metaclass bases
- a new-style class can't have only classic bases
-
- If Python2.5/logging/__init__.py defined Filterer as a new-style class,
- via Filterer( object ): rather than Filterer, we wouldn't need this.
-
- Use singleton pattern to ensure only one logger is ever created."""
-
- __metaclass__ = Singleton
-
- def __init__( self ):
-
- Logger.__init__( self, "mininet" )
-
- # create console handler
- ch = StreamHandlerNoNewline()
- # create formatter
- formatter = logging.Formatter( LOGMSGFORMAT )
- # add formatter to ch
- ch.setFormatter( formatter )
- # add ch to lg
- self.addHandler( ch )
-
- self.setLogLevel()
-
- def setLogLevel( self, levelname=None ):
- """Setup loglevel.
- Convenience function to support lowercase names.
- levelName: level name from LEVELS"""
- level = LOGLEVELDEFAULT
- if levelname is not None:
- if levelname not in LEVELS:
- raise Exception( 'unknown levelname seen in setLogLevel' )
- else:
- level = LEVELS.get( levelname, level )
-
- self.setLevel( level )
- self.handlers[ 0 ].setLevel( level )
-
- # pylint: disable-msg=E0202
- # "An attribute inherited from mininet.log hide this method"
- # Not sure why this is occurring - this function definitely gets called.
-
- # See /usr/lib/python2.5/logging/__init__.py; modified from warning()
- def output( self, msg, *args, **kwargs ):
- """Log 'msg % args' with severity 'OUTPUT'.
-
- To pass exception information, use the keyword argument exc_info
- with a true value, e.g.
-
- logger.warning("Houston, we have a %s", "cli output", exc_info=1)
- """
- if self.manager.disable >= OUTPUT:
- return
- if self.isEnabledFor( OUTPUT ):
- self._log( OUTPUT, msg, args, kwargs )
-
- # pylint: enable-msg=E0202
-
-lg = MininetLogger()
-
-# Make things a bit more convenient by adding aliases
-# (info, warn, error, debug) and allowing info( 'this', 'is', 'OK' )
-# In the future we may wish to make things more efficient by only
-# doing the join (and calling the function) unless the logging level
-# is high enough.
-
-def makeListCompatible( fn ):
- """Return a new function allowing fn( 'a 1 b' ) to be called as
- newfn( 'a', 1, 'b' )"""
-
- def newfn( *args ):
- "Generated function. Closure-ish."
- if len( args ) == 1:
- return fn( *args )
- args = ' '.join( [ str( arg ) for arg in args ] )
- return fn( args )
-
- # Fix newfn's name and docstring
- setattr( newfn, '__name__', fn.__name__ )
- setattr( newfn, '__doc__', fn.__doc__ )
- return newfn
-
-info, output, warn, error, debug = (
- lg.info, lg.output, lg.warn, lg.error, lg.debug ) = [
- makeListCompatible( f ) for f in
- lg.info, lg.output, lg.warn, lg.error, lg.debug ]
-
-setLogLevel = lg.setLogLevel
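
For reference, the logging helpers deleted here are normally consumed like this (a minimal sketch; the aliases come from makeListCompatible, and no newline is ever appended automatically):

    from mininet.log import setLogLevel, info, error

    setLogLevel( 'info' )        # one of: debug, info, output, warning, error, critical
    info( '*** starting up\n' )  # newlines are explicit
    error( 'oops:', 42, '\n' )   # the aliases also accept multiple arguments
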
diff --git a/mininet/moduledeps.py b/mininet/moduledeps.py
deleted file mode 100644
index 862c1f6..0000000
--- a/mininet/moduledeps.py
+++ /dev/null
@@ -1,68 +0,0 @@
-"Module dependency utility functions for Mininet."
-
-from mininet.util import quietRun
-from mininet.log import info, error, debug
-from os import environ
-
-def lsmod():
- "Return output of lsmod."
- return quietRun( 'lsmod' )
-
-def rmmod( mod ):
- """Return output of lsmod.
- mod: module string"""
- return quietRun( [ 'rmmod', mod ] )
-
-def modprobe( mod ):
- """Return output of modprobe
- mod: module string"""
- return quietRun( [ 'modprobe', mod ] )
-
-OF_KMOD = 'ofdatapath'
-OVS_KMOD = 'openvswitch_mod' # Renamed 'openvswitch' in OVS 1.7+/Linux 3.5+
-TUN = 'tun'
-
-def moduleDeps( subtract=None, add=None ):
- """Handle module dependencies.
- subtract: string or list of module names to remove, if already loaded
- add: string or list of module names to add, if not already loaded"""
- subtract = subtract if subtract is not None else []
- add = add if add is not None else []
- if type( subtract ) is str:
- subtract = [ subtract ]
- if type( add ) is str:
- add = [ add ]
- for mod in subtract:
- if mod in lsmod():
- info( '*** Removing ' + mod + '\n' )
- rmmodOutput = rmmod( mod )
- if rmmodOutput:
- error( 'Error removing ' + mod + ': "%s"\n' % rmmodOutput )
- exit( 1 )
- if mod in lsmod():
- error( 'Failed to remove ' + mod + '; still there!\n' )
- exit( 1 )
- for mod in add:
- if mod not in lsmod():
- info( '*** Loading ' + mod + '\n' )
- modprobeOutput = modprobe( mod )
- if modprobeOutput:
- error( 'Error inserting ' + mod +
- ' - is it installed and available via modprobe?\n' +
- 'Error was: "%s"\n' % modprobeOutput )
- if mod not in lsmod():
- error( 'Failed to insert ' + mod + ' - quitting.\n' )
- exit( 1 )
- else:
- debug( '*** ' + mod + ' already loaded\n' )
-
-
-def pathCheck( *args, **kwargs ):
- "Make sure each program in *args can be found in $PATH."
- moduleName = kwargs.get( 'moduleName', 'it' )
- for arg in args:
- if not quietRun( 'which ' + arg ):
- error( 'Cannot find required executable %s.\n' % arg +
- 'Please make sure that %s is installed ' % moduleName +
- 'and available in your $PATH:\n(%s)\n' % environ[ 'PATH' ] )
- exit( 1 )
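
A short sketch of the intended usage of these helpers (the binary names below are illustrative; the switch classes in mininet/node.py call them in a similar way):

    from mininet.moduledeps import moduleDeps, pathCheck, OF_KMOD, OVS_KMOD

    # Make sure the Open vSwitch kernel module is loaded and the
    # OpenFlow reference datapath module is not.
    moduleDeps( subtract=OF_KMOD, add=OVS_KMOD )

    # Exit early if a required binary is missing from $PATH.
    pathCheck( 'ovs-vsctl', 'ovs-dpctl', moduleName='Open vSwitch' )
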
diff --git a/mininet/net.py b/mininet/net.py
deleted file mode 100644
index 847850b..0000000
--- a/mininet/net.py
+++ /dev/null
@@ -1,775 +0,0 @@
-"""
-
- Mininet: A simple networking testbed for OpenFlow/SDN!
-
-author: Bob Lantz (rlantz@cs.stanford.edu)
-author: Brandon Heller (brandonh@stanford.edu)
-
-Mininet creates scalable OpenFlow test networks by using
-process-based virtualization and network namespaces.
-
-Simulated hosts are created as processes in separate network
-namespaces. This allows a complete OpenFlow network to be simulated on
-top of a single Linux kernel.
-
-Each host has:
-
-A virtual console (pipes to a shell)
-A virtual interface (half of a veth pair)
-A parent shell (and possibly some child processes) in a namespace
-
-Hosts have a network interface which is configured via ifconfig/ip
-link/etc.
-
-This version supports both the kernel and user space datapaths
-from the OpenFlow reference implementation (openflowswitch.org)
-as well as OpenVSwitch (openvswitch.org.)
-
-In kernel datapath mode, the controller and switches are simply
-processes in the root namespace.
-
-Kernel OpenFlow datapaths are instantiated using dpctl(8), and are
-attached to one side of a veth pair; the other side resides in the
-host namespace. In this mode, switch processes can simply connect to the
-controller via the loopback interface.
-
-In user datapath mode, the controller and switches can be full-service
-nodes that live in their own network namespaces and have management
-interfaces and IP addresses on a control network (e.g. 192.168.123.1,
-currently routed although it could be bridged.)
-
-In addition to a management interface, user mode switches also have
-several switch interfaces, halves of veth pairs whose other halves
-reside in the host nodes that the switches are connected to.
-
-Consistent, straightforward naming is important in order to easily
-identify hosts, switches and controllers, both from the CLI and
-from program code. Interfaces are named to make it easy to identify
-which interfaces belong to which node.
-
-The basic naming scheme is as follows:
-
- Host nodes are named h1-hN
- Switch nodes are named s1-sN
- Controller nodes are named c0-cN
- Interfaces are named {nodename}-eth0 .. {nodename}-ethN
-
-Note: If the network topology is created using mininet.topo, then
-node numbers are unique among hosts and switches (e.g. we have
-h1..hN and SN..SN+M) and also correspond to their default IP addresses
-of 10.x.y.z/8 where x.y.z is the base-256 representation of N for
-hN. This mapping allows easy determination of a node's IP
-address from its name, e.g. h1 -> 10.0.0.1, h257 -> 10.0.1.1.
-
-Note also that 10.0.0.1 can often be written as 10.1 for short, e.g.
-"ping 10.1" is equivalent to "ping 10.0.0.1".
-
-Currently we wrap the entire network in a 'mininet' object, which
-constructs a simulated network based on a network topology created
-using a topology object (e.g. LinearTopo) from mininet.topo or
-mininet.topolib, and a Controller which the switches will connect
-to. Several configuration options are provided for functions such as
-automatically setting MAC addresses, populating the ARP table, or
-even running a set of terminals to allow direct interaction with nodes.
-
-After the network is created, it can be started using start(), and a
-variety of useful tasks may be performed, including basic connectivity
-and bandwidth tests and running the mininet CLI.
-
-Once the network is up and running, test code can easily get access
-to host and switch objects which can then be used for arbitrary
-experiments, typically involving running a series of commands on the
-hosts.
-
-After all desired tests or activities have been completed, the stop()
-method may be called to shut down the network.
-
-"""
-
-import os
-import re
-import select
-import signal
-from time import sleep
-
-from mininet.cli import CLI
-from mininet.log import info, error, debug, output
-from mininet.node import Host, OVSKernelSwitch, Controller
-from mininet.link import Link, Intf
-from mininet.util import quietRun, fixLimits, numCores, ensureRoot
-from mininet.util import macColonHex, ipStr, ipParse, netParse, ipAdd, nextCCNnet
-from mininet.term import cleanUpScreens, makeTerms
-import pdb
-
-# Mininet version: should be consistent with README and LICENSE
-VERSION = "2.0.0"
-
-class Mininet( object ):
- "Network emulation with hosts spawned in network namespaces."
-
- def __init__( self, topo=None, switch=OVSKernelSwitch, host=Host,
- controller=Controller, link=Link, intf=Intf,
- build=True, xterms=False, cleanup=False, ipBase='10.0.0.0/8',
- inNamespace=False,
- autoSetMacs=False, autoStaticArp=False, autoPinCpus=False,
- listenPort=None ):
- """Create Mininet object.
- topo: Topo (topology) object or None
- switch: default Switch class
- host: default Host class/constructor
- controller: default Controller class/constructor
- link: default Link class/constructor
- intf: default Intf class/constructor
- ipBase: base IP address for hosts,
- build: build now from topo?
- xterms: if build now, spawn xterms?
- cleanup: if build now, cleanup before creating?
- inNamespace: spawn switches and controller in net namespaces?
- autoSetMacs: set MAC addrs automatically like IP addresses?
- autoStaticArp: set all-pairs static MAC addrs?
- autoPinCpus: pin hosts to (real) cores (requires CPULimitedHost)?
- listenPort: base listening port to open; will be incremented for
- each additional switch in the net if inNamespace=False"""
- self.topo = topo
- self.switch = switch
- self.host = host
- self.controller = controller
- self.link = link
- self.intf = intf
- self.ipBase = ipBase
- self.ipBaseNum, self.prefixLen = netParse( self.ipBase )
- self.nextIP = 1 # start for address allocation
- self.ccnNetBase = '1.0.0.0'
- self.inNamespace = inNamespace
- self.xterms = xterms
- self.cleanup = cleanup
- self.autoSetMacs = autoSetMacs
- self.autoStaticArp = autoStaticArp
- self.autoPinCpus = autoPinCpus
- self.numCores = numCores()
- self.nextCore = 0 # next core for pinning hosts to CPUs
- self.listenPort = listenPort
-
- self.hosts = []
- self.switches = []
- self.controllers = []
-
- self.nameToNode = {} # name to Node (Host/Switch) objects
-
- self.terms = [] # list of spawned xterm processes
-
- Mininet.init() # Initialize Mininet if necessary
-
- self.built = False
- if topo and build:
- self.build()
-
- def isNdnhost(self, node):
- if 'fib' in node.params:
- return True
- else:
- return False
-
- def addHost( self, name, cls=None, **params ):
- """Add host.
- name: name of host to add
- cls: custom host class/constructor (optional)
- params: parameters for host
- returns: added host"""
- # Default IP and MAC addresses
- #pdb.set_trace()
- #defaults = { 'ip': ipAdd( self.nextIP,
- #ipBaseNum=self.ipBaseNum,
- #prefixLen=self.prefixLen ) +
- #'/%s' % self.prefixLen }
- #if self.autoSetMacs:
- #defaults[ 'mac'] = macColonHex( self.nextIP )
- #if self.autoPinCpus:
- #defaults[ 'cores' ] = self.nextCore
- #self.nextCore = ( self.nextCore + 1 ) % self.numCores
- #self.nextIP += 1
- defaults = {}
- defaults.update( params )
-
- if not cls:
- cls = self.host
- h = cls( name, **defaults )
- self.hosts.append( h )
- self.nameToNode[ name ] = h
- return h
-
- def addSwitch( self, name, cls=None, **params ):
- """Add switch.
- name: name of switch to add
- cls: custom switch class/constructor (optional)
- returns: added switch
- side effect: increments listenPort ivar."""
- defaults = { 'listenPort': self.listenPort,
- 'inNamespace': self.inNamespace }
- defaults.update( params )
- if not cls:
- cls = self.switch
- sw = cls( name, **defaults )
- if not self.inNamespace and self.listenPort:
- self.listenPort += 1
- self.switches.append( sw )
- self.nameToNode[ name ] = sw
- return sw
-
- def addController( self, name='c0', controller=None, **params ):
- """Add controller.
- controller: Controller class"""
- if not controller:
- controller = self.controller
- controller_new = controller( name, **params )
- if controller_new: # allow controller-less setups
- self.controllers.append( controller_new )
- self.nameToNode[ name ] = controller_new
- return controller_new
-
- # BL: is this better than just using nameToNode[] ?
- # Should it have a better name?
- def getNodeByName( self, *args ):
- "Return node(s) with given name(s)"
- if len( args ) == 1:
- return self.nameToNode[ args[ 0 ] ]
- return [ self.nameToNode[ n ] for n in args ]
-
- def get( self, *args ):
- "Convenience alias for getNodeByName"
- return self.getNodeByName( *args )
-
- def addLink( self, node1, node2, port1=None, port2=None,
- cls=None, **params ):
- """"Add a link from node1 to node2
- node1: source node
- node2: dest node
- port1: source port
- port2: dest port
- returns: link object"""
- defaults = { 'port1': port1,
- 'port2': port2,
- 'intf': self.intf }
- defaults.update( params )
- if not cls:
- cls = self.link
- return cls( node1, node2, **defaults )
-
- def configHosts( self ):
- "Configure a set of hosts."
- for host in self.hosts:
- info( host.name + ' ' )
- intf = host.defaultIntf()
- if self.isNdnhost(host):
- host.configNdn()
- host.configDefault( ip=None, mac=None )
- elif intf:
- host.configDefault( defaultRoute=intf )
- else:
- # Don't configure nonexistent intf
- host.configDefault( ip=None, mac=None )
- # You're low priority, dude!
- # BL: do we want to do this here or not?
- # May not make sense if we have CPU limiting...
- # quietRun( 'renice +18 -p ' + repr( host.pid ) )
- # This may not be the right place to do this, but
- # it needs to be done somewhere.
- host.cmd( 'ifconfig lo up' )
- info( '\n' )
-
- def buildFromTopo( self, topo=None ):
- """Build mininet from a topology object
- At the end of this function, everything should be connected
- and up."""
-
- # Possibly we should clean up here and/or validate
- # the topo
- if self.cleanup:
- pass
-
- info( '*** Creating network\n' )
-
- #if not self.controllers:
- # Add a default controller
- #info( '*** Adding controller\n' )
- #classes = self.controller
- #if type( classes ) is not list:
- # classes = [ classes ]
- #for i, cls in enumerate( classes ):
- # self.addController( 'c%d' % i, cls )
-
- info( '*** Adding hosts:\n' )
- for hostName in topo.hosts():
- #pdb.set_trace()
- self.addHost( hostName, **topo.nodeInfo( hostName ) )
- info( hostName + ' ' )
-
- info( '\n*** Adding switches:\n' )
- for switchName in topo.switches():
- self.addSwitch( switchName, **topo.nodeInfo( switchName) )
- info( switchName + ' ' )
-
- info( '\n*** Adding links:\n' )
- for srcName, dstName in topo.links(sort=True):
- src, dst = self.nameToNode[ srcName ], self.nameToNode[ dstName ]
- params = topo.linkInfo( srcName, dstName )
- srcPort, dstPort = topo.port( srcName, dstName )
- self.addLink( src, dst, srcPort, dstPort, **params )
- if self.isNdnhost(src):
- src.setIP(ipStr(ipParse(self.ccnNetBase) + 1) + '/30', intf=src.name + '-eth' + str(srcPort))
- dst.setIP(ipStr(ipParse(self.ccnNetBase) + 2) + '/30', intf=dst.name + '-eth' + str(dstPort))
- self.ccnNetBase=nextCCNnet(self.ccnNetBase)
-
- info( '(%s, %s) ' % ( src.name, dst.name ) )
-
- info( '\n' )
-
-
- def configureControlNetwork( self ):
- "Control net config hook: override in subclass"
- raise Exception( 'configureControlNetwork: '
- 'should be overridden in subclass', self )
-
- def build( self ):
- "Build mininet."
-
- if self.topo:
- self.buildFromTopo( self.topo )
- if ( self.inNamespace ):
- self.configureControlNetwork()
- info( '*** Configuring hosts\n' )
- self.configHosts()
- if self.xterms:
- self.startTerms()
- if self.autoStaticArp:
- self.staticArp()
- self.built = True
-
- def startTerms( self ):
- "Start a terminal for each node."
- info( "*** Running terms on %s\n" % os.environ[ 'DISPLAY' ] )
- cleanUpScreens()
- self.terms += makeTerms( self.controllers, 'controller' )
- self.terms += makeTerms( self.switches, 'switch' )
- self.terms += makeTerms( self.hosts, 'host' )
-
- def stopXterms( self ):
- "Kill each xterm."
- for term in self.terms:
- os.kill( term.pid, signal.SIGKILL )
- cleanUpScreens()
-
- def staticArp( self ):
- "Add all-pairs ARP entries to remove the need to handle broadcast."
- for src in self.hosts:
- for dst in self.hosts:
- if src != dst:
- src.setARP( ip=dst.IP(), mac=dst.MAC() )
-
- def start( self ):
- "Start controller and switches."
- if not self.built:
- self.build()
- info( '*** Starting controller\n' )
- for controller in self.controllers:
- controller.start()
- info( '*** Starting %s switches\n' % len( self.switches ) )
- for switch in self.switches:
- info( switch.name + ' ')
- switch.start( self.controllers )
- info( '\n' )
-
- def stop( self ):
- "Stop the controller(s), switches and hosts"
- if self.terms:
- info( '*** Stopping %i terms\n' % len( self.terms ) )
- self.stopXterms()
- info( '*** Stopping %i hosts\n' % len( self.hosts ) )
- for host in self.hosts:
- info( host.name + ' ' )
- host.terminate()
- info( '\n' )
- info( '*** Stopping %i switches\n' % len( self.switches ) )
- for switch in self.switches:
- info( switch.name + ' ' )
- switch.stop()
- info( '\n' )
- info( '*** Stopping %i controllers\n' % len( self.controllers ) )
- for controller in self.controllers:
- info( controller.name + ' ' )
- controller.stop()
- info( '\n*** Done\n' )
-
- def run( self, test, *args, **kwargs ):
- "Perform a complete start/test/stop cycle."
- self.start()
- info( '*** Running test\n' )
- result = test( *args, **kwargs )
- self.stop()
- return result
-
- def monitor( self, hosts=None, timeoutms=-1 ):
- """Monitor a set of hosts (or all hosts by default),
- and return their output, a line at a time.
- hosts: (optional) set of hosts to monitor
- timeoutms: (optional) timeout value in ms
- returns: iterator which returns host, line"""
- if hosts is None:
- hosts = self.hosts
- poller = select.poll()
- Node = hosts[ 0 ] # so we can call class method fdToNode
- for host in hosts:
- poller.register( host.stdout )
- while True:
- ready = poller.poll( timeoutms )
- for fd, event in ready:
- host = Node.fdToNode( fd )
- if event & select.POLLIN:
- line = host.readline()
- if line is not None:
- yield host, line
- # Return if non-blocking
- if not ready and timeoutms >= 0:
- yield None, None
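
Since monitor() is a generator, its consumption pattern may not be obvious; a hedged sketch (assumes a started net whose hosts are already running background commands via sendCmd()):

    from mininet.log import info

    for host, line in net.monitor( timeoutms=1000 ):
        if host is None:   # timeout expired with nothing to read
            break
        info( '%s: %s\n' % ( host.name, line ) )
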
-
- # XXX These test methods should be moved out of this class.
- # Probably we should create a tests.py for them
-
- @staticmethod
- def _parsePing( pingOutput ):
- "Parse ping output and return packets sent, received."
- # Check for downed link
- if 'connect: Network is unreachable' in pingOutput:
- return (1, 0)
- r = r'(\d+) packets transmitted, (\d+) received'
- m = re.search( r, pingOutput )
- if m is None:
- error( '*** Error: could not parse ping output: %s\n' %
- pingOutput )
- return (1, 0)
- sent, received = int( m.group( 1 ) ), int( m.group( 2 ) )
- return sent, received
-
- def ping( self, hosts=None, timeout=None ):
- """Ping between all specified hosts.
- hosts: list of hosts
- timeout: time to wait for a response, as string
- returns: ploss packet loss percentage"""
- # should we check if running?
- packets = 0
- lost = 0
- ploss = None
- if not hosts:
- hosts = self.hosts
- output( '*** Ping: testing ping reachability\n' )
- for node in hosts:
- output( '%s -> ' % node.name )
- for dest in hosts:
- if node != dest:
- opts = ''
- if timeout:
- opts = '-W %s' % timeout
- result = node.cmd( 'ping -c1 %s %s' % (opts, dest.IP()) )
- sent, received = self._parsePing( result )
- packets += sent
- if received > sent:
- error( '*** Error: received too many packets' )
- error( '%s' % result )
- node.cmdPrint( 'route' )
- exit( 1 )
- lost += sent - received
- output( ( '%s ' % dest.name ) if received else 'X ' )
- output( '\n' )
- ploss = 100 * lost / packets
- output( "*** Results: %i%% dropped (%d/%d lost)\n" %
- ( ploss, lost, packets ) )
- return ploss
-
- @staticmethod
- def _parsePingFull( pingOutput ):
- "Parse ping output and return all data."
- # Check for downed link
- if 'connect: Network is unreachable' in pingOutput:
- return (1, 0, 0, 0, 0, 0)
- r = r'(\d+) packets transmitted, (\d+) received'
- m = re.search( r, pingOutput )
- if m is None:
- error( '*** Error: could not parse ping output: %s\n' %
- pingOutput )
- return (1, 0, 0, 0, 0, 0)
- sent, received = int( m.group( 1 ) ), int( m.group( 2 ) )
- r = r'rtt min/avg/max/mdev = '
- r += r'(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+) ms'
- m = re.search( r, pingOutput )
- rttmin = float( m.group( 1 ) )
- rttavg = float( m.group( 2 ) )
- rttmax = float( m.group( 3 ) )
- rttdev = float( m.group( 4 ) )
- return sent, received, rttmin, rttavg, rttmax, rttdev
-
- def pingFull( self, hosts=None, timeout=None ):
- """Ping between all specified hosts and return all data.
- hosts: list of hosts
- timeout: time to wait for a response, as string
- returns: all ping data; see function body."""
- # should we check if running?
- # Each value is a tuple: (src, dst, [all ping outputs])
- all_outputs = []
- if not hosts:
- hosts = self.hosts
- output( '*** Ping: testing ping reachability\n' )
- for node in hosts:
- output( '%s -> ' % node.name )
- for dest in hosts:
- if node != dest:
- opts = ''
- if timeout:
- opts = '-W %s' % timeout
- result = node.cmd( 'ping -c1 %s %s' % (opts, dest.IP()) )
- outputs = self._parsePingFull( result )
- sent, received, rttmin, rttavg, rttmax, rttdev = outputs
- all_outputs.append( (node, dest, outputs) )
- output( ( '%s ' % dest.name ) if received else 'X ' )
- output( '\n' )
- output( "*** Results: \n" )
- for outputs in all_outputs:
- src, dest, ping_outputs = outputs
- sent, received, rttmin, rttavg, rttmax, rttdev = ping_outputs
- output( " %s->%s: %s/%s, " % (src, dest, sent, received ) )
- output( "rtt min/avg/max/mdev %0.3f/%0.3f/%0.3f/%0.3f ms\n" %
- (rttmin, rttavg, rttmax, rttdev) )
- return all_outputs
-
- def pingAll( self ):
- """Ping between all hosts.
- returns: ploss packet loss percentage"""
- return self.ping()
-
- def pingPair( self ):
- """Ping between first two hosts, useful for testing.
- returns: ploss packet loss percentage"""
- hosts = [ self.hosts[ 0 ], self.hosts[ 1 ] ]
- return self.ping( hosts=hosts )
-
- def pingAllFull( self ):
- """Ping between all hosts.
- returns: all ping data; see pingFull()"""
- return self.pingFull()
-
- def pingPairFull( self ):
- """Ping between first two hosts, useful for testing.
- returns: all ping data; see pingFull()"""
- hosts = [ self.hosts[ 0 ], self.hosts[ 1 ] ]
- return self.pingFull( hosts=hosts )
-
- @staticmethod
- def _parseIperf( iperfOutput ):
- """Parse iperf output and return bandwidth.
- iperfOutput: string
- returns: result string"""
- r = r'([\d\.]+ \w+/sec)'
- m = re.findall( r, iperfOutput )
- if m:
- return m[-1]
- else:
- # was: raise Exception(...)
- error( 'could not parse iperf output: ' + iperfOutput )
- return ''
-
- # XXX This should be cleaned up
-
- def iperf( self, hosts=None, l4Type='TCP', udpBw='10M' ):
- """Run iperf between two hosts.
- hosts: list of hosts; if None, uses opposite hosts
- l4Type: string, one of [ TCP, UDP ]
- returns: two-element array of [ server, client ] speeds"""
- if not quietRun( 'which telnet' ):
- error( 'Cannot find telnet in $PATH - required for iperf test' )
- return
- if not hosts:
- hosts = [ self.hosts[ 0 ], self.hosts[ -1 ] ]
- else:
- assert len( hosts ) == 2
- client, server = hosts
- output( '*** Iperf: testing ' + l4Type + ' bandwidth between ' )
- output( "%s and %s\n" % ( client.name, server.name ) )
- server.cmd( 'killall -9 iperf' )
- iperfArgs = 'iperf '
- bwArgs = ''
- if l4Type == 'UDP':
- iperfArgs += '-u '
- bwArgs = '-b ' + udpBw + ' '
- elif l4Type != 'TCP':
- raise Exception( 'Unexpected l4 type: %s' % l4Type )
- server.sendCmd( iperfArgs + '-s', printPid=True )
- servout = ''
- while server.lastPid is None:
- servout += server.monitor()
- if l4Type == 'TCP':
- while 'Connected' not in client.cmd(
- 'sh -c "echo A | telnet -e A %s 5001"' % server.IP()):
- output('waiting for iperf to start up...')
- sleep(.5)
- cliout = client.cmd( iperfArgs + '-t 5 -c ' + server.IP() + ' ' +
- bwArgs )
- debug( 'Client output: %s\n' % cliout )
- server.sendInt()
- servout += server.waitOutput()
- debug( 'Server output: %s\n' % servout )
- result = [ self._parseIperf( servout ), self._parseIperf( cliout ) ]
- if l4Type == 'UDP':
- result.insert( 0, udpBw )
- output( '*** Results: %s\n' % result )
- return result
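
A sketch of calling the iperf test above (it assumes hosts h1 and h2 already carry IP addresses and that iperf and telnet are installed on the system):

    # TCP: returns [ server speed, client speed ]
    serverBw, clientBw = net.iperf( [ h1, h2 ] )

    # UDP: returns [ requested bandwidth, server speed, client speed ]
    udpResult = net.iperf( [ h1, h2 ], l4Type='UDP', udpBw='5M' )
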
-
- def runCpuLimitTest( self, cpu, duration=5 ):
- """run CPU limit test with 'while true' processes.
- cpu: desired CPU fraction of each host
- duration: test duration in seconds
- returns a single list of measured CPU fractions as floats.
- """
- pct = cpu * 100
- info('*** Testing CPU %.0f%% bandwidth limit\n' % pct)
- hosts = self.hosts
- for h in hosts:
- h.cmd( 'while true; do a=1; done &' )
- pids = [h.cmd( 'echo $!' ).strip() for h in hosts]
- pids_str = ",".join(["%s" % pid for pid in pids])
- cmd = 'ps -p %s -o pid,%%cpu,args' % pids_str
- # It's a shame that this is what pylint prefers
- outputs = []
- for _ in range( duration ):
- sleep( 1 )
- outputs.append( quietRun( cmd ).strip() )
- for h in hosts:
- h.cmd( 'kill %1' )
- cpu_fractions = []
- for test_output in outputs:
- # Split by line. Ignore first line, which looks like this:
- # PID %CPU COMMAND\n
- for line in test_output.split('\n')[1:]:
- r = r'\d+\s*(\d+\.\d+)'
- m = re.search( r, line )
- if m is None:
- error( '*** Error: could not extract CPU fraction: %s\n' %
- line )
- return None
- cpu_fractions.append( float( m.group( 1 ) ) )
- output( '*** Results: %s\n' % cpu_fractions )
- return cpu_fractions
-
- # BL: I think this can be rewritten now that we have
- # a real link class.
- def configLinkStatus( self, src, dst, status ):
- """Change status of src <-> dst links.
- src: node name
- dst: node name
- status: string {up, down}"""
- if src not in self.nameToNode:
- error( 'src not in network: %s\n' % src )
- elif dst not in self.nameToNode:
- error( 'dst not in network: %s\n' % dst )
- else:
- if type( src ) is str:
- src = self.nameToNode[ src ]
- if type( dst ) is str:
- dst = self.nameToNode[ dst ]
- connections = src.connectionsTo( dst )
- if len( connections ) == 0:
- error( 'src and dst not connected: %s %s\n' % ( src, dst) )
- for srcIntf, dstIntf in connections:
- result = srcIntf.ifconfig( status )
- if result:
- error( 'link src status change failed: %s\n' % result )
- result = dstIntf.ifconfig( status )
- if result:
- error( 'link dst status change failed: %s\n' % result )
-
- def interact( self ):
- "Start network and run our simple CLI."
- self.start()
- result = CLI( self )
- self.stop()
- return result
-
- inited = False
-
- @classmethod
- def init( cls ):
- "Initialize Mininet"
- if cls.inited:
- return
- ensureRoot()
- fixLimits()
- cls.inited = True
-
-
-class MininetWithControlNet( Mininet ):
-
- """Control network support:
-
- Create an explicit control network. Currently this is only
- used/usable with the user datapath.
-
- Notes:
-
- 1. If the controller and switches are in the same (e.g. root)
- namespace, they can just use the loopback connection.
-
- 2. If we can get unix domain sockets to work, we can use them
- instead of an explicit control network.
-
- 3. Instead of routing, we could bridge or use 'in-band' control.
-
- 4. Even if we dispense with this in general, it could still be
- useful for people who wish to simulate a separate control
- network (since real networks may need one!)
-
- 5. Basically nobody ever used this code, so it has been moved
- into its own class.
-
- 6. Ultimately we may wish to extend this to allow us to create a
- control network to which every node's control interface is
- attached."""
-
- def configureControlNetwork( self ):
- "Configure control network."
- self.configureRoutedControlNetwork()
-
- # We still need to figure out the right way to pass
- # in the control network location.
-
- def configureRoutedControlNetwork( self, ip='192.168.123.1',
- prefixLen=16 ):
- """Configure a routed control network on controller and switches.
- For use with the user datapath only right now."""
- controller = self.controllers[ 0 ]
- info( controller.name + ' <->' )
- cip = ip
- snum = ipParse( ip )
- for switch in self.switches:
- info( ' ' + switch.name )
- link = self.link( switch, controller, port1=0 )
- sintf, cintf = link.intf1, link.intf2
- switch.controlIntf = sintf
- snum += 1
- while snum & 0xff in [ 0, 255 ]:
- snum += 1
- sip = ipStr( snum )
- cintf.setIP( cip, prefixLen )
- sintf.setIP( sip, prefixLen )
- controller.setHostRoute( sip, cintf )
- switch.setHostRoute( cip, sintf )
- info( '\n' )
- info( '*** Testing control network\n' )
- while not cintf.isUp():
- info( '*** Waiting for', cintf, 'to come up\n' )
- sleep( 1 )
- for switch in self.switches:
- while not sintf.isUp():
- info( '*** Waiting for', sintf, 'to come up\n' )
- sleep( 1 )
- if self.ping( hosts=[ switch, controller ] ) != 0:
- error( '*** Error: control network test failed\n' )
- exit( 1 )
- info( '\n' )
diff --git a/mininet/node.py b/mininet/node.py
deleted file mode 100644
index 84e0bc6..0000000
--- a/mininet/node.py
+++ /dev/null
@@ -1,1107 +0,0 @@
-"""
-Node objects for Mininet.
-
-Nodes provide a simple abstraction for interacting with hosts, switches
-and controllers. Local nodes are simply one or more processes on the local
-machine.
-
-Node: superclass for all (primarily local) network nodes.
-
-Host: a virtual host. By default, a host is simply a shell; commands
- may be sent using cmd() (which waits for output), or using sendCmd(),
- which returns immediately, allowing subsequent monitoring using
- monitor(). Examples of how to run experiments using this
- functionality are provided in the examples/ directory.
-
-CPULimitedHost: a virtual host whose CPU bandwidth is limited by
- RT or CFS bandwidth limiting.
-
-Switch: superclass for switch nodes.
-
-UserSwitch: a switch using the user-space switch from the OpenFlow
- reference implementation.
-
-KernelSwitch: a switch using the kernel switch from the OpenFlow reference
- implementation.
-
-OVSSwitch: a switch using the OpenVSwitch OpenFlow-compatible switch
- implementation (openvswitch.org).
-
-Controller: superclass for OpenFlow controllers. The default controller
- is controller(8) from the reference implementation.
-
-NOXController: a controller node using NOX (noxrepo.org).
-
-RemoteController: a remote controller node, which may use any
- arbitrary OpenFlow-compatible controller, and which is not
- created or managed by mininet.
-
-Future enhancements:
-
-- Possibly make Node, Switch and Controller more abstract so that
- they can be used for both local and remote nodes
-
-- Create proxy objects for remote nodes (Mininet: Cluster Edition)
-"""
-
-import os
-import re
-import signal
-import select
-from subprocess import Popen, PIPE, STDOUT
-
-from mininet.log import info, error, warn, debug
-from mininet.util import ( quietRun, errRun, errFail, moveIntf, isShellBuiltin,
- numCores, retry, mountCgroups, run )
-from mininet.moduledeps import moduleDeps, pathCheck, OVS_KMOD, OF_KMOD, TUN
-from mininet.link import Link, Intf, TCIntf
-import pdb
-
-class Node( object ):
- """A virtual network node is simply a shell in a network namespace.
- We communicate with it using pipes."""
-
- portBase = 0 # Nodes always start with eth0/port0, even in OF 1.0
-
- def __init__( self, name, inNamespace=True, **params ):
- """name: name of node
- inNamespace: in network namespace?
- params: Node parameters (see config() for details)"""
-
- # Make sure class actually works
- self.checkSetup()
-
- self.name = name
- self.inNamespace = inNamespace
-
- # Stash configuration parameters for future reference
- self.params = params
-
- self.intfs = {} # dict of port numbers to interfaces
- self.ports = {} # dict of interfaces to port numbers
- # replace with Port objects, eventually ?
- self.nameToIntf = {} # dict of interface names to Intfs
-
- # Make pylint happy
- ( self.shell, self.execed, self.pid, self.stdin, self.stdout,
- self.lastPid, self.lastCmd, self.pollOut ) = (
- None, None, None, None, None, None, None, None )
- self.waiting = False
- self.readbuf = ''
-
- # Start command interpreter shell
- self.startShell()
-
- # File descriptor to node mapping support
- # Class variables and methods
-
- inToNode = {} # mapping of input fds to nodes
- outToNode = {} # mapping of output fds to nodes
-
- @classmethod
- def fdToNode( cls, fd ):
- """Return node corresponding to given file descriptor.
- fd: file descriptor
- returns: node"""
- node = cls.outToNode.get( fd )
- return node or cls.inToNode.get( fd )
-
- # Command support via shell process in namespace
-
- def startShell( self ):
- "Start a shell process for running commands"
- if self.shell:
- error( "%s: shell is already running" )
- return
- # mnexec: (c)lose descriptors, (d)etach from tty,
- # (p)rint pid, and run in (n)amespace
- opts = '-cdp'
- if self.inNamespace:
- opts += 'n'
- # bash -m: enable job control
- cmd = [ 'mnexec', opts, 'bash', '-m' ]
- self.shell = Popen( cmd, stdin=PIPE, stdout=PIPE, stderr=STDOUT,
- close_fds=True )
- self.stdin = self.shell.stdin
- self.stdout = self.shell.stdout
- self.pid = self.shell.pid
- self.pollOut = select.poll()
- self.pollOut.register( self.stdout )
- # Maintain mapping between file descriptors and nodes
- # This is useful for monitoring multiple nodes
- # using select.poll()
- self.outToNode[ self.stdout.fileno() ] = self
- self.inToNode[ self.stdin.fileno() ] = self
- self.execed = False
- self.lastCmd = None
- self.lastPid = None
- self.readbuf = ''
- self.waiting = False
-
- def cleanup( self ):
- "Help python collect its garbage."
- if not self.inNamespace:
- for intfName in self.intfNames():
- if self.name in intfName:
- quietRun( 'ip link del ' + intfName )
- self.shell = None
-
- # Subshell I/O, commands and control
-
- def read( self, maxbytes=1024 ):
- """Buffered read from node, non-blocking.
- maxbytes: maximum number of bytes to return"""
- count = len( self.readbuf )
- if count < maxbytes:
- data = os.read( self.stdout.fileno(), maxbytes - count )
- self.readbuf += data
- if maxbytes >= len( self.readbuf ):
- result = self.readbuf
- self.readbuf = ''
- else:
- result = self.readbuf[ :maxbytes ]
- self.readbuf = self.readbuf[ maxbytes: ]
- return result
-
- def readline( self ):
- """Buffered readline from node, non-blocking.
- returns: line (minus newline) or None"""
- self.readbuf += self.read( 1024 )
- if '\n' not in self.readbuf:
- return None
- pos = self.readbuf.find( '\n' )
- line = self.readbuf[ 0: pos ]
- self.readbuf = self.readbuf[ pos + 1: ]
- return line
-
- def write( self, data ):
- """Write data to node.
- data: string"""
- os.write( self.stdin.fileno(), data )
-
- def terminate( self ):
- "Send kill signal to Node and clean up after it."
- os.kill( self.pid, signal.SIGKILL )
- self.cleanup()
-
- def stop( self ):
- "Stop node."
- self.terminate()
-
- def waitReadable( self, timeoutms=None ):
- """Wait until node's output is readable.
- timeoutms: timeout in ms or None to wait indefinitely."""
- if len( self.readbuf ) == 0:
- self.pollOut.poll( timeoutms )
-
- def sendCmd( self, *args, **kwargs ):
- """Send a command, followed by a command to echo a sentinel,
- and return without waiting for the command to complete.
- args: command and arguments, or string
- printPid: print command's PID?"""
- assert not self.waiting
- printPid = kwargs.get( 'printPid', True )
- # Allow sendCmd( [ list ] )
- if len( args ) == 1 and type( args[ 0 ] ) is list:
- cmd = args[ 0 ]
- # Allow sendCmd( cmd, arg1, arg2... )
- elif len( args ) > 0:
- cmd = args
- # Convert to string
- if not isinstance( cmd, str ):
- cmd = ' '.join( [ str( c ) for c in cmd ] )
- if not re.search( r'\w', cmd ):
- # Replace empty commands with something harmless
- cmd = 'echo -n'
- self.lastCmd = cmd
- printPid = printPid and not isShellBuiltin( cmd )
- if len( cmd ) > 0 and cmd[ -1 ] == '&':
- # print ^A{pid}\n{sentinel}
- cmd += ' printf "\\001%d\n\\177" $! \n'
- else:
- # print sentinel
- cmd += '; printf "\\177"'
- if printPid and not isShellBuiltin( cmd ):
- cmd = 'mnexec -p ' + cmd
- self.write( cmd + '\n' )
- self.lastPid = None
- self.waiting = True
-
- def sendInt( self, sig=signal.SIGINT ):
- "Interrupt running command."
- if self.lastPid:
- try:
- os.kill( self.lastPid, sig )
- except OSError:
- pass
-
- def monitor( self, timeoutms=None ):
- """Monitor and return the output of a command.
- Set self.waiting to False if command has completed.
- timeoutms: timeout in ms or None to wait indefinitely."""
- self.waitReadable( timeoutms )
- data = self.read( 1024 )
- # Look for PID
- marker = chr( 1 ) + r'\d+\n'
- if chr( 1 ) in data:
- markers = re.findall( marker, data )
- if markers:
- self.lastPid = int( markers[ 0 ][ 1: ] )
- data = re.sub( marker, '', data )
- # Look for sentinel/EOF
- if len( data ) > 0 and data[ -1 ] == chr( 127 ):
- self.waiting = False
- data = data[ :-1 ]
- elif chr( 127 ) in data:
- self.waiting = False
- data = data.replace( chr( 127 ), '' )
- return data
-
- def waitOutput( self, verbose=False ):
- """Wait for a command to complete.
- Completion is signaled by a sentinel character, ASCII(127)
- appearing in the output stream. Wait for the sentinel and return
- the output, including trailing newline.
- verbose: print output interactively"""
- log = info if verbose else debug
- output = ''
- while self.waiting:
- data = self.monitor()
- output += data
- log( data )
- return output
-
- def cmd( self, *args, **kwargs ):
- """Send a command, wait for output, and return it.
- cmd: string"""
- verbose = kwargs.get( 'verbose', False )
- log = info if verbose else debug
- log( '*** %s : %s\n' % ( self.name, args ) )
- self.sendCmd( *args, **kwargs )
- return self.waitOutput( verbose )
-
- def cmdPrint( self, *args):
- """Call cmd and printing its output
- cmd: string"""
- return self.cmd( *args, **{ 'verbose': True } )
-
- def popen( self, *args, **kwargs ):
- """Return a Popen() object in our namespace
- args: Popen() args, single list, or string
- kwargs: Popen() keyword args"""
- defaults = { 'stdout': PIPE, 'stderr': PIPE,
- 'mncmd':
- [ 'mnexec', '-a', str( self.pid ) ] }
- defaults.update( kwargs )
- if len( args ) == 1:
- if type( args[ 0 ] ) is list:
- # popen([cmd, arg1, arg2...])
- cmd = args[ 0 ]
- elif type( args[ 0 ] ) is str:
- # popen("cmd arg1 arg2...")
- cmd = args[ 0 ].split()
- else:
- raise Exception( 'popen() requires a string or list' )
- elif len( args ) > 0:
- # popen( cmd, arg1, arg2... )
- cmd = list( args )
- # Attach to our namespace using mnexec -a
- mncmd = defaults[ 'mncmd' ]
- del defaults[ 'mncmd' ]
- cmd = mncmd + cmd
- # Shell requires a string, not a list!
- if defaults.get( 'shell', False ):
- cmd = ' '.join( cmd )
- return Popen( cmd, **defaults )
-
- def pexec( self, *args, **kwargs ):
- """Execute a command using popen
- returns: out, err, exitcode"""
- popen = self.popen( *args, **kwargs)
- out, err = popen.communicate()
- exitcode = popen.wait()
- return out, err, exitcode
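
popen() and pexec() complement cmd(): they run a program inside the node's namespace via mnexec -a without going through the node's shell. A brief sketch (h1 is assumed to be an existing Node):

    # Popen object; interact with it as with any subprocess
    p = h1.popen( 'ip', 'link', 'show' )
    out, err = p.communicate()

    # One-shot variant returning output, error text and exit code
    out, err, code = h1.pexec( 'ip', 'link', 'show' )
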
-
- # Interface management, configuration, and routing
-
- # BL notes: This might be a bit redundant or over-complicated.
- # However, it does allow a bit of specialization, including
- # changing the canonical interface names. It's also tricky since
- # the real interfaces are created as veth pairs, so we can't
- # make a single interface at a time.
-
- def newPort( self ):
- "Return the next port number to allocate."
- if len( self.ports ) > 0:
- return max( self.ports.values() ) + 1
- return self.portBase
-
- def addIntf( self, intf, port=None ):
- """Add an interface.
- intf: interface
- port: port number (optional, typically OpenFlow port number)"""
- if port is None:
- port = self.newPort()
- self.intfs[ port ] = intf
- self.ports[ intf ] = port
- self.nameToIntf[ intf.name ] = intf
- debug( '\n' )
- debug( 'added intf %s:%d to node %s\n' % ( intf, port, self.name ) )
- if self.inNamespace:
- debug( 'moving', intf, 'into namespace for', self.name, '\n' )
- moveIntf( intf.name, self )
-
- def defaultIntf( self ):
- "Return interface for lowest port"
- ports = self.intfs.keys()
- if ports:
- return self.intfs[ min( ports ) ]
- else:
- warn( '*** defaultIntf: warning:', self.name,
- 'has no interfaces\n' )
-
- def intf( self, intf='' ):
- """Return our interface object with given string name,
- default intf if name is falsy (None, empty string, etc).
- or the input intf arg.
-
- Having this fcn return its arg for Intf objects makes it
- easier to construct functions with flexible input args for
- interfaces (those that accept both string names and Intf objects).
- """
- if not intf:
- return self.defaultIntf()
- elif type( intf) is str:
- return self.nameToIntf[ intf ]
- else:
- return intf
-
- def connectionsTo( self, node):
- "Return [ intf1, intf2... ] for all intfs that connect self to node."
- # We could optimize this if it is important
- connections = []
- for intf in self.intfList():
- link = intf.link
- if link:
- node1, node2 = link.intf1.node, link.intf2.node
- if node1 == self and node2 == node:
- connections += [ ( intf, link.intf2 ) ]
- elif node1 == node and node2 == self:
- connections += [ ( intf, link.intf1 ) ]
- return connections
-
- def deleteIntfs( self ):
- "Delete all of our interfaces."
- # In theory the interfaces should go away after we shut down.
- # However, this takes time, so we're better off removing them
- # explicitly so that we won't get errors if we run before they
- # have been removed by the kernel. Unfortunately this is very slow,
- # at least with Linux kernels before 2.6.33
- for intf in self.intfs.values():
- intf.delete()
- info( '.' )
-
- # Routing support
-
- def setARP( self, ip, mac ):
- """Add an ARP entry.
- ip: IP address as string
- mac: MAC address as string"""
- result = self.cmd( 'arp', '-s', ip, mac )
- return result
-
- def setHostRoute( self, ip, intf ):
- """Add route to host.
- ip: IP address as dotted decimal
- intf: string, interface name"""
- return self.cmd( 'route add -host', ip, 'dev', intf )
-
- def setDefaultRoute( self, intf=None ):
- """Set the default route to go through intf.
- intf: string, interface name"""
- if not intf:
- intf = self.defaultIntf()
- self.cmd( 'ip route flush root 0/0' )
- return self.cmd( 'route add default %s' % intf )
-
- # Convenience and configuration methods
-
- def setMAC( self, mac, intf=None ):
- """Set the MAC address for an interface.
- intf: intf or intf name
- mac: MAC address as string"""
- return self.intf( intf ).setMAC( mac )
-
- def setIP( self, ip, prefixLen=8, intf=None ):
- """Set the IP address for an interface.
- intf: intf or intf name
- ip: IP address as a string
- prefixLen: prefix length, e.g. 8 for /8 or 16M addrs"""
- # This should probably be rethought
- if '/' not in ip:
- ip = '%s/%s' % ( ip, prefixLen )
- return self.intf( intf ).setIP( ip )
-
- def IP( self, intf=None ):
- "Return IP address of a node or specific interface."
- return self.intf( intf ).IP()
-
- def MAC( self, intf=None ):
- "Return MAC address of a node or specific interface."
- return self.intf( intf ).MAC()
-
- def intfIsUp( self, intf=None ):
- "Check if an interface is up."
- return self.intf( intf ).isUp()
-
- # The reason why we configure things in this way is so
- # that the parameters can be listed and documented in
- # the config method.
- # Dealing with subclasses and superclasses is slightly
- # annoying, but at least the information is there!
-
- def setParam( self, results, method, **param ):
- """Internal method: configure a *single* parameter
- results: dict of results to update
- method: config method name
- param: arg=value (ignore if value=None)
- value may also be list or dict"""
- name, value = param.items()[ 0 ]
- f = getattr( self, method, None )
- if not f or value is None:
- return
- if type( value ) is list:
- result = f( *value )
- elif type( value ) is dict:
- result = f( **value )
- else:
- result = f( value )
- results[ name ] = result
- return result
-
- def config( self, mac=None, ip=None,
- defaultRoute=None, lo='up', **_params ):
- """Configure Node according to (optional) parameters:
- mac: MAC address for default interface
- ip: IP address for default interface
- defaultRoute: default route interface (see setDefaultRoute)
- lo: bring the loopback interface 'up' or 'down'
- Subclasses should override this method and call
- the parent class's config(**params)"""
- # If we were overriding this method, we would call
- # the superclass config method here as follows:
- # r = Parent.config( **_params )
- r = {}
- self.setParam( r, 'setMAC', mac=mac )
- self.setParam( r, 'setIP', ip=ip )
- self.setParam( r, 'defaultRoute', defaultRoute=defaultRoute )
- # This should be examined
- self.cmd( 'ifconfig lo ' + lo )
- return r
-
- def configDefault( self, **moreParams ):
- "Configure with default parameters"
- self.params.update( moreParams )
- self.config( **self.params )
-
- # This is here for backward compatibility
- def linkTo( self, node, link=Link ):
- """(Deprecated) Link to another node
- replace with Link( node1, node2)"""
- return link( self, node )
-
- # Other methods
-
- def intfList( self ):
- "List of our interfaces sorted by port number"
- return [ self.intfs[ p ] for p in sorted( self.intfs.iterkeys() ) ]
-
- def intfNames( self ):
- "The names of our interfaces sorted by port number"
- return [ str( i ) for i in self.intfList() ]
-
- def __repr__( self ):
- "More informative string representation"
- intfs = ( ','.join( [ '%s:%s' % ( i.name, i.IP() )
- for i in self.intfList() ] ) )
- return '<%s %s: %s pid=%s> ' % (
- self.__class__.__name__, self.name, intfs, self.pid )
-
- def __str__( self ):
- "Abbreviated string representation"
- return self.name
-
- # Automatic class setup support
-
- isSetup = False
-
- @classmethod
- def checkSetup( cls ):
- "Make sure our class and superclasses are set up"
- while cls and not getattr( cls, 'isSetup', True ):
- cls.setup()
- cls.isSetup = True
- # Make pylint happy
- cls = getattr( type( cls ), '__base__', None )
-
- @classmethod
- def setup( cls ):
- "Make sure our class dependencies are available"
- pathCheck( 'mnexec', 'ifconfig', moduleName='Mininet')
-
-
-class Host( Node ):
- "A host is simply a Node"
- pass
-
-
-
-class CPULimitedHost( Host ):
-
- "CPU limited host"
-
- def __init__( self, name, sched='cfs', **kwargs ):
- Host.__init__( self, name, **kwargs )
- # Initialize class if necessary
- if not CPULimitedHost.inited:
- CPULimitedHost.init()
- # Create a cgroup and move shell into it
- self.cgroup = 'cpu,cpuacct,cpuset:/' + self.name
- errFail( 'cgcreate -g ' + self.cgroup )
- # We don't add ourselves to a cpuset because you must
- # specify the cpu and memory placement first
- errFail( 'cgclassify -g cpu,cpuacct:/%s %s' % ( self.name, self.pid ) )
- # BL: Setting the correct period/quota is tricky, particularly
- # for RT. RT allows very small quotas, but the overhead
- # seems to be high. CFS has a minimum quota of 1 ms, but
- # still does better with larger period values.
- self.period_us = kwargs.get( 'period_us', 100000 )
- self.sched = sched
- self.rtprio = 20
-
- def cgroupSet( self, param, value, resource='cpu' ):
- "Set a cgroup parameter and return its value"
- cmd = 'cgset -r %s.%s=%s /%s' % (
- resource, param, value, self.name )
- quietRun( cmd )
- nvalue = int( self.cgroupGet( param, resource ) )
- if nvalue != value:
- error( '*** error: cgroupSet: %s set to %s instead of %s\n'
- % ( param, nvalue, value ) )
- return nvalue
-
- def cgroupGet( self, param, resource='cpu' ):
- "Return value of cgroup parameter"
- cmd = 'cgget -r %s.%s /%s' % (
- resource, param, self.name )
-
- return int(quietRun( cmd ).split()[ -1 ] )
-
- def cgroupDel( self ):
- "Clean up our cgroup"
- # info( '*** deleting cgroup', self.cgroup, '\n' )
- _out, _err, exitcode = errRun( 'cgdelete -r ' + self.cgroup )
- return exitcode != 0
-
- def popen( self, *args, **kwargs ):
- """Return a Popen() object in node's namespace
- args: Popen() args, single list, or string
- kwargs: Popen() keyword args"""
- # Tell mnexec to execute command in our cgroup
- mncmd = [ 'mnexec', '-a', str( self.pid ),
- '-g', self.name ]
- if self.sched == 'rt':
- mncmd += [ '-r', str( self.rtprio ) ]
- return Host.popen( self, *args, mncmd=mncmd, **kwargs )
-
- def cleanup( self ):
- "Clean up our cgroup"
- retry( retries=3, delaySecs=1, fn=self.cgroupDel )
-
- def chrt( self ):
- "Set RT scheduling priority"
- quietRun( 'chrt -p %s %s' % ( self.rtprio, self.pid ) )
- result = quietRun( 'chrt -p %s' % self.pid )
- firstline = result.split( '\n' )[ 0 ]
- lastword = firstline.split( ' ' )[ -1 ]
- if lastword != 'SCHED_RR':
- error( '*** error: could not assign SCHED_RR to %s\n' % self.name )
- return lastword
-
- def rtInfo( self, f ):
- "Internal method: return parameters for RT bandwidth"
- pstr, qstr = 'rt_period_us', 'rt_runtime_us'
- # RT uses wall clock time for period and quota
- quota = int( self.period_us * f * numCores() )
- return pstr, qstr, self.period_us, quota
-
- def cfsInfo( self, f):
- "Internal method: return parameters for CFS bandwidth"
- pstr, qstr = 'cfs_period_us', 'cfs_quota_us'
- # CFS uses wall clock time for period and CPU time for quota.
- quota = int( self.period_us * f * numCores() )
- period = self.period_us
- if f > 0 and quota < 1000:
- debug( '(cfsInfo: increasing default period) ' )
- quota = 1000
- period = int( quota / f / numCores() )
- return pstr, qstr, period, quota
-
- # BL comment:
- # This may not be the right API,
- # since it doesn't specify CPU bandwidth in "absolute"
- # units the way link bandwidth is specified.
- # We should use MIPS or SPECINT or something instead.
- # Alternatively, we should change from system fraction
- # to CPU seconds per second, essentially assuming that
- # all CPUs are the same.
-
- def setCPUFrac( self, f=-1, sched=None):
- """Set overall CPU fraction for this host
- f: CPU bandwidth limit (fraction)
- sched: 'rt' or 'cfs'
- Note 'cfs' requires CONFIG_CFS_BANDWIDTH"""
- if not f:
- return
- if not sched:
- sched = self.sched
- if sched == 'rt':
- pstr, qstr, period, quota = self.rtInfo( f )
- elif sched == 'cfs':
- pstr, qstr, period, quota = self.cfsInfo( f )
- else:
- return
- if quota < 0:
- # Reset to unlimited
- quota = -1
- # Set cgroup's period and quota
- self.cgroupSet( pstr, period )
- self.cgroupSet( qstr, quota )
- if sched == 'rt':
- # Set RT priority if necessary
- self.chrt()
- info( '(%s %d/%dus) ' % ( sched, quota, period ) )
-
- def setCPUs( self, cores, mems=0 ):
- "Specify (real) cores that our cgroup can run on"
- if type( cores ) is list:
- cores = ','.join( [ str( c ) for c in cores ] )
- self.cgroupSet( resource='cpuset', param='cpus',
- value=cores )
- # Memory placement is probably not relevant, but we
- # must specify it anyway
- self.cgroupSet( resource='cpuset', param='mems',
- value=mems)
- # We have to do this here after we've specified
- # cpus and mems
- errFail( 'cgclassify -g cpuset:/%s %s' % (
- self.name, self.pid ) )
-
- def config( self, cpu=None, cores=None, **params ):
- """cpu: desired overall system CPU fraction
- cores: (real) core(s) this host can run on
- params: parameters for Node.config()"""
- r = Node.config( self, **params )
- # Was considering cpu={'cpu': cpu , 'sched': sched}, but
- # that seems redundant
-
- self.setParam( r, 'setCPUFrac', cpu=cpu )
- self.setParam( r, 'setCPUs', cores=cores )
-
- return r
-
- inited = False
-
- @classmethod
- def init( cls ):
- "Initialization for CPULimitedHost class"
- mountCgroups()
- cls.inited = True
-
-# Some important things to note:
-#
-# The "IP" address which setIP() assigns to the switch is not
-# an "IP address for the switch" in the sense of IP routing.
-# Rather, it is the IP address for the control interface,
-# on the control network, and it is only relevant to the
-# controller. If you are running in the root namespace
-# (which is the only way to run OVS at the moment), the
-# control interface is the loopback interface, and you
-# normally never want to change its IP address!
-#
-# In general, you NEVER want to attempt to use Linux's
-# network stack (i.e. ifconfig) to "assign" an IP address or
-# MAC address to a switch data port. Instead, you "assign"
-# the IP and MAC addresses in the controller by specifying
-# packets that you want to receive or send. The "MAC" address
-# reported by ifconfig for a switch data port is essentially
-# meaningless. It is important to understand this if you
-# want to create a functional router using OpenFlow.
-
-class Switch( Node ):
- """A Switch is a Node that is running (or has execed?)
- an OpenFlow switch."""
-
- portBase = 1 # Switches start with port 1 in OpenFlow
- dpidLen = 16 # digits in dpid passed to switch
-
- def __init__( self, name, dpid=None, opts='', listenPort=None, **params):
- """dpid: dpid for switch (or None to derive from name, e.g. s1 -> 1)
- opts: additional switch options
- listenPort: port to listen on for dpctl connections"""
- Node.__init__( self, name, **params )
- self.dpid = dpid if dpid else self.defaultDpid()
- self.opts = opts
- self.listenPort = listenPort
- if not self.inNamespace:
- self.controlIntf = Intf( 'lo', self, port=0 )
-
- def defaultDpid( self ):
- "Derive dpid from switch name, s1 -> 1"
- try:
- dpid = int( re.findall( '\d+', self.name )[ 0 ] )
- dpid = hex( dpid )[ 2: ]
- dpid = '0' * ( self.dpidLen - len( dpid ) ) + dpid
- return dpid
- except IndexError:
- raise Exception( 'Unable to derive default datapath ID - '
- 'please either specify a dpid or use a '
- 'canonical switch name such as s23.' )
-
- def defaultIntf( self ):
- "Return control interface"
- if self.controlIntf:
- return self.controlIntf
- else:
- return Node.defaultIntf( self )
-
- def sendCmd( self, *cmd, **kwargs ):
- """Send command to Node.
- cmd: string"""
- kwargs.setdefault( 'printPid', False )
- if not self.execed:
- return Node.sendCmd( self, *cmd, **kwargs )
- else:
- error( '*** Error: %s has execed and cannot accept commands' %
- self.name )
-
- def __repr__( self ):
- "More informative string representation"
- intfs = ( ','.join( [ '%s:%s' % ( i.name, i.IP() )
- for i in self.intfList() ] ) )
- return '<%s %s: %s pid=%s> ' % (
- self.__class__.__name__, self.name, intfs, self.pid )
-
-class UserSwitch( Switch ):
- "User-space switch."
-
- dpidLen = 12
-
- def __init__( self, name, **kwargs ):
- """Init.
- name: name for the switch"""
- Switch.__init__( self, name, **kwargs )
- pathCheck( 'ofdatapath', 'ofprotocol',
- moduleName='the OpenFlow reference user switch' +
- '(openflow.org)' )
- if self.listenPort:
- self.opts += ' --listen=ptcp:%i ' % self.listenPort
-
- @classmethod
- def setup( cls ):
- "Ensure any dependencies are loaded; if not, try to load them."
- if not os.path.exists( '/dev/net/tun' ):
- moduleDeps( add=TUN )
-
- def dpctl( self, *args ):
- "Run dpctl command"
- if not self.listenPort:
- return "can't run dpctl without passive listening port"
- return self.cmd( 'dpctl ' + ' '.join( args ) +
- ' tcp:127.0.0.1:%i' % self.listenPort )
-
- def start( self, controllers ):
- """Start OpenFlow reference user datapath.
- Log to /tmp/sN-{ofd,ofp}.log.
- controllers: list of controller objects"""
- # Add controllers
- clist = ','.join( [ 'tcp:%s:%d' % ( c.IP(), c.port )
- for c in controllers ] )
- ofdlog = '/tmp/' + self.name + '-ofd.log'
- ofplog = '/tmp/' + self.name + '-ofp.log'
- self.cmd( 'ifconfig lo up' )
- intfs = [ str( i ) for i in self.intfList() if not i.IP() ]
- self.cmd( 'ofdatapath -i ' + ','.join( intfs ) +
- ' punix:/tmp/' + self.name + ' -d ' + self.dpid +
- ' 1> ' + ofdlog + ' 2> ' + ofdlog + ' &' )
- self.cmd( 'ofprotocol unix:/tmp/' + self.name +
- ' ' + clist +
- ' --fail=closed ' + self.opts +
- ' 1> ' + ofplog + ' 2>' + ofplog + ' &' )
-
- def stop( self ):
- "Stop OpenFlow reference user datapath."
- self.cmd( 'kill %ofdatapath' )
- self.cmd( 'kill %ofprotocol' )
- self.deleteIntfs()
-
-
-class OVSLegacyKernelSwitch( Switch ):
- """Open VSwitch legacy kernel-space switch using ovs-openflowd.
- Currently only works in the root namespace."""
-
- def __init__( self, name, dp=None, **kwargs ):
- """Init.
- name: name for switch
- dp: netlink id (0, 1, 2, ...)
- defaultMAC: default MAC as unsigned int; random value if None"""
- Switch.__init__( self, name, **kwargs )
- self.dp = dp if dp else self.name
- self.intf = self.dp
- if self.inNamespace:
- error( "OVSKernelSwitch currently only works"
- " in the root namespace.\n" )
- exit( 1 )
-
- @classmethod
- def setup( cls ):
- "Ensure any dependencies are loaded; if not, try to load them."
- pathCheck( 'ovs-dpctl', 'ovs-openflowd',
- moduleName='Open vSwitch (openvswitch.org)')
- moduleDeps( subtract=OF_KMOD, add=OVS_KMOD )
-
- def start( self, controllers ):
- "Start up kernel datapath."
- ofplog = '/tmp/' + self.name + '-ofp.log'
- quietRun( 'ifconfig lo up' )
- # Delete local datapath if it exists;
- # then create a new one monitoring the given interfaces
- self.cmd( 'ovs-dpctl del-dp ' + self.dp )
- self.cmd( 'ovs-dpctl add-dp ' + self.dp )
- intfs = [ str( i ) for i in self.intfList() if not i.IP() ]
- self.cmd( 'ovs-dpctl', 'add-if', self.dp, ' '.join( intfs ) )
- # Run protocol daemon
- clist = ','.join( [ 'tcp:%s:%d' % ( c.IP(), c.port )
- for c in controllers ] )
- self.cmd( 'ovs-openflowd ' + self.dp +
- ' ' + clist +
- ' --fail=secure ' + self.opts +
- ' --datapath-id=' + self.dpid +
- ' 1>' + ofplog + ' 2>' + ofplog + '&' )
- self.execed = False
-
- def stop( self ):
- "Terminate kernel datapath."
- quietRun( 'ovs-dpctl del-dp ' + self.dp )
- self.cmd( 'kill %ovs-openflowd' )
- self.deleteIntfs()
-
-
-class OVSSwitch( Switch ):
- "Open vSwitch switch. Depends on ovs-vsctl."
-
- def __init__( self, name, failMode='secure', **params ):
- """Init.
- name: name for switch
- failMode: controller loss behavior (secure|open)"""
- Switch.__init__( self, name, **params )
- self.failMode = failMode
-
- @classmethod
- def setup( cls ):
- "Make sure Open vSwitch is installed and working"
- pathCheck( 'ovs-vsctl',
- moduleName='Open vSwitch (openvswitch.org)')
- # This should no longer be needed, and it breaks
- # with OVS 1.7 which has renamed the kernel module:
- # moduleDeps( subtract=OF_KMOD, add=OVS_KMOD )
- out, err, exitcode = errRun( 'ovs-vsctl -t 1 show' )
- if exitcode:
- error( out + err +
- 'ovs-vsctl exited with code %d\n' % exitcode +
- '*** Error connecting to ovs-db with ovs-vsctl\n'
- 'Make sure that Open vSwitch is installed, '
- 'that ovsdb-server is running, and that\n'
- '"ovs-vsctl show" works correctly.\n'
- 'You may wish to try '
- '"service openvswitch-switch start".\n' )
- exit( 1 )
-
- def dpctl( self, *args ):
- "Run ovs-dpctl command"
- return self.cmd( 'ovs-dpctl', args[ 0 ], self, *args[ 1: ] )
-
- @staticmethod
- def TCReapply( intf ):
- """Unfortunately OVS and Mininet are fighting
- over tc queuing disciplines. As a quick hack/
- workaround, we clear OVS's and reapply our own."""
- if type( intf ) is TCIntf:
- intf.config( **intf.params )
-
- def attach( self, intf ):
- "Connect a data port"
- self.cmd( 'ovs-vsctl add-port', self, intf )
- self.cmd( 'ifconfig', intf, 'up' )
- self.TCReapply( intf )
-
- def detach( self, intf ):
- "Disconnect a data port"
- self.cmd( 'ovs-vsctl del-port', self, intf )
-
- def start( self, controllers ):
- "Start up a new OVS OpenFlow switch using ovs-vsctl"
- if self.inNamespace:
- raise Exception(
- 'OVS kernel switch does not work in a namespace' )
- # We should probably call config instead, but this
- # requires some rethinking...
- self.cmd( 'ifconfig lo up' )
- # Annoyingly, --if-exists option seems not to work
- self.cmd( 'ovs-vsctl del-br', self )
- self.cmd( 'ovs-vsctl add-br', self )
- self.cmd( 'ovs-vsctl -- set Bridge', self,
- 'other_config:datapath-id=' + self.dpid )
- self.cmd( 'ovs-vsctl set-fail-mode', self, self.failMode )
- for intf in self.intfList():
- if not intf.IP():
- self.attach( intf )
- # Add controllers
- clist = ' '.join( [ 'tcp:%s:%d' % ( c.IP(), c.port )
- for c in controllers ] )
- if self.listenPort:
- clist += ' ptcp:%s' % self.listenPort
- self.cmd( 'ovs-vsctl set-controller', self, clist )
-
- def stop( self ):
- "Terminate OVS switch."
- self.cmd( 'ovs-vsctl del-br', self )
- self.deleteIntfs()
-
-OVSKernelSwitch = OVSSwitch
-
-
-class Controller( Node ):
- """A Controller is a Node that is running (or has execed?) an
- OpenFlow controller."""
-
- def __init__( self, name, inNamespace=False, command='controller',
- cargs='-v ptcp:%d', cdir=None, ip="127.0.0.1",
- port=6633, **params ):
- self.command = command
- self.cargs = cargs
- self.cdir = cdir
- self.ip = ip
- self.port = port
- Node.__init__( self, name, inNamespace=inNamespace,
- ip=ip, **params )
- self.cmd( 'ifconfig lo up' ) # Shouldn't be necessary
- self.checkListening()
-
- def checkListening( self ):
- "Make sure no controllers are running on our port"
- # Verify that Telnet is installed first:
- out, _err, returnCode = errRun( "which telnet" )
- if 'telnet' not in out or returnCode != 0:
- raise Exception( "Error running telnet to check for listening "
- "controllers; please check that it is "
- "installed." )
- listening = self.cmd( "echo A | telnet -e A %s %d" %
- ( self.ip, self.port ) )
- if 'Unable' not in listening:
- servers = self.cmd( 'netstat -atp' ).split( '\n' )
- pstr = ':%d ' % self.port
- clist = servers[ 0:1 ] + [ s for s in servers if pstr in s ]
- raise Exception( "Please shut down the controller which is"
- " running on port %d:\n" % self.port +
- '\n'.join( clist ) )
-
- def start( self ):
- """Start <controller> <args> on controller.
- Log to /tmp/cN.log"""
- pathCheck( self.command )
- cout = '/tmp/' + self.name + '.log'
- if self.cdir is not None:
- self.cmd( 'cd ' + self.cdir )
- self.cmd( self.command + ' ' + self.cargs % self.port +
- ' 1>' + cout + ' 2>' + cout + '&' )
- self.execed = False
-
- def stop( self ):
- "Stop controller."
- self.cmd( 'kill %' + self.command )
- self.terminate()
-
- def IP( self, intf=None ):
- "Return IP address of the Controller"
- if self.intfs:
- ip = Node.IP( self, intf )
- else:
- ip = self.ip
- return ip
-
- def __repr__( self ):
- "More informative string representation"
- return '<%s %s: %s:%s pid=%s> ' % (
- self.__class__.__name__, self.name,
- self.IP(), self.port, self.pid )
-
-
-class OVSController( Controller ):
- "Open vSwitch controller"
- def __init__( self, name, command='ovs-controller', **kwargs ):
- Controller.__init__( self, name, command=command, **kwargs )
-
-
-class NOX( Controller ):
- "Controller to run a NOX application."
-
- def __init__( self, name, *noxArgs, **kwargs ):
- """Init.
- name: name to give controller
- noxArgs: arguments (strings) to pass to NOX"""
- if not noxArgs:
- warn( 'warning: no NOX modules specified; '
- 'running packetdump only\n' )
- noxArgs = [ 'packetdump' ]
- elif type( noxArgs ) not in ( list, tuple ):
- noxArgs = [ noxArgs ]
-
- if 'NOX_CORE_DIR' not in os.environ:
- exit( 'exiting; please set missing NOX_CORE_DIR env var' )
- noxCoreDir = os.environ[ 'NOX_CORE_DIR' ]
-
- Controller.__init__( self, name,
- command=noxCoreDir + '/nox_core',
- cargs='--libdir=/usr/local/lib -v -i ptcp:%s ' +
- ' '.join( noxArgs ),
- cdir=noxCoreDir,
- **kwargs )
-
-
-class RemoteController( Controller ):
- "Controller running outside of Mininet's control."
-
- def __init__( self, name, ip='127.0.0.1',
- port=6633, **kwargs):
- """Init.
- name: name to give controller
- ip: the IP address where the remote controller is
- listening
- port: the port where the remote controller is listening"""
- Controller.__init__( self, name, ip=ip, port=port, **kwargs )
-
- def start( self ):
- "Overridden to do nothing."
- return
-
- def stop( self ):
- "Overridden to do nothing."
- return
-
- def checkListening( self ):
- "Warn if remote controller is not accessible"
- listening = self.cmd( "echo A | telnet -e A %s %d" %
- ( self.ip, self.port ) )
- if 'Unable' in listening:
- warn( "Unable to contact the remote controller"
- " at %s:%d\n" % ( self.ip, self.port ) )
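
For reference, the CPU limiting in CPULimitedHost above reduces to a period/quota pair handed to the cgroup: quota = period * fraction * number of cores, clamped to the 1 ms CFS minimum. A minimal standalone sketch of that arithmetic (the core count of 4 is an assumed value here, not read from /proc/cpuinfo as numCores() does):

    def cfs_params(cpu_fraction, period_us=100000, num_cores=4):
        # mirrors cfsInfo(): quota is CPU time per period across all cores
        quota = int(period_us * cpu_fraction * num_cores)
        if cpu_fraction > 0 and quota < 1000:
            # CFS rejects quotas under 1 ms, so grow the period instead
            quota = 1000
            period_us = int(quota / cpu_fraction / num_cores)
        return period_us, quota

    print(cfs_params(0.1))   # (100000, 40000) for a 10% limit on 4 cores
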
diff --git a/mininet/term.py b/mininet/term.py
deleted file mode 100644
index 3cd70f2..0000000
--- a/mininet/term.py
+++ /dev/null
@@ -1,60 +0,0 @@
-"""
-Terminal creation and cleanup.
-Utility functions to run a term (connected via screen(1)) on each host.
-
-Requires GNU screen(1) and xterm(1).
-Optionally uses gnome-terminal.
-"""
-
-import re
-from subprocess import Popen
-
-from mininet.log import error
-from mininet.util import quietRun
-
-def quoteArg( arg ):
- "Quote an argument if it contains spaces."
- return repr( arg ) if ' ' in arg else arg
-
-def makeTerm( node, title='Node', term='xterm' ):
- """Run screen on a node, and hook up a terminal.
- node: Node object
- title: base title
- term: 'xterm' or 'gterm'
- returns: process created"""
- title += ': ' + node.name
- if not node.inNamespace:
- title += ' (root)'
- cmds = {
- 'xterm': [ 'xterm', '-title', title, '-e' ],
- 'gterm': [ 'gnome-terminal', '--title', title, '-e' ]
- }
- if term not in cmds:
- error( 'invalid terminal type: %s' % term )
- return
- if not node.execed:
- node.cmd( 'screen -dmS ' + 'mininet.' + node.name)
- args = [ 'screen', '-D', '-RR', '-S', 'mininet.' + node.name ]
- else:
- args = [ 'sh', '-c', 'exec tail -f /tmp/' + node.name + '*.log' ]
- if term == 'gterm':
- # Compress these for gnome-terminal, which expects one token
- # to follow the -e option
- args = [ ' '.join( [ quoteArg( arg ) for arg in args ] ) ]
- return Popen( cmds[ term ] + args )
-
-def cleanUpScreens():
- "Remove moldy old screen sessions."
- r = r'(\d+\.mininet\.[hsc]\d+)'
- output = quietRun( 'screen -ls' ).split( '\n' )
- for line in output:
- m = re.search( r, line )
- if m:
- quietRun( 'screen -S ' + m.group( 1 ) + ' -X quit' )
-
-def makeTerms( nodes, title='Node', term='xterm' ):
- """Create terminals.
- nodes: list of Node objects
- title: base title for each
- returns: list of created terminal processes"""
- return [ makeTerm( node, title, term ) for node in nodes ]
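
For context, the helpers above are normally driven from a running network; a minimal usage sketch against this module, assuming Mininet is installed, the script runs as root, and an X display with xterm and GNU screen is available:

    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo
    from mininet.term import makeTerms, cleanUpScreens

    net = Mininet(topo=SingleSwitchTopo(k=2))
    net.start()
    terms = makeTerms(net.hosts, title='Host')   # one screen-backed xterm per host
    raw_input('Press Enter to close the terminals: ')
    for term in terms:
        term.terminate()
    cleanUpScreens()                             # remove leftover screen sessions
    net.stop()
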
diff --git a/mininet/test/test_hifi.py b/mininet/test/test_hifi.py
deleted file mode 100644
index ace7bb5..0000000
--- a/mininet/test/test_hifi.py
+++ /dev/null
@@ -1,127 +0,0 @@
-#!/usr/bin/env python
-
-"""Package: mininet
- Test creation and pings for topologies with link and/or CPU options."""
-
-import unittest
-
-from mininet.net import Mininet
-from mininet.node import OVSKernelSwitch
-from mininet.node import CPULimitedHost
-from mininet.link import TCLink
-from mininet.topo import Topo
-from mininet.log import setLogLevel
-
-
-SWITCH = OVSKernelSwitch
-# Number of hosts for each test
-N = 2
-
-
-class SingleSwitchOptionsTopo(Topo):
- "Single switch connected to n hosts."
- def __init__(self, n=2, hopts=None, lopts=None):
- if not hopts:
- hopts = {}
- if not lopts:
- lopts = {}
- Topo.__init__(self, hopts=hopts, lopts=lopts)
- switch = self.addSwitch('s1')
- for h in range(n):
- host = self.addHost('h%s' % (h + 1))
- self.addLink(host, switch)
-
-
-class testOptionsTopo( unittest.TestCase ):
- "Verify ability to create networks with host and link options."
-
- def runOptionsTopoTest( self, n, hopts=None, lopts=None ):
- "Generic topology-with-options test runner."
- mn = Mininet( topo=SingleSwitchOptionsTopo( n=n, hopts=hopts,
- lopts=lopts ),
- host=CPULimitedHost, link=TCLink )
- dropped = mn.run( mn.ping )
- self.assertEqual( dropped, 0 )
-
- def assertWithinTolerance(self, measured, expected, tolerance_frac):
- """Check that a given value is within a tolerance of expected
- tolerance_frac: less-than-1.0 value; 0.8 would yield 20% tolerance.
- """
-        self.assertTrue( float(measured) >= float(expected) * tolerance_frac )
-        self.assertTrue( float(measured) <= float(expected) / tolerance_frac )
-
- def testCPULimits( self ):
- "Verify topology creation with CPU limits set for both schedulers."
- CPU_FRACTION = 0.1
- CPU_TOLERANCE = 0.8 # CPU fraction below which test should fail
- hopts = { 'cpu': CPU_FRACTION }
- #self.runOptionsTopoTest( N, hopts=hopts )
-
- mn = Mininet( SingleSwitchOptionsTopo( n=N, hopts=hopts ),
- host=CPULimitedHost )
- mn.start()
- results = mn.runCpuLimitTest( cpu=CPU_FRACTION )
- mn.stop()
- for cpu in results:
- self.assertWithinTolerance( cpu, CPU_FRACTION, CPU_TOLERANCE )
-
- def testLinkBandwidth( self ):
- "Verify that link bandwidths are accurate within a bound."
- BW = 5 # Mbps
- BW_TOLERANCE = 0.8 # BW fraction below which test should fail
- # Verify ability to create limited-link topo first;
- lopts = { 'bw': BW, 'use_htb': True }
-        # Also verify correctness of rate limiting within a bound.
- mn = Mininet( SingleSwitchOptionsTopo( n=N, lopts=lopts ),
- link=TCLink )
- bw_strs = mn.run( mn.iperf )
- for bw_str in bw_strs:
- bw = float( bw_str.split(' ')[0] )
- self.assertWithinTolerance( bw, BW, BW_TOLERANCE )
-
- def testLinkDelay( self ):
- "Verify that link delays are accurate within a bound."
- DELAY_MS = 15
- DELAY_TOLERANCE = 0.8 # Delay fraction below which test should fail
- lopts = { 'delay': '%sms' % DELAY_MS, 'use_htb': True }
- mn = Mininet( SingleSwitchOptionsTopo( n=N, lopts=lopts ),
- link=TCLink )
- ping_delays = mn.run( mn.pingFull )
- test_outputs = ping_delays[0]
- # Ignore unused variables below
- # pylint: disable-msg=W0612
- node, dest, ping_outputs = test_outputs
- sent, received, rttmin, rttavg, rttmax, rttdev = ping_outputs
- self.assertEqual( sent, received )
- # pylint: enable-msg=W0612
- for rttval in [rttmin, rttavg, rttmax]:
- # Multiply delay by 4 to cover there & back on two links
- self.assertWithinTolerance( rttval, DELAY_MS * 4.0,
- DELAY_TOLERANCE)
-
- def testLinkLoss( self ):
- "Verify that we see packet drops with a high configured loss rate."
- LOSS_PERCENT = 99
- REPS = 1
- lopts = { 'loss': LOSS_PERCENT, 'use_htb': True }
- mn = Mininet( topo=SingleSwitchOptionsTopo( n=N, lopts=lopts ),
- host=CPULimitedHost, link=TCLink )
- # Drops are probabilistic, but the chance of no dropped packets is
- # 1 in 100 million with 4 hops for a link w/99% loss.
- dropped_total = 0
- mn.start()
- for _ in range(REPS):
- dropped_total += mn.ping(timeout='1')
- mn.stop()
- self.assertTrue(dropped_total > 0)
-
- def testMostOptions( self ):
- "Verify topology creation with most link options and CPU limits."
- lopts = { 'bw': 10, 'delay': '5ms', 'use_htb': True }
- hopts = { 'cpu': 0.5 / N }
- self.runOptionsTopoTest( N, hopts=hopts, lopts=lopts )
-
-
-if __name__ == '__main__':
- setLogLevel( 'warning' )
- unittest.main()
diff --git a/mininet/test/test_nets.py b/mininet/test/test_nets.py
deleted file mode 100644
index fde8e87..0000000
--- a/mininet/test/test_nets.py
+++ /dev/null
@@ -1,50 +0,0 @@
-#!/usr/bin/env python
-
-"""Package: mininet
- Test creation and all-pairs ping for each included mininet topo type."""
-
-import unittest
-
-from mininet.net import Mininet
-from mininet.node import Host, Controller
-from mininet.node import UserSwitch, OVSKernelSwitch
-from mininet.topo import SingleSwitchTopo, LinearTopo
-from mininet.log import setLogLevel
-
-SWITCHES = { 'user': UserSwitch,
- 'ovsk': OVSKernelSwitch,
-}
-
-
-class testSingleSwitch( unittest.TestCase ):
- "For each datapath type, test ping with single switch topologies."
-
- def testMinimal( self ):
- "Ping test with both datapaths on minimal topology"
- for switch in SWITCHES.values():
- mn = Mininet( SingleSwitchTopo(), switch, Host, Controller )
- dropped = mn.run( mn.ping )
- self.assertEqual( dropped, 0 )
-
- def testSingle5( self ):
- "Ping test with both datapaths on 5-host single-switch topology"
- for switch in SWITCHES.values():
- mn = Mininet( SingleSwitchTopo( k=5 ), switch, Host, Controller )
- dropped = mn.run( mn.ping )
- self.assertEqual( dropped, 0 )
-
-
-class testLinear( unittest.TestCase ):
- "For each datapath type, test all-pairs ping with LinearNet."
-
- def testLinear5( self ):
- "Ping test with both datapaths on a 5-switch topology"
- for switch in SWITCHES.values():
- mn = Mininet( LinearTopo( k=5 ), switch, Host, Controller )
- dropped = mn.run( mn.ping )
- self.assertEqual( dropped, 0 )
-
-
-if __name__ == '__main__':
- setLogLevel( 'warning' )
- unittest.main()
diff --git a/mininet/topo.py b/mininet/topo.py
deleted file mode 100644
index bd4afb5..0000000
--- a/mininet/topo.py
+++ /dev/null
@@ -1,237 +0,0 @@
-#!/usr/bin/env python
-'''@package topo
-
-Network topology creation.
-
-@author Brandon Heller (brandonh@stanford.edu)
-
-This package includes code to represent network topologies.
-
-A Topo object can be a topology database for NOX, can represent a physical
-setup for testing, and can even be emulated with the Mininet package.
-'''
-
-# BL: we may have to fix compatibility here.
-# networkx is also a fairly heavyweight dependency
-# from networkx.classes.graph import Graph
-
-from networkx import Graph
-from mininet.util import irange, natural, naturalSeq
-import pdb
-
-class Topo(object):
- "Data center network representation for structured multi-trees."
-
- def __init__(self, hopts=None, sopts=None, lopts=None, ropts=None):
- """Topo object:
-           hopts: default host options
- sopts: default switch options
- lopts: default link options"""
- self.g = Graph()
- self.node_info = {}
- self.link_info = {} # (src, dst) tuples hash to EdgeInfo objects
- self.hopts = {} if hopts is None else hopts
- self.ropts = {} if ropts is None else ropts
- self.sopts = {} if sopts is None else sopts
- self.lopts = {} if lopts is None else lopts
- self.ports = {} # ports[src][dst] is port on src that connects to dst
-
- def addNode(self, name, **opts):
- """Add Node to graph.
- name: name
- opts: node options
- returns: node name"""
- self.g.add_node(name)
- self.node_info[name] = opts
- return name
-
- def addHost(self, name, **opts):
- """Convenience method: Add host to graph.
- name: host name
- opts: host options
- returns: host name"""
- #pdb.set_trace()
- if not opts:
- if self.hopts:
- opts = self.hopts
- elif self.ropts:
- opts = self.ropts
- return self.addNode(name, **opts)
-
- def addSwitch(self, name, **opts):
- """Convenience method: Add switch to graph.
- name: switch name
- opts: switch options
- returns: switch name"""
- if not opts and self.sopts:
- opts = self.sopts
- result = self.addNode(name, isSwitch=True, **opts)
- return result
-
- def addLink(self, node1, node2, port1=None, port2=None,
- **opts):
- """node1, node2: nodes to link together
- port1, port2: ports (optional)
- opts: link options (optional)
- returns: link info key"""
- if not opts and self.lopts:
- opts = self.lopts
- self.addPort(node1, node2, port1, port2)
- key = tuple(self.sorted([node1, node2]))
- self.link_info[key] = opts
- self.g.add_edge(*key)
- return key
-
- def addPort(self, src, dst, sport=None, dport=None):
- '''Generate port mapping for new edge.
- @param src source switch name
- @param dst destination switch name
- '''
- self.ports.setdefault(src, {})
- self.ports.setdefault(dst, {})
- # New port: number of outlinks + base
- src_base = 1 if self.isSwitch(src) else 0
- dst_base = 1 if self.isSwitch(dst) else 0
- if sport is None:
- sport = len(self.ports[src]) + src_base
- if dport is None:
- dport = len(self.ports[dst]) + dst_base
- self.ports[src][dst] = sport
- self.ports[dst][src] = dport
-
- def nodes(self, sort=True):
- "Return nodes in graph"
- if sort:
- return self.sorted( self.g.nodes() )
- else:
- return self.g.nodes()
-
- def isSwitch(self, n):
- '''Returns true if node is a switch.'''
- #pdb.set_trace()
- info = self.node_info[n]
- return info and info.get('isSwitch', False)
-
- def switches(self, sort=True):
- '''Return switches.
- sort: sort switches alphabetically
- @return dpids list of dpids
- '''
- return [n for n in self.nodes(sort) if self.isSwitch(n)]
-
- def hosts(self, sort=True):
- '''Return hosts.
- sort: sort hosts alphabetically
- @return dpids list of dpids
- '''
- return [n for n in self.nodes(sort) if not self.isSwitch(n)]
-
- def links(self, sort=True):
- '''Return links.
- sort: sort links alphabetically
- @return links list of name pairs
- '''
- if not sort:
- return self.g.edges()
- else:
- links = [tuple(self.sorted(e)) for e in self.g.edges()]
- return sorted( links, key=naturalSeq )
-
- def port(self, src, dst):
- '''Get port number.
-
- @param src source switch name
- @param dst destination switch name
- @return tuple (src_port, dst_port):
- src_port: port on source switch leading to the destination switch
- dst_port: port on destination switch leading to the source switch
- '''
- if src in self.ports and dst in self.ports[src]:
- assert dst in self.ports and src in self.ports[dst]
- return (self.ports[src][dst], self.ports[dst][src])
-
- def linkInfo( self, src, dst ):
- "Return link metadata"
- src, dst = self.sorted([src, dst])
- return self.link_info[(src, dst)]
-
- def setlinkInfo( self, src, dst, info ):
- "Set link metadata"
- src, dst = self.sorted([src, dst])
- self.link_info[(src, dst)] = info
-
- def nodeInfo( self, name ):
- "Return metadata (dict) for node"
- info = self.node_info[ name ]
- return info if info is not None else {}
-
- def setNodeInfo( self, name, info ):
- "Set metadata (dict) for node"
- self.node_info[ name ] = info
-
- @staticmethod
- def sorted( items ):
- "Items sorted in natural (i.e. alphabetical) order"
- return sorted(items, key=natural)
-
-class SingleSwitchTopo(Topo):
- '''Single switch connected to k hosts.'''
-
- def __init__(self, k=2, **opts):
- '''Init.
-
- @param k number of hosts
- @param enable_all enables all nodes and switches?
- '''
- super(SingleSwitchTopo, self).__init__(**opts)
-
- self.k = k
-
- switch = self.addSwitch('s1')
- for h in irange(1, k):
- host = self.addHost('h%s' % h)
- self.addLink(host, switch)
-
-
-class SingleSwitchReversedTopo(Topo):
- '''Single switch connected to k hosts, with reversed ports.
-
- The lowest-numbered host is connected to the highest-numbered port.
-
- Useful to verify that Mininet properly handles custom port numberings.
- '''
- def __init__(self, k=2, **opts):
- '''Init.
-
- @param k number of hosts
- @param enable_all enables all nodes and switches?
- '''
- super(SingleSwitchReversedTopo, self).__init__(**opts)
- self.k = k
- switch = self.addSwitch('s1')
- for h in irange(1, k):
- host = self.addHost('h%s' % h)
- self.addLink(host, switch,
- port1=0, port2=(k - h + 1))
-
-class LinearTopo(Topo):
- "Linear topology of k switches, with one host per switch."
-
- def __init__(self, k=2, **opts):
- """Init.
- k: number of switches (and hosts)
- hconf: host configuration options
- lconf: link configuration options"""
-
- super(LinearTopo, self).__init__(**opts)
-
- self.k = k
-
- lastSwitch = None
- for i in irange(1, k):
- host = self.addHost('h%s' % i)
- switch = self.addSwitch('s%s' % i)
- self.addLink( host, switch)
- if lastSwitch:
- self.addLink( switch, lastSwitch)
- lastSwitch = switch
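
The Topo class above is essentially a networkx Graph plus a port map: addLink() records an edge and assigns the next free port on each endpoint, with hosts starting at port 0 and switches at port 1. A short illustrative sketch (node names chosen arbitrarily):

    from mininet.topo import Topo

    topo = Topo()
    s1 = topo.addSwitch('s1')
    h1 = topo.addHost('h1')
    h2 = topo.addHost('h2')
    topo.addLink(h1, s1)
    topo.addLink(h2, s1)
    print(topo.nodes())           # ['h1', 'h2', 's1']
    print(topo.port('h1', 's1'))  # (0, 1): h1 port 0 faces s1 port 1
    print(topo.port('h2', 's1'))  # (0, 2)
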
diff --git a/mininet/topolib.py b/mininet/topolib.py
deleted file mode 100644
index 63ba36d..0000000
--- a/mininet/topolib.py
+++ /dev/null
@@ -1,36 +0,0 @@
-"Library of potentially useful topologies for Mininet"
-
-from mininet.topo import Topo
-from mininet.net import Mininet
-
-class TreeTopo( Topo ):
- "Topology for a tree network with a given depth and fanout."
-
- def __init__( self, depth=1, fanout=2 ):
- super( TreeTopo, self ).__init__()
- # Numbering: h1..N, s1..M
- self.hostNum = 1
- self.switchNum = 1
- # Build topology
- self.addTree( depth, fanout )
-
- def addTree( self, depth, fanout ):
- """Add a subtree starting with node n.
- returns: last node added"""
- isSwitch = depth > 0
- if isSwitch:
- node = self.addSwitch( 's%s' % self.switchNum )
- self.switchNum += 1
- for _ in range( fanout ):
- child = self.addTree( depth - 1, fanout )
- self.addLink( node, child )
- else:
- node = self.addHost( 'h%s' % self.hostNum )
- self.hostNum += 1
- return node
-
-
-def TreeNet( depth=1, fanout=2, **kwargs ):
- "Convenience function for creating tree networks."
- topo = TreeTopo( depth, fanout )
- return Mininet( topo, **kwargs )
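
Since addTree() above recurses once per switch level, a tree of depth d and fanout f contains f**d hosts and (f**d - 1)/(f - 1) switches. A quick check of that count (depth and fanout values chosen arbitrarily):

    from mininet.topolib import TreeTopo

    tree = TreeTopo(depth=2, fanout=3)
    print(len(tree.hosts()))     # 9  = 3**2
    print(len(tree.switches()))  # 4  = (3**2 - 1) / (3 - 1)
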
diff --git a/mininet/util.py b/mininet/util.py
deleted file mode 100644
index 6a3dc14..0000000
--- a/mininet/util.py
+++ /dev/null
@@ -1,473 +0,0 @@
-"Utility functions for Mininet."
-
-from mininet.log import output, info, error, warn
-
-from time import sleep
-from resource import setrlimit, RLIMIT_NPROC, RLIMIT_NOFILE
-from select import poll, POLLIN
-from subprocess import call, check_call, Popen, PIPE, STDOUT
-import re
-from fcntl import fcntl, F_GETFL, F_SETFL
-from os import O_NONBLOCK
-import os
-
-# Command execution support
-
-def run( cmd ):
- """Simple interface to subprocess.call()
- cmd: list of command params"""
- return call( cmd.split( ' ' ) )
-
-def checkRun( cmd ):
- """Simple interface to subprocess.check_call()
- cmd: list of command params"""
- return check_call( cmd.split( ' ' ) )
-
-# pylint doesn't understand explicit type checking
-# pylint: disable-msg=E1103
-
-def oldQuietRun( *cmd ):
- """Run a command, routing stderr to stdout, and return the output.
- cmd: list of command params"""
- if len( cmd ) == 1:
- cmd = cmd[ 0 ]
- if isinstance( cmd, str ):
- cmd = cmd.split( ' ' )
- popen = Popen( cmd, stdout=PIPE, stderr=STDOUT )
- # We can't use Popen.communicate() because it uses
- # select(), which can't handle
- # high file descriptor numbers! poll() can, however.
- out = ''
- readable = poll()
- readable.register( popen.stdout )
- while True:
- while readable.poll():
- data = popen.stdout.read( 1024 )
- if len( data ) == 0:
- break
- out += data
- popen.poll()
- if popen.returncode is not None:
- break
- return out
-
-
-# This is a bit complicated, but it enables us to
-# monitor command output as it is happening
-
-def errRun( *cmd, **kwargs ):
- """Run a command and return stdout, stderr and return code
- cmd: string or list of command and args
- stderr: STDOUT to merge stderr with stdout
- shell: run command using shell
- echo: monitor output to console"""
- # Allow passing in a list or a string
- if len( cmd ) == 1:
- cmd = cmd[ 0 ]
- if isinstance( cmd, str ):
- cmd = cmd.split( ' ' )
- cmd = [ str( arg ) for arg in cmd ]
- # By default we separate stderr, don't run in a shell, and don't echo
- stderr = kwargs.get( 'stderr', PIPE )
- shell = kwargs.get( 'shell', False )
- echo = kwargs.get( 'echo', False )
- if echo:
- # cmd goes to stderr, output goes to stdout
- info( cmd, '\n' )
- popen = Popen( cmd, stdout=PIPE, stderr=stderr, shell=shell )
- # We use poll() because select() doesn't work with large fd numbers,
- # and thus communicate() doesn't work either
- out, err = '', ''
- poller = poll()
- poller.register( popen.stdout, POLLIN )
- fdtofile = { popen.stdout.fileno(): popen.stdout }
- outDone, errDone = False, True
- if popen.stderr:
- fdtofile[ popen.stderr.fileno() ] = popen.stderr
- poller.register( popen.stderr, POLLIN )
- errDone = False
- while not outDone or not errDone:
- readable = poller.poll()
- for fd, _event in readable:
- f = fdtofile[ fd ]
- data = f.read( 1024 )
- if echo:
- output( data )
- if f == popen.stdout:
- out += data
- if data == '':
- outDone = True
- elif f == popen.stderr:
- err += data
- if data == '':
- errDone = True
- returncode = popen.wait()
- return out, err, returncode
-
-def errFail( *cmd, **kwargs ):
- "Run a command using errRun and raise exception on nonzero exit"
- out, err, ret = errRun( *cmd, **kwargs )
- if ret:
- raise Exception( "errFail: %s failed with return code %s: %s"
- % ( cmd, ret, err ) )
- return out, err, ret
-
-def quietRun( cmd, **kwargs ):
- "Run a command and return merged stdout and stderr"
- return errRun( cmd, stderr=STDOUT, **kwargs )[ 0 ]
-
-# pylint: enable-msg=E1103
-# pylint: disable-msg=E1101
-
-def isShellBuiltin( cmd ):
- "Return True if cmd is a bash builtin."
- if isShellBuiltin.builtIns is None:
- isShellBuiltin.builtIns = quietRun( 'bash -c enable' )
- space = cmd.find( ' ' )
- if space > 0:
- cmd = cmd[ :space]
- return cmd in isShellBuiltin.builtIns
-
-isShellBuiltin.builtIns = None
-
-# pylint: enable-msg=E1101
-
-# Interface management
-#
-# Interfaces are managed as strings which are simply the
-# interface names, of the form 'nodeN-ethM'.
-#
-# To connect nodes, we create a pair of veth interfaces, and then place them
-# in the pair of nodes that we want to communicate. We then update the node's
-# list of interfaces and connectivity map.
-#
-# For the kernel datapath, switch interfaces
-# live in the root namespace and thus do not have to be
-# explicitly moved.
-
-def makeIntfPair( intf1, intf2 ):
- """Make a veth pair connecting intf1 and intf2.
- intf1: string, interface
- intf2: string, interface
- returns: success boolean"""
- # Delete any old interfaces with the same names
- quietRun( 'ip link del ' + intf1 )
- quietRun( 'ip link del ' + intf2 )
- # Create new pair
- cmd = 'ip link add name ' + intf1 + ' type veth peer name ' + intf2
- return quietRun( cmd )
-
-def retry( retries, delaySecs, fn, *args, **keywords ):
- """Try something several times before giving up.
-       retries: number of times to retry
- delaySecs: wait this long between tries
- fn: function to call
- args: args to apply to function call"""
- tries = 0
- while not fn( *args, **keywords ) and tries < retries:
- sleep( delaySecs )
- tries += 1
- if tries >= retries:
- error( "*** gave up after %i retries\n" % tries )
- exit( 1 )
-
-def moveIntfNoRetry( intf, node, printError=False ):
- """Move interface to node, without retrying.
- intf: string, interface
- node: Node object
- printError: if true, print error"""
- cmd = 'ip link set ' + intf + ' netns ' + repr( node.pid )
- quietRun( cmd )
- links = node.cmd( 'ip link show' )
- if not ( ' %s:' % intf ) in links:
- if printError:
- error( '*** Error: moveIntf: ' + intf +
- ' not successfully moved to ' + node.name + '\n' )
- return False
- return True
-
-def moveIntf( intf, node, printError=False, retries=3, delaySecs=0.001 ):
- """Move interface to node, retrying on failure.
- intf: string, interface
- node: Node object
- printError: if true, print error"""
- retry( retries, delaySecs, moveIntfNoRetry, intf, node, printError )
-
-# Support for dumping network
-
-def dumpNodeConnections( nodes ):
- "Dump connections to/from nodes."
-
- def dumpConnections( node ):
- "Helper function: dump connections to node"
- for intf in node.intfList():
- output( ' %s:' % intf )
- if intf.link:
- intfs = [ intf.link.intf1, intf.link.intf2 ]
- intfs.remove( intf )
- output( intfs[ 0 ] )
- else:
- output( ' ' )
-
- for node in nodes:
- output( node.name )
- dumpConnections( node )
- output( '\n' )
-
-def dumpNetConnections( net ):
- "Dump connections in network"
- nodes = net.controllers + net.switches + net.hosts
- dumpNodeConnections( nodes )
-
-# IP and Mac address formatting and parsing
-
-def _colonHex( val, bytecount ):
- """Generate colon-hex string.
- val: input as unsigned int
- bytecount: number of bytes to convert
- returns: chStr colon-hex string"""
- pieces = []
- for i in range( bytecount - 1, -1, -1 ):
- piece = ( ( 0xff << ( i * 8 ) ) & val ) >> ( i * 8 )
- pieces.append( '%02x' % piece )
- chStr = ':'.join( pieces )
- return chStr
-
-def macColonHex( mac ):
- """Generate MAC colon-hex string from unsigned int.
- mac: MAC address as unsigned int
- returns: macStr MAC colon-hex string"""
- return _colonHex( mac, 6 )
-
-def ipStr( ip ):
- """Generate IP address string from an unsigned int.
- ip: unsigned int of form w << 24 | x << 16 | y << 8 | z
- returns: ip address string w.x.y.z, or 10.x.y.z if w==0"""
- w = ( ip >> 24 ) & 0xff
- w = 10 if w == 0 else w
- x = ( ip >> 16 ) & 0xff
- y = ( ip >> 8 ) & 0xff
- z = ip & 0xff
- return "%i.%i.%i.%i" % ( w, x, y, z )
-
-def ipNum( w, x, y, z ):
- """Generate unsigned int from components of IP address
- returns: w << 24 | x << 16 | y << 8 | z"""
- return ( w << 24 ) | ( x << 16 ) | ( y << 8 ) | z
-
-def nextCCNnet(curCCNnet):
- netNum = ipParse(curCCNnet)
- return ipStr(netNum+4)
-
-def ipAdd( i, prefixLen=8, ipBaseNum=0x0a000000 ):
- """Return IP address string from ints
- i: int to be added to ipbase
- prefixLen: optional IP prefix length
-       ipBaseNum: optional base IP address as int
- returns IP address as string"""
- # Ugly but functional
- assert i < ( 1 << ( 32 - prefixLen ) )
-    mask = 0xffffffff ^ ( ( 1 << ( 32 - prefixLen ) ) - 1 )
- ipnum = i + ( ipBaseNum & mask )
- return ipStr( ipnum )
-
-def ipParse( ip ):
- "Parse an IP address and return an unsigned int."
- args = [ int( arg ) for arg in ip.split( '.' ) ]
- return ipNum( *args )
-
-def netParse( ipstr ):
- """Parse an IP network specification, returning
- address and prefix len as unsigned ints"""
- prefixLen = 0
- if '/' in ipstr:
- ip, pf = ipstr.split( '/' )
- prefixLen = int( pf )
- return ipParse( ip ), prefixLen
-
-def checkInt( s ):
- "Check if input string is an int"
- try:
- int( s )
- return True
- except ValueError:
- return False
-
-def checkFloat( s ):
- "Check if input string is a float"
- try:
- float( s )
- return True
- except ValueError:
- return False
-
-def makeNumeric( s ):
- "Convert string to int or float if numeric."
- if checkInt( s ):
- return int( s )
- elif checkFloat( s ):
- return float( s )
- else:
- return s
-
-# Popen support
-
-def pmonitor(popens, timeoutms=500, readline=True,
- readmax=1024 ):
- """Monitor dict of hosts to popen objects
- a line at a time
- timeoutms: timeout for poll()
- readline: return single line of output
- yields: host, line/output (if any)
- terminates: when all EOFs received"""
- poller = poll()
- fdToHost = {}
- for host, popen in popens.iteritems():
- fd = popen.stdout.fileno()
- fdToHost[ fd ] = host
- poller.register( fd, POLLIN )
- if not readline:
- # Use non-blocking reads
- flags = fcntl( fd, F_GETFL )
- fcntl( fd, F_SETFL, flags | O_NONBLOCK )
- while True:
- fds = poller.poll( timeoutms )
- if fds:
- for fd, _event in fds:
- host = fdToHost[ fd ]
- popen = popens[ host ]
- if readline:
- # Attempt to read a line of output
- # This blocks until we receive a newline!
- line = popen.stdout.readline()
- else:
- line = popen.stdout.read( readmax )
- yield host, line
- # Check for EOF
- if not line:
- popen.poll()
- if popen.returncode is not None:
- poller.unregister( fd )
- del popens[ host ]
- if not popens:
- return
- else:
- yield None, ''
-
-# Other stuff we use
-
-def fixLimits():
- "Fix ridiculously small resource limits."
- setrlimit( RLIMIT_NPROC, ( 8192, 8192 ) )
- setrlimit( RLIMIT_NOFILE, ( 16384, 16384 ) )
-
-def mountCgroups():
- "Make sure cgroups file system is mounted"
- mounts = quietRun( 'mount' )
- cgdir = '/sys/fs/cgroup'
- csdir = cgdir + '/cpuset'
- if ('cgroup on %s' % cgdir not in mounts and
- 'cgroups on %s' % cgdir not in mounts):
- raise Exception( "cgroups not mounted on " + cgdir )
- if 'cpuset on %s' % csdir not in mounts:
- errRun( 'mkdir -p ' + csdir )
- errRun( 'mount -t cgroup -ocpuset cpuset ' + csdir )
-
-def natural( text ):
- "To sort sanely/alphabetically: sorted( l, key=natural )"
- def num( s ):
- "Convert text segment to int if necessary"
- return int( s ) if s.isdigit() else s
- return [ num( s ) for s in re.split( r'(\d+)', text ) ]
-
-def naturalSeq( t ):
- "Natural sort key function for sequences"
- return [ natural( x ) for x in t ]
-
-def numCores():
- "Returns number of CPU cores based on /proc/cpuinfo"
- if hasattr( numCores, 'ncores' ):
- return numCores.ncores
- try:
- numCores.ncores = int( quietRun('grep -c processor /proc/cpuinfo') )
- except ValueError:
- return 0
- return numCores.ncores
-
-def irange(start, end):
- """Inclusive range from start to end (vs. Python insanity.)
- irange(1,5) -> 1, 2, 3, 4, 5"""
- return range( start, end + 1 )
-
-def custom( cls, **params ):
- "Returns customized constructor for class cls."
- # Note: we may wish to see if we can use functools.partial() here
- # and in customConstructor
- def customized( *args, **kwargs):
- "Customized constructor"
- kwargs = kwargs.copy()
- kwargs.update( params )
- return cls( *args, **kwargs )
- customized.__name__ = 'custom(%s,%s)' % ( cls, params )
- return customized
-
-def splitArgs( argstr ):
- """Split argument string into usable python arguments
- argstr: argument string with format fn,arg2,kw1=arg3...
- returns: fn, args, kwargs"""
- split = argstr.split( ',' )
- fn = split[ 0 ]
- params = split[ 1: ]
- # Convert int and float args; removes the need for function
- # to be flexible with input arg formats.
- args = [ makeNumeric( s ) for s in params if '=' not in s ]
- kwargs = {}
- for s in [ p for p in params if '=' in p ]:
- key, val = s.split( '=' )
- kwargs[ key ] = makeNumeric( val )
- return fn, args, kwargs
-
-def customConstructor( constructors, argStr ):
- """Return custom constructor based on argStr
-    The args and key/val pairs in argStr will be automatically applied
- when the generated constructor is later used.
- """
- cname, newargs, kwargs = splitArgs( argStr )
- constructor = constructors.get( cname, None )
-
- if not constructor:
- raise Exception( "error: %s is unknown - please specify one of %s" %
- ( cname, constructors.keys() ) )
-
- def customized( name, *args, **params ):
- "Customized constructor, useful for Node, Link, and other classes"
- params = params.copy()
- params.update( kwargs )
- if not newargs:
- return constructor( name, *args, **params )
- if args:
- warn( 'warning: %s replacing %s with %s\n' % (
- constructor, args, newargs ) )
- return constructor( name, *newargs, **params )
-
- customized.__name__ = 'customConstructor(%s)' % argStr
- return customized
-
-def buildTopo( topos, topoStr ):
- """Create topology from string with format (object, arg1, arg2,...).
- input topos is a dict of topo names to constructors, possibly w/args.
- """
- topo, args, kwargs = splitArgs( topoStr )
- if topo not in topos:
- raise Exception( 'Invalid topo name %s' % topo )
- return topos[ topo ]( *args, **kwargs )
-
-def ensureRoot():
- """Ensure that we are running as root.
-
- Probably we should only sudo when needed as per Big Switch's patch.
- """
- if os.getuid() != 0:
- print "*** Mininet must run as root."
- exit( 1 )
- return
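
The address helpers above are plain bit arithmetic on a 32-bit integer; a short worked example (the 10.0.0.0/8 base is the library default, the host index 5 is arbitrary):

    from mininet.util import ipNum, ipStr, ipAdd, ipParse

    base = ipNum(10, 0, 0, 0)                      # 0x0a000000
    print(ipStr(base + 1))                         # 10.0.0.1
    print(ipAdd(5, prefixLen=8, ipBaseNum=base))   # 10.0.0.5
    print(ipParse('10.0.0.5') == base + 5)         # True
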
diff --git a/mnexec.c b/mnexec.c
deleted file mode 100644
index 42a9cf6..0000000
--- a/mnexec.c
+++ /dev/null
@@ -1,179 +0,0 @@
-/* mnexec: execution utility for mininet
- *
- * Starts up programs and does things that are slow or
- * difficult in Python, including:
- *
- * - closing all file descriptors except stdin/out/error
- * - detaching from a controlling tty using setsid
- * - running in a network namespace
- * - printing out the pid of a process so we can identify it later
- * - attaching to a namespace and cgroup
- * - setting RT scheduling
- *
- * Partially based on public domain setsid(1)
-*/
-
-#include <stdio.h>
-#include <linux/sched.h>
-#include <unistd.h>
-#include <limits.h>
-#include <syscall.h>
-#include <fcntl.h>
-#include <stdlib.h>
-#include <ctype.h>
-#include <sched.h>
-
-#if !defined(VERSION)
-#define VERSION "(devel)"
-#endif
-
-void usage(char *name)
-{
- printf("Execution utility for Mininet\n\n"
- "Usage: %s [-cdnp] [-a pid] [-g group] [-r rtprio] cmd args...\n\n"
- "Options:\n"
- " -c: close all file descriptors except stdin/out/error\n"
- " -d: detach from tty by calling setsid()\n"
- " -n: run in new network namespace\n"
- " -p: print ^A + pid\n"
- " -a pid: attach to pid's network namespace\n"
- " -g group: add to cgroup\n"
- " -r rtprio: run with SCHED_RR (usually requires -g)\n"
- " -v: print version\n",
- name);
-}
-
-
-int setns(int fd, int nstype)
-{
- return syscall(308, fd, nstype);
-}
-
-/* Validate alphanumeric path foo1/bar2/baz */
-void validate(char *path)
-{
- char *s;
- for (s=path; *s; s++) {
- if (!isalnum(*s) && *s != '/') {
- fprintf(stderr, "invalid path: %s\n", path);
- exit(1);
- }
- }
-}
-
-/* Add our pid to cgroup */
-int cgroup(char *gname)
-{
- static char path[PATH_MAX];
- static char *groups[] = {
- "cpu", "cpuacct", "cpuset", NULL
- };
- char **gptr;
- pid_t pid = getpid();
- int count = 0;
- validate(gname);
- for (gptr = groups; *gptr; gptr++) {
- FILE *f;
- snprintf(path, PATH_MAX, "/sys/fs/cgroup/%s/%s/tasks",
- *gptr, gname);
- f = fopen(path, "w");
- if (f) {
- count++;
- fprintf(f, "%d\n", pid);
- fclose(f);
- }
- }
- if (!count) {
- fprintf(stderr, "cgroup: could not add to cgroup %s\n",
- gname);
- exit(1);
- }
-}
-
-int main(int argc, char *argv[])
-{
- char c;
- int fd;
- char path[PATH_MAX];
- int nsid;
- int pid;
- static struct sched_param sp;
- while ((c = getopt(argc, argv, "+cdnpa:g:r:vh")) != -1)
- switch(c) {
- case 'c':
- /* close file descriptors except stdin/out/error */
- for (fd = getdtablesize(); fd > 2; fd--)
- close(fd);
- break;
- case 'd':
- /* detach from tty */
- if (getpgrp() == getpid()) {
- switch(fork()) {
- case -1:
- perror("fork");
- return 1;
- case 0: /* child */
- break;
- default: /* parent */
- return 0;
- }
- }
- setsid();
- break;
- case 'n':
- /* run in network namespace */
- if (unshare(CLONE_NEWNET) == -1) {
- perror("unshare");
- return 1;
- }
- break;
- case 'p':
- /* print pid */
- printf("\001%d\n", getpid());
- fflush(stdout);
- break;
- case 'a':
- /* Attach to pid's network namespace */
- pid = atoi(optarg);
- sprintf(path, "/proc/%d/ns/net", pid );
- nsid = open(path, O_RDONLY);
- if (nsid < 0) {
- perror(path);
- return 1;
- }
- if (setns(nsid, 0) != 0) {
- perror("setns");
- return 1;
- }
- break;
- case 'g':
- /* Attach to cgroup */
- cgroup(optarg);
- break;
- case 'r':
- /* Set RT scheduling priority */
- sp.sched_priority = atoi(optarg);
- if (sched_setscheduler(getpid(), SCHED_RR, &sp) < 0) {
- perror("sched_setscheduler");
- return 1;
- }
- break;
- case 'v':
- printf("%s\n", VERSION);
- exit(0);
- case 'h':
- usage(argv[0]);
- exit(0);
- default:
- usage(argv[0]);
- exit(1);
- }
-
- if (optind < argc) {
- execvp(argv[optind], &argv[optind]);
- perror(argv[optind]);
- return 1;
- }
-
- usage(argv[0]);
-}
diff --git a/mininet/conf_parser.py b/ndn/conf_parser.py
similarity index 100%
rename from mininet/conf_parser.py
rename to ndn/conf_parser.py
diff --git a/ndn/nfd.py b/ndn/nfd.py
index b33fc37..6fea538 100644
--- a/ndn/nfd.py
+++ b/ndn/nfd.py
@@ -21,8 +21,8 @@
self.ndnFolder = "%s/.ndn" % self.homeFolder
self.clientConf = "%s/client.conf" % self.ndnFolder
- # Copy nfd.conf file from mini-ndn/ndn_utils to the node's home
- node.cmd("sudo cp ~/mini-ndn/ndn_utils/nfd.conf %s" % self.confFile)
+ # Copy nfd.conf file from /usr/local/etc/mini-ndn to the node's home
+ node.cmd("sudo cp /usr/local/etc/mini-ndn/nfd.conf %s" % self.confFile)
# Open the conf file and change socket file name
node.cmd("sudo sed -i 's|nfd.sock|%s.sock|g' %s" % (node.name, self.confFile))
@@ -31,7 +31,7 @@
node.cmd("sudo mkdir %s" % self.ndnFolder)
# Copy the client.conf file and change the unix socket
- node.cmd("sudo cp ~/mini-ndn/ndn_utils/client.conf.sample %s" % self.clientConf)
+ node.cmd("sudo cp /usr/local/etc/mini-ndn/client.conf.sample %s" % self.clientConf)
node.cmd("sudo sed -i 's|nfd.sock|%s.sock|g' %s" % (node.name, self.clientConf))
# Change home folder
diff --git a/ndn/nlsr.py b/ndn/nlsr.py
index cda91ea..d376531 100644
--- a/ndn/nlsr.py
+++ b/ndn/nlsr.py
@@ -42,8 +42,8 @@
ROUTING_LINK_STATE = "ls"
ROUTING_HYPERBOLIC = "hr"
- def __init__(self, node, home):
- node.cmd("sudo cp %s/mini-ndn/ndn_utils/nlsr.conf nlsr.conf" % home)
+ def __init__(self, node):
+ node.cmd("sudo cp /usr/local/etc/mini-ndn/nlsr.conf nlsr.conf")
self.node = node
parameters = node.nlsrParameters
diff --git a/setup.py b/setup.py
index 19bfd83..ec8f76a 100644
--- a/setup.py
+++ b/setup.py
@@ -1,46 +1,9 @@
#!/usr/bin/env python
-"Setuptools params"
-
from setuptools import setup, find_packages
-from os.path import join
-
-# Get version number from source tree
-import sys
-sys.path.append( '.' )
-from mininet.net import VERSION
-
-scripts = [ join( 'bin', filename ) for filename in [ 'mn', 'minindn', 'minindnedit' ] ]
-
-modname = distname = 'mininet'
setup(
- name=distname,
- version=VERSION,
- description='Process-based OpenFlow emulator with NDN extension',
- author='Bob Lantz, Carlos Cabral',
- author_email='rlantz@cs.stanford.edu, cabral@dca.fee.unicamp.br',
- packages=find_packages(exclude='test'),
- long_description="""
- Mininet is a network emulator which uses lightweight
- virtualization to create virtual networks for rapid
- prototyping of Software-Defined Network (SDN) designs
- using OpenFlow. http://openflow.org/mininet.
- This also includes an extension for using Content Centric
- Networks based on the Named Data Networking (NDN) model.
- """,
- classifiers=[
- "License :: OSI Approved :: BSD License",
- "Programming Language :: Python",
- "Development Status :: 2 - Pre-Alpha",
- "Intended Audience :: Developers",
- "Topic :: Internet",
- ],
- keywords='networking emulator protocol Internet OpenFlow SDN NDN NFD NLSR',
- license='BSD',
- install_requires=[
- 'setuptools',
- 'networkx'
- ],
- scripts=scripts,
+ name = "Mini-NDN",
+ packages = find_packages(),
+ scripts = ['bin/minindn', 'bin/minindnedit'],
)
diff --git a/util/build-ovs-packages.sh b/util/build-ovs-packages.sh
deleted file mode 100644
index 6a14659..0000000
--- a/util/build-ovs-packages.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/bash
-
-# Attempt to build debian packages for OVS
-
-set -e # exit on error
-set -u # exit on undefined variable
-
-kvers=`uname -r`
-ksrc=/lib/modules/$kvers/build
-dist=`lsb_release -is | tr [A-Z] [a-z]`
-release=`lsb_release -rs`
-arch=`uname -m`
-buildsuffix=-2
-if [ "$arch" = "i686" ]; then arch=i386; fi
-if [ "$arch" = "x86_64" ]; then arch=amd64; fi
-
-overs=1.4.0
-ovs=openvswitch-$overs
-ovstgz=$ovs.tar.gz
-ovsurl=http://openvswitch.org/releases/$ovstgz
-
-install='sudo apt-get install -y'
-
-echo "*** Installing debian/ubuntu build system"
- $install build-essential devscripts ubuntu-dev-tools debhelper dh-make
- $install diff patch cdbs quilt gnupg fakeroot lintian pbuilder piuparts
- $install module-assistant
-
-echo "*** Installing OVS dependencies"
- $install pkg-config gcc make python-dev libssl-dev libtool
- $install dkms ipsec-tools
-
-echo "*** Installing headers for $kvers"
- $install linux-headers-$kvers
-
-echo "*** Retrieving OVS source"
- wget -c $ovsurl
- tar xzf $ovstgz
- cd $ovs
-
-echo "*** Patching OVS source"
- # Not sure why this fails, but off it goes!
- sed -i -e 's/dh_strip/# dh_strip/' debian/rules
- if [ "$release" = "10.04" ]; then
- # Lucid doesn't seem to have all the packages for ovsdbmonitor
- echo "*** Patching debian/rules to remove dh_python2"
- sed -i -e 's/dh_python2/dh_pysupport/' debian/rules
- echo "*** Not building ovsdbmonitor since it's too hard on 10.04"
- mv debian/ovsdbmonitor.install debian/ovsdbmonitor.install.backup
- sed -i -e 's/ovsdbmonitor.install/ovsdbmonitor.install.backup/' Makefile.in
- else
- # Install a bag of hurt for ovsdbmonitor
- $install python-pyside.qtcore pyqt4-dev-tools python-twisted python-twisted-bin \
- python-twisted-core python-twisted-conch python-anyjson python-zope.interface
- fi
- # init script was written to assume that commands complete
- sed -i -e 's/^set -e/#set -e/' debian/openvswitch-controller.init
-
-echo "*** Building OVS user packages"
- opts=--with-linux=/lib/modules/`uname -r`/build
- fakeroot make -f debian/rules DATAPATH_CONFIGURE_OPTS=$opts binary
-
-echo "*** Building OVS datapath kernel module package"
- # Still looking for the "right" way to do this...
- sudo mkdir -p /usr/src/linux
- ln -sf _debian/openvswitch.tar.gz .
- sudo make -f debian/rules.modules KSRC=$ksrc KVERS=$kvers binary-modules
-
-echo "*** Built the following packages:"
- cd ~
- ls -l *deb
-
-archive=ovs-$overs-core-$dist-$release-$arch$buildsuffix.tar
-ovsbase='common pki switch brcompat controller datapath-dkms'
-echo "*** Packing up $ovsbase .debs into:"
-echo " $archive"
- pkgs=""
- for component in $ovsbase; do
- if echo $component | egrep 'dkms|pki'; then
- # Architecture-independent packages
- deb=(openvswitch-${component}_$overs*all.deb)
- else
- deb=(openvswitch-${component}_$overs*$arch.deb)
- fi
- pkgs="$pkgs $deb"
- done
- rm -rf $archive
- tar cf $archive $pkgs
-
-echo "*** Contents of archive $archive:"
- tar tf $archive
-
-echo "*** Done (hopefully)"
-
diff --git a/util/colorfilters b/util/colorfilters
deleted file mode 100644
index 745fe21..0000000
--- a/util/colorfilters
+++ /dev/null
@@ -1,21 +0,0 @@
-# DO NOT EDIT THIS FILE! It was created by Wireshark
-@Bad TCP@tcp.analysis.flags@[0,0,0][65535,24383,24383]
-@HSRP State Change@hsrp.state != 8 && hsrp.state != 16@[0,0,0][65535,63222,0]
-@Spanning Tree Topology Change@stp.type == 0x80@[0,0,0][65535,63222,0]
-@OSPF State Change@ospf.msg != 1@[0,0,0][65535,63222,0]
-@ICMP errors@icmp.type eq 3 || icmp.type eq 4 || icmp.type eq 5 || icmp.type eq 11@[0,0,0][0,65535,3616]
-@ARP@arp@[55011,59486,65534][0,0,0]
-@ICMP@icmp@[49680,49737,65535][0,0,0]
-@TCP RST@tcp.flags.reset eq 1@[37008,0,0][65535,63121,32911]
-@TTL low or unexpected@( ! ip.dst == 224.0.0.0/4 && ip.ttl < 5) || (ip.dst == 224.0.0.0/24 && ip.ttl != 1)@[37008,0,0][65535,65535,65535]
-@of@of@[0,5,65535][65535,65535,65535]
-@Checksum Errors@cdp.checksum_bad==1 || edp.checksum_bad==1 || ip.checksum_bad==1 || tcp.checksum_bad==1 || udp.checksum_bad==1@[0,0,0][65535,24383,24383]
-@SMB@smb || nbss || nbns || nbipx || ipxsap || netbios@[65534,64008,39339][0,0,0]
-@HTTP@http || tcp.port == 80@[36107,65535,32590][0,0,0]
-@IPX@ipx || spx@[65534,58325,58808][0,0,0]
-@DCERPC@dcerpc@[51199,38706,65533][0,0,0]
-@Routing@hsrp || eigrp || ospf || bgp || cdp || vrrp || gvrp || igmp || ismp@[65534,62325,54808][0,0,0]
-@TCP SYN/FIN@tcp.flags & 0x02 || tcp.flags.fin == 1@[41026,41026,41026][0,0,0]
-@TCP@tcp@[59345,58980,65534][0,0,0]
-@UDP@udp@[28834,57427,65533][0,0,0]
-@Broadcast@eth[0] & 1@[65535,65535,65535][32768,32768,32768]
diff --git a/util/doxify.py b/util/doxify.py
deleted file mode 100644
index f9f60ad..0000000
--- a/util/doxify.py
+++ /dev/null
@@ -1,89 +0,0 @@
-#!/usr/bin/python
-
-"""
-Convert simple documentation to epydoc/pydoctor-compatible markup
-"""
-
-from sys import stdin, stdout, argv
-import os
-from tempfile import mkstemp
-from subprocess import call
-
-import re
-
-spaces = re.compile( r'\s+' )
-singleLineExp = re.compile( r'\s+"([^"]+)"' )
-commentStartExp = re.compile( r'\s+"""' )
-commentEndExp = re.compile( r'"""$' )
-returnExp = re.compile( r'\s+(returns:.*)' )
-lastindent = ''
-
-
-comment = False
-
-def fixParam( line ):
- "Change foo: bar to @foo bar"
- result = re.sub( r'(\w+):', r'@param \1', line )
- result = re.sub( r' @', r'@', result)
- return result
-
-def fixReturns( line ):
- "Change returns: foo to @return foo"
- return re.sub( 'returns:', r'@returns', line )
-
-def fixLine( line ):
- global comment
- match = spaces.match( line )
- if not match:
- return line
- else:
- indent = match.group(0)
- if singleLineExp.match( line ):
- return re.sub( '"', '"""', line )
- if commentStartExp.match( line ):
- comment = True
- if comment:
- line = fixReturns( line )
- line = fixParam( line )
- if commentEndExp.search( line ):
- comment = False
- return line
-
-
-def test():
- "Test transformations"
- assert fixLine(' "foo"') == ' """foo"""'
- assert fixParam( 'foo: bar' ) == '@param foo bar'
- assert commentStartExp.match( ' """foo"""')
-
-def funTest():
- testFun = (
- 'def foo():\n'
- ' "Single line comment"\n'
- ' """This is a test"""\n'
- ' bar: int\n'
- ' baz: string\n'
- ' returns: junk"""\n'
- ' if True:\n'
- ' print "OK"\n'
- ).splitlines( True )
-
-    fixLines( testFun, stdout.fileno() )
-
-def fixLines( lines, fid ):
- for line in lines:
- os.write( fid, fixLine( line ) )
-
-if __name__ == '__main__':
- if False:
- funTest()
- infile = open( argv[1] )
- outfid, outname = mkstemp()
- fixLines( infile.readlines(), outfid )
- infile.close()
- os.close( outfid )
- call( [ 'doxypy', outname ] )
-
-
-
-
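doxify.py reads the Python file named on the command line, rewrites its docstring markup with the substitutions above, and pipes the result through doxypy, which emits Doxygen-compatible comments on standard output. A minimal usage sketch, assuming doxypy is installed and using an illustrative input path:

    # convert a module's docstrings and capture the doxypy output
    python util/doxify.py mininet/node.py > node.dox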
diff --git a/util/install.sh b/util/install.sh
deleted file mode 100755
index c16883d..0000000
--- a/util/install.sh
+++ /dev/null
@@ -1,562 +0,0 @@
-#!/usr/bin/env bash
-
-# Mininet install script for Ubuntu (and Debian Lenny)
-# Brandon Heller (brandonh@stanford.edu)
-
-# Fail on error
-set -e
-
-# Fail on unset var usage
-set -o nounset
-
-# Location of CONFIG_NET_NS-enabled kernel(s)
-KERNEL_LOC=http://www.openflow.org/downloads/mininet
-
-# Attempt to identify Linux release
-
-DIST=Unknown
-RELEASE=Unknown
-CODENAME=Unknown
-ARCH=`uname -m`
-if [ "$ARCH" = "x86_64" ]; then ARCH="amd64"; fi
-if [ "$ARCH" = "i686" ]; then ARCH="i386"; fi
-
-test -e /etc/debian_version && DIST="Debian"
-grep Ubuntu /etc/lsb-release &> /dev/null && DIST="Ubuntu"
-if [ "$DIST" = "Ubuntu" ] || [ "$DIST" = "Debian" ]; then
- install='sudo apt-get -y install'
- remove='sudo apt-get -y remove'
- pkginst='sudo dpkg -i'
- # Prereqs for this script
- if ! which lsb_release &> /dev/null; then
- $install lsb-release
- fi
- if ! which bc &> /dev/null; then
- $install bc
- fi
-fi
-if which lsb_release &> /dev/null; then
- DIST=`lsb_release -is`
- RELEASE=`lsb_release -rs`
- CODENAME=`lsb_release -cs`
-fi
-echo "Detected Linux distribution: $DIST $RELEASE $CODENAME $ARCH"
-
-# Kernel params
-
-if [ "$DIST" = "Ubuntu" ]; then
- if [ "$RELEASE" = "10.04" ]; then
- KERNEL_NAME='3.0.0-15-generic'
- else
- KERNEL_NAME=`uname -r`
- fi
- KERNEL_HEADERS=linux-headers-${KERNEL_NAME}
-elif [ "$DIST" = "Debian" ] && [ "$ARCH" = "i386" ] && [ "$CODENAME" = "lenny" ]; then
- KERNEL_NAME=2.6.33.1-mininet
- KERNEL_HEADERS=linux-headers-${KERNEL_NAME}_${KERNEL_NAME}-10.00.Custom_i386.deb
- KERNEL_IMAGE=linux-image-${KERNEL_NAME}_${KERNEL_NAME}-10.00.Custom_i386.deb
-else
- echo "Install.sh currently only supports Ubuntu and Debian Lenny i386."
- exit 1
-fi
-
-# More distribution info
-DIST_LC=`echo $DIST | tr [A-Z] [a-z]` # as lower case
-
-# Kernel Deb pkg to be removed:
-KERNEL_IMAGE_OLD=linux-image-2.6.26-33-generic
-
-DRIVERS_DIR=/lib/modules/${KERNEL_NAME}/kernel/drivers/net
-
-OVS_RELEASE=1.4.0
-OVS_PACKAGE_LOC=https://github.com/downloads/mininet/mininet
-OVS_BUILDSUFFIX=-ignore # was -2
-OVS_PACKAGE_NAME=ovs-$OVS_RELEASE-core-$DIST_LC-$RELEASE-$ARCH$OVS_BUILDSUFFIX.tar
-OVS_SRC=~/openvswitch
-OVS_TAG=v$OVS_RELEASE
-OVS_BUILD=$OVS_SRC/build-$KERNEL_NAME
-OVS_KMODS=($OVS_BUILD/datapath/linux/{openvswitch_mod.ko,brcompat_mod.ko})
-
-function kernel {
- echo "Install Mininet-compatible kernel if necessary"
- sudo apt-get update
- if [ "$DIST" = "Ubuntu" ] && [ "$RELEASE" = "10.04" ]; then
- $install linux-image-$KERNEL_NAME
- elif [ "$DIST" = "Debian" ]; then
- # The easy approach: download pre-built linux-image and linux-headers packages:
- wget -c $KERNEL_LOC/$KERNEL_HEADERS
- wget -c $KERNEL_LOC/$KERNEL_IMAGE
-
- # Install custom linux headers and image:
- $pkginst $KERNEL_IMAGE $KERNEL_HEADERS
-
- # The next two steps are to work around a bug in newer versions of
- # kernel-package, which fails to add initrd images with the latest kernels.
- # See http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=525032
- # Generate initrd image if the .deb didn't install it:
- if ! test -e /boot/initrd.img-${KERNEL_NAME}; then
- sudo update-initramfs -c -k ${KERNEL_NAME}
- fi
-
- # Ensure /boot/grub/menu.lst boots with initrd image:
- sudo update-grub
-
- # The default should be the new kernel. Otherwise, you may need to modify
- # /boot/grub/menu.lst to set the default to the entry corresponding to the
- # kernel you just installed.
- fi
-}
-
-function kernel_clean {
- echo "Cleaning kernel..."
-
- # To save disk space, remove previous kernel
- if ! $remove $KERNEL_IMAGE_OLD; then
- echo $KERNEL_IMAGE_OLD not installed.
- fi
-
- # Also remove downloaded packages:
- rm -f ~/linux-headers-* ~/linux-image-*
-}
-
-# Install Mininet deps
-function mn_deps {
- echo "Installing Mininet dependencies"
- $install gcc make screen psmisc xterm ssh iperf iproute telnet \
- python-setuptools python-networkx cgroup-bin ethtool help2man \
- pyflakes pylint pep8
-
- if [ "$DIST" = "Ubuntu" ] && [ "$RELEASE" = "10.04" ]; then
- echo "Upgrading networkx to avoid deprecation warning"
- sudo easy_install --upgrade networkx
- fi
-
- # Add sysctl parameters as noted in the INSTALL file to increase kernel
- # limits to support larger setups:
- sudo su -c "cat $HOME/mini-ndn/util/sysctl_addon >> /etc/sysctl.conf"
-
- # Load new sysctl settings:
- sudo sysctl -p
-
-    echo "Installing Mininet/Mini-NDN core"
- pushd ~/mini-ndn
- sudo make install
- popd
-}
-
-# The following will cause a full OF install, covering:
-# -user switch
-# The instructions below are an abbreviated version from
-# http://www.openflowswitch.org/wk/index.php/Debian_Install
-# ... modified to use Debian Lenny rather than unstable.
-function of {
- echo "Installing OpenFlow reference implementation..."
- cd ~/
- $install git-core autoconf automake autotools-dev pkg-config \
- make gcc libtool libc6-dev
- git clone git://openflowswitch.org/openflow.git
- cd ~/openflow
-
- # Patch controller to handle more than 16 switches
- patch -p1 < ~/mini-ndn/util/openflow-patches/controller.patch
-
- # Resume the install:
- ./boot.sh
- ./configure
- make
- sudo make install
-
- # Remove avahi-daemon, which may cause unwanted discovery packets to be
- # sent during tests, near link status changes:
- $remove avahi-daemon
-
- # Disable IPv6. Add to /etc/modprobe.d/blacklist:
- if [ "$DIST" = "Ubuntu" ]; then
- BLACKLIST=/etc/modprobe.d/blacklist.conf
- else
- BLACKLIST=/etc/modprobe.d/blacklist
- fi
- sudo sh -c "echo 'blacklist net-pf-10\nblacklist ipv6' >> $BLACKLIST"
- cd ~
-}
-
-function wireshark {
- echo "Installing Wireshark dissector..."
-
- sudo apt-get install -y wireshark libgtk2.0-dev
-
- if [ "$DIST" = "Ubuntu" ] && [ "$RELEASE" != "10.04" ]; then
- # Install newer version
- sudo apt-get install -y scons mercurial libglib2.0-dev
- sudo apt-get install -y libwiretap-dev libwireshark-dev
- cd ~
- hg clone https://bitbucket.org/barnstorm/of-dissector
- cd of-dissector/src
- export WIRESHARK=/usr/include/wireshark
- scons
- # libwireshark0/ on 11.04; libwireshark1/ on later
- WSDIR=`ls -d /usr/lib/wireshark/libwireshark* | head -1`
- WSPLUGDIR=$WSDIR/plugins/
- sudo cp openflow.so $WSPLUGDIR
- echo "Copied openflow plugin to $WSPLUGDIR"
- else
- # Install older version from reference source
- cd ~/openflow/utilities/wireshark_dissectors/openflow
- make
- sudo make install
- fi
-
- # Copy coloring rules: OF is white-on-blue:
- mkdir -p ~/.wireshark
- cp ~/mini-ndn/util/colorfilters ~/.wireshark
-}
-
-
-# Install Open vSwitch
-# Instructions derived from OVS INSTALL, INSTALL.OpenFlow and README files.
-
-function ovs {
- echo "Installing Open vSwitch..."
-
- # Required for module build/dkms install
- $install $KERNEL_HEADERS
-
- ovspresent=0
-
- # First see if we have packages
- # XXX wget -c seems to fail from github/amazon s3
- cd /tmp
- if wget $OVS_PACKAGE_LOC/$OVS_PACKAGE_NAME 2> /dev/null; then
- $install patch dkms fakeroot python-argparse
- tar xf $OVS_PACKAGE_NAME
- orig=`tar tf $OVS_PACKAGE_NAME`
- # Now install packages in reasonable dependency order
- order='dkms common pki openvswitch-switch brcompat controller'
- pkgs=""
- for p in $order; do
- pkg=`echo "$orig" | grep $p`
- # Annoyingly, things seem to be missing without this flag
- $pkginst --force-confmiss $pkg
- done
- ovspresent=1
- fi
-
- # Otherwise try distribution's OVS packages
- if [ "$DIST" = "Ubuntu" ] && [ `expr $RELEASE '>=' 11.10` = 1 ]; then
- if ! dpkg --get-selections | grep openvswitch-datapath; then
- # If you've already installed a datapath, assume you
- # know what you're doing and don't need dkms datapath.
- # Otherwise, install it.
- $install openvswitch-datapath-dkms
- fi
-        if ! $install openvswitch-switch openvswitch-controller; then
- echo "Ignoring error installing openvswitch-controller"
- fi
- ovspresent=1
- fi
-
- # Switch can run on its own, but
- # Mininet should control the controller
- if [ -e /etc/init.d/openvswitch-controller ]; then
- if sudo service openvswitch-controller stop; then
- echo "Stopped running controller"
- fi
- sudo update-rc.d openvswitch-controller disable
- fi
-
- if [ $ovspresent = 1 ]; then
- echo "Done (hopefully) installing packages"
- cd ~
- return
- fi
-
- # Otherwise attempt to install from source
-
- $install pkg-config gcc make python-dev libssl-dev libtool
-
- if [ "$DIST" = "Debian" ]; then
- if [ "$CODENAME" = "lenny" ]; then
- $install git-core
- # Install Autoconf 2.63+ backport from Debian Backports repo:
- # Instructions from http://backports.org/dokuwiki/doku.php?id=instructions
- sudo su -c "echo 'deb http://www.backports.org/debian lenny-backports main contrib non-free' >> /etc/apt/sources.list"
- sudo apt-get update
- sudo apt-get -y --force-yes install debian-backports-keyring
- sudo apt-get -y --force-yes -t lenny-backports install autoconf
- fi
- else
- $install git
- fi
-
- # Install OVS from release
- cd ~/
- git clone git://openvswitch.org/openvswitch $OVS_SRC
- cd $OVS_SRC
- git checkout $OVS_TAG
- ./boot.sh
- BUILDDIR=/lib/modules/${KERNEL_NAME}/build
- if [ ! -e $BUILDDIR ]; then
-        echo "Creating build directory $BUILDDIR"
- sudo mkdir -p $BUILDDIR
- fi
- opts="--with-linux=$BUILDDIR"
- mkdir -p $OVS_BUILD
- cd $OVS_BUILD
- ../configure $opts
- make
- sudo make install
-
- modprobe
-}
-
-function remove_ovs {
- pkgs=`dpkg --get-selections | grep openvswitch | awk '{ print $1;}'`
- echo "Removing existing Open vSwitch packages:"
- echo $pkgs
- if ! $remove $pkgs; then
- echo "Not all packages removed correctly"
- fi
- # For some reason this doesn't happen
- if scripts=`ls /etc/init.d/*openvswitch* 2>/dev/null`; then
- echo $scripts
- for s in $scripts; do
- s=$(basename $s)
- echo SCRIPT $s
- sudo service $s stop
- sudo rm -f /etc/init.d/$s
- sudo update-rc.d -f $s remove
- done
- fi
- echo "Done removing OVS"
-}
-
-# Install NOX with tutorial files
-function nox {
- echo "Installing NOX w/tutorial files..."
-
- # Install NOX deps:
- $install autoconf automake g++ libtool python python-twisted \
- swig libssl-dev make
- if [ "$DIST" = "Debian" ]; then
- $install libboost1.35-dev
- elif [ "$DIST" = "Ubuntu" ]; then
- $install python-dev libboost-dev
- $install libboost-filesystem-dev
- $install libboost-test-dev
- fi
- # Install NOX optional deps:
- $install libsqlite3-dev python-simplejson
-
- # Fetch NOX destiny
- cd ~/
- git clone https://github.com/noxrepo/nox-classic.git noxcore
- cd noxcore
- if ! git checkout -b destiny remotes/origin/destiny ; then
- echo "Did not check out a new destiny branch - assuming current branch is destiny"
- fi
-
- # Apply patches
- git checkout -b tutorial-destiny
- git am ~/mini-ndn/util/nox-patches/*tutorial-port-nox-destiny*.patch
- if [ "$DIST" = "Ubuntu" ] && [ `expr $RELEASE '>=' 12.04` = 1 ]; then
- git am ~/mini-ndn/util/nox-patches/*nox-ubuntu12-hacks.patch
- fi
-
- # Build
- ./boot.sh
- mkdir build
- cd build
- ../configure
- make -j3
- #make check
-
- # Add NOX_CORE_DIR env var:
- sed -i -e 's|# for examples$|&\nexport NOX_CORE_DIR=~/noxcore/build/src|' ~/.bashrc
-
- # To verify this install:
- #cd ~/noxcore/build/src
- #./nox_core -v -i ptcp:
-}
-
-# "Install" POX
-function pox {
- echo "Installing POX into $HOME/pox..."
- cd ~
- git clone https://github.com/noxrepo/pox.git
-}
-
-# Install OFtest
-function oftest {
- echo "Installing oftest..."
-
- # Install deps:
- $install tcpdump python-scapy
-
- # Install oftest:
- cd ~/
- git clone git://github.com/floodlight/oftest
- cd oftest
- cd tools/munger
- sudo make install
-}
-
-# Install cbench
-function cbench {
- echo "Installing cbench..."
-
- $install libsnmp-dev libpcap-dev libconfig-dev
- cd ~/
- git clone git://openflow.org/oflops.git
- cd oflops
- sh boot.sh || true # possible error in autoreconf, so run twice
- sh boot.sh
- ./configure --with-openflow-src-dir=$HOME/openflow
- make
- sudo make install || true # make install fails; force past this
-}
-
-function other {
- echo "Doing other setup tasks..."
-
- # Enable command auto completion using sudo; modify ~/.bashrc:
- sed -i -e 's|# for examples$|&\ncomplete -cf sudo|' ~/.bashrc
-
- # Install tcpdump and tshark, cmd-line packet dump tools. Also install gitk,
- # a graphical git history viewer.
- $install tcpdump tshark gitk
-
- # Install common text editors
- $install vim nano emacs
-
- # Install NTP
- $install ntp
-
- # Set git to colorize everything.
- git config --global color.diff auto
- git config --global color.status auto
- git config --global color.branch auto
-
- # Reduce boot screen opt-out delay. Modify timeout in /boot/grub/menu.lst to 1:
- if [ "$DIST" = "Debian" ]; then
- sudo sed -i -e 's/^timeout.*$/timeout 1/' /boot/grub/menu.lst
- fi
-
- # Clean unneeded debs:
- rm -f ~/linux-headers-* ~/linux-image-*
-}
-
-# Script to copy built OVS kernel modules to where modprobe will
-# find them automatically. Removes the need to keep an environment variable
-# for insmod usage, and works nicely with multiple kernel versions.
-#
-# The downside is that after each recompilation of OVS you'll need to
-# re-run this script. If you're using only one kernel version, then it may be
-# a good idea to use a symbolic link in place of the copy below.
-function modprobe {
- echo "Setting up modprobe for OVS kmod..."
-
-    sudo cp ${OVS_KMODS[@]} $DRIVERS_DIR
- sudo depmod -a ${KERNEL_NAME}
-}
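The comment above also suggests symlinking instead of copying when only a single kernel version is in use; a sketch of that variant, reusing the OVS_KMODS, DRIVERS_DIR and KERNEL_NAME variables defined at the top of the script:

    # symlink the freshly built kmods instead of copying them (single kernel only)
    sudo ln -sf ${OVS_KMODS[@]} $DRIVERS_DIR
    sudo depmod -a ${KERNEL_NAME}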
-
-function all {
- echo "Running all commands..."
- kernel
- mn_deps
- of
- wireshark
- ovs
- # NOX-classic is deprecated, but you can install it manually if desired.
- # nox
- pox
- oftest
- cbench
- other
-    echo "Please reboot, then run ./mini-ndn/util/install.sh -c to remove unneeded packages."
- echo "Enjoy Mininet!"
-}
-
-# Restore disk space and remove sensitive files before shipping a VM.
-function vm_clean {
- echo "Cleaning VM..."
- sudo apt-get clean
- sudo rm -rf /tmp/*
- sudo rm -rf openvswitch*.tar.gz
-
-    # Remove sensitive files
- history -c # note this won't work if you have multiple bash sessions
- rm -f ~/.bash_history # need to clear in memory and remove on disk
- rm -f ~/.ssh/id_rsa* ~/.ssh/known_hosts
- sudo rm -f ~/.ssh/authorized_keys*
-
- # Remove Mininet files
- #sudo rm -f /lib/modules/python2.5/site-packages/mininet*
- #sudo rm -f /usr/bin/mnexec
-
- # Clear optional dev script for SSH keychain load on boot
- rm -f ~/.bash_profile
-
- # Clear git changes
- git config --global user.name "None"
- git config --global user.email "None"
-
- # Remove mininet install script
- rm -f install-mininet.sh
-}
-
-function usage {
-    printf 'Usage: %s [-abcdfhkmnprtvwx]\n\n' $(basename $0) >&2
-
- printf 'This install script attempts to install useful packages\n' >&2
- printf 'for Mininet. It should (hopefully) work on Ubuntu 10.04, 11.10\n' >&2
- printf 'and Debian 5.0 (Lenny). If you run into trouble, try\n' >&2
- printf 'installing one thing at a time, and looking at the \n' >&2
- printf 'specific installation function in this script.\n\n' >&2
-
- printf 'options:\n' >&2
- printf -- ' -a: (default) install (A)ll packages - good luck!\n' >&2
- printf -- ' -b: install controller (B)enchmark (oflops)\n' >&2
- printf -- ' -c: (C)lean up after kernel install\n' >&2
- printf -- ' -d: (D)elete some sensitive files from a VM image\n' >&2
- printf -- ' -f: install open(F)low\n' >&2
- printf -- ' -h: print this (H)elp message\n' >&2
- printf -- ' -k: install new (K)ernel\n' >&2
- printf -- ' -m: install Open vSwitch kernel (M)odule from source dir\n' >&2
-    printf -- ' -n: install mini(N)et dependencies + core files\n' >&2
-    printf -- ' -p: install (P)OX OpenFlow controller\n' >&2
-    printf -- ' -r: remove existing Open vSwitch packages\n' >&2
-    printf -- ' -t: install o(T)her stuff\n' >&2
-    printf -- ' -v: install open (V)switch\n' >&2
-    printf -- ' -w: install OpenFlow (w)ireshark dissector\n' >&2
-    printf -- ' -x: install NO(X) OpenFlow controller\n' >&2
-
- exit 2
-}
-
-if [ $# -eq 0 ]
-then
- all
-else
- while getopts 'abcdfhkmnprtvwx' OPTION
- do
- case $OPTION in
- a) all;;
- b) cbench;;
- c) kernel_clean;;
- d) vm_clean;;
- f) of;;
- h) usage;;
- k) kernel;;
- m) modprobe;;
- n) mn_deps;;
- p) pox;;
- r) remove_ovs;;
- t) other;;
- v) ovs;;
- w) wireshark;;
- x) nox;;
- ?) usage;;
- esac
- done
- shift $(($OPTIND - 1))
-fi
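As the usage text suggests, the components can also be installed one at a time when a full run fails; each single-letter flag maps to one of the functions dispatched by the getopts loop above, for example (paths relative to the repository root):

    # install selected components individually
    ./util/install.sh -n    # mini(N)et dependencies + core files
    ./util/install.sh -f    # open(F)low reference implementation
    ./util/install.sh -w    # OpenFlow (w)ireshark dissector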
diff --git a/util/kbuild/kbuild b/util/kbuild/kbuild
deleted file mode 100644
index c8d2f96..0000000
--- a/util/kbuild/kbuild
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/bin/bash
-
-# Script to build new Debian kernel packages for 2.6.33.1
-#
-# Caveats:
-#
-# Since kernel-package in debian-stable doesn't work with
-# 2.6.33.1, we attempt to patch it in place. This may not be the
-# right thing to do. A possibly better alternative is to install
-# a later version of kernel-package, although that could potentially
-# cause problems with upgrades, etc..
-#
-# The patch to tun.c is a workaround rather than a real fix.
-#
-# Building a full Debian kernel package with all drivers takes a long
-# time, 60-80 minutes on my laptop.
-#
-# Re-running a make-kpkg may not work without running 'make-kpkg clean'
-
-# Season to taste
-# export PATH=/usr/lib/ccache:$PATH
-export CONCURRENCY_LEVEL=3
-
-debversion=2.6.26-2-686-bigmem
-
-image=linux-image-$debversion
-
-echo "*** Installing $image"
-sudo aptitude install $image
-
-newversion=2.6.33.1
-archive=linux-$newversion.tar.bz2
-location=http://www.kernel.org/pub/linux/kernel/v2.6
-
-echo "*** Fetching $location/$archive"
-wget -c $location/$archive
-
-tree=linux-$newversion
-if [ -e $tree ]; then
- echo "*** $tree already exists"
-else
- echo "*** Extracting $archive"
- tar xjf $archive
-fi
-
-echo "*** Patching tun driver"
-patch $tree/drivers/net/tun.c < tun.patch
-
-echo "*** Patching debian build script"
-sudo patch /usr/share/kernel-package/ruleset/misc/version_vars.mk < version_vars.patch
-
-config=/boot/config-$debversion
-echo "*** Copying $config to $tree/.config"
-cp $config $tree/.config
-
-echo "*** Updating config"
-cd $tree
-yes '' | make oldconfig 1> /dev/null
-sed 's/# CONFIG_NET_NS is not set/CONFIG_NET_NS=y/' .config > .config-new
-mv .config-new .config
-echo "*** Result: " `grep CONFIG_NET_NS .config`
-
-echo "*** Building kernel"
-time fakeroot make-kpkg --initrd --append-to-version=-mininet kernel_image kernel_headers
-
-cd ..
-echo "*** Done - package should be in current directory"
-ls *$newversion*.deb
-
-echo "To install:"
-echo "# dpkg -i " *$newversion*.deb
diff --git a/util/kbuild/tun.patch b/util/kbuild/tun.patch
deleted file mode 100644
index 3c4cc69..0000000
--- a/util/kbuild/tun.patch
+++ /dev/null
@@ -1,13 +0,0 @@
---- linux-2.6.33.1/drivers/net/tun.c 2010-03-24 22:47:32.000000000 -0700
-+++ tun-new.c 2010-03-24 22:45:00.000000000 -0700
-@@ -1006,7 +1006,9 @@
- if (err < 0)
- goto err_free_sk;
-
-- if (device_create_file(&tun->dev->dev, &dev_attr_tun_flags) ||
-+ /* BL hack: check for null parent kobj */
-+ if (!tun->dev->dev.kobj.sd ||
-+ device_create_file(&tun->dev->dev, &dev_attr_tun_flags) ||
- device_create_file(&tun->dev->dev, &dev_attr_owner) ||
- device_create_file(&tun->dev->dev, &dev_attr_group))
- printk(KERN_ERR "Failed to create tun sysfs files\n");
diff --git a/util/kbuild/version_vars.patch b/util/kbuild/version_vars.patch
deleted file mode 100644
index 6f55901..0000000
--- a/util/kbuild/version_vars.patch
+++ /dev/null
@@ -1,18 +0,0 @@
---- /usr/share/kernel-package/ruleset/misc/version_vars.mk 2010-03-25 18:14:41.000000000 -0700
-+++ version_vars.mk 2010-03-03 06:46:59.000000000 -0800
-@@ -138,11 +138,13 @@
- EXTRAV_ARG :=
- endif
-
--UTS_RELEASE_HEADER=$(call doit,if [ -f include/linux/utsrelease.h ]; then \
-+UTS_RELEASE_HEADER=$(call doit, if [ -f include/generated/utsrelease.h ]; then \
-+ echo include/generated/utsrelease.h; \
-+ else if [ -f include/linux/utsrelease.h ]; then \
- echo include/linux/utsrelease.h; \
- else \
- echo include/linux/version.h ; \
-- fi)
-+ fi fi)
- UTS_RELEASE_VERSION=$(call doit,if [ -f $(UTS_RELEASE_HEADER) ]; then \
- grep 'define UTS_RELEASE' $(UTS_RELEASE_HEADER) | \
- perl -nle 'm/^\s*\#define\s+UTS_RELEASE\s+("?)(\S+)\1/g && print $$2;';\
diff --git a/util/nox-patches/0001-OpenFlow-tutorial-port-nox-destiny.patch b/util/nox-patches/0001-OpenFlow-tutorial-port-nox-destiny.patch
deleted file mode 100644
index 881fc55..0000000
--- a/util/nox-patches/0001-OpenFlow-tutorial-port-nox-destiny.patch
+++ /dev/null
@@ -1,298 +0,0 @@
-From 5c9610ffb88c89b0f36359ad3c7547831482a3ff Mon Sep 17 00:00:00 2001
-From: Bob Lantz <rlantz@cs.stanford.edu>
-Date: Fri, 3 Feb 2012 14:48:58 -0800
-Subject: [PATCH] OpenFlow tutorial port nox-destiny.
-
----
- src/nox/coreapps/examples/Makefile.am | 2 +-
- src/nox/coreapps/examples/tutorial/Makefile.am | 25 ++++
- src/nox/coreapps/examples/tutorial/meta.json | 12 ++
- src/nox/coreapps/examples/tutorial/pytutorial.py | 67 +++++++++++
- src/nox/coreapps/examples/tutorial/tutorial.cc | 134 ++++++++++++++++++++++
- 5 files changed, 239 insertions(+), 1 deletions(-)
- create mode 100644 src/nox/coreapps/examples/tutorial/Makefile.am
- create mode 100644 src/nox/coreapps/examples/tutorial/__init__.py
- create mode 100644 src/nox/coreapps/examples/tutorial/meta.json
- create mode 100644 src/nox/coreapps/examples/tutorial/pytutorial.py
- create mode 100644 src/nox/coreapps/examples/tutorial/tutorial.cc
-
-diff --git a/src/nox/coreapps/examples/Makefile.am b/src/nox/coreapps/examples/Makefile.am
-index 126f32e..1a0458c 100644
---- a/src/nox/coreapps/examples/Makefile.am
-+++ b/src/nox/coreapps/examples/Makefile.am
-@@ -1,6 +1,6 @@
- include ../../../Make.vars
-
--SUBDIRS = t
-+SUBDIRS = tutorial t
-
- EXTRA_DIST =\
- meta.json\
-diff --git a/src/nox/coreapps/examples/tutorial/Makefile.am b/src/nox/coreapps/examples/tutorial/Makefile.am
-new file mode 100644
-index 0000000..51cf921
---- /dev/null
-+++ b/src/nox/coreapps/examples/tutorial/Makefile.am
-@@ -0,0 +1,25 @@
-+include ../../../../Make.vars
-+
-+EXTRA_DIST =\
-+ meta.xml \
-+ __init__.py \
-+ pytutorial.py
-+
-+if PY_ENABLED
-+AM_CPPFLAGS += $(PYTHON_CPPFLAGS)
-+endif # PY_ENABLED
-+
-+pkglib_LTLIBRARIES = \
-+ tutorial.la
-+
-+tutorial_la_CPPFLAGS = $(AM_CPPFLAGS) -I $(top_srcdir)/src/nox -I $(top_srcdir)/src/nox/coreapps/
-+tutorial_la_SOURCES = tutorial.cc
-+tutorial_la_LDFLAGS = -module -export-dynamic
-+
-+NOX_RUNTIMEFILES = meta.json \
-+ __init__.py \
-+ pytutorial.py
-+
-+all-local: nox-all-local
-+clean-local: nox-clean-local
-+install-exec-hook: nox-install-local
-diff --git a/src/nox/coreapps/examples/tutorial/__init__.py b/src/nox/coreapps/examples/tutorial/__init__.py
-new file mode 100644
-index 0000000..e69de29
-diff --git a/src/nox/coreapps/examples/tutorial/meta.json b/src/nox/coreapps/examples/tutorial/meta.json
-new file mode 100644
-index 0000000..7a9f227
---- /dev/null
-+++ b/src/nox/coreapps/examples/tutorial/meta.json
-@@ -0,0 +1,12 @@
-+{
-+ "components": [
-+ {
-+ "name": "tutorial",
-+ "library": "tutorial"
-+ },
-+ {
-+ "name": "pytutorial",
-+ "python": "nox.coreapps.examples.tutorial.pytutorial"
-+ }
-+ ]
-+}
-diff --git a/src/nox/coreapps/examples/tutorial/pytutorial.py b/src/nox/coreapps/examples/tutorial/pytutorial.py
-new file mode 100644
-index 0000000..1e21c0b
---- /dev/null
-+++ b/src/nox/coreapps/examples/tutorial/pytutorial.py
-@@ -0,0 +1,67 @@
-+# Tutorial Controller
-+# Starts as a hub, and your job is to turn this into a learning switch.
-+
-+import logging
-+
-+from nox.lib.core import *
-+import nox.lib.openflow as openflow
-+from nox.lib.packet.ethernet import ethernet
-+from nox.lib.packet.packet_utils import mac_to_str, mac_to_int
-+
-+log = logging.getLogger('nox.coreapps.tutorial.pytutorial')
-+
-+
-+class pytutorial(Component):
-+
-+ def __init__(self, ctxt):
-+ Component.__init__(self, ctxt)
-+ # Use this table to store MAC addresses in the format of your choice;
-+ # Functions already imported, including mac_to_str, and mac_to_int,
-+ # should prove useful for converting the byte array provided by NOX
-+ # for packet MAC destination fields.
-+ # This table is initialized to empty when your module starts up.
-+ self.mac_to_port = {} # key: MAC addr; value: port
-+
-+ def learn_and_forward(self, dpid, inport, packet, buf, bufid):
-+ """Learn MAC src port mapping, then flood or send unicast."""
-+
-+ # Initial hub behavior: flood packet out everything but input port.
-+ # Comment out the line below when starting the exercise.
-+ self.send_openflow(dpid, bufid, buf, openflow.OFPP_FLOOD, inport)
-+
-+        # Starter pseudocode for learning switch exercise below: you'll need to
-+ # replace each pseudocode line with more specific Python code.
-+
-+ # Learn the port for the source MAC
-+ #self.mac_to_port = <fill in>
-+ #if (destination MAC of the packet is known):
-+ # Send unicast packet to known output port
-+ #self.send_openflow( <fill in params> )
-+ # Later, only after learning controller works:
-+ # push down flow entry and remove the send_openflow command above.
-+ #self.install_datapath_flow( <fill in params> )
-+ #else:
-+ #flood packet out everything but the input port
-+ #self.send_openflow(dpid, bufid, buf, openflow.OFPP_FLOOD, inport)
-+
-+ def packet_in_callback(self, dpid, inport, reason, len, bufid, packet):
-+ """Packet-in handler"""
-+ if not packet.parsed:
-+ log.debug('Ignoring incomplete packet')
-+ else:
-+ self.learn_and_forward(dpid, inport, packet, packet.arr, bufid)
-+
-+ return CONTINUE
-+
-+ def install(self):
-+ self.register_for_packet_in(self.packet_in_callback)
-+
-+ def getInterface(self):
-+ return str(pytutorial)
-+
-+def getFactory():
-+ class Factory:
-+ def instance(self, ctxt):
-+ return pytutorial(ctxt)
-+
-+ return Factory()
-diff --git a/src/nox/coreapps/examples/tutorial/tutorial.cc b/src/nox/coreapps/examples/tutorial/tutorial.cc
-new file mode 100644
-index 0000000..e7240cc
---- /dev/null
-+++ b/src/nox/coreapps/examples/tutorial/tutorial.cc
-@@ -0,0 +1,134 @@
-+#include "component.hh"
-+#include "config.h"
-+#include "packet-in.hh"
-+#include "flow.hh"
-+#include "assert.hh"
-+#include "netinet++/ethernetaddr.hh"
-+#include "netinet++/ethernet.hh"
-+#include <boost/shared_array.hpp>
-+#include <boost/bind.hpp>
-+#ifdef LOG4CXX_ENABLED
-+#include <boost/format.hpp>
-+#include "log4cxx/logger.h"
-+#else
-+#include "vlog.hh"
-+#endif
-+
-+using namespace std;
-+using namespace vigil;
-+using namespace vigil::container;
-+
-+namespace
-+{
-+ static Vlog_module lg("tutorial");
-+
-+ /** Learning switch.
-+ */
-+ class tutorial
-+ : public Component
-+ {
-+ public:
-+ /** Constructor.
-+ */
-+ tutorial(const Context* c, const json_object* node)
-+ : Component(c)
-+ { }
-+
-+ /** Configuration.
-+ * Add handler for packet-in event.
-+ */
-+ void configure(const Configuration*)
-+ {
-+ register_handler<Packet_in_event>
-+ (boost::bind(&tutorial::handle, this, _1));
-+ }
-+
-+ /** Just simply install.
-+ */
-+ void install()
-+ {
-+ lg.dbg(" Install called ");
-+ }
-+
-+ /** Function to setup flow.
-+ */
-+ void setup_flow(Flow& flow, datapathid datapath_id ,
-+ uint32_t buffer_id, uint16_t out_port)
-+ {
-+ ofp_flow_mod* ofm;
-+ size_t size = sizeof *ofm + sizeof(ofp_action_output);
-+ boost::shared_array<char> raw_of(new char[size]);
-+ ofm = (ofp_flow_mod*) raw_of.get();
-+
-+ ofm->header.version = OFP_VERSION;
-+ ofm->header.type = OFPT_FLOW_MOD;
-+ ofm->header.length = htons(size);
-+ ofm->match.wildcards = htonl(0);
-+ ofm->match.in_port = htons(flow.in_port);
-+ ofm->match.dl_vlan = flow.dl_vlan;
-+ memcpy(ofm->match.dl_src, flow.dl_src.octet, sizeof ofm->match.dl_src);
-+ memcpy(ofm->match.dl_dst, flow.dl_dst.octet, sizeof ofm->match.dl_dst);
-+ ofm->match.dl_type = flow.dl_type;
-+ ofm->match.nw_src = flow.nw_src;
-+ ofm->match.nw_dst = flow.nw_dst;
-+ ofm->match.nw_proto = flow.nw_proto;
-+ ofm->match.tp_src = flow.tp_src;
-+ ofm->match.tp_dst = flow.tp_dst;
-+ ofm->command = htons(OFPFC_ADD);
-+ ofm->buffer_id = htonl(buffer_id);
-+ ofm->idle_timeout = htons(5);
-+ ofm->hard_timeout = htons(OFP_FLOW_PERMANENT);
-+ ofm->priority = htons(OFP_DEFAULT_PRIORITY);
-+ ofp_action_output& action = *((ofp_action_output*)ofm->actions);
-+ memset(&action, 0, sizeof(ofp_action_output));
-+ action.type = htons(OFPAT_OUTPUT);
-+ action.len = htons(sizeof(ofp_action_output));
-+ action.max_len = htons(0);
-+ action.port = htons(out_port);
-+ send_openflow_command(datapath_id, &ofm->header, true);
-+ }
-+
-+ /** Function to handle packets.
-+ * @param datapath_id datapath id of switch
-+ * @param in_port port packet is received
-+ * @param buffer_id buffer id of packet
-+ * @param source source mac address in host order
-+ * @param destination destination mac address in host order
-+ */
-+ void handle_packet(datapathid datapath_id, uint16_t in_port, uint32_t buffer_id,
-+ uint64_t source, uint64_t destination)
-+ {
-+ send_openflow_packet(datapath_id, buffer_id, OFPP_FLOOD,
-+ in_port, true);
-+ }
-+
-+    /** Packet-in handler.
-+ */
-+ Disposition handle(const Event& e)
-+ {
-+ const Packet_in_event& pi = assert_cast<const Packet_in_event&>(e);
-+ uint32_t buffer_id = pi.buffer_id;
-+ Flow flow(pi.in_port, *pi.get_buffer());
-+
-+ // drop LLDP packets
-+ if (flow.dl_type == ethernet::LLDP)
-+ return CONTINUE;
-+
-+ // pass handle of unicast packet, else flood
-+ if (!flow.dl_src.is_multicast())
-+ handle_packet(pi.datapath_id, pi.in_port, buffer_id,
-+ flow.dl_src.hb_long(), flow.dl_dst.hb_long());
-+ else
-+ send_openflow_packet(pi.datapath_id, buffer_id, OFPP_FLOOD,
-+ pi.in_port, true);
-+
-+ return CONTINUE;
-+ }
-+
-+ private:
-+};
-+
-+REGISTER_COMPONENT(container::Simple_component_factory<tutorial>,
-+ tutorial);
-+
-+} // unnamed namespace
---
-1.7.5.4
-
diff --git a/util/nox-patches/0002-nox-ubuntu12-hacks.patch b/util/nox-patches/0002-nox-ubuntu12-hacks.patch
deleted file mode 100644
index 77619bc..0000000
--- a/util/nox-patches/0002-nox-ubuntu12-hacks.patch
+++ /dev/null
@@ -1,175 +0,0 @@
-From 166693d7cb640d4a41251b87e92c52d9c688196b Mon Sep 17 00:00:00 2001
-From: Bob Lantz <rlantz@cs.stanford.edu>
-Date: Mon, 14 May 2012 15:30:44 -0700
-Subject: [PATCH] Hacks to get NOX classic/destiny to compile under Ubuntu
- 12.04
-
-Thanks to Srinivasu R. Kanduru for the initial patch.
-
-Apologies for the hacks - it is my hope that this will be fixed
-upstream eventually.
-
----
- config/ac_pkg_swig.m4 | 7 ++++---
- src/Make.vars | 2 +-
- src/nox/coreapps/pyrt/deferredcallback.cc | 2 +-
- src/nox/coreapps/pyrt/pyglue.cc | 2 +-
- src/nox/coreapps/pyrt/pyrt.cc | 2 +-
- src/nox/netapps/authenticator/auth.i | 2 ++
- src/nox/netapps/authenticator/flow_util.i | 1 +
- src/nox/netapps/routing/routing.i | 2 ++
- .../switch_management/pyswitch_management.i | 2 ++
- src/nox/netapps/tests/tests.cc | 2 +-
- src/nox/netapps/topology/pytopology.i | 2 ++
- 11 files changed, 18 insertions(+), 8 deletions(-)
-
-diff --git a/config/ac_pkg_swig.m4 b/config/ac_pkg_swig.m4
-index d12556e..9b608f2 100644
---- a/config/ac_pkg_swig.m4
-+++ b/config/ac_pkg_swig.m4
-@@ -78,9 +78,10 @@ AC_DEFUN([AC_PROG_SWIG],[
- if test -z "$available_patch" ; then
- [available_patch=0]
- fi
-- if test $available_major -ne $required_major \
-- -o $available_minor -ne $required_minor \
-- -o $available_patch -lt $required_patch ; then
-+ major_done=`test $available_major -gt $required_major`
-+ minor_done=`test $available_minor -gt $required_minor`
-+ if test !$major_done -a !$minor_done \
-+ -a $available_patch -lt $required_patch ; then
- AC_MSG_WARN([SWIG version >= $1 is required. You have $swig_version. You should look at http://www.swig.org])
- SWIG=''
- else
-diff --git a/src/Make.vars b/src/Make.vars
-index d70d6aa..93b2879 100644
---- a/src/Make.vars
-+++ b/src/Make.vars
-@@ -53,7 +53,7 @@ AM_LDFLAGS += -export-dynamic
- endif
-
- # set python runtimefiles to be installed in the same directory as pkg
--pkglib_SCRIPTS = $(NOX_RUNTIMEFILES) $(NOX_PYBUILDFILES)
-+pkgdata_SCRIPTS = $(NOX_RUNTIMEFILES) $(NOX_PYBUILDFILES)
- BUILT_SOURCES = $(NOX_PYBUILDFILES)
-
- # Runtime-files build and clean rules
-diff --git a/src/nox/coreapps/pyrt/deferredcallback.cc b/src/nox/coreapps/pyrt/deferredcallback.cc
-index 3a40fa7..111a586 100644
---- a/src/nox/coreapps/pyrt/deferredcallback.cc
-+++ b/src/nox/coreapps/pyrt/deferredcallback.cc
-@@ -69,7 +69,7 @@ DeferredCallback::get_instance(const Callback& c)
- DeferredCallback* cb = new DeferredCallback(c);
-
- // flag as used in *_wrap.cc....correct?
-- return SWIG_Python_NewPointerObj(cb, s, SWIG_POINTER_OWN | 0);
-+ return SWIG_Python_NewPointerObj(m, cb, s, SWIG_POINTER_OWN | 0);
- }
-
- bool
-diff --git a/src/nox/coreapps/pyrt/pyglue.cc b/src/nox/coreapps/pyrt/pyglue.cc
-index 48b9716..317fd04 100644
---- a/src/nox/coreapps/pyrt/pyglue.cc
-+++ b/src/nox/coreapps/pyrt/pyglue.cc
-@@ -874,7 +874,7 @@ to_python(const Flow& flow)
- if (!s) {
- throw std::runtime_error("Could not find Flow SWIG type_info");
- }
-- return SWIG_Python_NewPointerObj(f, s, SWIG_POINTER_OWN | 0);
-+ return SWIG_Python_NewPointerObj(m, f, s, SWIG_POINTER_OWN | 0);
-
- // PyObject* dict = PyDict_New();
- // if (!dict) {
-diff --git a/src/nox/coreapps/pyrt/pyrt.cc b/src/nox/coreapps/pyrt/pyrt.cc
-index fbda461..8ec05d6 100644
---- a/src/nox/coreapps/pyrt/pyrt.cc
-+++ b/src/nox/coreapps/pyrt/pyrt.cc
-@@ -776,7 +776,7 @@ Python_event_manager::create_python_context(const Context* ctxt,
- pretty_print_python_exception());
- }
-
-- PyObject* pyctxt = SWIG_Python_NewPointerObj(p, s, 0);
-+ PyObject* pyctxt = SWIG_Python_NewPointerObj(m, p, s, 0);
- Py_INCREF(pyctxt); // XXX needed?
-
- //Py_DECREF(m);
-diff --git a/src/nox/netapps/authenticator/auth.i b/src/nox/netapps/authenticator/auth.i
-index 1de1a17..bfa04e2 100644
---- a/src/nox/netapps/authenticator/auth.i
-+++ b/src/nox/netapps/authenticator/auth.i
-@@ -18,6 +18,8 @@
-
- %module "nox.netapps.authenticator.pyauth"
-
-+// Hack to get it to compile -BL
-+%include "std_list.i"
- %{
- #include "core_events.hh"
- #include "pyrt/pycontext.hh"
-diff --git a/src/nox/netapps/authenticator/flow_util.i b/src/nox/netapps/authenticator/flow_util.i
-index f67c3ef..2a314e2 100644
---- a/src/nox/netapps/authenticator/flow_util.i
-+++ b/src/nox/netapps/authenticator/flow_util.i
-@@ -32,6 +32,7 @@ using namespace vigil::applications;
- %}
-
- %include "common-defs.i"
-+%include "std_list.i"
-
- %import "netinet/netinet.i"
- %import "pyrt/event.i"
-diff --git a/src/nox/netapps/routing/routing.i b/src/nox/netapps/routing/routing.i
-index 44ccb3d..f9221a2 100644
---- a/src/nox/netapps/routing/routing.i
-+++ b/src/nox/netapps/routing/routing.i
-@@ -17,6 +17,8 @@
- */
- %module "nox.netapps.routing.pyrouting"
-
-+// Hack to get it to compile -BL
-+%include "std_list.i"
- %{
- #include "pyrouting.hh"
- #include "routing.hh"
-diff --git a/src/nox/netapps/switch_management/pyswitch_management.i b/src/nox/netapps/switch_management/pyswitch_management.i
-index 72bfed4..ad2c90d 100644
---- a/src/nox/netapps/switch_management/pyswitch_management.i
-+++ b/src/nox/netapps/switch_management/pyswitch_management.i
-@@ -18,6 +18,8 @@
-
- %module "nox.netapps.pyswitch_management"
-
-+// Hack to get it to compile -BL
-+%include "std_list.i"
- %{
- #include "switch_management_proxy.hh"
- #include "pyrt/pycontext.hh"
-diff --git a/src/nox/netapps/tests/tests.cc b/src/nox/netapps/tests/tests.cc
-index 20e900d..f027028 100644
---- a/src/nox/netapps/tests/tests.cc
-+++ b/src/nox/netapps/tests/tests.cc
-@@ -306,7 +306,7 @@ private:
- throw runtime_error("Could not find PyContext SWIG type_info.");
- }
-
-- PyObject* pyctxt = SWIG_Python_NewPointerObj(p, s, 0);
-+ PyObject* pyctxt = SWIG_Python_NewPointerObj(m, p, s, 0);
- assert(pyctxt);
-
- Py_DECREF(m);
-diff --git a/src/nox/netapps/topology/pytopology.i b/src/nox/netapps/topology/pytopology.i
-index 94a9f4b..7a8cd94 100644
---- a/src/nox/netapps/topology/pytopology.i
-+++ b/src/nox/netapps/topology/pytopology.i
-@@ -18,6 +18,8 @@
-
- %module "nox.netapps.topology"
-
-+// Hack to get it to compile -BL
-+%include "std_list.i"
- %{
- #include "pytopology.hh"
- #include "pyrt/pycontext.hh"
---
-1.7.5.4
-
diff --git a/util/nox-patches/README b/util/nox-patches/README
deleted file mode 100644
index b74a668..0000000
--- a/util/nox-patches/README
+++ /dev/null
@@ -1,2 +0,0 @@
-0001: This patch adds the OpenFlow tutorial module source code to nox-destiny.
-0002: This patch hacks nox-destiny to compile on Ubuntu 12.04.
diff --git a/util/openflow-patches/README b/util/openflow-patches/README
deleted file mode 100644
index 36f19ad..0000000
--- a/util/openflow-patches/README
+++ /dev/null
@@ -1,5 +0,0 @@
-Patches for OpenFlow Reference Implementation
-
-controller.patch: patch controller to support up to 4096 switches (up from 16!)
-
-datapath.patch: patch to kernel datapath to compile with CONFIG_NET_NS=y
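Both patches are ordinary unified diffs that apply with -p1 from the root of the OpenFlow reference checkout; install.sh applies controller.patch this way, and datapath.patch can be applied the same way before building the kernel datapath on a CONFIG_NET_NS=y kernel (paths assume the layout used elsewhere in this repository):

    cd ~/openflow
    patch -p1 < ~/mini-ndn/util/openflow-patches/controller.patch
    patch -p1 < ~/mini-ndn/util/openflow-patches/datapath.patch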
diff --git a/util/openflow-patches/controller.patch b/util/openflow-patches/controller.patch
deleted file mode 100644
index c392fae..0000000
--- a/util/openflow-patches/controller.patch
+++ /dev/null
@@ -1,15 +0,0 @@
-diff --git a/controller/controller.c b/controller/controller.c
-index 41f2547..6eec590 100644
---- a/controller/controller.c
-+++ b/controller/controller.c
-@@ -58,8 +58,8 @@
- #include "vlog.h"
- #define THIS_MODULE VLM_controller
-
--#define MAX_SWITCHES 16
--#define MAX_LISTENERS 16
-+#define MAX_SWITCHES 4096
-+#define MAX_LISTENERS 4096
-
- struct switch_ {
- struct lswitch *lswitch;
diff --git a/util/openflow-patches/datapath.patch b/util/openflow-patches/datapath.patch
deleted file mode 100644
index 13c9df7..0000000
--- a/util/openflow-patches/datapath.patch
+++ /dev/null
@@ -1,26 +0,0 @@
-diff --git a/datapath/datapath.c b/datapath/datapath.c
-index 4a4d3a2..365aa25 100644
---- a/datapath/datapath.c
-+++ b/datapath/datapath.c
-@@ -47,6 +47,9 @@
-
- #include "compat.h"
-
-+#ifdef CONFIG_NET_NS
-+#include <net/net_namespace.h>
-+#endif
-
- /* Strings to describe the manufacturer, hardware, and software. This data
- * is queriable through the switch description stats message. */
-@@ -259,6 +262,10 @@ send_openflow_skb(const struct datapath *dp,
- struct sk_buff *skb, const struct sender *sender)
- {
- return (sender
-- ? genlmsg_unicast(skb, sender->pid)
-+#ifdef CONFIG_NET_NS
-+ ? genlmsg_unicast(&init_net, skb, sender->pid)
-+#else
-+ ? genlmsg_unicast(skb, sender->pid)
-+#endif
- : genlmsg_multicast(skb, 0, dp_mc_group(dp), GFP_ATOMIC));
- }
diff --git a/util/sch_htb-ofbuf/Makefile b/util/sch_htb-ofbuf/Makefile
deleted file mode 100644
index c4d714f..0000000
--- a/util/sch_htb-ofbuf/Makefile
+++ /dev/null
@@ -1,11 +0,0 @@
-obj-m = sch_htb.o
-KVERSION = $(shell uname -r)
-all:
- make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules
-install:
- test -e /lib/modules/$(KVERSION)/kernel/net/sched/sch_htb.ko.bak || mv /lib/modules/$(KVERSION)/kernel/net/sched/sch_htb.ko /lib/modules/$(KVERSION)/kernel/net/sched/sch_htb.ko.bak
- cp sch_htb.ko /lib/modules/$(KVERSION)/kernel/net/sched/sch_htb.ko
- rmmod sch_htb
- modprobe sch_htb
-clean:
- make -C /lib/modules/$(KVERSION)/build M=$(PWD) clean
diff --git a/util/sch_htb-ofbuf/README b/util/sch_htb-ofbuf/README
deleted file mode 100644
index 711ed77..0000000
--- a/util/sch_htb-ofbuf/README
+++ /dev/null
@@ -1,10 +0,0 @@
-Modified sch_htb implementation with ofbuf support.
-
-To compile, just type make. To use this module instead
-of regular sch_htb, do:
-
-0. make
-1. rmmod sch_htb
-2. insmod ./sch_htb.ko
-
-To revert, just rmmod sch_htb.
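Since the Makefile's install target backs up the stock module before overwriting it, a full revert after `make install` restores that backup and reloads it; a sketch, assuming the paths created by the Makefile above:

    # restore the stock sch_htb module from the backup made by 'make install'
    sudo rmmod sch_htb
    sudo cp /lib/modules/$(uname -r)/kernel/net/sched/sch_htb.ko.bak \
            /lib/modules/$(uname -r)/kernel/net/sched/sch_htb.ko
    sudo modprobe sch_htb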
diff --git a/util/sch_htb-ofbuf/sch_htb.c b/util/sch_htb-ofbuf/sch_htb.c
deleted file mode 100644
index baead1c..0000000
--- a/util/sch_htb-ofbuf/sch_htb.c
+++ /dev/null
@@ -1,1644 +0,0 @@
-#define OFBUF (1)
-/*
- * net/sched/sch_htb.c Hierarchical token bucket, feed tree version
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- *
- * Authors: Martin Devera, <devik@cdi.cz>
- *
- * Credits (in time order) for older HTB versions:
- * Stef Coene <stef.coene@docum.org>
- * HTB support at LARTC mailing list
- * Ondrej Kraus, <krauso@barr.cz>
- * found missing INIT_QDISC(htb)
- * Vladimir Smelhaus, Aamer Akhter, Bert Hubert
- * helped a lot to locate nasty class stall bug
- * Andi Kleen, Jamal Hadi, Bert Hubert
- * code review and helpful comments on shaping
- * Tomasz Wrona, <tw@eter.tym.pl>
- * created test case so that I was able to fix nasty bug
- * Wilfried Weissmann
- * spotted bug in dequeue code and helped with fix
- * Jiri Fojtasek
- * fixed requeue routine
- * and many others. thanks.
- */
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/types.h>
-#include <linux/kernel.h>
-#include <linux/string.h>
-#include <linux/errno.h>
-#include <linux/skbuff.h>
-#include <linux/list.h>
-#include <linux/compiler.h>
-#include <linux/rbtree.h>
-#include <linux/workqueue.h>
-#include <linux/slab.h>
-#include <net/netlink.h>
-#include <net/pkt_sched.h>
-
-/* HTB algorithm.
- Author: devik@cdi.cz
- ========================================================================
- HTB is like TBF with multiple classes. It is also similar to CBQ because
- it allows assigning priority to each class in the hierarchy.
- In fact it is another implementation of Floyd's formal sharing.
-
- Levels:
- Each class is assigned a level. A leaf ALWAYS has level 0 and root
- classes have level TC_HTB_MAXDEPTH-1. Interior nodes have level
- one less than their parent.
-*/
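The level structure described above is what a typical tc configuration builds: leaves hold the packets, interior classes only lend bandwidth downwards. A small illustrative hierarchy (interface name and rates are arbitrary):

    # one interior class lending to two leaves; unclassified traffic goes to 1:20
    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 60mbit ceil 100mbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 40mbit ceil 100mbit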
-
-static int htb_hysteresis __read_mostly = 0; /* whether to use mode hysteresis for speedup */
-#define HTB_VER 0x30011		/* major must be matched with number supplied by TC as version */
-
-#if HTB_VER >> 16 != TC_HTB_PROTOVER
-#error "Mismatched sch_htb.c and pkt_sch.h"
-#endif
-
-/* Module parameter and sysfs export */
-module_param (htb_hysteresis, int, 0640);
-MODULE_PARM_DESC(htb_hysteresis, "Hysteresis mode, less CPU load, less accurate");
-
-/* used internally to keep status of single class */
-enum htb_cmode {
- HTB_CANT_SEND, /* class can't send and can't borrow */
- HTB_MAY_BORROW, /* class can't send but may borrow */
- HTB_CAN_SEND /* class can send */
-};
-
-/* interior & leaf nodes; props specific to leaves are marked L: */
-struct htb_class {
- struct Qdisc_class_common common;
- /* general class parameters */
- struct gnet_stats_basic_packed bstats;
- struct gnet_stats_queue qstats;
- struct gnet_stats_rate_est rate_est;
- struct tc_htb_xstats xstats; /* our special stats */
- int refcnt; /* usage count of this class */
-
- /* topology */
- int level; /* our level (see above) */
- unsigned int children;
- struct htb_class *parent; /* parent class */
-
- int prio; /* these two are used only by leaves... */
- int quantum; /* but stored for parent-to-leaf return */
-
- union {
- struct htb_class_leaf {
- struct Qdisc *q;
- int deficit[TC_HTB_MAXDEPTH];
- struct list_head drop_list;
- } leaf;
- struct htb_class_inner {
- struct rb_root feed[TC_HTB_NUMPRIO]; /* feed trees */
- struct rb_node *ptr[TC_HTB_NUMPRIO]; /* current class ptr */
- /* When class changes from state 1->2 and disconnects from
- * parent's feed then we lost ptr value and start from the
- * first child again. Here we store classid of the
- * last valid ptr (used when ptr is NULL).
- */
- u32 last_ptr_id[TC_HTB_NUMPRIO];
- } inner;
- } un;
- struct rb_node node[TC_HTB_NUMPRIO]; /* node for self or feed tree */
- struct rb_node pq_node; /* node for event queue */
- psched_time_t pq_key;
-
- int prio_activity; /* for which prios are we active */
- enum htb_cmode cmode; /* current mode of the class */
-
- /* class attached filters */
- struct tcf_proto *filter_list;
- int filter_cnt;
-
- /* token bucket parameters */
- struct qdisc_rate_table *rate; /* rate table of the class itself */
- struct qdisc_rate_table *ceil; /* ceiling rate (limits borrows too) */
- long buffer, cbuffer; /* token bucket depth/rate */
- psched_tdiff_t mbuffer; /* max wait time */
- long tokens, ctokens; /* current number of tokens */
- psched_time_t t_c; /* checkpoint time */
-};
-
-struct htb_sched {
- struct Qdisc_class_hash clhash;
- struct list_head drops[TC_HTB_NUMPRIO];/* active leaves (for drops) */
-
- /* self list - roots of self generating tree */
- struct rb_root row[TC_HTB_MAXDEPTH][TC_HTB_NUMPRIO];
- int row_mask[TC_HTB_MAXDEPTH];
- struct rb_node *ptr[TC_HTB_MAXDEPTH][TC_HTB_NUMPRIO];
- u32 last_ptr_id[TC_HTB_MAXDEPTH][TC_HTB_NUMPRIO];
-
- /* self wait list - roots of wait PQs per row */
- struct rb_root wait_pq[TC_HTB_MAXDEPTH];
-
- /* time of nearest event per level (row) */
- psched_time_t near_ev_cache[TC_HTB_MAXDEPTH];
-
- int defcls; /* class where unclassified flows go to */
-
- /* filters for qdisc itself */
- struct tcf_proto *filter_list;
-
- int rate2quantum; /* quant = rate / rate2quantum */
- psched_time_t now; /* cached dequeue time */
- struct qdisc_watchdog watchdog;
-
- /* non shaped skbs; let them go directly thru */
- struct sk_buff_head direct_queue;
- int direct_qlen; /* max qlen of above */
-
- long direct_pkts;
-
-#if OFBUF
- /* overflow buffer */
- struct sk_buff_head ofbuf;
- int ofbuf_queued; /* # packets queued in above */
-#endif
-
-#define HTB_WARN_TOOMANYEVENTS 0x1
- unsigned int warned; /* only one warning */
- struct work_struct work;
-};
-
-/* find class in global hash table using given handle */
-static inline struct htb_class *htb_find(u32 handle, struct Qdisc *sch)
-{
- struct htb_sched *q = qdisc_priv(sch);
- struct Qdisc_class_common *clc;
-
- clc = qdisc_class_find(&q->clhash, handle);
- if (clc == NULL)
- return NULL;
- return container_of(clc, struct htb_class, common);
-}
-
-/**
- * htb_classify - classify a packet into class
- *
- * It returns NULL if the packet should be dropped or -1 if the packet
- * should be passed directly thru. In all other cases leaf class is returned.
- * We allow direct class selection by classid in priority. Then we examine
- * filters in qdisc and in inner nodes (if higher filter points to the inner
- * node). If we end up with classid MAJOR:0 we enqueue the skb into special
- * internal fifo (direct). These packets then go directly thru. If we still
- * have no valid leaf we try to use MAJOR:default leaf. If that is still
- * unsuccessful, we finish and return the direct queue.
- */
-#define HTB_DIRECT ((struct htb_class *)-1L)
-
-static struct htb_class *htb_classify(struct sk_buff *skb, struct Qdisc *sch,
- int *qerr)
-{
- struct htb_sched *q = qdisc_priv(sch);
- struct htb_class *cl;
- struct tcf_result res;
- struct tcf_proto *tcf;
- int result;
-
- /* allow to select class by setting skb->priority to valid classid;
- * note that nfmark can be used too by attaching filter fw with no
- * rules in it
- */
- if (skb->priority == sch->handle)
- return HTB_DIRECT; /* X:0 (direct flow) selected */
- cl = htb_find(skb->priority, sch);
- if (cl && cl->level == 0)
- return cl;
-
- *qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
- tcf = q->filter_list;
- while (tcf && (result = tc_classify(skb, tcf, &res)) >= 0) {
-#ifdef CONFIG_NET_CLS_ACT
- switch (result) {
- case TC_ACT_QUEUED:
- case TC_ACT_STOLEN:
- *qerr = NET_XMIT_SUCCESS | __NET_XMIT_STOLEN;
- case TC_ACT_SHOT:
- return NULL;
- }
-#endif
- cl = (void *)res.class;
- if (!cl) {
- if (res.classid == sch->handle)
- return HTB_DIRECT; /* X:0 (direct flow) */
- cl = htb_find(res.classid, sch);
- if (!cl)
- break; /* filter selected invalid classid */
- }
- if (!cl->level)
- return cl; /* we hit leaf; return it */
-
- /* we have got inner class; apply inner filter chain */
- tcf = cl->filter_list;
- }
- /* classification failed; try to use default class */
- cl = htb_find(TC_H_MAKE(TC_H_MAJ(sch->handle), q->defcls), sch);
- if (!cl || cl->level)
- return HTB_DIRECT; /* bad default .. this is safe bet */
- return cl;
-}
-
-/**
- * htb_add_to_id_tree - adds class to the round robin list
- *
- * Routine adds class to the list (actually tree) sorted by classid.
- * Make sure that class is not already on such list for given prio.
- */
-static void htb_add_to_id_tree(struct rb_root *root,
- struct htb_class *cl, int prio)
-{
- struct rb_node **p = &root->rb_node, *parent = NULL;
-
- while (*p) {
- struct htb_class *c;
- parent = *p;
- c = rb_entry(parent, struct htb_class, node[prio]);
-
- if (cl->common.classid > c->common.classid)
- p = &parent->rb_right;
- else
- p = &parent->rb_left;
- }
- rb_link_node(&cl->node[prio], parent, p);
- rb_insert_color(&cl->node[prio], root);
-}
-
-/**
- * htb_add_to_wait_tree - adds class to the event queue with delay
- *
- * The class is added to priority event queue to indicate that class will
- * change its mode in cl->pq_key microseconds. Make sure that class is not
- * already in the queue.
- */
-static void htb_add_to_wait_tree(struct htb_sched *q,
- struct htb_class *cl, long delay)
-{
- struct rb_node **p = &q->wait_pq[cl->level].rb_node, *parent = NULL;
-
- cl->pq_key = q->now + delay;
- if (cl->pq_key == q->now)
- cl->pq_key++;
-
- /* update the nearest event cache */
- if (q->near_ev_cache[cl->level] > cl->pq_key)
- q->near_ev_cache[cl->level] = cl->pq_key;
-
- while (*p) {
- struct htb_class *c;
- parent = *p;
- c = rb_entry(parent, struct htb_class, pq_node);
- if (cl->pq_key >= c->pq_key)
- p = &parent->rb_right;
- else
- p = &parent->rb_left;
- }
- rb_link_node(&cl->pq_node, parent, p);
- rb_insert_color(&cl->pq_node, &q->wait_pq[cl->level]);
-}
-
-/**
- * htb_next_rb_node - finds next node in binary tree
- *
- * When we are past last key we return NULL.
- * Average complexity is 2 steps per call.
- */
-static inline void htb_next_rb_node(struct rb_node **n)
-{
- *n = rb_next(*n);
-}
-
-/**
- * htb_add_class_to_row - add class to its row
- *
- * The class is added to row at priorities marked in mask.
- * It does nothing if mask == 0.
- */
-static inline void htb_add_class_to_row(struct htb_sched *q,
- struct htb_class *cl, int mask)
-{
- q->row_mask[cl->level] |= mask;
- while (mask) {
- int prio = ffz(~mask);
- mask &= ~(1 << prio);
- htb_add_to_id_tree(q->row[cl->level] + prio, cl, prio);
- }
-}
-
-/* If this triggers, it is a bug in this code, but it need not be fatal */
-static void htb_safe_rb_erase(struct rb_node *rb, struct rb_root *root)
-{
- if (RB_EMPTY_NODE(rb)) {
- WARN_ON(1);
- } else {
- rb_erase(rb, root);
- RB_CLEAR_NODE(rb);
- }
-}
-
-
-/**
- * htb_remove_class_from_row - removes class from its row
- *
- * The class is removed from row at priorities marked in mask.
- * It does nothing if mask == 0.
- */
-static inline void htb_remove_class_from_row(struct htb_sched *q,
- struct htb_class *cl, int mask)
-{
- int m = 0;
-
- while (mask) {
- int prio = ffz(~mask);
-
- mask &= ~(1 << prio);
- if (q->ptr[cl->level][prio] == cl->node + prio)
- htb_next_rb_node(q->ptr[cl->level] + prio);
-
- htb_safe_rb_erase(cl->node + prio, q->row[cl->level] + prio);
- if (!q->row[cl->level][prio].rb_node)
- m |= 1 << prio;
- }
- q->row_mask[cl->level] &= ~m;
-}
-
-/**
- * htb_activate_prios - creates active class's feed chain
- *
- * The class is connected to ancestors and/or appropriate rows
- * for priorities it is participating on. cl->cmode must be new
- * (activated) mode. It does nothing if cl->prio_activity == 0.
- */
-static void htb_activate_prios(struct htb_sched *q, struct htb_class *cl)
-{
- struct htb_class *p = cl->parent;
- long m, mask = cl->prio_activity;
-
- while (cl->cmode == HTB_MAY_BORROW && p && mask) {
- m = mask;
- while (m) {
- int prio = ffz(~m);
- m &= ~(1 << prio);
-
- if (p->un.inner.feed[prio].rb_node)
- /* parent already has its feed in use so that
- * reset bit in mask as parent is already ok
- */
- mask &= ~(1 << prio);
-
- htb_add_to_id_tree(p->un.inner.feed + prio, cl, prio);
- }
- p->prio_activity |= mask;
- cl = p;
- p = cl->parent;
-
- }
- if (cl->cmode == HTB_CAN_SEND && mask)
- htb_add_class_to_row(q, cl, mask);
-}
-
-/**
- * htb_deactivate_prios - remove class from feed chain
- *
- * cl->cmode must represent old mode (before deactivation). It does
- * nothing if cl->prio_activity == 0. Class is removed from all feed
- * chains and rows.
- */
-static void htb_deactivate_prios(struct htb_sched *q, struct htb_class *cl)
-{
- struct htb_class *p = cl->parent;
- long m, mask = cl->prio_activity;
-
- while (cl->cmode == HTB_MAY_BORROW && p && mask) {
- m = mask;
- mask = 0;
- while (m) {
- int prio = ffz(~m);
- m &= ~(1 << prio);
-
- if (p->un.inner.ptr[prio] == cl->node + prio) {
- /* we are removing child which is pointed to from
- * parent feed - forget the pointer but remember
- * classid
- */
- p->un.inner.last_ptr_id[prio] = cl->common.classid;
- p->un.inner.ptr[prio] = NULL;
- }
-
- htb_safe_rb_erase(cl->node + prio, p->un.inner.feed + prio);
-
- if (!p->un.inner.feed[prio].rb_node)
- mask |= 1 << prio;
- }
-
- p->prio_activity &= ~mask;
- cl = p;
- p = cl->parent;
-
- }
- if (cl->cmode == HTB_CAN_SEND && mask)
- htb_remove_class_from_row(q, cl, mask);
-}
-
-static inline long htb_lowater(const struct htb_class *cl)
-{
- if (htb_hysteresis)
- return cl->cmode != HTB_CANT_SEND ? -cl->cbuffer : 0;
- else
- return 0;
-}
-static inline long htb_hiwater(const struct htb_class *cl)
-{
- if (htb_hysteresis)
- return cl->cmode == HTB_CAN_SEND ? -cl->buffer : 0;
- else
- return 0;
-}
-
-
-/**
- * htb_class_mode - computes and returns current class mode
- *
- * It computes cl's mode at time cl->t_c+diff and returns it. If mode
- * is not HTB_CAN_SEND then cl->pq_key is updated to time difference
- * from now to time when cl will change its state.
- * Also it is worth noting that class mode doesn't change simply
- * at cl->{c,}tokens == 0; rather, there can be hysteresis in the
- * 0 .. -cl->{c,}buffer range. It is meant to limit the number of
- * mode transitions per time unit. The speed gain is about 1/6.
- */
-static inline enum htb_cmode
-htb_class_mode(struct htb_class *cl, long *diff)
-{
- long toks;
-
- if ((toks = (cl->ctokens + *diff)) < htb_lowater(cl)) {
- *diff = -toks;
- return HTB_CANT_SEND;
- }
-
- if ((toks = (cl->tokens + *diff)) >= htb_hiwater(cl))
- return HTB_CAN_SEND;
-
- *diff = -toks;
- return HTB_MAY_BORROW;
-}
-
-/**
- * htb_change_class_mode - changes class's mode
- *
- * This should be the only way to change a class's mode under normal
- * circumstances. Routine will update feed lists linkage, change mode
- * and add class to the wait event queue if appropriate. New mode should
- * be different from old one and cl->pq_key has to be valid if changing
- * to mode other than HTB_CAN_SEND (see htb_add_to_wait_tree).
- */
-static void
-htb_change_class_mode(struct htb_sched *q, struct htb_class *cl, long *diff)
-{
- enum htb_cmode new_mode = htb_class_mode(cl, diff);
-
- if (new_mode == cl->cmode)
- return;
-
- if (cl->prio_activity) { /* not necessary: speed optimization */
- if (cl->cmode != HTB_CANT_SEND)
- htb_deactivate_prios(q, cl);
- cl->cmode = new_mode;
- if (new_mode != HTB_CANT_SEND)
- htb_activate_prios(q, cl);
- } else
- cl->cmode = new_mode;
-}
-
-/**
- * htb_activate - inserts leaf cl into appropriate active feeds
- *
- * The routine learns the (new) priority of the leaf and activates the feed
- * chain for that prio. It can safely be called on an already active leaf.
- * It also adds the leaf to the drop list.
- */
-static inline void htb_activate(struct htb_sched *q, struct htb_class *cl)
-{
- WARN_ON(cl->level || !cl->un.leaf.q || !cl->un.leaf.q->q.qlen);
-
- if (!cl->prio_activity) {
- cl->prio_activity = 1 << cl->prio;
- htb_activate_prios(q, cl);
- list_add_tail(&cl->un.leaf.drop_list,
- q->drops + cl->prio);
- }
-}
-
-/**
- * htb_deactivate - remove leaf cl from active feeds
- *
- * Make sure that the leaf is active. In other words, it can't be called
- * with a non-active leaf. It also removes the class from the drop list.
- */
-static inline void htb_deactivate(struct htb_sched *q, struct htb_class *cl)
-{
- WARN_ON(!cl->prio_activity);
-
- htb_deactivate_prios(q, cl);
- cl->prio_activity = 0;
- list_del_init(&cl->un.leaf.drop_list);
-}
-
-static int htb_enqueue(struct sk_buff *skb, struct Qdisc *sch)
-{
- int uninitialized_var(ret);
- struct htb_sched *q = qdisc_priv(sch);
- struct htb_class *cl = htb_classify(skb, sch, &ret);
-
-#if OFBUF
- if(cl != HTB_DIRECT && cl)
- skb_get(skb);
-#endif
-
- if (cl == HTB_DIRECT) {
- /* enqueue to helper queue */
- if (q->direct_queue.qlen < q->direct_qlen) {
- __skb_queue_tail(&q->direct_queue, skb);
- q->direct_pkts++;
- } else {
- kfree_skb(skb);
- sch->qstats.drops++;
- return NET_XMIT_DROP;
- }
-#ifdef CONFIG_NET_CLS_ACT
- } else if (!cl) {
- if (ret & __NET_XMIT_BYPASS)
- sch->qstats.drops++;
- kfree_skb(skb);
- return ret;
-#endif
- } else if ((ret = qdisc_enqueue(skb, cl->un.leaf.q)) != NET_XMIT_SUCCESS) {
- /* We shouldn't drop this, but enqueue it into ofbuf */
- // TODO: is skb actually valid?
- // Ans: looks like qdisc_enqueue will end up freeing the packet
- // if enqueue failed. So we should incr refcnt before calling qdisc_enqueue...
-#if OFBUF
- __skb_queue_tail(&q->ofbuf, skb);
- q->ofbuf_queued++;
-#else
- if (net_xmit_drop_count(ret)) {
- sch->qstats.drops++;
- cl->qstats.drops++;
- }
- return ret;
-#endif
- } else {
- bstats_update(&cl->bstats, skb);
- htb_activate(q, cl);
-#if OFBUF
- kfree_skb(skb);
-#endif
- }
-
- sch->q.qlen++;
- return NET_XMIT_SUCCESS;
-}
-
-static inline void htb_accnt_tokens(struct htb_class *cl, int bytes, long diff)
-{
- long toks = diff + cl->tokens;
-
- if (toks > cl->buffer)
- toks = cl->buffer;
- toks -= (long) qdisc_l2t(cl->rate, bytes);
- if (toks <= -cl->mbuffer)
- toks = 1 - cl->mbuffer;
-
- cl->tokens = toks;
-}
-
-static inline void htb_accnt_ctokens(struct htb_class *cl, int bytes, long diff)
-{
- long toks = diff + cl->ctokens;
-
- if (toks > cl->cbuffer)
- toks = cl->cbuffer;
- toks -= (long) qdisc_l2t(cl->ceil, bytes);
- if (toks <= -cl->mbuffer)
- toks = 1 - cl->mbuffer;
-
- cl->ctokens = toks;
-}
-
-/**
- * htb_charge_class - charges amount "bytes" to leaf and ancestors
- *
- * The routine assumes that a packet "bytes" long was dequeued from leaf cl
- * borrowing from "level". It accounts the bytes to the ceil leaky bucket for
- * the leaf and all ancestors, and to the rate bucket for ancestors at levels
- * "level" and higher. It also handles a possible change of mode resulting
- * from the update. Note that the mode can also increase here (MAY_BORROW to
- * CAN_SEND) because we can use a more precise clock than the event queue here.
- * In such a case we remove the class from the event queue first.
- */
-static void htb_charge_class(struct htb_sched *q, struct htb_class *cl,
- int level, struct sk_buff *skb)
-{
- int bytes = qdisc_pkt_len(skb);
- enum htb_cmode old_mode;
- long diff;
-
- while (cl) {
- diff = psched_tdiff_bounded(q->now, cl->t_c, cl->mbuffer);
- if (cl->level >= level) {
- if (cl->level == level)
- cl->xstats.lends++;
- htb_accnt_tokens(cl, bytes, diff);
- } else {
- cl->xstats.borrows++;
- cl->tokens += diff; /* we moved t_c; update tokens */
- }
- htb_accnt_ctokens(cl, bytes, diff);
- cl->t_c = q->now;
-
- old_mode = cl->cmode;
- diff = 0;
- htb_change_class_mode(q, cl, &diff);
- if (old_mode != cl->cmode) {
- if (old_mode != HTB_CAN_SEND)
- htb_safe_rb_erase(&cl->pq_node, q->wait_pq + cl->level);
- if (cl->cmode != HTB_CAN_SEND)
- htb_add_to_wait_tree(q, cl, diff);
- }
-
- /* update basic stats except for leaves which are already updated */
- if (cl->level)
- bstats_update(&cl->bstats, skb);
-
- cl = cl->parent;
- }
-}
-
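As a rough illustration of the charging rule above (ceil tokens on every class, rate tokens only at or above the level lent from), here is a toy Python walk up a three-level hierarchy. The class names, rates and byte count are invented for the example, and the cost model is simplified to bytes divided by rate rather than the kernel's rate-table lookup.

    def charge(chain, level, nbytes):
        # chain runs from the leaf (level 0) up to the root; the leaf borrowed
        # at `level`, so only classes at or above it pay rate tokens, while
        # every class pays against its ceiling.
        for cl in chain:
            if cl['level'] >= level:
                cl['tokens'] -= nbytes / float(cl['rate'])
            cl['ctokens'] -= nbytes / float(cl['ceil'])

    chain = [
        {'name': 'leaf',  'level': 0, 'rate': 1e6, 'ceil': 5e6, 'tokens': 0.0, 'ctokens': 0.0},
        {'name': 'inner', 'level': 1, 'rate': 5e6, 'ceil': 5e6, 'tokens': 0.0, 'ctokens': 0.0},
        {'name': 'root',  'level': 2, 'rate': 5e6, 'ceil': 5e6, 'tokens': 0.0, 'ctokens': 0.0},
    ]
    charge(chain, level=1, nbytes=1500)    # a 1500-byte packet borrowed from level 1
    for cl in chain:
        print('%-5s tokens=%g ctokens=%g' % (cl['name'], cl['tokens'], cl['ctokens']))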
-/**
- * htb_do_events - make mode changes to classes at the level
- *
- * Scans the event queue for pending events and applies them. Returns the time
- * of the next pending event (0 for no event in pq, q->now for too many events).
- * Note: Only events whose cl->pq_key <= q->now are applied.
- */
-static psched_time_t htb_do_events(struct htb_sched *q, int level,
- unsigned long start)
-{
- /* don't run for longer than 2 jiffies; 2 is used instead of
- * 1 to simplify things when jiffy is going to be incremented
- * too soon
- */
- unsigned long stop_at = start + 2;
- while (time_before(jiffies, stop_at)) {
- struct htb_class *cl;
- long diff;
- struct rb_node *p = rb_first(&q->wait_pq[level]);
-
- if (!p)
- return 0;
-
- cl = rb_entry(p, struct htb_class, pq_node);
- if (cl->pq_key > q->now)
- return cl->pq_key;
-
- htb_safe_rb_erase(p, q->wait_pq + level);
- diff = psched_tdiff_bounded(q->now, cl->t_c, cl->mbuffer);
- htb_change_class_mode(q, cl, &diff);
- if (cl->cmode != HTB_CAN_SEND)
- htb_add_to_wait_tree(q, cl, diff);
- }
-
- /* too much load - let's continue after a break for scheduling */
- if (!(q->warned & HTB_WARN_TOOMANYEVENTS)) {
- pr_warning("htb: too many events!\n");
- q->warned |= HTB_WARN_TOOMANYEVENTS;
- }
-
- return q->now;
-}
-
-/* Returns class->node+prio from the id-tree where the class's id is >= id.
- * NULL if no such class exists.
- */
-static struct rb_node *htb_id_find_next_upper(int prio, struct rb_node *n,
- u32 id)
-{
- struct rb_node *r = NULL;
- while (n) {
- struct htb_class *cl =
- rb_entry(n, struct htb_class, node[prio]);
-
- if (id > cl->common.classid) {
- n = n->rb_right;
- } else if (id < cl->common.classid) {
- r = n;
- n = n->rb_left;
- } else {
- return n;
- }
- }
- return r;
-}
-
-/**
- * htb_lookup_leaf - returns next leaf class in DRR order
- *
- * Find the leaf that the current feed pointer points to.
- */
-static struct htb_class *htb_lookup_leaf(struct rb_root *tree, int prio,
- struct rb_node **pptr, u32 * pid)
-{
- int i;
- struct {
- struct rb_node *root;
- struct rb_node **pptr;
- u32 *pid;
- } stk[TC_HTB_MAXDEPTH], *sp = stk;
-
- BUG_ON(!tree->rb_node);
- sp->root = tree->rb_node;
- sp->pptr = pptr;
- sp->pid = pid;
-
- for (i = 0; i < 65535; i++) {
- if (!*sp->pptr && *sp->pid) {
- /* ptr was invalidated but id is valid - try to recover
- * the original or next ptr
- */
- *sp->pptr =
- htb_id_find_next_upper(prio, sp->root, *sp->pid);
- }
- *sp->pid = 0; /* ptr is valid now, so remove this hint as it
- * can become out of date quickly
- */
- if (!*sp->pptr) { /* we are at right end; rewind & go up */
- *sp->pptr = sp->root;
- while ((*sp->pptr)->rb_left)
- *sp->pptr = (*sp->pptr)->rb_left;
- if (sp > stk) {
- sp--;
- if (!*sp->pptr) {
- WARN_ON(1);
- return NULL;
- }
- htb_next_rb_node(sp->pptr);
- }
- } else {
- struct htb_class *cl;
- cl = rb_entry(*sp->pptr, struct htb_class, node[prio]);
- if (!cl->level)
- return cl;
- (++sp)->root = cl->un.inner.feed[prio].rb_node;
- sp->pptr = cl->un.inner.ptr + prio;
- sp->pid = cl->un.inner.last_ptr_id + prio;
- }
- }
- WARN_ON(1);
- return NULL;
-}
-
-/* dequeues a packet at the given priority and level; call only if
- * you are sure that there is an active class at prio/level
- */
-static struct sk_buff *htb_dequeue_tree(struct htb_sched *q, int prio,
- int level)
-{
- struct sk_buff *skb = NULL;
- struct htb_class *cl, *start;
- /* look initial class up in the row */
- start = cl = htb_lookup_leaf(q->row[level] + prio, prio,
- q->ptr[level] + prio,
- q->last_ptr_id[level] + prio);
-
- do {
-next:
- if (unlikely(!cl))
- return NULL;
-
- /* the class can be empty - it is unlikely but can be true if the
- * leaf qdisc drops packets in its enqueue routine or if someone
- * used a graft operation on the leaf since the last dequeue;
- * simply deactivate and skip such a class
- */
- if (unlikely(cl->un.leaf.q->q.qlen == 0)) {
- struct htb_class *next;
- htb_deactivate(q, cl);
-
- /* row/level might become empty */
- if ((q->row_mask[level] & (1 << prio)) == 0)
- return NULL;
-
- next = htb_lookup_leaf(q->row[level] + prio,
- prio, q->ptr[level] + prio,
- q->last_ptr_id[level] + prio);
-
- if (cl == start) /* fix start if we just deleted it */
- start = next;
- cl = next;
- goto next;
- }
-
- skb = cl->un.leaf.q->dequeue(cl->un.leaf.q);
- if (likely(skb != NULL))
- break;
-
- qdisc_warn_nonwc("htb", cl->un.leaf.q);
- htb_next_rb_node((level ? cl->parent->un.inner.ptr : q->
- ptr[0]) + prio);
- cl = htb_lookup_leaf(q->row[level] + prio, prio,
- q->ptr[level] + prio,
- q->last_ptr_id[level] + prio);
-
- } while (cl != start);
-
- if (likely(skb != NULL)) {
- cl->un.leaf.deficit[level] -= qdisc_pkt_len(skb);
- if (cl->un.leaf.deficit[level] < 0) {
- cl->un.leaf.deficit[level] += cl->quantum;
- htb_next_rb_node((level ? cl->parent->un.inner.ptr : q->
- ptr[0]) + prio);
- }
- /* this used to be after charge_class but this constellation
- * gives us slightly better performance
- */
- if (!cl->un.leaf.q->q.qlen)
- htb_deactivate(q, cl);
- htb_charge_class(q, cl, level, skb);
- }
- return skb;
-}
-
-static struct sk_buff *htb_dequeue(struct Qdisc *sch)
-{
- struct sk_buff *skb;
- struct htb_sched *q = qdisc_priv(sch);
- int level;
- psched_time_t next_event;
- unsigned long start_at;
- u32 r, i;
- struct sk_buff *pkt;
-
- /* try to dequeue direct packets as high prio (!) to minimize cpu work */
- skb = __skb_dequeue(&q->direct_queue);
- if (skb != NULL) {
-ok:
- qdisc_bstats_update(sch, skb);
- qdisc_unthrottled(sch);
- sch->q.qlen--;
-#if OFBUF
- if(q->ofbuf_queued > 0) {
- i = 0;
- r = net_random() % q->ofbuf_queued;
- // enqueue the rth packet and drop the rest
- while((pkt = __skb_dequeue(&q->ofbuf)) != NULL) {
- if(i == r) {
- // the chosen one
- htb_enqueue(pkt, sch);
- } else {
- kfree_skb(pkt);
- }
- i++;
- }
- q->ofbuf_queued = 0;
- }
-#endif
- return skb;
- }
-
- if (!sch->q.qlen)
- goto fin;
- q->now = psched_get_time();
- start_at = jiffies;
-
- next_event = q->now + 5 * PSCHED_TICKS_PER_SEC;
-
- for (level = 0; level < TC_HTB_MAXDEPTH; level++) {
- /* common case optimization - skip event handler quickly */
- int m;
- psched_time_t event;
-
- if (q->now >= q->near_ev_cache[level]) {
- event = htb_do_events(q, level, start_at);
- if (!event)
- event = q->now + PSCHED_TICKS_PER_SEC;
- q->near_ev_cache[level] = event;
- } else
- event = q->near_ev_cache[level];
-
- if (next_event > event)
- next_event = event;
-
- m = ~q->row_mask[level];
- while (m != (int)(-1)) {
- int prio = ffz(m);
-
- m |= 1 << prio;
- skb = htb_dequeue_tree(q, prio, level);
- if (likely(skb != NULL))
- goto ok;
- }
- }
- sch->qstats.overlimits++;
- if (likely(next_event > q->now))
- qdisc_watchdog_schedule(&q->watchdog, next_event);
- else
- schedule_work(&q->work);
-fin:
- return skb;
-}
-
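The OFBUF additions threaded through htb_enqueue and htb_dequeue above are the Mininet-specific part of this file: packets that a full leaf queue would normally drop are parked in an overflow buffer, and each successful dequeue re-offers one randomly chosen parked packet and discards the rest. The toy Python model below, with made-up capacities and packet names, only summarises that policy; it is not the kernel logic itself.

    import random
    from collections import deque

    class OfbufModel(object):
        "Leaf queue of fixed capacity plus an overflow buffer (ofbuf)."
        def __init__(self, leaf_capacity):
            self.leaf = deque()
            self.ofbuf = []
            self.capacity = leaf_capacity

        def enqueue(self, pkt):
            if len(self.leaf) < self.capacity:
                self.leaf.append(pkt)
            else:
                self.ofbuf.append(pkt)      # park instead of dropping

        def dequeue(self):
            pkt = self.leaf.popleft() if self.leaf else None
            if pkt is not None and self.ofbuf:
                survivor = random.choice(self.ofbuf)
                self.ofbuf = []             # the rest are discarded
                self.enqueue(survivor)      # re-offer the chosen packet
            return pkt

    q = OfbufModel(leaf_capacity=2)
    for i in range(5):
        q.enqueue('pkt%d' % i)              # pkt2..pkt4 overflow into ofbuf
    print('%s, parked afterwards: %d' % (q.dequeue(), len(q.ofbuf)))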
-/* try to drop from each class (by prio) until one succeeds */
-static unsigned int htb_drop(struct Qdisc *sch)
-{
- struct htb_sched *q = qdisc_priv(sch);
- int prio;
-
- for (prio = TC_HTB_NUMPRIO - 1; prio >= 0; prio--) {
- struct list_head *p;
- list_for_each(p, q->drops + prio) {
- struct htb_class *cl = list_entry(p, struct htb_class,
- un.leaf.drop_list);
- unsigned int len;
- if (cl->un.leaf.q->ops->drop &&
- (len = cl->un.leaf.q->ops->drop(cl->un.leaf.q))) {
- sch->q.qlen--;
- if (!cl->un.leaf.q->q.qlen)
- htb_deactivate(q, cl);
- return len;
- }
- }
- }
- return 0;
-}
-
-/* reset all classes */
-/* always called under BH & queue lock */
-static void htb_reset(struct Qdisc *sch)
-{
- struct htb_sched *q = qdisc_priv(sch);
- struct htb_class *cl;
- struct hlist_node *n;
- unsigned int i;
-
- for (i = 0; i < q->clhash.hashsize; i++) {
- hlist_for_each_entry(cl, n, &q->clhash.hash[i], common.hnode) {
- if (cl->level)
- memset(&cl->un.inner, 0, sizeof(cl->un.inner));
- else {
- if (cl->un.leaf.q)
- qdisc_reset(cl->un.leaf.q);
- INIT_LIST_HEAD(&cl->un.leaf.drop_list);
- }
- cl->prio_activity = 0;
- cl->cmode = HTB_CAN_SEND;
-
- }
- }
- qdisc_watchdog_cancel(&q->watchdog);
- __skb_queue_purge(&q->direct_queue);
- sch->q.qlen = 0;
-#if OFBUF
- __skb_queue_purge(&q->ofbuf);
- q->ofbuf_queued = 0;
-#endif
- memset(q->row, 0, sizeof(q->row));
- memset(q->row_mask, 0, sizeof(q->row_mask));
- memset(q->wait_pq, 0, sizeof(q->wait_pq));
- memset(q->ptr, 0, sizeof(q->ptr));
- for (i = 0; i < TC_HTB_NUMPRIO; i++)
- INIT_LIST_HEAD(q->drops + i);
-}
-
-static const struct nla_policy htb_policy[TCA_HTB_MAX + 1] = {
- [TCA_HTB_PARMS] = { .len = sizeof(struct tc_htb_opt) },
- [TCA_HTB_INIT] = { .len = sizeof(struct tc_htb_glob) },
- [TCA_HTB_CTAB] = { .type = NLA_BINARY, .len = TC_RTAB_SIZE },
- [TCA_HTB_RTAB] = { .type = NLA_BINARY, .len = TC_RTAB_SIZE },
-};
-
-static void htb_work_func(struct work_struct *work)
-{
- struct htb_sched *q = container_of(work, struct htb_sched, work);
- struct Qdisc *sch = q->watchdog.qdisc;
-
- __netif_schedule(qdisc_root(sch));
-}
-
-static int htb_init(struct Qdisc *sch, struct nlattr *opt)
-{
- struct htb_sched *q = qdisc_priv(sch);
- struct nlattr *tb[TCA_HTB_INIT + 1];
- struct tc_htb_glob *gopt;
- int err;
- int i;
-
- if (!opt)
- return -EINVAL;
-
- err = nla_parse_nested(tb, TCA_HTB_INIT, opt, htb_policy);
- if (err < 0)
- return err;
-
- if (tb[TCA_HTB_INIT] == NULL) {
- pr_err("HTB: hey probably you have bad tc tool ?\n");
- return -EINVAL;
- }
- gopt = nla_data(tb[TCA_HTB_INIT]);
- if (gopt->version != HTB_VER >> 16) {
- pr_err("HTB: need tc/htb version %d (minor is %d), you have %d\n",
- HTB_VER >> 16, HTB_VER & 0xffff, gopt->version);
- return -EINVAL;
- }
-
- err = qdisc_class_hash_init(&q->clhash);
- if (err < 0)
- return err;
- for (i = 0; i < TC_HTB_NUMPRIO; i++)
- INIT_LIST_HEAD(q->drops + i);
-
- qdisc_watchdog_init(&q->watchdog, sch);
- INIT_WORK(&q->work, htb_work_func);
- skb_queue_head_init(&q->direct_queue);
-
-#if OFBUF
- skb_queue_head_init(&q->ofbuf);
- q->ofbuf_queued = 0;
-#endif
-
- q->direct_qlen = qdisc_dev(sch)->tx_queue_len;
-
- if (q->direct_qlen < 2) /* some devices have zero tx_queue_len */
- q->direct_qlen = 2;
-
- if ((q->rate2quantum = gopt->rate2quantum) < 1)
- q->rate2quantum = 1;
- q->defcls = gopt->defcls;
-
- return 0;
-}
-
-static int htb_dump(struct Qdisc *sch, struct sk_buff *skb)
-{
- spinlock_t *root_lock = qdisc_root_sleeping_lock(sch);
- struct htb_sched *q = qdisc_priv(sch);
- struct nlattr *nest;
- struct tc_htb_glob gopt;
-
- spin_lock_bh(root_lock);
-
- gopt.direct_pkts = q->direct_pkts;
- gopt.version = HTB_VER;
- gopt.rate2quantum = q->rate2quantum;
- gopt.defcls = q->defcls;
- gopt.debug = 0;
-
- nest = nla_nest_start(skb, TCA_OPTIONS);
- if (nest == NULL)
- goto nla_put_failure;
- NLA_PUT(skb, TCA_HTB_INIT, sizeof(gopt), &gopt);
- nla_nest_end(skb, nest);
-
- spin_unlock_bh(root_lock);
- return skb->len;
-
-nla_put_failure:
- spin_unlock_bh(root_lock);
- nla_nest_cancel(skb, nest);
- return -1;
-}
-
-static int htb_dump_class(struct Qdisc *sch, unsigned long arg,
- struct sk_buff *skb, struct tcmsg *tcm)
-{
- struct htb_class *cl = (struct htb_class *)arg;
- spinlock_t *root_lock = qdisc_root_sleeping_lock(sch);
- struct nlattr *nest;
- struct tc_htb_opt opt;
-
- spin_lock_bh(root_lock);
- tcm->tcm_parent = cl->parent ? cl->parent->common.classid : TC_H_ROOT;
- tcm->tcm_handle = cl->common.classid;
- if (!cl->level && cl->un.leaf.q)
- tcm->tcm_info = cl->un.leaf.q->handle;
-
- nest = nla_nest_start(skb, TCA_OPTIONS);
- if (nest == NULL)
- goto nla_put_failure;
-
- memset(&opt, 0, sizeof(opt));
-
- opt.rate = cl->rate->rate;
- opt.buffer = cl->buffer;
- opt.ceil = cl->ceil->rate;
- opt.cbuffer = cl->cbuffer;
- opt.quantum = cl->quantum;
- opt.prio = cl->prio;
- opt.level = cl->level;
- NLA_PUT(skb, TCA_HTB_PARMS, sizeof(opt), &opt);
-
- nla_nest_end(skb, nest);
- spin_unlock_bh(root_lock);
- return skb->len;
-
-nla_put_failure:
- spin_unlock_bh(root_lock);
- nla_nest_cancel(skb, nest);
- return -1;
-}
-
-static int
-htb_dump_class_stats(struct Qdisc *sch, unsigned long arg, struct gnet_dump *d)
-{
- struct htb_class *cl = (struct htb_class *)arg;
-
- if (!cl->level && cl->un.leaf.q)
- cl->qstats.qlen = cl->un.leaf.q->q.qlen;
- cl->xstats.tokens = cl->tokens;
- cl->xstats.ctokens = cl->ctokens;
-
- if (gnet_stats_copy_basic(d, &cl->bstats) < 0 ||
- gnet_stats_copy_rate_est(d, NULL, &cl->rate_est) < 0 ||
- gnet_stats_copy_queue(d, &cl->qstats) < 0)
- return -1;
-
- return gnet_stats_copy_app(d, &cl->xstats, sizeof(cl->xstats));
-}
-
-static int htb_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
- struct Qdisc **old)
-{
- struct htb_class *cl = (struct htb_class *)arg;
-
- if (cl->level)
- return -EINVAL;
- if (new == NULL &&
- (new = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
- cl->common.classid)) == NULL)
- return -ENOBUFS;
-
- sch_tree_lock(sch);
- *old = cl->un.leaf.q;
- cl->un.leaf.q = new;
- if (*old != NULL) {
- qdisc_tree_decrease_qlen(*old, (*old)->q.qlen);
- qdisc_reset(*old);
- }
- sch_tree_unlock(sch);
- return 0;
-}
-
-static struct Qdisc *htb_leaf(struct Qdisc *sch, unsigned long arg)
-{
- struct htb_class *cl = (struct htb_class *)arg;
- return !cl->level ? cl->un.leaf.q : NULL;
-}
-
-static void htb_qlen_notify(struct Qdisc *sch, unsigned long arg)
-{
- struct htb_class *cl = (struct htb_class *)arg;
-
- if (cl->un.leaf.q->q.qlen == 0)
- htb_deactivate(qdisc_priv(sch), cl);
-}
-
-static unsigned long htb_get(struct Qdisc *sch, u32 classid)
-{
- struct htb_class *cl = htb_find(classid, sch);
- if (cl)
- cl->refcnt++;
- return (unsigned long)cl;
-}
-
-static inline int htb_parent_last_child(struct htb_class *cl)
-{
- if (!cl->parent)
- /* the root class */
- return 0;
- if (cl->parent->children > 1)
- /* not the last child */
- return 0;
- return 1;
-}
-
-static void htb_parent_to_leaf(struct htb_sched *q, struct htb_class *cl,
- struct Qdisc *new_q)
-{
- struct htb_class *parent = cl->parent;
-
- WARN_ON(cl->level || !cl->un.leaf.q || cl->prio_activity);
-
- if (parent->cmode != HTB_CAN_SEND)
- htb_safe_rb_erase(&parent->pq_node, q->wait_pq + parent->level);
-
- parent->level = 0;
- memset(&parent->un.inner, 0, sizeof(parent->un.inner));
- INIT_LIST_HEAD(&parent->un.leaf.drop_list);
- parent->un.leaf.q = new_q ? new_q : &noop_qdisc;
- parent->tokens = parent->buffer;
- parent->ctokens = parent->cbuffer;
- parent->t_c = psched_get_time();
- parent->cmode = HTB_CAN_SEND;
-}
-
-static void htb_destroy_class(struct Qdisc *sch, struct htb_class *cl)
-{
- if (!cl->level) {
- WARN_ON(!cl->un.leaf.q);
- qdisc_destroy(cl->un.leaf.q);
- }
- gen_kill_estimator(&cl->bstats, &cl->rate_est);
- qdisc_put_rtab(cl->rate);
- qdisc_put_rtab(cl->ceil);
-
- tcf_destroy_chain(&cl->filter_list);
- kfree(cl);
-}
-
-static void htb_destroy(struct Qdisc *sch)
-{
- struct htb_sched *q = qdisc_priv(sch);
- struct hlist_node *n, *next;
- struct htb_class *cl;
- unsigned int i;
-
- cancel_work_sync(&q->work);
- qdisc_watchdog_cancel(&q->watchdog);
- /* This line used to be after the htb_destroy_class call below,
- * and surprisingly it worked in 2.4. But it must precede it
- * because a filter needs its target class alive to be able to call
- * unbind_filter on it (without an Oops).
- */
- tcf_destroy_chain(&q->filter_list);
-
- for (i = 0; i < q->clhash.hashsize; i++) {
- hlist_for_each_entry(cl, n, &q->clhash.hash[i], common.hnode)
- tcf_destroy_chain(&cl->filter_list);
- }
- for (i = 0; i < q->clhash.hashsize; i++) {
- hlist_for_each_entry_safe(cl, n, next, &q->clhash.hash[i],
- common.hnode)
- htb_destroy_class(sch, cl);
- }
- qdisc_class_hash_destroy(&q->clhash);
- __skb_queue_purge(&q->direct_queue);
-#if OFBUF
- __skb_queue_purge(&q->ofbuf);
- q->ofbuf_queued = 0;
-#endif
-}
-
-static int htb_delete(struct Qdisc *sch, unsigned long arg)
-{
- struct htb_sched *q = qdisc_priv(sch);
- struct htb_class *cl = (struct htb_class *)arg;
- unsigned int qlen;
- struct Qdisc *new_q = NULL;
- int last_child = 0;
-
- // TODO: why don't we allow deleting a subtree? references? does the
- // tc subsystem guarantee us that in htb_destroy it holds no class
- // refs so that we can remove children safely there?
- if (cl->children || cl->filter_cnt)
- return -EBUSY;
-
- if (!cl->level && htb_parent_last_child(cl)) {
- new_q = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
- cl->parent->common.classid);
- last_child = 1;
- }
-
- sch_tree_lock(sch);
-
- if (!cl->level) {
- qlen = cl->un.leaf.q->q.qlen;
- qdisc_reset(cl->un.leaf.q);
- qdisc_tree_decrease_qlen(cl->un.leaf.q, qlen);
- }
-
- /* delete from hash and active; remainder in destroy_class */
- qdisc_class_hash_remove(&q->clhash, &cl->common);
- if (cl->parent)
- cl->parent->children--;
-
- if (cl->prio_activity)
- htb_deactivate(q, cl);
-
- if (cl->cmode != HTB_CAN_SEND)
- htb_safe_rb_erase(&cl->pq_node, q->wait_pq + cl->level);
-
- if (last_child)
- htb_parent_to_leaf(q, cl, new_q);
-
- BUG_ON(--cl->refcnt == 0);
- /*
- * This shouldn't happen: we "hold" one cops->get() when called
- * from tc_ctl_tclass; the destroy method is done from cops->put().
- */
-
- sch_tree_unlock(sch);
- return 0;
-}
-
-static void htb_put(struct Qdisc *sch, unsigned long arg)
-{
- struct htb_class *cl = (struct htb_class *)arg;
-
- if (--cl->refcnt == 0)
- htb_destroy_class(sch, cl);
-}
-
-static int htb_change_class(struct Qdisc *sch, u32 classid,
- u32 parentid, struct nlattr **tca,
- unsigned long *arg)
-{
- int err = -EINVAL;
- struct htb_sched *q = qdisc_priv(sch);
- struct htb_class *cl = (struct htb_class *)*arg, *parent;
- struct nlattr *opt = tca[TCA_OPTIONS];
- struct qdisc_rate_table *rtab = NULL, *ctab = NULL;
- struct nlattr *tb[__TCA_HTB_MAX];
- struct tc_htb_opt *hopt;
-
- /* extract all subattrs from opt attr */
- if (!opt)
- goto failure;
-
- err = nla_parse_nested(tb, TCA_HTB_MAX, opt, htb_policy);
- if (err < 0)
- goto failure;
-
- err = -EINVAL;
- if (tb[TCA_HTB_PARMS] == NULL)
- goto failure;
-
- parent = parentid == TC_H_ROOT ? NULL : htb_find(parentid, sch);
-
- hopt = nla_data(tb[TCA_HTB_PARMS]);
-
- rtab = qdisc_get_rtab(&hopt->rate, tb[TCA_HTB_RTAB]);
- ctab = qdisc_get_rtab(&hopt->ceil, tb[TCA_HTB_CTAB]);
- if (!rtab || !ctab)
- goto failure;
-
- if (!cl) { /* new class */
- struct Qdisc *new_q;
- int prio;
- struct {
- struct nlattr nla;
- struct gnet_estimator opt;
- } est = {
- .nla = {
- .nla_len = nla_attr_size(sizeof(est.opt)),
- .nla_type = TCA_RATE,
- },
- .opt = {
- /* 4s interval, 16s averaging constant */
- .interval = 2,
- .ewma_log = 2,
- },
- };
-
- /* check for valid classid */
- if (!classid || TC_H_MAJ(classid ^ sch->handle) ||
- htb_find(classid, sch))
- goto failure;
-
- /* check maximal depth */
- if (parent && parent->parent && parent->parent->level < 2) {
- pr_err("htb: tree is too deep\n");
- goto failure;
- }
- err = -ENOBUFS;
- cl = kzalloc(sizeof(*cl), GFP_KERNEL);
- if (!cl)
- goto failure;
-
- err = gen_new_estimator(&cl->bstats, &cl->rate_est,
- qdisc_root_sleeping_lock(sch),
- tca[TCA_RATE] ? : &est.nla);
- if (err) {
- kfree(cl);
- goto failure;
- }
-
- cl->refcnt = 1;
- cl->children = 0;
- INIT_LIST_HEAD(&cl->un.leaf.drop_list);
- RB_CLEAR_NODE(&cl->pq_node);
-
- for (prio = 0; prio < TC_HTB_NUMPRIO; prio++)
- RB_CLEAR_NODE(&cl->node[prio]);
-
- /* create leaf qdisc early because it uses kmalloc(GFP_KERNEL)
- * so it can't be used inside of sch_tree_lock
- * -- thanks to Karlis Peisenieks
- */
- new_q = qdisc_create_dflt(sch->dev_queue,
- &pfifo_qdisc_ops, classid);
- sch_tree_lock(sch);
- if (parent && !parent->level) {
- unsigned int qlen = parent->un.leaf.q->q.qlen;
-
- /* turn parent into inner node */
- qdisc_reset(parent->un.leaf.q);
- qdisc_tree_decrease_qlen(parent->un.leaf.q, qlen);
- qdisc_destroy(parent->un.leaf.q);
- if (parent->prio_activity)
- htb_deactivate(q, parent);
-
- /* remove from evt list because of level change */
- if (parent->cmode != HTB_CAN_SEND) {
- htb_safe_rb_erase(&parent->pq_node, q->wait_pq);
- parent->cmode = HTB_CAN_SEND;
- }
- parent->level = (parent->parent ? parent->parent->level
- : TC_HTB_MAXDEPTH) - 1;
- memset(&parent->un.inner, 0, sizeof(parent->un.inner));
- }
- /* leaf (we) needs elementary qdisc */
- cl->un.leaf.q = new_q ? new_q : &noop_qdisc;
-
- cl->common.classid = classid;
- cl->parent = parent;
-
- /* set class to be in HTB_CAN_SEND state */
- cl->tokens = hopt->buffer;
- cl->ctokens = hopt->cbuffer;
- cl->mbuffer = 60 * PSCHED_TICKS_PER_SEC; /* 1min */
- cl->t_c = psched_get_time();
- cl->cmode = HTB_CAN_SEND;
-
- /* attach to the hash list and parent's family */
- qdisc_class_hash_insert(&q->clhash, &cl->common);
- if (parent)
- parent->children++;
- } else {
- if (tca[TCA_RATE]) {
- err = gen_replace_estimator(&cl->bstats, &cl->rate_est,
- qdisc_root_sleeping_lock(sch),
- tca[TCA_RATE]);
- if (err)
- return err;
- }
- sch_tree_lock(sch);
- }
-
- /* There used to be a nasty bug here: we have to check that the node
- * is really a leaf before changing cl->un.leaf!
- */
- if (!cl->level) {
- cl->quantum = rtab->rate.rate / q->rate2quantum;
- if (!hopt->quantum && cl->quantum < 1000) {
- pr_warning(
- "HTB: quantum of class %X is small. Consider r2q change.\n",
- cl->common.classid);
- cl->quantum = 1000;
- }
- if (!hopt->quantum && cl->quantum > 200000) {
- pr_warning(
- "HTB: quantum of class %X is big. Consider r2q change.\n",
- cl->common.classid);
- cl->quantum = 200000;
- }
- if (hopt->quantum)
- cl->quantum = hopt->quantum;
- if ((cl->prio = hopt->prio) >= TC_HTB_NUMPRIO)
- cl->prio = TC_HTB_NUMPRIO - 1;
- }
-
- cl->buffer = hopt->buffer;
- cl->cbuffer = hopt->cbuffer;
- if (cl->rate)
- qdisc_put_rtab(cl->rate);
- cl->rate = rtab;
- if (cl->ceil)
- qdisc_put_rtab(cl->ceil);
- cl->ceil = ctab;
- sch_tree_unlock(sch);
-
- qdisc_class_hash_grow(sch, &q->clhash);
-
- *arg = (unsigned long)cl;
- return 0;
-
-failure:
- if (rtab)
- qdisc_put_rtab(rtab);
- if (ctab)
- qdisc_put_rtab(ctab);
- return err;
-}
-
-static struct tcf_proto **htb_find_tcf(struct Qdisc *sch, unsigned long arg)
-{
- struct htb_sched *q = qdisc_priv(sch);
- struct htb_class *cl = (struct htb_class *)arg;
- struct tcf_proto **fl = cl ? &cl->filter_list : &q->filter_list;
-
- return fl;
-}
-
-static unsigned long htb_bind_filter(struct Qdisc *sch, unsigned long parent,
- u32 classid)
-{
- struct htb_class *cl = htb_find(classid, sch);
-
- /*if (cl && !cl->level) return 0;
- * The line above used to be there to prevent attaching filters to
- * leaves. But at least the tc_index filter uses this just to get the
- * class for other reasons, so we have to allow for it.
- * ----
- * 19.6.2002 As Werner explained, it is ok - bind_filter is just
- * another way to "lock" the class - unlike "get", this lock can
- * be broken by the class during destroy, IIUC.
- */
- if (cl)
- cl->filter_cnt++;
- return (unsigned long)cl;
-}
-
-static void htb_unbind_filter(struct Qdisc *sch, unsigned long arg)
-{
- struct htb_class *cl = (struct htb_class *)arg;
-
- if (cl)
- cl->filter_cnt--;
-}
-
-static void htb_walk(struct Qdisc *sch, struct qdisc_walker *arg)
-{
- struct htb_sched *q = qdisc_priv(sch);
- struct htb_class *cl;
- struct hlist_node *n;
- unsigned int i;
-
- if (arg->stop)
- return;
-
- for (i = 0; i < q->clhash.hashsize; i++) {
- hlist_for_each_entry(cl, n, &q->clhash.hash[i], common.hnode) {
- if (arg->count < arg->skip) {
- arg->count++;
- continue;
- }
- if (arg->fn(sch, (unsigned long)cl, arg) < 0) {
- arg->stop = 1;
- return;
- }
- arg->count++;
- }
- }
-}
-
-static const struct Qdisc_class_ops htb_class_ops = {
- .graft = htb_graft,
- .leaf = htb_leaf,
- .qlen_notify = htb_qlen_notify,
- .get = htb_get,
- .put = htb_put,
- .change = htb_change_class,
- .delete = htb_delete,
- .walk = htb_walk,
- .tcf_chain = htb_find_tcf,
- .bind_tcf = htb_bind_filter,
- .unbind_tcf = htb_unbind_filter,
- .dump = htb_dump_class,
- .dump_stats = htb_dump_class_stats,
-};
-
-static struct Qdisc_ops htb_qdisc_ops __read_mostly = {
- .cl_ops = &htb_class_ops,
- .id = "htb",
- .priv_size = sizeof(struct htb_sched),
- .enqueue = htb_enqueue,
- .dequeue = htb_dequeue,
- .peek = qdisc_peek_dequeued,
- .drop = htb_drop,
- .init = htb_init,
- .reset = htb_reset,
- .destroy = htb_destroy,
- .dump = htb_dump,
- .owner = THIS_MODULE,
-};
-
-static int __init htb_module_init(void)
-{
- return register_qdisc(&htb_qdisc_ops);
-}
-static void __exit htb_module_exit(void)
-{
- unregister_qdisc(&htb_qdisc_ops);
-}
-
-module_init(htb_module_init)
-module_exit(htb_module_exit)
-MODULE_LICENSE("GPL");
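The file removed above is the bundled, patched copy of the kernel's HTB scheduler (sch_htb.c with the OFBUF overflow buffer). With the move to an unmodified upstream Mininet, link shaping is left to the stock htb qdisc, configured from user space by Mininet's TCIntf/TCLink. The following is a minimal sketch of that usage; the topology, rates, delay and queue size are example values, not anything mandated by Mini-NDN.

    #!/usr/bin/python
    "Sketch: shape links with the standard HTB qdisc via Mininet's TCLink."

    from mininet.net import Mininet
    from mininet.topo import Topo
    from mininet.link import TCLink

    class SingleLink(Topo):
        "Two hosts behind one switch, with HTB-shaped links."
        def build(self):
            h1, h2 = self.addHost('h1'), self.addHost('h2')
            s1 = self.addSwitch('s1')
            # bw is in Mbit/s; the numbers here are illustrative only
            self.addLink(h1, s1, bw=10, delay='5ms', max_queue_size=1000, use_htb=True)
            self.addLink(s1, h2, bw=10, delay='5ms', max_queue_size=1000, use_htb=True)

    if __name__ == '__main__':
        net = Mininet(topo=SingleLink(), link=TCLink)
        net.start()
        net.pingAll()
        net.stop()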
diff --git a/util/sysctl_addon b/util/sysctl_addon
deleted file mode 100644
index e26eecc..0000000
--- a/util/sysctl_addon
+++ /dev/null
@@ -1,17 +0,0 @@
-# Mininet: Increase open file limit
-fs.file-max = 100000
-
-# Mininet: increase network buffer space
-net.core.wmem_max = 16777216
-net.core.rmem_max = 16777216
-net.ipv4.tcp_rmem = 10240 87380 16777216
-net.ipv4.tcp_rmem = 10240 87380 16777216
-net.core.netdev_max_backlog = 5000
-
-# Mininet: increase arp cache size
-net.ipv4.neigh.default.gc_thresh1 = 4096
-net.ipv4.neigh.default.gc_thresh2 = 8192
-net.ipv4.neigh.default.gc_thresh3 = 16384
-
-# Mininet: increase routing table size
-net.ipv4.route.max_size=32768
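The removed sysctl_addon raised file-descriptor, socket-buffer, ARP-cache and routing-table limits for large emulated topologies, and upstream Mininet's installer applies the same kind of tuning. If those settings are still wanted on a Mini-NDN host, a small sketch like the one below could re-apply them at run time. The values are copied from the file above, the script itself is hypothetical, it needs root, and whether the tuning is still required is an assumption.

    #!/usr/bin/python
    "Sketch: re-apply the kernel tuning from the removed util/sysctl_addon."

    import subprocess

    SETTINGS = [
        ('fs.file-max', '100000'),
        ('net.core.wmem_max', '16777216'),
        ('net.core.rmem_max', '16777216'),
        # the removed file listed tcp_rmem twice; tcp_wmem was presumably intended
        ('net.ipv4.tcp_rmem', '10240 87380 16777216'),
        ('net.core.netdev_max_backlog', '5000'),
        ('net.ipv4.neigh.default.gc_thresh1', '4096'),
        ('net.ipv4.neigh.default.gc_thresh2', '8192'),
        ('net.ipv4.neigh.default.gc_thresh3', '16384'),
        ('net.ipv4.route.max_size', '32768'),
    ]

    for key, value in SETTINGS:
        # same effect as `sysctl -w key="value"`; requires root privileges
        subprocess.call(['sysctl', '-w', '%s=%s' % (key, value)])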
diff --git a/util/unpep8 b/util/unpep8
deleted file mode 100644
index 931b217..0000000
--- a/util/unpep8
+++ /dev/null
@@ -1,198 +0,0 @@
-#!/usr/bin/python
-
-"""
-Translate from PEP8 Python style to Mininet (i.e. Arista-like)
-Python style
-
-usage: unpep8 < old.py > new.py
-
-- Reinstates CapWords for methods and instance variables
-- Gets rid of triple single quotes
-- Eliminates triple quotes on single lines
-- Inserts extra spaces to improve readability
-- Fixes Doxygen (or doxypy) ugliness
-
-Does the following translations:
-
-ClassName.method_name(foo = bar) -> ClassName.methodName( foo=bar )
-
-Triple-single-quotes -> triple-double-quotes
-
-@param foo description -> foo: description
-@return description -> returns: description
-@author me -> author: me
-@todo(me) -> TODO(me)
-
-Bugs/Limitations:
-
-- Hack to restore strings is ugly
-- Multiline strings get mangled
-- Comments are mangled (which is arguably the "right thing" to do, except
- that, for example, the left hand sides of the above would get translated!)
-- Doesn't eliminate unnecessary backslashes
-- Has no opinion on tab size
-- complicated indented docstrings get flattened
-- We don't (yet) have a filter to generate Doxygen/Doxypy
-- Currently leaves indents on blank comment lines
-- May lead to namespace collisions (e.g. some_thing and someThing)
-
-Bob Lantz, rlantz@cs.stanford.edu
-1/24/2010
-"""
-
-import re, sys
-
-def fixUnderscoreTriplet( match ):
- "Translate a matched triplet of the form a_b to aB."
- triplet = match.group()
- return triplet[ :-2 ] + triplet[ -1 ].capitalize()
-
-def reinstateCapWords( text ):
- underscoreTriplet = re.compile( r'[A-Za-z0-9]_[A-Za-z0-9]' )
- return underscoreTriplet.sub( fixUnderscoreTriplet, text )
-
-def replaceTripleApostrophes( text ):
- "Replace triple apostrophes with triple quotes."
- return text.replace( "'''", '"""')
-
-def simplifyTripleQuotes( text ):
- "Fix single-line doc strings."
- r = re.compile( r'"""([^\"\n]+)"""' )
- return r.sub( r'"\1"', text )
-
-def insertExtraSpaces( text ):
- "Insert extra spaces inside of parentheses and brackets/curly braces."
- lparen = re.compile( r'\((?![\s\)])' )
- text = lparen.sub( r'( ', text )
- rparen = re.compile( r'([^\s\(])(?=\))' )
- text = rparen.sub( r'\1 ', text)
- # brackets
- lbrack = re.compile( r'\[(?![\s\]])' )
- text = lbrack.sub( r'[ ', text )
- rbrack = re.compile( r'([^\s\[])(?=\])' )
- text = rbrack.sub( r'\1 ', text)
- # curly braces
- lcurly = re.compile( r'\{(?![\s\}])' )
- text = lcurly.sub( r'{ ', text )
- rcurly = re.compile( r'([^\s\{])(?=\})' )
- text = rcurly.sub( r'\1 ', text)
- return text
-
-def fixDoxygen( text ):
- """Translate @param foo to foo:, @return bar to returns: bar, and
- @author me to author: me"""
- param = re.compile( r'@param (\w+)' )
- text = param.sub( r'\1:', text )
- returns = re.compile( r'@return' )
- text = returns.sub( r'returns:', text )
- author = re.compile( r'@author' )
- text = author.sub( r'author:', text)
- # @todo -> TODO
- text = text.replace( '@todo', 'TODO' )
- return text
-
-def removeCommentFirstBlankLine( text ):
- "Remove annoying blank lines after first line in comments."
- line = re.compile( r'("""[^\n]*\n)\s*\n', re.MULTILINE )
- return line.sub( r'\1', text )
-
-def fixArgs( match, kwarg = re.compile( r'(\w+) = ' ) ):
- "Replace foo = bar with foo=bar."
- return kwarg.sub( r'\1=', match.group() )
-
-def fixKeywords( text ):
- "Change keyword arguments from foo = bar to foo=bar."
- args = re.compile( r'\(([^\)]+)\)', re.MULTILINE )
- return args.sub( fixArgs, text )
-
-# Unfortunately, Python doesn't natively support balanced or recursive
-# regular expressions. We could use PyParsing, but that opens another can
-# of worms. For now, we just have a cheap hack to restore strings,
-# so we don't end up accidentally mangling things like messages, search strings,
-# and regular expressions.
-
-def lineIter( text ):
- "Simple iterator over lines in text."
- for line in text.splitlines(): yield line
-
-def stringIter( strList ):
- "Yield strings in strList."
- for s in strList: yield s
-
-def restoreRegex( regex, old, new ):
- "Find regexes in old and restore them into new."
- oldStrs = regex.findall( old )
- # Sanity check - count should be the same!
- newStrs = regex.findall( new )
- assert len( oldStrs ) == len( newStrs )
- # Replace newStrs with oldStrs
- siter = stringIter( oldStrs )
- reps = lambda dummy: siter.next()
- return regex.sub( reps, new )
-
-# This is a cheap hack, and it may not work 100%, since
-# it doesn't handle multiline strings.
-# However, it should be mostly harmless...
-
-def restoreStrings( oldText, newText ):
- "Restore strings from oldText into newText, returning result."
- oldLines, newLines = lineIter( oldText ), lineIter( newText )
- quoteStrings = re.compile( r'("[^"]*")' )
- tickStrings = re.compile( r"('[^']*')" )
- result = ''
- # It would be nice if we could blast the whole file, but for
- # now it seems to work line-by-line
- for newLine in newLines:
- oldLine = oldLines.next()
- newLine = restoreRegex( quoteStrings, oldLine, newLine )
- newLine = restoreRegex( tickStrings, oldLine, newLine )
- result += newLine + '\n'
- return result
-
-# This might be slightly controversial, since it uses
-# three spaces to line up multiline comments. However,
-# I much prefer it. Limitations: if you have deeper
-# indents in comments, they will be eliminated. ;-(
-
-def fixComment( match,
- indentExp=re.compile( r'\n([ ]*)(?=[^/s])', re.MULTILINE ),
- trailingQuotes=re.compile( r'\s+"""' ) ):
- "Re-indent comment, and join trailing quotes."
- originalIndent = match.group( 1 )
- comment = match.group( 2 )
- indent = '\n' + originalIndent
- # Exception: leave unindented things unindented!
- if len( originalIndent ) is not 0: indent += ' '
- comment = indentExp.sub( indent, comment )
- return originalIndent + trailingQuotes.sub( '"""', comment )
-
-def fixCommentIndents( text ):
- "Fix multiline comment indentation."
- comments = re.compile( r'^([ ]*)("""[^"]*""")$', re.MULTILINE )
- return comments.sub( fixComment, text )
-
-def removeBogusLinefeeds( text ):
- "Remove extra linefeeds at the end of single-line comments."
- bogusLfs = re.compile( r'"([^"\n]*)\n"', re.MULTILINE )
- return bogusLfs.sub( r'"\1"', text)
-
-def convertFromPep8( program ):
- oldProgram = program
- # Program text transforms
- program = reinstateCapWords( program )
- program = fixKeywords( program )
- program = insertExtraSpaces( program )
- # Undo string damage
- program = restoreStrings( oldProgram, program )
- # Docstring transforms
- program = replaceTripleApostrophes( program )
- program = simplifyTripleQuotes( program )
- program = fixDoxygen( program )
- program = fixCommentIndents( program )
- program = removeBogusLinefeeds( program )
- # Destructive transforms (these can delete lines)
- program = removeCommentFirstBlankLine( program )
- return program
-
-if __name__ == '__main__':
- print convertFromPep8( sys.stdin.read() )
\ No newline at end of file
diff --git a/util/versioncheck.py b/util/versioncheck.py
deleted file mode 100644
index d9e5483..0000000
--- a/util/versioncheck.py
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/python
-
-from subprocess import check_output as co
-from sys import exit
-
-# Actually run bin/mn rather than importing via python path
-version = 'Mininet ' + co( 'PYTHONPATH=. bin/mn --version', shell=True )
-version = version.strip()
-
-# Find all Mininet version references
-lines = co( "grep -or 'Mininet \w\.\w\.\w\w*' *", shell=True )
-
-error = False
-
-for line in lines.split( '\n' ):
- if line and 'Binary' not in line:
- fname, fversion = line.split( ':' )
- if version != fversion:
- print "%s: incorrect version '%s' (should be '%s')" % (
- fname, fversion, version )
- error = True
-
-if error:
- exit( 1 )
diff --git a/util/vm/.bash_profile b/util/vm/.bash_profile
deleted file mode 100644
index 9935934..0000000
--- a/util/vm/.bash_profile
+++ /dev/null
@@ -1,24 +0,0 @@
-SSH_ENV="$HOME/.ssh/environment"
-
-function start_agent {
- echo "Initialising new SSH agent..."
- /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
- echo succeeded
- chmod 600 "${SSH_ENV}"
- . "${SSH_ENV}" > /dev/null
- /usr/bin/ssh-add;
-}
-
-# Source SSH settings, if applicable
-
-if [ -f "${SSH_ENV}" ]; then
- . "${SSH_ENV}" > /dev/null
- #ps ${SSH_AGENT_PID} doesn't work under cygwin
- ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
- start_agent;
- }
-else
- start_agent;
-fi
-
-source ~/.bashrc
diff --git a/util/vm/install-mininet-vm.sh b/util/vm/install-mininet-vm.sh
deleted file mode 100644
index 382945e..0000000
--- a/util/vm/install-mininet-vm.sh
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-# This script is intended to install Mininet into
-# a brand-new Ubuntu virtual machine,
-# to create a fully usable "tutorial" VM.
-set -e
-echo `whoami` ALL=NOPASSWD: ALL | sudo tee -a /etc/sudoers
-sudo sed -i -e 's/Default/#Default/' /etc/sudoers
-sudo sed -i -e 's/ubuntu/mininet-vm/' /etc/hostname
-sudo sed -i -e 's/ubuntu/mininet-vm/g' /etc/hosts
-sudo hostname `cat /etc/hostname`
-sudo sed -i -e 's/quiet splash/text/' /etc/default/grub
-sudo update-grub
-sudo sed -i -e 's/us.archive.ubuntu.com/mirrors.kernel.org/' \
- /etc/apt/sources.list
-sudo apt-get update
-# Clean up vmware easy install junk if present
-if [ -e /etc/issue.backup ]; then
- sudo mv /etc/issue.backup /etc/issue
-fi
-if [ -e /etc/rc.local.backup ]; then
- sudo mv /etc/rc.local.backup /etc/rc.local
-fi
-# Install Mininet
-sudo apt-get -y install git-core openssh-server
-git clone git://github.com/mininet/mininet
-cd mininet
-cd
-time mininet/util/install.sh
-# Ignoring this since NOX classic is deprecated
-#if ! grep NOX_CORE_DIR .bashrc; then
-# echo "export NOX_CORE_DIR=~/noxcore/build/src/" >> .bashrc
-#fi
-echo <<EOF
-You may need to reboot and then:
-sudo dpkg-reconfigure openvswitch-datapath-dkms
-sudo service openvswitch-switch start
-EOF
-
-