Experimental results and performance evaluations
To evaluate the feasibility
and effectiveness of our approach, we have deployed the ubiQoS infrastructure
over a set of geographically distributed networks, with very different
bandwidths and interconnected via GARR, i.e., the Italian Academic and
Research Network. Each local network is modeled as one ubiQoS locality and includes
heterogeneous hosts (SUN Ultra5 400MHz workstations with Solaris 7, 128MB
Pentium III 700MHz PCs with Microsoft Windows NT, and 128MB Pentium III 700MHz PCs
with SuSE Linux 7.1). In this deployment scenario, we have measured several
performance figures, listed in the following, to estimate the overhead and the
reaction time of the ubiQoS middleware.
The initial phase of
path establishment and negotiation in
ubiQoS involves not only the client and the dynamically retrieved server, but
also some active intermediate nodes. The establishment of any active path segment
requires the interrogation of the local discovery service, the creation of an
RTP connection, the migration of one processor MA, the resource admission control/reservation,
the negotiation of the tailored QoS, and, when needed, the migration of one
ubiQoS proxy MA. Figure 3 shows an
almost linear dependence of the path setup time
on the number of intermediate active nodes.
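As an illustration only, the sketch below outlines the sequence of per-segment operations listed above; all class and method names (DiscoveryService, ResourceManager, MobileAgent, establishSegment, and so on) are assumptions for exposition and do not correspond to the actual ubiQoS API.

```java
// Illustrative sketch of the per-segment setup steps described above.
// All names (DiscoveryService, ResourceManager, MobileAgent, ...) are
// hypothetical and do NOT correspond to the actual ubiQoS API.
public final class PathSegmentSetup {

    interface DiscoveryService { String lookupNextActiveNode(String target); }
    interface ResourceManager  { boolean admitAndReserve(int kbps); }
    interface MobileAgent      { void migrateTo(String host); }

    private final DiscoveryService discovery;
    private final ResourceManager resources;

    PathSegmentSetup(DiscoveryService discovery, ResourceManager resources) {
        this.discovery = discovery;
        this.resources = resources;
    }

    /** Tries to establish one active path segment toward the given target host. */
    boolean establishSegment(String target, MobileAgent processor, MobileAgent proxy,
                             boolean proxyAlreadyInstalled, int requestedKbps) {
        // 1. Interrogate the local discovery service for the next active node.
        String nextHop = discovery.lookupNextActiveNode(target);
        if (nextHop == null) return false;

        // 2. Create the RTP connection toward the next active node
        //    (placeholder: a real implementation would use an RTP stack here).
        openRtpConnection(nextHop);

        // 3. Migrate one processor mobile agent (MA) to the next active node.
        processor.migrateTo(nextHop);

        // 4. Perform resource admission control and reservation.
        if (!resources.admitAndReserve(requestedKbps)) return false;

        // 5. Negotiate the tailored QoS level (omitted here for brevity).

        // 6. Migrate a ubiQoS proxy MA only when one is not already installed.
        if (!proxyAlreadyInstalled) proxy.migrateTo(nextHop);
        return true;
    }

    private void openRtpConnection(String host) {
        // Placeholder for RTP session setup toward 'host'.
    }
}
```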
Let us point out that not every service request requires the migration of ubiQoS
components. In fact, ubiQoS proxies are infrastructure components that are
dynamically installed where needed in response to a service request and that can
persist there to serve future requests for VoD flows. For this reason, the figure
also reports how the number of required proxy migrations affects the path setup
time: the delay is significantly reduced in the common case of a service request
that triggers the installation of ubiQoS proxies only on a small set of new
active nodes.
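A minimal sketch of this reuse policy, assuming a simple registry of hosts where proxies already persist (all names are hypothetical, not the ubiQoS implementation), could look as follows:

```java
// Hypothetical sketch of proxy reuse: only nodes without an already installed
// ubiQoS proxy require a new proxy migration. Names are assumptions, not the
// actual ubiQoS implementation.
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public final class ProxyDeploymentPlanner {

    // Hosts where a proxy was installed by previous requests and still persists.
    private final Set<String> installedProxies = new HashSet<>();

    /** Returns only the active nodes that still need a proxy migration. */
    List<String> nodesNeedingMigration(List<String> activePath) {
        return activePath.stream()
                         .filter(host -> !installedProxies.contains(host))
                         .collect(Collectors.toList());
    }

    /** Records that a proxy now persists on the given host for future requests. */
    void markInstalled(String host) {
        installedProxies.add(host);
    }
}
```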
Let us additionally observe that,
even in the case of large-scale networks involved in Internet-wide service distribution,
the number of active nodes along the server-to-clients path tends to be
very small because it does not coincide with the number of traversed
routers/gateways. In fact, ubiQoS proxies and processors need to operate only
where there are either strong bandwidth discontinuities or forking of the VoD
multicast tree. In all the usage scenarios we experimented with, involving some
dozens of geographically and randomly distributed clients, the dynamically
determined ubiQoS active paths never included more than 4 intermediate active
nodes; we needed to force the
infrastructure decisions by adding ad-hoc bottlenecks to have longer active
paths in order to measure the setup times, shown in Figure
3, for the cases with 5 and 6 active nodes. However, the strong dependence
between setup time and number of active nodes pointed out by the experimental
results motivated a slight modification of the ubiQoS path creation algorithm.
In the current version, the middleware does not terminate the search for alternative
paths by ubiQoS processors until a path with fewer than 6 active nodes is found.
This obviously does not affect path creation time, but it significantly reduces
the subsequent proxy negotiation time, which has proven to be the slowest
phase of the overall setup process.
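As a sketch of this modified policy (with PathExplorer, nextCandidatePath, and the 6-node threshold constant as illustrative assumptions rather than the actual ubiQoS code), the search over alternative paths might be organized as follows:

```java
// Sketch of the modified path selection policy: alternative candidate paths
// keep being explored until one with fewer than 6 active nodes is found.
// PathExplorer and nextCandidatePath are assumed names for exposition.
import java.util.List;

public final class PathSelection {

    static final int MAX_ACTIVE_NODES = 6;

    interface PathExplorer {
        /** Returns the next candidate active path, or null when none remain. */
        List<String> nextCandidatePath();
    }

    /** Keeps exploring alternatives until a short-enough active path is found. */
    static List<String> selectPath(PathExplorer explorer) {
        List<String> best = null;
        List<String> candidate;
        while ((candidate = explorer.nextCandidatePath()) != null) {
            // Remember the shortest candidate seen so far.
            if (best == null || candidate.size() < best.size()) {
                best = candidate;
            }
            // Stop as soon as a path with fewer than MAX_ACTIVE_NODES is found.
            if (candidate.size() < MAX_ACTIVE_NODES) {
                break;
            }
        }
        return best;  // may still reach the threshold if no shorter path exists
    }
}
```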
The path setup time usually does not exceed 5s, with an average number of active
nodes below 5 and an average number of required migrations below 3. This interval
is significantly larger than the time needed to establish a single RTP connection
between one client and one server, but it is acceptable for most categories of
multimedia services, such as non-interactive VoD services, since it only affects
the delay with which visualization starts at the client side.
Figure 3. ubiQoS path setup time.