School of Computing - Research - Working Papers - 2001

Working Papers for 2001

Modelling 2D Froth; Focus on Coding,
B. Zhu, H. J. Ruskin and Y. Feng

Froth networks, as typical disordered cellular structures with spatial and temporal evolution, have been extensively investigated in recent years using both direct and indirect methods. Our work concentrates on the coding aspects of modelling 2D froth.
We have investigated and improved a direct-method 2D froth simulation programme. We have studied the complexity of the code and its modification to simulate froth systems of larger size, including special geometries. We have also implemented a programme for the automatic creation of both T1-type and T2-type defects in uniform hexagonal froths. As an example of the computational improvements, we have examined the persistence behaviour of Voronoi froths with sizes of up to 2500 bubbles.
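The side-count bookkeeping of a T1 defect can be illustrated with a minimal sketch (a hypothetical representation, not the paper's programme): the two cells sharing the vanishing edge each lose a side, and the two cells brought into contact each gain one, so the mean side count of six is preserved.

```python
def apply_t1(sides, losing, gaining):
    """Apply a T1 neighbour swap to a froth described by a
    cell -> side-count mapping.  The two cells sharing the
    vanishing edge each lose a side; the two cells brought
    into contact each gain one, so the total (and mean) side
    count is unchanged."""
    updated = dict(sides)
    for cell in losing:
        updated[cell] -= 1
    for cell in gaining:
        updated[cell] += 1
    return updated

# Uniform hexagonal froth of 9 cells, then one T1 swap.
froth = {c: 6 for c in range(9)}
froth = apply_t1(froth, losing=(0, 1), gaining=(2, 3))
```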

Improving Professional Software Skills in Industry - A Training Experiment,
Rory O'Connor, Howard Duncan, Gerry Coleman, Maurizio Morisio, Cliona McGowan, Christophe Mercier, Yingxu Wang

With over a decade of SPI in large organisations, awareness of its importance has propagated to Small to Medium Enterprises (SMEs). The best-known models for SPI are primarily suited to large- or medium-sized organisations, but with some tailoring they can provide substantial support for SPI in SMEs. The IPSSI project addresses these issues and provides a process improvement framework for use by individual software engineers. This paper describes the background and motivation behind the IPSSI framework, explains the IPSSI approach to supporting the individual software engineer, and reports on a series of case studies using the IPSSI method.

The World-Wide-Mind: Draft Proposal,
Mark Humphrys

In the first part of this paper, a change in methodology for the future of AI and Adaptive Behavior research is proposed. It is proposed that researchers construct their agent minds and their agent worlds as servers on the Internet. 3rd parties will use these servers as components in larger systems. In this scheme, any user on the Internet will be able to (a) select multiple minds from different remote "mind servers", (b) select a remote "Action Selection server" to resolve the (inevitable) conflicts between these minds, and (c) run the resulting constructed "society of mind" in the world provided on another "world server". All this without necessarily having to consult with the server authors. This constructed society may now also be presented as just another primitive mind server, ready for reuse by others as a component in a larger system.
From the current situation of isolated experiments we will move to a situation where not only can researchers use each other's agent worlds, but they can also use each other's agent minds as components in larger systems. Servers may call other servers, and it is expected that 3rd parties will continuously write wrappers and filters for existing mind servers, overriding and modifying their default behaviour (to produce new, co-existing mind servers). None of this necessarily means that the mind being used ever leaves its server (or that its insides are even made public). Hence the term, the "World-Wide-Mind" (WWM), referring to the fact that the mind may be physically distributed across the world, with parts of the mind at different remote servers.
Part of the motivation for the WWM is that if the AI project is to be successful, it may be too big for any single laboratory to complete. So it will be necessary both to decentralise the work and to allow a massive and ongoing experiment with different combinations of components (so that we are not locked into any particular layout of decentralisation). Central to the WWM scheme is the expectation that researchers will not agree on how to divide up the AI work, and so components will overlap and be duplicated.
Previous work by this author [Humphrys, 1997] introduced models of mind where competition took place between extremely incompatible components, and where the mind could survive communications failure with or even complete loss of a number of such components. The WWM idea grew out of this work, and this paper shows how these previous models are the type of models we need in the WWM.
In the second part of this paper, we move towards an implementation of the WWM by trying to define the set of queries and responses that the servers should implement. Clients (including other servers) may then implement any general-purpose algorithm to drive the servers through repeated use of these queries. In our initial implementation, we consider schemes of very low-bandwidth communication. For instance, schemes where the competition among multiple minds is resolved across the network using numeric weights, rather than by explicit reasoning and negotiation. It is possible that this low-bandwidth protocol may be more suitable to sub-symbolic AI than to other branches of AI, and that other protocols may be needed for other branches of AI. It is suggested that it may be premature in some areas of AI to attempt to formulate a "mind network protocol", but that in the sub-symbolic domain it could at least be attempted now. Whether the protocol presented here is adopted or not, the first part of this paper (the need for a protocol) stands on its own.
Finally, we suggest a lowest-common-denominator approach to actually implementing these queries, so that current AI researchers have to learn almost nothing in order to put their servers online. As the lowest-common-denominator approach we suggest the transmission across ordinary CGI of queries and responses written in XML.
Keywords - World-Wide-Mind, WWM, Network Minds, Distributed Models of Mind, Society of Mind, Distributed AI, autonomous agent architectures, Action Selection, Internet, client-server, HTTP, CGI, XML, AIML.
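As an illustration of the lowest-common-denominator proposal, the following sketch shows a mind server answering an XML query with an action and a numeric weight, the low-bandwidth currency by which an Action Selection server could arbitrate between competing minds. The element names and the trivial policy are assumptions for illustration; the paper proposes XML over CGI but this is not its concrete schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical WWM exchange: a client asks a mind server for an
# action given the current world state.  Element names and the
# policy below are illustrative, not the paper's schema.
query = '<query type="getaction"><state>3,7</state></query>'

def handle_query(xml_text):
    """A mind server's CGI handler: parse the XML query and return
    an action plus a numeric weight for low-bandwidth action
    selection between competing minds."""
    root = ET.fromstring(xml_text)
    state = root.findtext("state")
    action = "north" if state == "3,7" else "stay"   # stand-in policy
    return f"<response><action>{action}</action><weight>0.8</weight></response>"

response = handle_query(query)
```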

Turnover of HIV infected cells and Helper cells in a Discrete Immune Model,
R. Mannion and H.J. Ruskin

The population growth of Helper cells and decline of Viral cells in a discrete immune system model are studied. The model consists of four cell types, Macrophages (M), Helper cells (H), Viral-infected cells (V) and Cytotoxic killer cells (C), with binary concentrations (1, high; 0, low). A 2-D lattice is used to represent the immune system, and Boolean equations represent the interactions between the differing cell types. Mutation of the virus is considered via a probabilistic parameter, Pmut. Below a critical mutation level, Pcrit, there is immune dominance, and above this level there is immuno-deficiency. We investigate the growth of Helper cells and the decline of Viral cells for the interval Pmut < Pcrit. The model shows Viral decay following an exponential decline, while Helper growth follows a logarithmic function which then slows to a slightly oscillating equilibrium. We find that increasing Pmut has a larger effect on the equilibrium Helper population than on the viral one. This would indicate that the decline in Helper cells results from a higher proportion of mutated virions rather than from an actual increase in the viral population. The half-life of the virus is also investigated, and we observe longer half-lives for increasing Pmut.
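A synchronous Boolean update of this general shape can be sketched as follows. The rules below are illustrative stand-ins, not the paper's actual equations: macrophages presenting virus stimulate helpers, killers require helper stimulation, and the virus survives an active killer response only by mutating, with probability p_mut.

```python
import random

def update_site(M, H, V, C, p_mut, rng=random.random):
    """One synchronous Boolean update of the four binary cell types
    at a lattice site.  Illustrative stand-in rules, not the paper's
    equations: macrophages presenting virus stimulate helpers,
    killers require helper stimulation, and the virus survives an
    active killer only by mutating (probability p_mut)."""
    M_new = 1 if (M or V) else 0
    H_new = 1 if ((M and V) or H) else 0
    C_new = 1 if (H and V) else 0
    V_new = 1 if V and (not C or rng() < p_mut) else 0
    return M_new, H_new, V_new, C_new
```

With p_mut below threshold the killer response clears the virus; with p_mut = 1 the virus always escapes, mirroring the dominance/deficiency transition in spirit.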

A Monetary Unit Selection Method with Fixed Sample Size,
J. M. Horgan and Y. Bimpeh

A selection method suggested by Sunter (1977) is adapted for substantive testing. It overcomes most of the practical disadvantages associated with the commonly used strategies. It selects a fixed-size sample of distinct items with probability proportional to size, the largest items being selected with certainty. Empirical investigations show that it provides reliable estimates of the total error amount which are more precise than those obtained with unrestricted random sampling.

Keywords: Monetary unit sampling; Probability proportional to size; Substantive testing.
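Fixed-size selection with probability proportional to size can be sketched with systematic PPS, a standard stand-in used here for illustration only; it is not Sunter's (1977) sequential scheme.

```python
import random

def systematic_pps(sizes, n, rng=None):
    """Select a fixed sample of n distinct items with probability
    proportional to size, via systematic PPS -- a standard stand-in,
    not Sunter's (1977) sequential scheme.  Items larger than the
    sampling interval would be certainty selections and should be
    set aside first; this sketch assumes there are none."""
    total = float(sum(sizes))
    interval = total / n
    start = (rng or random).uniform(0, interval)
    points = [start + k * interval for k in range(n)]
    selected, cumulative, j = [], 0.0, 0
    for idx, size in enumerate(sizes):
        cumulative += size
        while j < n and points[j] <= cumulative:
            selected.append(idx)      # item idx covers sampling point j
            j += 1
    return selected

# Monetary amounts (item sizes) and a fixed sample of 2 items.
sample = systematic_pps([10, 20, 30, 40], 2, rng=random.Random(0))
```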

Modelling Traffic Flow at a Roundabout in Urban Networks,
Ruili Wang and Heather J. Ruskin

Traffic flow at a single-lane roundabout in an urban network is modelled using cellular automata (CA). Roundabouts without traffic lights are controlled through driver self-organisation and by the offside-priority rule (by which a vehicle entering gives way to one already on the roundabout).
The roundabout has been modelled by a multi-state CA ring. Each vehicle entering the roundabout was randomly characterised by a destined exit with specified turning rates. Driver behaviour at roundabout entrances was randomly grouped into four categories based on gaps required to enter the roundabout.
Three aspects of roundabout performance in particular have been studied. The first looks at overall throughput (the number of vehicles which navigate the roundabout in a given time) for different geometries, arrival rates and turning rates (the probability that a car turns to a given exit point). The second investigates changes in queue-length, delay-time and vehicle density for an individual road. The third considers the impact of driver choice on throughput and operation of the roundabout.
The roundabout is modelled as a ring of n cells, where each cell may be either occupied or vacant, with a deterministic update rule. Before driving onto the roundabout, vehicles are randomly assigned turning directions with specified probabilities. Car arrivals are random, with arrival rates (the probability that a car arrives at an entrance in a given time step) in the range 0.05 to 0.55. The geometry of the roundabout is also varied, to include from three to five entry/exit roads and two sizes, n = 16 and 32 cells respectively.
If the number of cells of the roundabout is even, then throughput does not depend on roundabout size, or on equal spacing (distances between exits being equal) versus non-equal spacing, given similar topology and with other parameters held constant. If the number of cells is odd, throughput increases as roundabout size increases. Throughput levels in general differ for different topologies. Clearly, the entrances are the bottlenecks in terms of smooth operation.
In general, throughput increases with arrival rate and reaches a maximum when the arrival rate reaches a critical value on one or more roads. Critical arrival rates for all roads depend on roundabout topology and direction of travel when leaving the roundabout. The throughput decreases as right-turning rate increases (for cars driving on the left).
The queue-length of an individual road rapidly achieves its maximum as the arrival rate is increased, with the critical arrival rate again dependent on topology and on the arrival rates at other entry points. Over 100,000 time steps, the maximum queue-length, or saturation, of a given entry road was observed to occur within a few hundred time-steps for arrival rates >= the critical arrival rate on that road. The delay for any car seeking entry onto the roundabout was found to depend not only on the queue-length but also on factors such as the available opportunities for a driver to enter the roundabout. Queue formation also occurred at car densities lower than the maximum density for free flow (Chopard, 1998).
Driver behaviour at roundabout entrances was randomly categorised as rational (when optimum conditions of entry are realised), conservative, urgent or reckless, with specified probabilities. Rational, conservative and urgent behaviour all lead to free flow on the roundabout for all arrival/turning rates considered, whereas reckless decisions lead rapidly to roundabout congestion; conservative decisions, though free-flowing, decrease throughput and increase queue-lengths. The assigned probabilities are clearly subjective and would benefit from calibration on real data, but equally are unlikely to be standard for real traffic systems.
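The core of the model, the advancement of vehicles on the CA ring, can be sketched as follows. Entrances, exits and the driver categories are omitted: this is a minimal illustration of the ring update only, not the full model.

```python
def step(ring):
    """One deterministic update of the single-lane roundabout as a
    CA ring (1 = occupied cell, 0 = vacant): every vehicle advances
    one cell if the cell ahead was vacant in the previous
    configuration (synchronous update).  Entrances, exits and
    driver behaviour are omitted from this minimal sketch."""
    n = len(ring)
    updated = [0] * n
    for i in range(n):
        if ring[i]:
            ahead = (i + 1) % n
            if ring[ahead]:
                updated[i] = 1        # blocked: stay put
            else:
                updated[ahead] = 1    # move forward
    return updated
```

The update conserves vehicles and jams propagate backwards, the behaviour underlying the queue-formation results above.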

Finite size effect and variation in Monte Carlo simulations of an Immune-response Model,
Y. Liu and H. J. Ruskin

In this paper, we examine lattice-scale effects on Monte Carlo simulations of the immune system response to HIV. In our studies, a large number of Monte Carlo simulations are performed in order to investigate the variability in output obtained with the same input parameters. From these data, we were interested in the sample-to-sample variation. In those cases where the infection grows, i.e. leads to viral domination of the immune system, the variation becomes more pronounced. Changes in viral activity, which are the effect of viral mutation and increased mobility, cause oscillations of cell populations within any one sample and variability between samples. With increasing Pmut or Pmob, viral activity is enhanced, so oscillations and between-sample variability are larger. Unsurprisingly, an increase in sample size, i.e. the scale of the lattice on which simulations are performed, improves the stability, and does not affect other properties of the model, such as the critical mutation value (Pcrit) and the latent period, which is the number of time steps taken by the viral population to dominate the helper population (Mannion et al. 2000b). This is investigated for a 3-D lattice, for values of Pmut close to Pcrit.
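The finite-size effect on sample-to-sample variation can be illustrated with a toy observable (independent sites rather than the paper's four coupled cell types): the standard deviation across repeated simulations with identical parameters shrinks as the lattice grows.

```python
import random
import statistics

def one_sample(lattice_sites, p, rng):
    """One Monte Carlo sample: fraction of 'infected' sites on a
    lattice of the given size.  Independent sites are a stand-in
    observable; the paper's model couples four cell types."""
    return sum(rng.random() < p for _ in range(lattice_sites)) / lattice_sites

def sample_to_sample_sd(lattice_sites, p=0.3, samples=200, seed=1):
    """Standard deviation of the observable across repeated runs
    with identical input parameters."""
    rng = random.Random(seed)
    return statistics.stdev(one_sample(lattice_sites, p, rng)
                            for _ in range(samples))

small_sd = sample_to_sample_sd(10 ** 2)   # small lattice: noisy
large_sd = sample_to_sample_sd(10 ** 4)   # large lattice: stable
```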

Excommunication: Transforming Pi-Calculus Specifications to Remove Internal Communication,
G.W. Hamilton, B. Aziz, D. Gray, J. Power, and D. Sinclair

In this paper, we present a new transformation algorithm which can automatically transform Pi-calculus processes to remove all explicit internal communications. We prove that the transformation preserves full bisimulation equivalence, and also that it always terminates.

Design and Management of Educational Content for the Web using the eXtensible Markup Language XML,
Claus Pahl and Declan Gallagher

New developments in Internet technologies, such as the emergence of XML, the eXtensible Markup Language, allow new approaches to the design and management of Web-based courseware. XML technologies allow us to create a design notation that can be incorporated directly into the management structures for the educational content of Web-based courseware. We discuss the authoring of course material using XML and the benefits the technology brings to its maintenance.

Efficient Short-Password key exchange and Log-in Protocols,
Michael Scott

Here we propose a new method for using an easily remembered short password to facilitate secure key exchange, and secure remote server log-in by a client. The method is based on the Diffie-Hellman key exchange and El Gamal public-key encryption algorithms. The new method has substantial performance advantages over existing techniques such as SPEKE and PAK.
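A password-derived Diffie-Hellman exchange, in the spirit of SPEKE rather than the paper's own method, can be sketched as follows; the tiny prime and absence of any safeguards make this illustrative only.

```python
import hashlib
import secrets

def derive_generator(password, p):
    """Derive a DH generator from the shared short password, in the
    spirit of SPEKE (g = H(password)^2 mod p).  Illustrative only:
    a real scheme needs a large safe prime and further safeguards."""
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h, 2, p)              # squaring lands in a fixed subgroup

p = 2 ** 89 - 1                      # a small Mersenne prime; toy size only
g = derive_generator("correct horse", p)
a = secrets.randbelow(p - 2) + 2     # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 2     # Bob's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)    # values exchanged in the clear
k_alice, k_bob = pow(B, a, p), pow(A, b, p)   # shared key on both sides
```

Because the generator itself depends on the password, an eavesdropper who does not know it cannot mount an offline dictionary attack on the exchanged values alone, which is the property schemes like SPEKE and PAK formalise.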