We are living in a time when the future of computing is uncertain. Every day, computers take a more prominent role in our lives. Surprisingly, with this ubiquity comes anonymity: we know less and less about the computers that surround us, only that they seem to be running things behind the scenes. This paradigm brings with it an evolution in the design of computer systems. In the not-too-distant future, we will have to design computers unlike anything we’ve seen before. These systems will consist of collections of simple processors attached to sensors and actuators, wirelessly connected and haphazardly deployed beyond direct human control. In some cases, the number of interconnected devices could grow into the millions, and the devices must therefore be programmed to be self-configuring and self-sufficient. The networks these devices form must be robust and resilient to failure. Although the list of novel features and (potentially devastating) concerns is extensive, today’s research attempts to take a first stab at solving these problems of tomorrow.
While we are just beginning to understand how the technology of today can be applied to change how we confront the world, researchers are hypothesizing about and beginning to develop the technology of tomorrow. Although computing has many evolving faces, one new paradigm being researched at the University of Virginia, Berkeley, the University of Illinois, Carnegie Mellon University, and many other institutions throughout the country and the world is the concept of Amorphous and Ubiquitous Computing.
Webster defines ubiquitous as “Existing or being everywhere at the same time” and amorphous as “Shapeless, formless,” precisely the properties of the proposed networks. These systems will consist of collections of decentralized devices with simple processors residing throughout the environment or area in which they are deployed. Interaction with the environment will take place solely through sensors and actuators, and communication between devices will be wireless. At present, it is unknown just how complex these individual devices and networks will be. Researchers are looking at many different feasible device architectures, deployment scenarios, applications, assumptions, and limitations. Consequently, this under-specification of the problem has allowed researchers to explore various design protocols and to consider environments with a multitude of parameter settings, deployment conditions, network topologies, and hardware assumptions.
The devices under consideration are assumed to be small enough and cheap enough that deployment by random scattering will be feasible. A network will not consist of entities placed or plugged into the environment at specified locations, but of devices strewn randomly across an area of interest. Devices may be disposable and subject to short life spans. Applications of these systems could include fire detection and control, military tracking, smart environments, disaster relief, data collection, search and rescue, agricultural tools, reactive surfaces, or virtually anything where data aggregation through sensors and coordinated communication without human intervention proves beneficial.
Current work on “motes,” the physical devices, is underway and is producing hardware with 128K of program memory, 4K of system RAM, hardware timers for clock interrupts, and RF Monolithics transceivers capable of up to 115Kbps communication. These devices are equipped with I/O lines for connecting sensors, are system-reprogrammable but not self-reprogrammable, and run on two AA batteries. Additionally, the devices are capable of several modes of operation, including levels of “sleep” for energy conservation. Berkeley’s Mica platform [1], running an event-driven operating system called TinyOS, is one such implementation. Work in progress is often conceived and first tested in wireless simulators such as NS-2 or GloMoSim, and then ported to these motes for further testing.
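To make this event-driven, duty-cycled style of programming concrete, the sketch below simulates a single sensing cycle in plain C on a host PC. It is only illustrative: real mote code is written against TinyOS’s own interfaces, and the read_sensor, radio_send, and deep_sleep functions here are hypothetical stubs.

```c
/* A minimal sketch of the event-driven, duty-cycled style of mote
 * programming. All hardware calls below are hypothetical stubs
 * simulated on a host PC, not actual TinyOS interfaces. */
#include <stdio.h>
#include <stdlib.h>

static int  read_sensor(void)        { return rand() % 1024; }          /* stub ADC read   */
static void radio_send(int reading)  { printf("TX reading=%d\n", reading); } /* stub radio */
static void deep_sleep(void)         { printf("sleeping...\n"); }       /* stub sleep mode */

/* Handler invoked by a (simulated) hardware timer interrupt:
 * wake, sample, transmit only if the reading is interesting, sleep. */
static void on_timer_fired(void)
{
    int reading = read_sensor();
    if (reading > 512)   /* report only threshold crossings to save energy */
        radio_send(reading);
    deep_sleep();        /* spend most of the duty cycle in low-power mode */
}

int main(void)
{
    for (int tick = 0; tick < 5; tick++)  /* simulate five timer interrupts */
        on_timer_fired();
    return 0;
}
```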
With an array of potential applications being considered across a gamut of possible architectures and technologies, research has taken many directions when considering the future of these sensor networks. The following is a high-level discussion of some of these design considerations, including the basic strengths and weaknesses of each alternative and the direction in which research is currently leaning.
Homogeneous versus Heterogeneous Networks
Will networks consist of a single type of device, each containing the same code base, or will there be a multitude of devices cooperating to achieve some task? In a homogeneous network, developers benefit from a simpler design. Deployment need not be concerned with the distribution of different devices, only that the single type of device amply covers the area of interest. Homogeneous networks are also simpler to understand, and therefore to debug. On the other hand, heterogeneous networks provide many benefits that a homogeneous network cannot. Different types of devices mean different agents performing varied tasks; devices may differ in their sensors, communication radii, and processing capabilities. In a network of varied devices, it is easy to see how more complex deployments can solve more difficult problems. Cheaper, low-power devices could be used for data collection and local reporting, while more expensive and powerful devices could be responsible for data aggregation, decision-making, and network organization. Different types of devices could work together to accomplish a task: in fighting forest fires, for example, large mobile robots could communicate with sensors distributed throughout the environment to coordinate their strategy and stay out of the most extreme areas.
To date, most work assumes that networks consist of two types of devices: simple agents capable of sensing or actuating but limited to simple processing tasks, and complex agents, often referred to as Base Stations, responsible for initiating services, requesting data, monitoring and aggregating sensor reports, and making high-level decisions. The Base Stations are often thought of as the point of human interaction, while the simple agents, or “motes,” simply follow pre-programmed behavior and respond when appropriate to commands from their base station.
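As a concrete, entirely hypothetical illustration of this two-tier division of labor, a mote’s report to its base station might be as small as the following C structure. The field names and sizes are assumptions made for the sketch, not a description of any existing platform.

```c
/* A hypothetical over-the-air report in the two-tier model: a simple
 * mote fills in one of these and forwards it hop by hop toward a base
 * station, which aggregates many reports before making decisions. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t region_id;    /* sensed region, in lieu of a unique node ID   */
    uint8_t  sensor_type;  /* e.g. 0 = temperature, 1 = light, 2 = acoustic */
    uint8_t  hop_count;    /* incremented by each forwarding mote          */
    int16_t  value;        /* raw sensor reading                           */
    uint32_t local_ticks;  /* unsynchronized local time of the reading     */
} SensorReport;

int main(void)
{
    /* A mote in region 7 reports a temperature reading of 23. */
    SensorReport r = { .region_id = 7, .sensor_type = 0,
                       .hop_count = 0, .value = 23, .local_ticks = 1024 };
    printf("region %u, type %u, value %d after %u hops\n",
           (unsigned)r.region_id, (unsigned)r.sensor_type,
           (int)r.value, (unsigned)r.hop_count);
    return 0;
}
```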
Mobile versus Immobile Entities
Whether or not devices will be mobile will most likely be decided by the desired application, and it is easy to see both scenarios developing in parallel. Deploying thousands of marble-sized devices from an airplane over a hostile environment, for instance, will probably result in non-mobile devices self-organizing to form a static network. A non-mobile scenario such as this would require handling network formation, new or failed nodes dynamically transforming the network topology, energy conservation, routing around congestion, communication reliability, and many other issues. Another possibility is a smart environment in which a heterogeneous collection of devices is embedded in clothing, walls, stop lights, and other components of the surrounding world. In this situation, the devices would most likely be mobile and would have to dynamically update routing tables and state in accordance with the environment and the devices within communication range.
While mobility may be the more general case, most research has assumed that once a network of devices is deployed, their locations remain relatively static. This easily justified assumption of a fixed infrastructure simplifies many problems and allows researchers to begin to understand the complexities of the proposed networks.
Centralized versus Decentralized Hierarchy
While the technology of networking arose in a time of centralized control, distributed technology has moved toward decentralization, and with sensor networks this concept takes center stage. The centralized model is seen as the weaker of the two because centralization brings with it single points of failure and the notion of a controlling entity. Although sensor networks are built with decentralization in mind, one must remember that the applications of this developing technology are unbounded. As new ideas come about, developers must not get stuck in the mindset of complete decentralization; the benefits of both scenarios should be considered, because the centralized model also brings an organizing entity and common behavior throughout the system.
Location Awareness
Knowledge of a device’s location significantly extends the types of feasible applications being considered. With location, interests can be expressed in terms of geographic position or region of interest. A single device no longer needs a unique identifier, as decisions to respond or supply data can be made based on device location or membership in a group. Even routing is optimized, as more direct paths can be established between the querier and the queried. Much of the work being done in sensor networks rests on the assumption that this location information is available. For small, low-power devices, a technology such as GPS may consume too much power or fail to provide enough accuracy for the application. In that case, other solutions must come into play, such as triangulation, signal-strength measurement, or the use of base stations or reference devices to establish approximate global, or even relative local, positions.
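As a sketch of how reference devices can substitute for GPS, the C program below recovers a mote’s 2-D position from its distances to three reference nodes at known locations (trilateration). The geometry is standard; the distances themselves would in practice be estimated from signal strength or time of flight, with measurement noise this sketch ignores.

```c
/* Trilateration sketch: estimate (x, y) from distances d[i] to three
 * reference nodes at known positions (x[i], y[i]). Subtracting the
 * circle equations pairwise yields a 2x2 linear system, solved here
 * with Cramer's rule. */
#include <stdio.h>

int trilaterate(const double x[3], const double y[3], const double d[3],
                double *px, double *py)
{
    double a11 = 2.0 * (x[1] - x[0]), a12 = 2.0 * (y[1] - y[0]);
    double a21 = 2.0 * (x[2] - x[0]), a22 = 2.0 * (y[2] - y[0]);
    double b1 = d[0]*d[0] - d[1]*d[1] + x[1]*x[1] - x[0]*x[0] + y[1]*y[1] - y[0]*y[0];
    double b2 = d[0]*d[0] - d[2]*d[2] + x[2]*x[2] - x[0]*x[0] + y[2]*y[2] - y[0]*y[0];
    double det = a11 * a22 - a12 * a21;
    if (det == 0.0) return -1;   /* reference nodes are collinear: no unique fix */
    *px = (b1 * a22 - b2 * a12) / det;
    *py = (a11 * b2 - a21 * b1) / det;
    return 0;
}

int main(void)
{
    double x[3] = {0.0, 10.0, 0.0}, y[3] = {0.0, 0.0, 10.0};
    double d[3] = {5.0, 8.062, 6.708};   /* measured distances to a node near (3, 4) */
    double px, py;
    if (trilaterate(x, y, d, &px, &py) == 0)
        printf("estimated position: (%.2f, %.2f)\n", px, py);
    return 0;
}
```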
Not knowing whether global location information will be available means designing solutions robust enough to handle both cases. To date, most research has assumed that global location information is available, although a fair amount of work on mechanisms for establishing location is also being pursued.
Clock Synchronization
With literally thousands of devices dispersed at random across a field, it remains questionable whether clock synchronization is feasible. Methods of synchronization could include initialization broadcasts to all nodes in the network, or node-to-node communication and consensus to synchronize clocks. Problems with the former run counter to the goal of decentralization, while problems with the latter stem from the bandwidth and energy required to achieve synchronization. Even then, a network may only achieve approximate synchronization, as clock skew and communication delays are bound to affect results.
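To illustrate the node-to-node consensus approach, the toy C program below has each node in a small ring repeatedly replace its clock offset with the average of its own and its neighbors’ offsets. The offsets converge toward a common value, giving approximate synchronization. The ring topology, node count, and noiseless exchanges are assumptions made for the sketch; real protocols must also contend with the skew and delay noted above.

```c
/* Toy clock consensus: each node averages its offset with its two ring
 * neighbors each round. Offsets converge toward the network average. */
#include <stdio.h>

#define N      5
#define ROUNDS 20

int main(void)
{
    double clock[N] = {0.0, 40.0, 10.0, 70.0, 30.0};  /* initial offsets, ms */
    double next[N];

    for (int r = 0; r < ROUNDS; r++) {
        for (int i = 0; i < N; i++) {
            int left = (i + N - 1) % N, right = (i + 1) % N;
            /* average with the two ring neighbors */
            next[i] = (clock[left] + clock[i] + clock[right]) / 3.0;
        }
        for (int i = 0; i < N; i++)
            clock[i] = next[i];
    }

    for (int i = 0; i < N; i++)
        printf("node %d offset: %.2f ms\n", i, clock[i]);
    return 0;
}
```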
Although synchronization allows events to be time-stamped, the notion of relative time, or time-in-network, may also suffice in many situations. Again, the question comes down to the intent and requirements of the application. While clock synchronization is convenient and has therefore been assumed to exist in some work, other researchers are looking into solving similar problems without it.
Communication Bounds
At present, it is difficult to say what limitations or capabilities the deployed devices in a sensor network will possess. It is feasible that devices will become so cheap and powerful that there could be hundreds to thousands of them within communication radius of one another. At the other extreme, one may desire a network (such as a smart environment) in which a minimal number of devices is deployed to reduce communication collisions. As the device hardware matures, it is feasible to see devices available across this whole range. Devices that can self-adapt to enhance or optimize their functionality should also be considered.
At present, research is exploring, and will continue to explore, a variety of feasible alternatives. With low-density networks, researchers are looking at methods to optimize communication and ensure complete network coverage; in high-density networks, they are looking into handling communication collisions and using redundancy to conserve energy. As physical device capabilities develop, it seems reasonable to assume that the communication bounds of a device will be specified in accordance with the application, as opposed to developing applications around these limitations.
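A back-of-the-envelope calculation shows why redundancy in high-density networks conserves energy. If n motes can each sense the same point and each stays awake independently with probability p, the point is covered with probability 1 - (1 - p)^n. The sketch below solves for the minimum p at an assumed 99% coverage target, showing how denser deployments let every node sleep most of the time; all of the numbers are illustrative assumptions.

```c
/* Minimum per-node awake probability p such that n redundant motes
 * cover a point with probability >= target: p = 1 - (1 - target)^(1/n).
 * Compile with -lm for pow(). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double target = 0.99;  /* required probability that the point is covered */
    for (int n = 1; n <= 20; n++) {
        double p = 1.0 - pow(1.0 - target, 1.0 / n);
        printf("n = %2d redundant motes -> each awake %5.1f%% of the time\n",
               n, 100.0 * p);
    }
    return 0;
}
```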
Energy Bounds
Unlike communication bounds, energy constraints on devices seem to entail a less predictable future. Although improvements in processor speed, communication capability, and device size seem to follow Moore’s law, decreases in device energy consumption have not kept pace; in fact, as processors become more powerful, energy consumption tends to suffer. For this reason, it is difficult to predict what the energy requirements of these devices will be. Smaller and smaller devices bring smaller and smaller energy sources, and this becomes a serious limiting factor in the design of some of the networks being researched. In the worst case, research must push designs that conserve energy at all costs; in the best case, we can only hope that technology for self-powered devices will become a reality.
Aside from these major issues, there are many additional aspects of ubiquitous computing systems that must be addressed. How does node density affect a network? Will systems incorporate self-adaptive code? How does security play into these systems? Will/must systems be extensible? What types of sensors exist or are available? Can communication take place over multiple channels? How will validation and debugging take place? How will new technologies handle bandwidth collisions with existing wireless technology? Will motes be self-programmable or re-programmable through communication?
The most exciting aspect of sensor networks is that at this point in their development, the possibilities seem endless. Researchers will continue to consider and attempt to solve hypothetical and interesting problems that they feel may one day represent reality. As these hypothetical problems are solved, new problems will emerge that bring about more challenging and interesting ways of looking at system design and programming. These new paradigms for computing in a distributed and often decentralized style will evoke new ways of looking at computing in general, and could one day lead to a revolution in computing that, at present, cannot be found even in science fiction.
[1] Refer to Berkeley’s TinyOS site.
About the author
Brian Blum is currently a graduate student working on a PhD in Computer Science at the University of Virginia. He graduated in 2000 from the Systems Engineering Department of the same school and worked for a year in Northern Virginia at a small web consulting firm that is on the verge of collapse. His experience to date includes work in various programming languages on a wide array of operating systems and machines. His current work involves researching new paradigms for computing, including distributed systems and complex coordination in sensor networks. Specifically, Blum is looking into communication problems in these networks, although distributed and aggregate behavior is also an area of interest.
He hopes to graduate within the next few years and continue a lifelong pursuit of research and teaching.