
Amorphous and Ubiquitous Computing

  • July 16, 2002
  • By Brian Blum

We are living in a time when the future of computing is uncertain. Every day, computers take a more active role in our lives. Surprisingly, with this ubiquity comes anonymity. We know less and less about the computers that surround us, only that they seem to be running things behind the scenes. This paradigm brings with it an evolution in the design of computer systems. In the not-too-distant future, we are going to have to design computers unlike anything we've seen before. These systems will consist of collections of simple processors attached to sensors and actuators, wirelessly connected and haphazardly deployed in ways that place them outside direct human control. In some cases, the number of interconnected devices could grow into the millions, so they must be programmed as self-configuring, self-sufficient devices. The networks these devices form must be robust and resilient to failure. Although the list of novel features and (potentially devastating) concerns is extensive, today's research attempts to take a first stab at solving these problems of tomorrow.

While we are just beginning to understand how the technology of today can be applied to change how we confront the world, researchers are hypothesizing about and beginning to develop the technology of tomorrow. Although computing has many evolving faces, one new paradigm being researched at the University of Virginia, Berkeley, the University of Illinois, Carnegie Mellon University, and many other institutions throughout the country and the world is the concept of Amorphous and Ubiquitous Computing.

Webster defines ubiquitous as "existing or being everywhere at the same time" and amorphous as "shapeless, formless," precisely the properties of the proposed networks. These systems will consist of collections of decentralized devices with simple processors residing throughout the environment or area in which they are deployed. Interaction with the environment will take place solely through sensors and actuators, and communication between devices will be wireless. At present, it is unknown just how complex these individual devices and networks will be. Researchers are looking at many different feasible device architectures, deployment scenarios, applications, assumptions, and limitations. Consequently, this under-specification of the problem has allowed researchers to explore various design protocols and consider environments with a multitude of parameter settings, deployment conditions, network topologies, and hardware assumptions.

The devices under consideration are assumed to be small and cheap enough that deployment by random scattering will be feasible. A network will not consist of entities placed or plugged into the environment in specified locations, but of devices randomly strewn across an area of interest. Devices may be disposable and subject to short life spans. Applications of these systems could include fire detection and control, military tracking, smart environments, disaster relief, data collection, search and rescue systems, agricultural tools, reactive surfaces, or virtually anything where data aggregation through sensors and coordinated communication without human intervention proves beneficial.

Current work on the "motes," or physical devices, is underway and is producing devices with 128K of program memory, 4K of system RAM, hardware timers for clock interrupts, and RF Monolithics transceivers capable of up to 115 Kbps communication. These devices are equipped with I/O lines for connecting sensors, are system-reprogrammable but not self-reprogrammable, and run on two AA batteries. Additionally, the devices are capable of several modes of operation that include levels of "sleep" for energy conservation. Berkeley's Mica platform, running an event-driven operating system called TinyOS, is one such implementation. Work in progress is often conceived and first tested in wireless simulators such as NS-2 or GloMoSim, and then ported to these motes for further testing.
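To make the event-driven model concrete, here is a minimal sketch in plain C of how such a mote might behave: asleep almost all of the time, waking on a timer event to sample a sensor and transmit a reading. The hardware hooks (hw_read_sensor, radio_send, hw_sleep) are hypothetical stand-ins, stubbed with printouts so the sketch compiles and runs on a desktop; this is not the actual TinyOS API.

    /* Event-driven sensor node sketch. The hw_* and radio_* hooks are
     * hypothetical stand-ins for a real hardware layer; on a mote they
     * would touch registers and the RF transceiver. */
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t hw_read_sensor(void)      /* ADC read on an I/O line */
    {
        return 42;                            /* placeholder reading */
    }

    static void radio_send(uint16_t reading)  /* RF transmit */
    {
        printf("radio: sent reading %u\n", (unsigned)reading);
    }

    static void hw_sleep(void)                /* low-power "sleep" mode */
    {
        /* On real hardware: halt the CPU until the next interrupt. */
    }

    /* Timer event handler: the only time the node does real work. */
    static void on_timer_fired(void)
    {
        radio_send(hw_read_sensor());
    }

    int main(void)
    {
        /* Desktop stand-in for the periodic timer interrupt: fire three times. */
        for (int i = 0; i < 3; i++) {
            on_timer_fired();
            hw_sleep();   /* between events the mote conserves its AA batteries */
        }
        return 0;
    }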

With an array of potential applications being considered across a gamut of possible architectures and technologies, research has taken many directions when considering the future of these sensor networks. The following is a high-level discussion of some of these design considerations, including the basic strengths and weaknesses of each approach and the direction research is currently leaning.

Homogeneous versus Heterogeneous Networks

Will these networks consist of one type of device, each containing the same code base, or will there be a multitude of devices cooperating to achieve some task? In a homogeneous network, developers benefit from a simpler design. Deployment need not be concerned with the distribution of different devices, only with the single type of device amply covering the area of interest. Homogeneous networks are also simpler to understand and therefore easier to debug. On the other hand, heterogeneous networks provide many benefits that a homogeneous network cannot. Different types of devices mean different agents performing varied tasks, including devices with different types of sensors, communication radii, and processing capabilities. In a network of varied devices, it is easy to see how these more complex deployments can solve more difficult problems. Cheap, low-power devices could be used for data collection and local reporting, while more expensive and powerful devices could be responsible for data aggregation, decision-making, or network organization. Different types of devices could work together to accomplish a task. For example, in fighting forest fires, large mobile robots could communicate with distributed sensors deployed throughout their environment to coordinate their strategy and stay out of extreme areas.

To date, most work is being done under the assumption that networks consist of two types of devices: simple agents capable of sensing or actuating, but limited to simple processing tasks; and complex agents, often referred to as Base Stations, responsible for initiating services, requesting data, monitoring and aggregating sensor reports, and making high-level decisions. Often the Base Stations are thought to incorporate human interaction while the simple agents, or "motes," will simply follow pre-programmed behavior and respond when appropriate to commands from their base station.
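As a rough illustration of this division of labor, the following sketch (plain C; the message layout and opcodes are invented for illustration, not drawn from any deployed system) shows how a mote's pre-programmed behavior might dispatch on commands arriving from its base station.

    /* Two-tier protocol sketch: the base station sends small command
     * packets, motes answer with sensor reports. The message layout and
     * opcodes are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    enum command {
        CMD_REPORT_NOW = 1,   /* request an immediate sensor report */
        CMD_SET_PERIOD = 2,   /* change the sampling interval */
        CMD_SLEEP      = 3    /* drop into low-power mode */
    };

    struct base_msg {
        uint8_t  opcode;      /* one of enum command */
        uint16_t arg;         /* e.g., new period in seconds */
    };

    /* Pre-programmed mote behavior: respond to base-station commands. */
    static void mote_handle(uint16_t node_id, const struct base_msg *msg)
    {
        switch (msg->opcode) {
        case CMD_REPORT_NOW:
            printf("mote %u reports reading %u\n", (unsigned)node_id, 42u);
            break;
        case CMD_SET_PERIOD:
            printf("mote %u: period set to %u s\n", (unsigned)node_id,
                   (unsigned)msg->arg);
            break;
        case CMD_SLEEP:
            printf("mote %u: entering sleep\n", (unsigned)node_id);
            break;
        }
    }

    int main(void)
    {
        struct base_msg poll = { CMD_REPORT_NOW, 0 };
        mote_handle(7, &poll);   /* the base station polls mote 7 */
        return 0;
    }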

Mobile versus Immobile Entities

The question of whether or not devices will be mobile will most likely be decided by the desired application, and it is easy to see both scenarios developing in parallel. Deploying thousands of marble-sized devices from an airplane into a hostile environment will probably result in non-mobile devices self-organizing to form a static network. A non-mobile scenario such as this would require handling network formation, new or failed nodes dynamically transforming the network topology, energy conservation, routing around congestion, communication reliability, and many other issues. Another possibility could be a smart environment where a heterogeneous collection of devices is embedded in clothing, walls, stop lights, and other components of the world around us. In this situation, the devices would most likely be mobile and would have to dynamically update routing tables and state in accordance with the environment and the devices within communication range.
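To sketch what that dynamic updating might look like, the toy C code below (all names and constants are illustrative, not taken from any real protocol) maintains a neighbor table: an entry is refreshed whenever a beacon is heard from a node in range, and entries expire once a node has been silent long enough to be presumed out of range.

    /* Neighbor-table maintenance for a mobile node: refresh on beacons,
     * expire nodes that move out of communication range. The table size
     * and timeout are arbitrary illustrative values. */
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_NEIGHBORS 8
    #define NEIGHBOR_TTL  30   /* seconds of silence before eviction */

    struct neighbor {
        uint16_t id;
        uint32_t last_heard;   /* local clock time of last beacon */
        int      in_use;
    };

    static struct neighbor table[MAX_NEIGHBORS];

    /* Called whenever a beacon arrives from a node within radio range. */
    static void on_beacon(uint16_t id, uint32_t now)
    {
        struct neighbor *slot = NULL;
        for (int i = 0; i < MAX_NEIGHBORS; i++) {
            if (table[i].in_use && table[i].id == id) { slot = &table[i]; break; }
            if (!table[i].in_use && !slot) slot = &table[i];
        }
        if (slot) {
            slot->id = id;
            slot->last_heard = now;
            slot->in_use = 1;
        }
    }

    /* Called periodically: forget neighbors that have drifted away. */
    static void expire_neighbors(uint32_t now)
    {
        for (int i = 0; i < MAX_NEIGHBORS; i++)
            if (table[i].in_use && now - table[i].last_heard > NEIGHBOR_TTL) {
                printf("neighbor %u out of range\n", (unsigned)table[i].id);
                table[i].in_use = 0;
            }
    }

    int main(void)
    {
        on_beacon(3, 0);
        on_beacon(5, 10);
        expire_neighbors(40);   /* node 3: silent for 40 s > TTL, evicted */
        return 0;
    }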

While research assuming mobility addresses the more general case, most work to date has assumed that once a network of devices is deployed, their locations remain relatively static. This easily justifiable assumption of a fixed infrastructure simplifies many problems and allows researchers to begin to understand the complexities of the proposed networks.

Centralized versus Decentralized Hierarchy

While the technology of networking arose in a time of centralized control, distributed technology has moved toward the notion of decentralization. With sensor networks, this concept takes center stage. The centralized model is seen as weaker because centralization brings with it single points of failure and the idea of a controlling entity. Although sensor networks are built with decentralization in mind, one must remember that the applications of this developing technology are unbounded. As new ideas come about, developers must not get stuck in the mindset of complete decentralization. The benefits of both scenarios should be considered, because the centralized model also brings an organizing entity and common behavior throughout the system.
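To give a feel for the decentralized style, the toy simulation below (plain C, a simulation rather than real network code) runs gossip-style averaging, one simple, fully decentralized aggregation technique: each round, two nodes that happen to be in radio range average their readings, and over many rounds every node converges on the network-wide average with no controlling entity involved.

    /* Toy simulation of gossip-style averaging: pairwise exchanges
     * preserve the sum of all readings, so every node converges on the
     * global average without any central coordinator. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NODES  5
    #define ROUNDS 200

    int main(void)
    {
        double reading[NODES] = { 10.0, 20.0, 30.0, 40.0, 50.0 };

        srand(1);   /* fixed seed for a repeatable run */
        for (int r = 0; r < ROUNDS; r++) {
            /* Pick two distinct nodes as if they were radio neighbors. */
            int a = rand() % NODES;
            int b = rand() % NODES;
            if (a == b)
                continue;
            double avg = (reading[a] + reading[b]) / 2.0;
            reading[a] = reading[b] = avg;
        }

        for (int i = 0; i < NODES; i++)
            printf("node %d holds %.2f\n", i, reading[i]);
        /* All values approach the global average, 30.0. */
        return 0;
    }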







