### Can control theory make software better?

##### March 20, 2013

Researchers from MIT’s Laboratory for Information and Decision Systems (LIDS) and a colleague at Georgia Tech have developed a method for applying principles from control theory — which analyzes dynamical systems ranging from robots to power grids — to formal verification, a set of methods for mathematically proving that a computer program does what it’s supposed to do.

The result could help computer scientists expand their repertoire of formal-verification techniques, and it could be particularly useful in the area of approximate computation, in which designers of computer systems trade a little bit of computational accuracy for large gains in speed or power efficiency.

In particular, the researchers adapted something called a Lyapunov function, which is a mainstay of control theory. The graph of a standard Lyapunov function slopes everywhere toward its minimum value: It can be thought of as looking kind of like a bowl. If the function characterizes the dynamics of a physical system, and the minimum value represents a stable state of the system, then the curve of the graph guarantees that the system will move toward greater stability.

“The most basic example of a Lyapunov function is a pendulum swinging and its energy decaying,” says Mardavij Roozbehani, a principal research scientist in LIDS and lead author on the new paper. “This decay of energy becomes a certificate of stability, or ‘good behavior,’ of the pendulum system.”
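The pendulum example can be made concrete in a few lines of code. The sketch below (an illustration of the general idea, not code from the paper) simulates a damped pendulum and records its total energy at every step; the energy is the classic Lyapunov function for this system, and watching it decay toward zero is exactly the "certificate of good behavior" Roozbehani describes:

```python
import math

def simulate_damped_pendulum(theta0=1.0, omega0=0.0, damping=0.5,
                             dt=0.01, steps=2000):
    """Semi-implicit Euler simulation of theta'' = -sin(theta) - damping*theta'.

    Returns the energy E = 0.5*omega**2 + (1 - cos(theta)) at every step
    (mass, length, and gravity all normalized to 1).
    """
    theta, omega = theta0, omega0
    energies = []
    for _ in range(steps):
        omega += dt * (-math.sin(theta) - damping * omega)
        theta += dt * omega
        energies.append(0.5 * omega ** 2 + (1.0 - math.cos(theta)))
    return energies

# The energy acts as a Lyapunov function: it never increases (up to
# discretization error) and decays toward the stable rest state.
energies = simulate_damped_pendulum()
print(energies[0], energies[-1])
```

Plotting or printing the returned list shows a curve that slopes toward zero: the "bowl" of the Lyapunov function traced out in time.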

Of course, most dynamical systems are more complex than pendulums, and finding Lyapunov functions that characterize them can be difficult. But there’s a large literature on Lyapunov functions in control theory, and Roozbehani and his colleagues are hopeful that much of it will prove applicable to software verification.

**Skirting dangers**

In their new paper, Roozbehani and his coauthors — MIT professor of electrical engineering Alexandre Megretski and Eric Feron, a professor of aerospace software engineering at Georgia Tech — envision a computer program as a set of rules for navigating a space defined by the variables in the program and the memory locations of the program instructions. Any state of the program — any values for the variables during execution of a particular instruction — constitutes a point in that space. Problems with a program’s execution, such as dividing by zero or overloading the memory, can be thought of as regions in the space.

In this context, formal verification is a matter of demonstrating that the program will never steer its variables into any of these danger zones. To do that, the researchers introduce an analogue of Lyapunov functions that they call Lyapunov invariants. If the graph of a Lyapunov invariant is in some sense bowl shaped, then the task is to find a Lyapunov invariant such that the initial values of the program’s variables lie in the basin of the bowl, and all of the danger zones lie farther up the bowl’s walls. Veering toward the danger zones would then be analogous to a pendulum’s suddenly swinging out farther than it did on its previous swing.
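A toy example (invented here for illustration, not drawn from the paper) shows what such a certificate looks like. The "program" below repeatedly applies a stable linear update to two variables; the hypothetical danger zone is any state where either variable reaches magnitude 10. The quadratic function V(x, y) = x² + y² is a Lyapunov invariant for it: V shrinks on every step, the initial states sit in the basin (V ≤ 1), and every danger state lies far up the bowl's walls (V ≥ 100), so the program can never reach them:

```python
import random

# Toy "program": two variables updated by a stable linear rule each iteration.
def step(x, y):
    return 0.9 * x - 0.1 * y, 0.1 * x + 0.9 * y

# Candidate Lyapunov invariant: V(x, y) = x^2 + y^2 (a "bowl").
def V(x, y):
    return x * x + y * y

# Bowl condition: V contracts by a fixed factor on every step.
# For this update, V(step(x, y)) = 0.82 * V(x, y) exactly.
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    nx, ny = step(x, y)
    assert V(nx, ny) <= 0.82 * V(x, y) + 1e-9

# Initial states satisfy V <= 1; the (hypothetical) danger zone
# |x| >= 10 or |y| >= 10 satisfies V >= 100. Since V only decreases,
# the program provably never enters the danger zone.
x, y = 0.6, -0.8  # V = 1.0 initially
for _ in range(100):
    assert abs(x) < 10 and abs(y) < 10
    x, y = step(x, y)
```

The two loops play different roles: the first checks the bowl condition (the analogue of the pendulum's energy decay), and the second confirms along an actual execution that the variables stay out of the danger zone.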

In practice, finding a Lyapunov invariant with the desired properties means systematically investigating different classes of functions. There’s no general way to predict in advance what type of function it will be — or even that it exists. But Roozbehani imagines that, if his and his colleagues’ approach catches on, researchers will begin to identify algorithms that lend themselves to particular types of Lyapunov invariants, as has happened with control problems and Lyapunov functions.
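For linear updates, one class that can be searched systematically is the quadratic functions V(x) = xᵀPx, where P is found by solving the discrete Lyapunov equation P = Q + AᵀPA. The sketch below (a hand-picked 2×2 example, not taken from the paper) finds P by the standard fixed-point iteration, which converges whenever the update matrix A is stable:

```python
# Search the class of quadratic functions V(x) = x' P x for the update
# x <- A x, by solving P = Q + A' P A via fixed-point iteration.
# (Toy 2x2 example with invented numbers, for illustration only.)

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

A = [[0.9, -0.1], [0.1, 0.9]]   # stable update (spectral radius < 1)
Q = [[1.0, 0.0], [0.0, 1.0]]    # required per-step decrease of V

P = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(200):            # P_{k+1} = Q + A' P_k A converges for stable A
    P = mat_add(Q, mat_mul(transpose(A), mat_mul(P, A)))

def V(x):
    return sum(x[i] * P[i][j] * x[j] for i in range(2) for j in range(2))

# By construction, V(A x) - V(x) = -x' Q x < 0 for every nonzero x,
# so V is a valid quadratic Lyapunov function for this update.
```

Nonlinear programs generally require richer function classes, which is where the systematic investigation Roozbehani describes comes in.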

**Fuzzy thinking**

Moreover, many of the critical software systems that require formal verification implement control systems designed using Lyapunov functions. “So there are intuitive reasons to believe that, at least for control-system software, these methods will work well,” Roozbehani says.

Roozbehani is also enthusiastic about possible applications in approximate computation. As he explains, many control systems are based on mathematical models that can’t capture all of the complexity of real dynamical systems. So control theorists have developed analytic methods that can account for model inaccuracies and provide guarantees of stability even in the presence of uncertainty. Those techniques, Roozbehani argues, could be perfectly suited for verifying code that exploits approximate computation.

“Computer scientists are not used to thinking about robustness of software,” says George Pappas, chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. “This is the first work that is formalizing the notion of robustness for software. It’s a paradigm shift, from the more exhaustive, combinatorial view of checking for bugs in software, to a view where you try to see how robust your software is to changes in the input or the internal state of computation and so on.”

“The idea may not apply in all possible kinds of software,” Pappas cautions. “But if you’re thinking about software that implements, say, controller or sensor functionality, I think there’s no question that these types of ideas will have a lot of impact.”

### References:

- Roozbehani, M., Megretski, A., Feron, E., "Optimization of Lyapunov Invariants in Verification of Software Systems," *IEEE Transactions on Automatic Control*, 2013, DOI: 10.1109/TAC.2013.2241472
- Roozbehani, M., Megretski, A., Feron, E., "Optimization of Lyapunov Invariants in Verification of Software Systems," *arXiv*, 2011, arxiv.org/abs/1108.0170

## Comments (5)

March 20, 2013 by Chimera

mwahahaha hysterical!

who tagged this as “physics/cosmology” ?!?!

should tag as Humor. Nothing to do with physics, much less cosmology.

March 20, 2013 by Ralph Dratman

@Zagralin – It is, in some cases, possible to prove that the future output of a particular, pre-specified piece of software will always be correct. What is known to be impossible is to predict the correctness of some arbitrary, as yet unspecified program. This is true because an arbitrary program, though of finite length, may go on running for an unlimited amount of time, in the process producing a sequence of results which cannot always (in every possible case) be predicted by any shortcut method. Turing proved a general form of this assertion in the 1930s.

Turing’s result is not really so surprising in light of the fact that his conceptual computer is equipped with an unlimited amount of memory. An arbitrary program might construct an unlimited-length pattern in that memory space. But any such pattern is associated with a minimal algorithm, a shortest method, to produce it. If the arbitrary program in fact constructs an unknown-length pattern using the minimal algorithm, there is no way one can produce that pattern any faster than the program itself calculates it. So if I try to prove that some arbitrary program, to be specified later, is correct, I may have to contend with an unknown-length, unpredictable, possibly infinite output, with the result that I may never be able to finish off my proof. A proof, in this context, is required to be of finite length.

March 20, 2013 by Zagralin

Well, one cannot mathematically prove that a piece of software is bug free. In fact, it is mathematically possible to prove that you cannot mathematically prove that your software is bug free.

This still seems like a step in a positive direction, though; the more tools to reduce bugs, the better.

March 21, 2013 by Pete

There are some problems for which human beings can intuitively see a solution but which are "theoretically unprovable" by computers.

The solution? Incorporate brain tissues in a computer.

March 20, 2013 by raducu427

Brilliant idea; why did it take so long? Was it that "Computer scientists are not used to thinking about robustness of software"? Or that experts in general prefer to work with more ad hoc approaches instead of formalized, powerful methods because it makes them feel more like experts?