
Predictability Theory and Models

A great many formal predictability theories and models are in widespread use in research and practice, across an almost countless variety of contexts [ ].

The few models for predictability in conventional static real-time computing systems (such as rate monotonic analysis) are almost all limited to the special case of whether all deadlines will be met, given actual or presumed a priori knowledge of action properties.

The predictability model that most commonly comes to mind, in a great many different contexts, is based on probability: specifically, frequentist probability theory. Common intuition suggests that if a fair coin is tossed many times in the absence of influences, then roughly half of the time it will turn up heads, and the other half it will turn up tails. Furthermore, the more often the coin is tossed, the more likely it is that the ratio of the number of heads to the number of tails will approach unity. Modern frequentist (more generally, measure-theoretic) probability theory provides a formal version of this intuitive idea, known as the law of large numbers [ ].
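For readers who want a concrete picture, the following small Python sketch (not part of the original text) simulates repeated fair-coin tosses; as the number of tosses grows, the observed heads-to-tails ratio tends toward unity, which is the intuition the law of large numbers formalizes.

```python
# A minimal sketch, not from the book: simulating fair-coin tosses to
# illustrate the law of large numbers.  As the number of tosses grows,
# the heads-to-tails ratio approaches unity.
import random

random.seed(1)  # fixed seed so the demonstration is repeatable

for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    tails = n - heads
    print(f"{n:>9} tosses: heads/tails = {heads / tails:.4f}")
```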

Obviously there are many predictions that cannot be made with that model. Predictions about individual instances, such as forecasting tomorrow’s weather, are one example. Unlike coin flips under the frequentist model, tomorrow’s weather, and dynamic real-time actions and systems, do not exhibit a repeatable sequence of events (action completion times and satisfactions) each having a probability between zero and one, with those probabilities summing to one over all events. The limited applicability of the frequentist theory led to the development of other, more widely applicable, theories of probability [ ].

A predictability model better matched to real-time, and especially dynamic real-time, actions is based on the Bayesian interpretation of probability. In contrast to interpreting probability as the frequency of some phenomenon, a Bayesian probability is a subjective quantity assigned to represent a state of knowledge, or a state of belief. In the Bayesian theory, a probability is assigned to a hypothesis (prediction) based on that state. To evaluate the probability of a hypothesis, the theory specifies some prior probability using relevant expertise or previous data, which also incorporates uncertainty resulting from lack of information. That prior probability is then updated to a posterior probability in the light of new, relevant data (evidence) [ ].
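As a hedged illustration (the scenario and numbers are hypothetical, not from the book), the sketch below performs a standard conjugate Bayesian update: a prior belief about an action's on-time completion probability is revised into a posterior belief after observing some completions.

```python
# A hypothetical sketch of a Bayesian update for the probability that a
# dynamic real-time action completes by its time constraint.  The scenario
# and numbers are illustrative, not from the book.
#
# Prior belief: Beta(alpha, beta) over the unknown on-time completion
# probability p.  Evidence: k on-time completions observed in n releases.
# Posterior: Beta(alpha + k, beta + n - k)  (standard conjugate update).

def beta_mean(alpha: float, beta: float) -> float:
    return alpha / (alpha + beta)

# Prior from designer expertise: roughly 80% belief in on-time completion,
# held with modest confidence.
alpha, beta = 8.0, 2.0
print(f"prior mean     P(on time) = {beta_mean(alpha, beta):.3f}")

# New evidence: 37 of 50 recent releases completed on time.
k, n = 37, 50
alpha, beta = alpha + k, beta + (n - k)
print(f"posterior mean P(on time) = {beta_mean(alpha, beta):.3f}")
```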

The frequentist-based prediction model results in the rejection or non-rejection of the original hypothesis (prediction) with a particular degree of confidence. The Bayesian theory instead yields statements that one hypothesis is more probable than another, or that the expected gain (utility, benefit, etc.) associated with one is less than the expected gain of the other.

The Bayesian theory of probability is very widely used in many contexts, since there is so often prior knowledge or belief that can be effectively exploited [ ]. Clearly, most real-time actions and systems lend themselves to realistic initial prior probabilities of satisfactory action completion times, and to providing updated data for revised probabilities. Conventional static (“hard”) real-time systems are an obvious special case, one which presumes complete prior knowledge.

An even more realistic model for making predictions about the completion times and satisfactions of dynamic real-time actions is based on the Dempster-Shafer theory of probability (DST) [ ].

Dempster–Shafer theory is a generalization of the Bayesian theory. Its belief functions base degrees of belief (or confidence, or trust) on:

  • obtaining degrees of belief for one question from subjective probabilities for other related questions;
  • Dempster’s rule for combining such degrees of belief when they are based on independent items of evidence.

The degree of belief in a hypothesis depends primarily upon the number of answers (to the related questions) containing the hypothesis, and on the subjective probability of each answer. Probability values are assigned to sets of possibilities rather than to single events; their appeal rests on the fact that they naturally encode evidence in favor of hypotheses. The degrees of belief themselves may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the questions are related.
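A minimal sketch of Dempster's rule of combination follows, using a hypothetical frame of discernment for an action's completion time; the frame, the sources, and their mass assignments are assumptions for illustration only.

```python
# A minimal sketch, under assumed inputs, of Dempster's rule of combination.
# Mass functions assign probability mass to *sets* of possibilities (subsets
# of the frame of discernment), not to single outcomes.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are totally conflicting")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical frame: will the action finish "early", "on_time", or "late"?
# Two independent evidence sources (e.g. a schedule analysis and a sensor).
m1 = {frozenset({"early", "on_time"}): 0.7,
      frozenset({"early", "on_time", "late"}): 0.3}
m2 = {frozenset({"on_time"}): 0.6,
      frozenset({"early", "on_time", "late"}): 0.4}
for s, w in dempster_combine(m1, m2).items():
    print(set(s), round(w, 3))
```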

The Dempster-Shafer theory of probability is very widely used (as is the Bayesian theory) to exploit prior beliefs, but it better accommodates multiple sources and kinds of prior information (such as for performing sensor fusion for warfare, self-driving vehicles, etc.).

The Dezert-Smarandache Theory (DSmT) of plausible and paradoxical reasoning is a natural extension of the classical Dempster-Shafer Theory, but it includes fundamental differences from the DST. DSmT allows for formally combining any types of independent sources of information represented in terms of belief functions, but it is mainly focused on the fusion of uncertain, highly conflicting, and imprecise quantitative or qualitative sources of evidence. DSmT is able to solve complex, static or dynamic fusion problems beyond the limits of the DST framework, especially when conflicts between sources become large and when the refinement of the frame of the problem under consideration becomes inaccessible because of the vague, relative, and imprecise nature of its elements [ ].
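To see the kind of high-conflict case that motivates DSmT, the short continuation below of the earlier dempster_combine sketch reproduces Zadeh's well-known example (transposed onto the same hypothetical completion-time frame): two almost totally conflicting sources force Dempster's rule to place all combined belief on a hypothesis that both sources considered nearly impossible.

```python
# Continuing the dempster_combine sketch above: Zadeh's classic high-conflict
# example.  Each source gives "late" only 1% mass, yet because the sources
# conflict almost completely, Dempster's rule assigns essentially all of the
# combined belief to "late" -- behavior DSmT's fusion rules aim to avoid.
m1 = {frozenset({"early"}): 0.99, frozenset({"late"}): 0.01}
m2 = {frozenset({"on_time"}): 0.99, frozenset({"late"}): 0.01}
print(dempster_combine(m1, m2))   # all combined belief ends up on "late" (~1.0)
```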

Both DST and DSmT are particularly well suited for predictability of dynamic real-time action completion times and satisfactions, because such systems do have multiple sources and kinds of prior information and hypotheses to utilize [ ]. Chapter 4 illustrates the use of DST and DSmT for resolving dynamically contending access to shared resources, from computations to battle management.

Other useful predictability models are omitted here for brevity, given that three well-known epistemological models well suited to dynamic real-time actions and systems have been introduced. They include, but are not limited to, Imprecise Probability theory, Possibility theory, Plausibility theory, Certainty Factors, and Quantum theory [ ].

It must be noted that some dynamic real-time actions and systems are so dynamic that they are non-stochastic. They have properties that are so

  • intermittent
  • irregular
  • interdependent
  • co-evolving
  • competitive
  • non-linear

that stochastic and other probability theories of predictability for them are either unknown or computationally intractable. Reasoning about the timeliness of such systems is typically performed using simulation models, or extensional (rule-based) and other models from fields such as machine learning, decision theory, etc. [ ]. These cases are outside the scope of this preview from Chapter 3.