
Introduction (to the Preview)


“Sometimes shifting your perspective is much better than being smart.”
Astro Teller, TED Talk


This is a highly condensed and less formal work-in-progress preview of my book in progress, An Introduction to a Foundation for Timeliness and Predictability in General Real-Time Systems [Jensen 201x].

The purpose of this preview is to provide early access to some abbreviated and simplified content of the book, with the hope of sooner fostering new research on its topics.

This Introduction first briefly summarizes the keystone ideas that the book and its preview discuss, and why I am writing about them. Then it describes the topic of each chapter. It is prerequisite reading for the rest of the preview.

The primary objective of the book and this preview is to introduce a scholarly yet practical foundation for timeliness and predictability of timeliness. Those are the core properties of all real-time systems. Here they are the primary properties that distinguish the general case of dynamic real-time systems from the conventional subset of static ones, and thus are the basis for this foundation.

Such a foundation is important (and this one has been proven to be invaluable) for generalizing conventional real-time system models and practices from their historical focus on the narrow special case of predominately static periodic-based (informally termed “hard”) real-time systems.

That generalization encompasses all real-time systems—defined in this Introduction (temporarily informally) to be systems for which timeliness and predictability of timeliness are part of the logic of the system.

Although the real-time computing field erroneously dismisses non-hard systems as “soft” in various ad hoc and ambiguous ways, this foundation reveals non-hard (soft, as will be precisely defined herein) to be not just the general, but also the most common, case of real-time systems.

Informally, timeliness of a “happening”—an event (e.g., task completion) or information (e.g., data change)—is a measure of the extent to which its occurrence time can be related to the happening’s usefulness in specific circumstances.

Timeliness is orthogonal to relative importance. For example: an action can have a short deadline but be relatively unimportant, because missing that deadline can be compensated for (e.g., missed input data can be extrapolated, missed outputs can be idempotent); conversely, an action can have a long deadline but be relatively important, because missing the deadline is a critical mission failure (e.g., mission management phases for defense against hostile missiles can be computationally intensive for on the order of 10 to 100 minutes with minimal slack time). The other two cases (short deadline and important, long deadline and unimportant) are often erroneously assumed to be the normal ones.

This also reveals that the practice by some in the real-time computing community of designating actions’ (e.g., tasks’) criticality by their relative worst-case execution times or their deadlines is a special case at best, and misleading in general. The appropriate metric for mixed-criticality real-time systems is most generally application-specific.

In the conventional real-time computing system special case, timeliness metrics are ordinarily binary: whether a task completion deadline is met; whether the latency of an interrupt response or system call response is less than a least upper bound.

However, in principle (cf. scheduling theory) and real world practice, the logic of a real-time system includes relationships between: happenings’ lateness (earliness and tardiness) with respect to a deadline or least upper bound; and the consequent usefulness of the happening when it is manifest.

The book’s foundation furthermore provides for the frequent cases of even richer expressive relationships evident in an application—e.g., ones in which usefulness is not a linear function of lateness, and completion time constraints that are not deadlines or least upper bounds—using Jensen’s paradigm of time/utility functions and utility accrual optimality criteria. That paradigm’s major challenge has been predictability of non-deterministic timeliness, which this foundation addresses.
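As a concrete illustration of that paradigm, here is a minimal Python sketch; the function names, shapes, and numbers are hypothetical, and summed utility is only one possible utility accrual criterion, not a definition from the book. A classical deadline appears as the special case of a downward step time/utility function.

```python
def step_tuf(deadline, utility=1.0):
    """A classical deadline expressed as a time/utility function:
    full utility up to the deadline, zero utility afterward."""
    return lambda t: utility if t <= deadline else 0.0

def linear_decay_tuf(t_start, t_zero, utility=1.0):
    """A richer constraint: utility decays linearly from t_start to t_zero."""
    def tuf(t):
        if t <= t_start:
            return utility
        if t >= t_zero:
            return 0.0
        return utility * (t_zero - t) / (t_zero - t_start)
    return tuf

# Utility accrual under a summed-utility criterion: a schedule's value is
# the sum of each action's utility at its completion time.
tufs = [step_tuf(10.0), linear_decay_tuf(5.0, 20.0)]
completions = [8.0, 12.0]
print(sum(f(t) for f, t in zip(tufs, completions)))  # 1.0 + 8/15 ≈ 1.53
```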

Continuing informally, something (notably here, timeliness) is predictable in the manner and to the extent that it can be known a priori. In conventional (i.e., “hard”) real-time computing system models, predictability is static and almost exclusively binary—i.e., it is presumed that all (especially task) completion time constraints will always be met (unless there is a failure). In that field, predictability is often mistakenly thought to mean “deterministic.”

That mistake can be recognized and remedied by considering predictability to be a continuum. In that type of model, the maximum predictability end point is deterministic a priori knowledge, and the minimum predictability end point is the total absence of a priori knowledge. Everywhere between those end points lie degrees of non-determinism. Typically, happenings in dynamic real-time systems have predictability between the two end points of that continuum, according to some formal model- and context-specific metric.

A simplified example, using a statistical continuum for predictability, is: the maximum predictability (deterministic) end point is represented by the constant distribution; the minimum predictability (maximum non-determinism) end point is represented by the uniform distribution; and the metric is in terms of some model instantiation properties of probability distributions, such as shape, dispersion, etc.
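A hypothetical instantiation of such a metric is sketched below, assuming a discrete completion-time distribution and using normalized Shannon entropy as the dispersion property (one choice among many); it yields 1.0 at the constant end point and 0.0 at the uniform end point.

```python
import numpy as np

def predictability(pmf):
    """Map a discrete distribution (e.g., of completion times) to [0, 1]:
    1.0 at the constant (deterministic) end point, 0.0 at the uniform
    (maximally non-deterministic) end point."""
    p = pmf[pmf > 0]                      # ignore zero-probability outcomes
    entropy = -(p * np.log2(p)).sum()     # Shannon entropy in bits
    return 1.0 - entropy / np.log2(len(pmf))

print(predictability(np.array([0.0, 1.0, 0.0, 0.0])))      # constant -> 1.0
print(predictability(np.array([0.25, 0.25, 0.25, 0.25])))  # uniform  -> 0.0
```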

Predictability is a vastly deeper and more complex topic than that intuitive continuum model.

Predictability consists of reasoning about kinds and degrees of uncertainty. That usually first brings to mind probability theory, but there are multiple different interpretations and theories of “probability.”

The familiar frequentist interpretation may seem applicable to predicting timeliness (e.g., task completion times) in static periodic-based real-time systems, but even there it can be misleading—and it is not at all applicable to dynamic real-time systems.

The term “dynamic” in the context of the book encompasses two distinct departures from almost all conventional static real-time systems. Both departures are common for timeliness and predictability of timeliness in the real world.

First, the system models’ defaults include that parameters such as task arrivals, task execution durations, task time constraints, task resource conflicts, etc. are not necessarily static. Instead, parameters may be aleatoric—i.e., statistically uncertain ones which may differ (e.g., stochastically) during system execution and each time the system is run. Even in nominally static real-time computing systems, modern processor and computer architectures (e.g., caches, pipelines, speculative execution) force task execution times to be a dynamic parameter. Fitting that into the conventional static orthodoxy as a “worst-case execution time” is very challenging.
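A minimal Monte Carlo sketch of an aleatoric parameter follows; the nominal duration and the exponentially distributed stall term are assumptions purely for illustration, not a model from the book.

```python
import random

random.seed(1)

def run_task():
    """One hypothetical execution: a nominal 10 ms duration plus a
    stochastic stall term (e.g., cache and contention effects)."""
    return 10.0 + random.expovariate(1.0)

# The "execution time" is a distribution, not a constant:
samples = sorted(run_task() for _ in range(10_000))
print(samples[0], samples[len(samples) // 2], samples[-1])  # min, median, max
```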

Second, there may be inherent epistemic uncertainties (i.e., ignorance) about aspects of the system and its environment. They are not objectively statistical in nature, in the sense that statistical characterizations do not capture their values and meaning. Reasoning about these uncertainties requires employing an appropriate, often even application-specific, theory of uncertainty from the various extant ones.

The Bayesian theory of probability and inference for reasoning about uncertainty is popular in many fields. Bayesian probability can be assigned to any statement whatsoever, even when no random process is involved, to represent its subjective plausibility or the degree to which the statement is supported by the available evidence. However, it has major limitations—most notably that it cannot model ignorance, uncertain data, conflicting data, and scarce data. Those notable limitations make it unsuitable for predictability of timeliness in many dynamic real-time systems (and for various other applications). They are overcome by several more expressive theories to which the book broadly applies the term belief function theories.
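For later contrast with belief function theories, here is the elementary Bayesian update for a binary hypothesis (the hypothesis and numbers are hypothetical). Note that even total ignorance must be encoded as a single precise prior, which is one way the ignorance limitation manifests.

```python
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Posterior P(H | obs) by Bayes' rule for a binary hypothesis H."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1.0 - prior)
    return p_obs_given_h * prior / evidence

# H: "the task will meet its deadline"; obs: a favorable progress report.
# Ignorance about H is forced into the arbitrary precise prior 0.5.
print(bayes_update(prior=0.5, p_obs_given_h=0.9, p_obs_given_not_h=0.2))  # ~0.818
```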

Very simply: Belief function theories for reasoning about the uncertainty of a hypothesis combine evidence—usually from different sources—to arrive at an updated degree of belief represented by a mathematical object called a belief function. Different rules for combining evidence and updating beliefs have been devised to better accommodate varied uncertainty characterizations (e.g., conflicting and contradictory evidence, number of evidence sources). Belief function theories began with Dempster-Shafer theory. Evolved theories currently include (in roughly increasing order of expressiveness and generality) the Theory of Hints, the Transferable Belief Model, and Dezert-Smarandache theory. Each of those theories also has its own limitations.
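As a minimal sketch of the original (Dempster-Shafer) theory, the following implements Dempster’s rule of combination for two mass functions over a small frame of discernment; the evidence sources and numbers are hypothetical. Unlike a Bayesian prior, mass left on the whole frame directly expresses ignorance.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination. Masses are dicts mapping frozenset
    hypotheses to mass; mass on conflicting (disjoint) pairs is discarded
    and the rest renormalized (assumes evidence is not totally conflicting)."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Two evidence sources report on whether a task completes early, on time, or late.
theta = frozenset({"early", "ontime", "late"})
m1 = {frozenset({"ontime"}): 0.6, theta: 0.4}          # mass on theta = ignorance
m2 = {frozenset({"ontime", "late"}): 0.7, theta: 0.3}
print(dempster_combine(m1, m2))
```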

Belief function theories can be inherently expensive in computation time and memory space, especially in comparison with some of the simplest reasoning based on classical probability theory, and even with Bayesian inference. Depending on the uncertainties and computational resources used, belief function reasoning time frames may be much longer than those in predominately static systems (recall that time frame magnitudes are not part of the definition of real-time). For example, very complex belief function reasoning for uncertainty-laden applications (e.g., military defense) takes place in sub-second time frames using supercomputer-class machines—in some cases a commercial ground-based product, and in other cases custom-made mobile (e.g., surveillance aircraft) products.

Fortunately, the attractiveness of belief function theories for more comprehensively reasoning about uncertainty has engendered an active field of research and development on efficient algorithms, data structures, and even hardware (GPU, FPGA) implementations for reducing those costs. Even so, the theories are usually best instantiated in application-specific ways for maximal cost-effectiveness.

Any specific application may have kinds and degrees of uncertainties (e.g., of dynamic system model parameters) that outstrip all resource-feasible formal reasoning approaches. Ultimately there will always be cases for which the only viable approach is to restructure an application so that uncertainty is manageable—just as is true for simple static real-time systems (e.g., worst-case execution times may be infeasible to ascertain or tolerate).

Belief function theories are increasingly used in a broad range of different fields, despite initial misunderstandings and mistakes in both research and practice.

The foundation in the book is novel in accommodating Bayesian and belief function theories, in addition to classical probability theory, for predicting timeliness (i.e., schedules’ outcomes) of dynamic real-time systems. In particular, it provides for innovative approaches to the historical challenge of predicting the individual and collective non-deterministic completion times and consequent utilities for time/utility functions.

Generically, scheduling employs system model parameters which may be numerical constants (in static systems) or numeric variables (whether random or not). The scheduling algorithm produces a schedule (before or during system operation) according to one or more optimality criteria, while respecting constraints such as dependencies (e.g., task precedence, resource access conflict resolution rules). The optimality criteria for real-time scheduling explicitly or implicitly express the desired timeliness and predictability of timeliness of the scheduled entities (e.g., tasks), individually and collectively. In general, sequencing (scheduling, dispatching) is an NP-hard problem, so application-specific heuristics which are computationally feasible and acceptably suboptimal are normally used.
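As an illustration of such a heuristic, here is a sketch of greedy utility-density ordering, a simplified relative of published utility accrual heuristics; the task set and numbers are hypothetical. In this example the greedy order forgoes task B’s utility (running B first would accrue 13.0), which is exactly the “acceptably suboptimal” trade-off.

```python
def greedy_ua_schedule(tasks):
    """Greedy heuristic: order tasks by expected utility density (maximum
    utility per unit of expected execution time), then accrue each task's
    utility at its sequential completion time."""
    order = sorted(tasks, key=lambda t: t["utility"] / t["exec"], reverse=True)
    now, accrued = 0.0, 0.0
    for task in order:
        now += task["exec"]          # completion time under sequential execution
        accrued += task["tuf"](now)  # utility yielded at that completion time
    return [t["name"] for t in order], accrued

tasks = [
    {"name": "A", "exec": 2.0, "utility": 10.0,
     "tuf": lambda t: 10.0 if t <= 4.0 else 0.0},
    {"name": "B", "exec": 1.0, "utility": 3.0,
     "tuf": lambda t: 3.0 if t <= 2.0 else 0.0},
]
print(greedy_ua_schedule(tasks))  # (['A', 'B'], 10.0) -- B's deadline is missed
```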

Real-time scheduling with belief functions involves algorithms which formulate mathematical beliefs about timeliness and predictability of timeliness by combining beliefs about uncertain system model parameters with any available certain parameters. The beliefs about those uncertain scheduling parameters are based on combining evidence from one or more pertinent (application, system) sources. Beliefs are combined and updated, as the evidence changes, according to a (usually application-specific) belief updating rule.

When belief functions are used with time/utility functions, the utility functions may be belief functions, and the utility accrual procedure is normally a belief function.
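One way to make that concrete is the standard belief function construction of lower and upper expected utility, sketched below for a deadline time/utility function and a hypothetical mass function over sets of possible completion times.

```python
def expected_utility_bounds(masses, tuf):
    """Lower and upper expected utility of an uncertain completion time,
    where masses is a belief (mass) function over sets of possible times."""
    lower = sum(m * min(tuf(t) for t in a) for a, m in masses.items())
    upper = sum(m * max(tuf(t) for t in a) for a, m in masses.items())
    return lower, upper

tuf = lambda t: 1.0 if t <= 10.0 else 0.0   # a deadline at t = 10
masses = {
    frozenset({8.0}): 0.5,                  # evidence: completes at t = 8
    frozenset({9.0, 12.0}): 0.3,            # ambiguous evidence
    frozenset({8.0, 9.0, 12.0}): 0.2,       # residual ignorance
}
print(expected_utility_bounds(masses, tuf)) # (0.5, 1.0)
```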

The book includes dynamic real-time application case studies with examples of employing and comparing Bayesian and belief function theories, in addition to classical probability theory, in each application’s context.

For case studies (and in practice), the choice of application contexts should balance their need (e.g., their societal, economic, or other importance, and the difficulty of predicting timeliness due to uncertainties) against their time frames’ adequacy for the complexity of reasoning about schedules’ timeliness. Available application subject matter expertise is also essential.

The book’s most extensive example case study’s context is a military combat system one, namely battle management (e.g., missile defense). Such systems have a variety of real-time behaviors with dynamic kinds and degrees of uncertainty, plus low-level subsystem static behaviors. Despite that, their missions can be as human safety-critical as imaginable, having constituent real-time functionalities whose individual criticalities vary innately and circumstantially in terms of mission operational effectiveness. The time frames for dynamic real-time scheduling include ones on the order of seconds and minutes. Subject matter expertise is gained from the book author’s and other (collaborator and independent) persons’ work in that application context—including from the successful use of the book’s foundation in that context.


To make this preview readily accessible to an intelligent non-specialist audience, three things reduce its breadth and depth.

First, the preview length is drastically limited from the book length—e.g., from an estimated 350 to approximately 50 pages. That entails judicious selection, and concise re-writing, of only the most fundamental and easily understandable topics.

Second, Chapter 1 of the preview is relatively comprehensive and self-contained, given that it is just the first part of a preview (which may lead readers to a sense of déjà vu in subsequent chapters). The intent is that for some readers, Chapter 1 alone might suffice, at least initially.

Third, the prerequisite mathematical and other knowledge is reduced by using language that is a compromise between informality and accuracy.

The book itself is somewhat more demanding in parts than this preview. It requires a modest amount of cognitive maturity for abilities such as: learning unfamiliar concepts; reasoning about abstractions; problem solving; and making both conceptual and engineering trade-offs between algorithms and efficient implementations of them.

Some of the book’s references provide remedial coverage of formal topics that readers may need, such as for optimization (e.g., non-deterministic scheduling algorithms) and subjective probability theories. Other references provide optional additional background outside the main scope of the book. In this preview (but not the book), no-cost on-line references are chosen wherever suitable.

The readers of the preview and the book are expected to have experience with some non-trivial systems which are considered to be real-time ones according to some plausible definition. Examples include: real-time applications such as industrial automation control systems, robotics, military combat platform management, etc.; and real-time operating systems, including substantive ones such as Linux kernel based or UNIX/POSIX based (not just minimal embedded ones), at the level of their internals.

Many years of experience teaching and applying the material in this book have shown that: in some cases, the less that people know about traditional static real-time computing systems, the more natural and easy they find much of this material; while in other cases, the more that people know about traditional static real-time computing systems, the more foreign and difficult it is for them to assimilate this material. Chapter 1 discusses that dichotomy, and the sociology and psychology behind it—but that is not in this preview.

The book’s Chapters 1, 2, and 3 focus on dynamically real-time at the level of individual actions, such as tasks or threads executing in computing systems, and physical behaviors performed by exogenous non-computational (e.g., mechanical) devices.

Chapter 1 is about the general necessity for the concept of real-time actions and their properties, especially dynamic ones, to be based on productive thinking, first principles, mental models, and conceptual frameworks. As noted above, Chapter 1 also deliberately seeks to serve as an extended abstract for the first three chapters. (Attempted inclusion of Chapter 4 in Chapter 1’s abstract role has thus far made Chapter 1 too long and difficult, but an eventual revision of Chapter 1 may yet accomplish that inclusion.)

Chapter 2 discusses action completion time constraints, such as deadlines and time/utility functions, and expressing those in ubiquitous priority-based execution environments (e.g., operating systems).

Most of the concepts and techniques in Chapters 3 and 4 draw on a vast body of deep mathematics and philosophy of probability theory, and on a great deal of successful use in a variety of applications outside of real-time systems. The point of these chapters is to briefly introduce readers to using those concepts and techniques in dynamically real-time systems. Although this preview omits the book’s mathematics, the innate complexity, the divergent viewpoints among probability theory and uncertainty researchers, and the unsolved problems make these chapters somewhat difficult.

Chapter 3 focuses primarily on the very complex topic of predictability: dynamically resolving contention for shared resources (e.g., scheduling) and predicting the outcomes, particularly timeliness, in the presence of uncertainties.

The chapter provides an overview of some important probability interpretations for that purpose, and some predictability theories and metrics—especially for acceptably satisfactory predictions of acceptably satisfactory action and schedule completion times. It begins with the familiar viewpoint of traditional probability theory. Then it re-focuses on alternative predictability theories unfamiliar to the book’s presumed readership. These are needed for reasoning about richer (e.g., epistemic) uncertainties. They include mathematical theories of evidence (i.e., belief theories), which use application-specific rules for combining imprecise and potentially contradictory multi-source beliefs about the system and its environment to form new beliefs.

As a side effect, Chapter 3 provides the correct definitions of the essential terms predictable and deterministic that are ubiquitously misunderstood in the real-time computing practitioner and even research communities.

Chapter 4 addresses dynamically real-time systems, including computing systems and operating systems, that are based on dynamically real-time actions under uncertainties.

A system is characterized in large part by its system model. That model defines properties such as those of its actions, and of how the system’s resource management agents (such as a scheduler) handle actions according to specified optimality criteria, including (but not limited to) actions’ timeliness and predictability of timeliness.

The essence of system resource management (especially by operating systems) is acceptably satisfactory resolution of concurrent contending action requests to access shared hardware and software resources (e.g., scheduling actions’ use of processor cycles and synchronizers) according to application-specific optimality criteria.

In dynamically real-time systems, that resolution must be acceptably satisfactory despite the number, kinds, and degrees of epistemic uncertainties. Uncertainties must be accommodated by the system resource management agent(s) (e.g., scheduler) obtaining and refining credences of beliefs about the current state and time-evolution of the system, using an appropriate mathematical theory. On that basis, resource management decisions (e.g., action schedules) are made.

Because almost no operational environments (e.g., operating systems) have first-class abstractions for action time constraints (e.g., deadlines) and timeliness predictability, the scheduled actions must then be mapped onto the operational environment’s priority mechanism.

Embedded systems also are briefly described in Chapter 4 because of their close relationship to real-time systems: most real-time computing systems are embedded; and many embedded systems are real-time to some degree.

Chapter 5 summarizes research on dynamically real-time computing systems by my Ph.D. students, co-advisor colleagues, and me, and identifies some of the future work needed to expand on that. Those research results in their published form may be analytically challenging for some readers, but the chapter summarizes their important contributions.

Chapter 6 illustrates dynamically real-time computing systems and applications with examples inspired by my professional work, using material (re-)introduced in this book. It also shows some examples of how the paradigm is being employed outside the field of real-time systems (e.g., high performance computing, clouds, web server farms, transaction systems, etc.).

Chapter 7 has a brief survey of some important open problems to be solved for dynamically real-time systems. It also notes relevant work by researchers inside and outside the field of real-time systems.

Chapter 8 describes the need for software tools appropriate for thinking about, demonstrating, and designing dynamically real-time computing systems. It describes one open source program for doing so using material from this book [ ]. It mentions generic tools, such as MATLAB [ ], R [ ], Stan [ ], JASP [ ], etc., which can be very useful.

Chapter 9 is an annotated list of selected references, primarily by members of my research teams, plus relevant papers and books by other authors.

A brief autobiography relevant to the topic of this web site is included.

Lastly, there is a Blog, both for my asynchronous thoughts on real-time systems and for readers to interact with me.

Next: Real-Time