diff --git a/paper/figures/fig_simple_example.tex b/paper/figures/fig_simple_example.tex index e04d637a2c572a6d96b378f164c0db1d556d7de8..6bad29a4261ccea34c85ad71d93a09a233e19642 100644 --- a/paper/figures/fig_simple_example.tex +++ b/paper/figures/fig_simple_example.tex @@ -23,7 +23,7 @@ xlabel near ticks, \nextgroupplot[ legend columns=4, - legend style={draw=none,fill=none,at={(2.6,1.3)}}, + legend style={draw=none,at={(2.6,1.3)}}, xlabel = {$T_d$ -- Kill}, ] \addplot[thick, purple, mark=pentagon*] table[x index=0, y index=1, col sep=comma] {\dataaa}; diff --git a/paper/sec/analysis.tex b/paper/sec/analysis.tex index 6e130681da2f1c8313a2c32720d38f1376496842..3e85df5041de94c79be9aae0845ed920a8192927 100755 --- a/paper/sec/analysis.tex +++ b/paper/sec/analysis.tex @@ -49,8 +49,8 @@ process $1-\beta$. \textcolor{red}{Scenario theory has for example found use in \subsection{Scenario Theory} \label{sec:analysis:scenario} -The Scenario Theory has been developed in the field of robust -control~\cite{calafiore2006scenario} to provide robustness guarantees +The scenario theory has been developed in the field of robust +control to provide robustness guarantees for convex optimization problems in presence of probabilistic uncertainty. In these problems, @@ -58,7 +58,7 @@ In these problems, accounting for all the possible uncertainty realization might be achieved analytically, but is computationally too heavy or results in -pessimistic bounds. The Scenario Theory proposes an empirical method +pessimistic bounds. The scenario theory proposes an empirical method in which samples are drawn from the possible realizations of uncertainty, \textcolor{red}{finding a lower bound on the number of samples}. It provides statistical @@ -70,14 +70,14 @@ One of the advantages of this approach is that there is no need to enumerate the uncertainty sources, the only requirement being the possibility to draw representative samples. 
This eliminates the need to make assumptions on the correlation between the probability of -deadline miss in subsequent jobs. If interference is happening +deadline misses in subsequent jobs. If interference is happening between the jobs, this interference empirically appears when the system behavior is sampled. While there is no requirement on subsequent jobs interfering with one another, there is a requirement that different sequences are independent (i.e., each sequence represents an execution of the entire taskset of a given length, in the same or possibly different conditions). Taking the worst observed -case in a set of experiments, the Scenario Theory allows us to +case in a set of experiments, the scenario theory allows us to estimate the probability that something worse than what is observed can happen during the execution of the system. @@ -88,7 +88,8 @@ Denoting with $\mu_{\text{tot}}(\omega)$ the total number of job skips and deadline misses that the control task experienced in $\omega$, and with $\mu_{\text{seq}}(\omega)$ the maximum number of consecutive deadline misses or -skipped jobs in $\omega$, we use as a cost function +skipped jobs in $\omega$, we choose as a cost function the following +expression: \begin{equation}\label{eq:Jseq} J_{seq}(\omega) = \mu_{\text{tot}}(\omega)\,\mu_{\text{seq}}(\omega) \end{equation} @@ -110,7 +111,7 @@ deadline misses would be other viable choices. \subsection{Formal Guarantees} \label{sec:analysis:guarantees} -The Scenario Theory allows us to compute the number $n_{\text{sim}}$ +The scenario theory allows us to compute the number $n_{\text{sim}}$ of simulations that we need to conduct to reach the required robustness $\varepsilon$ and confidence $1-\beta$. The parameter $\varepsilon$ is a bound on the probability of the obtained result @@ -141,7 +142,7 @@ controller with high confidence.
\label{sec:analysis:application} Similarly to any other empirical approach, the validity of the -Scenario Theory depends on the representativeness of the sampling +scenario theory depends on the representativeness of the sampling set. In our case, for example the validity of our results depends on the significance of the probabilistic execution time distributions for all the tasks. @@ -156,5 +157,5 @@ probabilistic nature of the execution times of the tasks. Some samplings could in fact make the utilization of instances of the taskset greater than one. For this reason} simulations that include several hyperperiods should be performed. On top of that significancy with -respect the controlled of the physical system is required \textcolor{red}{(since the existance of the hyperperiod is not always guaranteed)}, hence +respect to the controlled physical system is required \textcolor{red}{(since the existence of the hyperperiod is not always guaranteed)}, hence the length of the simulated sequences should cover its dynamics. diff --git a/paper/sec/behavior.tex b/paper/sec/behavior.tex index 75bf6fb5571f36772b8c36cf8ee57a98f8fd20dd..38f1e883dce59a3134ce684dba5b0a0dec257faa 100644 --- a/paper/sec/behavior.tex +++ b/paper/sec/behavior.tex @@ -27,7 +27,7 @@ theory, which is the periodicity of the output pattern~\cite{pazzaglia2018beyond}. In this work, we exploit the knowledge of deadline misses directly in the control design step. For this purpose, -we need to characterize the effect of deadline misses on the control +we need to characterize how deadline misses affect the control performance. We fully describe the effect of deadline misses of LET-based controllers with two parameters, named respectively \emph{delay} and \emph{hold} interval. @@ -56,7 +56,7 @@ $\mathbf{x}(t_k)$ and the control output(s) active in that time span.
Given a control output computed by $J_{d,k}$ and available at the actuator for the first time at $t_k + \sigma_k$, the \emph{hold interval} $h_k$ is the time interval between $t_k + \sigma_k$ and - the first instant where a new control output is written. + the first instant where a new control output is made available. \end{definition} In other words, the hold interval $h_k$ indicates the lifetime of the @@ -239,7 +239,7 @@ the job has an overrun. The hold value $h_{k+1}$ is equal to the delay of the next completed job $J_{d,k+3}$, i.e., $h_{k+1} = \sigma_{k+3} = T_d$. The values that $\lambda_{k,\text{Skip-Next}}$ may assume -are upperbounded by the maximum delay $\bar{\sigma}$. +are upperbounded by $\lceil R_d^W / T_d \rceil - 1$. \subsubsection{Hold Interval with Queue(1) Strategy} @@ -264,4 +264,4 @@ and the $k+1$-th control jobs complete before during the $k+1$-th period---then $\sigma_{k+1} - T_d = 0$, and the control signal produced by $J_{d,k}$ is never actuated. Finally, values of $\lambda_{k,\text{Queue(1)}}$ are upperbounded by -$\bar{\sigma} - T_d$. +$\lceil (R_d^W - T_d) / T_d \rceil - 1$. diff --git a/paper/sec/design.tex b/paper/sec/design.tex index 45a42b381d80d9c6dc5f6b7d90a938a0ac0b3c8a..e88b22123bce417c85015509a2cc7740c1fc3a5c 100755 --- a/paper/sec/design.tex +++ b/paper/sec/design.tex @@ -185,8 +185,8 @@ ranging between $1$ and $2$, we compare the resulting performance under the Kill, Skip-Next, and Queue(1) strategies in Figure~\ref{fig:onetask_results}. Since $J_\text{ctl}$ is defined as a cost, lower values in the graph mean better performance.
For each -configuration, a standard controller (assuming no missed deadlines), -a robust controller, and a clairvoyant controller is designed, and +configuration, a standard controller (which assumes no missed deadlines), +a robust controller, and a clairvoyant controller are designed, and the performance of each controller, measured in terms of the cost function~\eqref{eq:cost}, is evaluated using JitterTime~\cite{cervin2019jittertime} in a simulation of 100,000 @@ -196,7 +196,7 @@ clairvoyant control, as expected. This means that designing control strategies that take into account deadline misses is beneficial in all cases. The \our\ design does not achieve the optimal cost that the clairvoyant design is able to achieve, but systematically beats -classical control design, even when there are no deadline misses, due to its delay and hold compensation. +classical control design due to its delay and hold compensation. As the period is decreased from 2 to lower values, the Kill and Queue(1) strategies initially behave similarly, with decreasing cost. diff --git a/paper/sec/intro.tex b/paper/sec/intro.tex index 6c7952dd164b01dc2aade5f4ee4b483a906dedca..2b7007c75abe7af053ce7b0e85f3028f8a530273 100755 --- a/paper/sec/intro.tex +++ b/paper/sec/intro.tex @@ -18,8 +18,8 @@ structure. In general, adding a new control task to a given taskset implies combining requirements that come from both control theory and real-time implementation. These requirements are different and often conflicting. As an example, selecting a high execution rate for the -controller improves control performance, but at the same time limits -the guarantees on the completion of the control task code and forces +controller improves the control performance, but at the same time limits +the guarantees on the timely completion of the control task code and forces the engineers to take into account overruns~\cite{cervin2005analysis, pazzaglia2018beyond}.
Moreover, minimizing the monetary cost of the final system is an ever-present priority and over-provisioning @@ -27,7 +27,7 @@ resources is usually not a viable solution. Timing constraints in real-time systems are modeled as \emph{deadlines}, i.e., a threshold that the execution time of each -task instance (\emph{job}) must respect. We refer to a job +task instance (\emph{job}) should respect. We refer to a job that successfully completes its execution before the corresponding deadline as a \emph{deadline hit} event. If the job could not terminate its execution before that deadline instant, we say that it @@ -72,7 +72,7 @@ overcome this limitation, we obtain an estimate of deadline miss occurrence simulating the schedule execution, drawing execution times (for all the tasks) from the corresponding probability distributions. A robust control tool, the -\emph{scenario-theory}~\cite{calafiore2006scenario}, provides the +\emph{scenario theory}~\cite{calafiore2006scenario}, provides the means to select the worst-case sequence of misses and hits from the simulations. Leveraging the scenario theory, our approach allows us to provide probabilistic guarantees for worst-case conditions both in diff --git a/paper/sec/method.tex b/paper/sec/method.tex index efb970bfb0a51040d0a82e9ee07ba0c8194446bc..11e753758b46e7e8b85cb360e895cd38eb538158 100755 --- a/paper/sec/method.tex +++ b/paper/sec/method.tex @@ -4,8 +4,8 @@ The aim of \our{} is to provide the first control synthesis method that is \emph{robust} both with respect to deadline misses and with respect to the strategy used to handle them. Our control design -leverages knowledge of the probability of occurrence of different -sequences of deadline hits and misses and produces a fixed controller +leverages knowledge of the probability that different +sequences of deadline hits and misses may occur, and produces a fixed controller that is (on average) optimal with respect to a defined cost function. 
We obtain such knowledge by formulating a chance constrained optimization problem in a probabilistic framework, and obtaining @@ -53,7 +53,7 @@ behavior from simulations of a certain number of control jobs. produced sequences to select the worst-case sequence for the controller design. \item $\xi$: The strategy used to handle a deadline miss. We consider - three different strategies for how to handle a deadline miss: + three different strategies: killing the job that missed the deadline, letting it continue and skipping the next job, or letting it continue and enqueuing the next job (up to a maximum of one enqueued job at any point in time). diff --git a/paper/sec/model.tex b/paper/sec/model.tex index b579da36a7d1832f9c0171f222559d694032d430..78fcd18c560da6d4d0bf3513a88de31ac64aba7e 100644 --- a/paper/sec/model.tex +++ b/paper/sec/model.tex @@ -72,7 +72,7 @@ task experiences its WCET. Similarly, the Best Case Response Time considering that every job executes with its BCET. Finally, in this work all tasks $\tau_i$ in $\Gamma'$ are \emph{schedulable}, i.e. -$R_i^W < D_i$ for each $\tau_i$. However, this +$R_i^W \leq D_i$ for each $\tau_i$. However, this hypothesis will not be required for $\tau_d$. We will only assume that at least one job of $\tau_d$ respects its deadline, i.e. $R_d^B \leq D_d$. @@ -144,8 +144,9 @@ as shown in Figure~\ref{fig:pandc}. \pp{The behavior of these devices can be mod % The plant state is sampled every $T_d$ time units, implying $\mathbf{x}(t_k) = \mathbf{x}(kT_d)$. \pp{The control job $J_{d,k}$ is released at the same instant, i.e. $a_{d,k} = kT_d$, and the sensor data $\mathbf{x}(t_k)$ is immediately available to it.} Based on the state measurement, the controller computes the feedback control action $\mathbf{u}(t_{k})$. + As a hypothesis, our control task $\tau_d$ -executes under the Logical Execution Time (LET) paradigm. +executes under the Logical Execution Time paradigm.
Indeed, the job $J_{d,k}$ %released at time $a_{d,k}$ % reads the measurement of the plant state available at time @@ -185,7 +186,7 @@ In this paper, we work under the assumption that $\tau_d$ is the task with the lowest priority. If other tasks with priority lower than $\tau_d$ do exist, the design proposed hereafter is still valid in principle, since those tasks cannot interfere with $\tau_d$. -However, if this is the case, the choice on the values of $T_d$ +However, if this is the case, the range of possible values of $T_d$ should be tied with schedulability guarantees for the lower priority tasks. Due to space constraints, we reserve to analyze @@ -367,9 +368,10 @@ $\nu$ is a job that successfully completes its execution and whose generated output is not overwritten before the next deadline instant. \end{definition} -For each time interval $[0,t)$, we show that is possible to extract the sequence -of $v$ valid jobs, defined as $S = \{\nu_1,\nu_2,...,\nu_{v}\}$, where -the index does not count the passing of time, and the relation $v +For each time interval $[0,t)$, we show that it is possible to extract the +ordered sequence +of $v$ valid jobs, defined as $S = \{\nu_1,\nu_2,...,\nu_{v}\}$ (where +the index does not count the passing of time), and the relation $v \leq \lceil t/T_d \rceil $ trivially holds. The sequence of valid jobs depends on the strategy used to handle deadline misses, and will be described in Section~\ref{sec:behavior}. Our control design should