diff --git a/paper/figures/fig_simple_example.tex b/paper/figures/fig_simple_example.tex
index 94d4d6747e604932938bd6205f67dfcb1870302e..f03a3181e966f6f5626c9aeabc9e2e1c533ae704 100644
--- a/paper/figures/fig_simple_example.tex
+++ b/paper/figures/fig_simple_example.tex
@@ -8,7 +8,7 @@ group style = {group size = 3 by 1,
                yticklabels at=edge left,
                xticklabels at=edge bottom,
                horizontal sep=10mm},
-height = 0.3\textwidth,
+height = 0.22\textwidth,
 width = 0.38\textwidth,
 xmin=1,
 xmax=2,
diff --git a/paper/sec/design.tex b/paper/sec/design.tex
index b05aae2e924539e048c9dd13df40d56beb3fedb2..db429facf365663432a9d10e4b38b948c7dfd139 100755
--- a/paper/sec/design.tex
+++ b/paper/sec/design.tex
@@ -1,48 +1,75 @@
 \section{Synthesis of Deadline-Miss-Aware Controllers}
 \label{sec:design}
 
-Standard digital control design assumes that samples are taken regularly and that there is a known, constant delay from sampling to actuation \cite{aastrom2013computer}. When deadlines are missed, the actual hold and delay intervals will deviate from the assumed values, as explained in the previous section. This \emph{control jitter} leads to degraded performance, and, in extreme cases, even to instability of the control loop \cite{cervin2004jitter}. With some knowledge about the jitter, however, it is possible to synthesize a controller that partially compensates for the timing irregularities. We outline two variants of our Deadline-Miss-Aware Control designs below.
-
-%\subsection{Discretization of the system}
-%
-%Viewing the plant state only in the sample points $t_k = kT_d$, the discrete-time system equation is given by
-%\begin{equation}
-%\mathbf{x}_{k+1} = A(T_d) \mathbf{x}_k + B(T_d) \mathbf{u}_k + \mathbf{v}_k
-%\end{equation}
-%where $A(t) = e^{A_ct}$ and $B(t) = \int_0^t e^{A_cs} B_c ds$, and  where $\mathbf{v}$ is a mean-zero Gaussian random variable with covariance $R = \int_0^t e^{A_cs}{R_c}e^{A_c^{\T}s} ds$. The cost function is also discretized to yield
-%\[
-%V = \E_k \left\{ x^{\T}_k Q_1(T_d) x_k + 2 x^{\T}_k Q_{12}(T_d) u_k + u^{\T}_k Q_{2}(T_d) u_k \right\} + J_\mathit{const}(T_d)
-%\]
-%where $Q_1(t) = \int_0^t A_c^{\T}(s)Q_{1c}A_c(s) ds$, etc.
+Standard digital control design assumes that samples are taken
+regularly and that there is a (typically known and constant) delay
+from sampling to actuation~\cite{aastrom2013computer}. When deadlines
+are missed, the actual hold and delay intervals will deviate from the
+assumed values, as explained in the previous section. This
+\emph{control jitter} leads to degraded performance, and, in extreme
+cases, even to instability of the control
+loop~\cite{cervin2004jitter}. With some knowledge about the jitter,
+however, it is possible to synthesize a controller that partially
+compensates for the timing irregularities. We outline two variants of
+our Deadline-Miss-Aware Control designs below.
 
 \subsection{Clairvoyant Controller Synthesis}
 \label{sec:design:ideal}
 
-The controlled system evolution can be derived by sampling the plant only at the update instants of each valid job $\nu_n$, i.e. at the time where the control output produced by $\nu_n$ is provided to the actuator. 
-With a slight abuse of notation we will refer hereafter to the pair of delay and hold relative to $\nu_n$ as $(\sigma_n,h_n)$, while its activation instant is $a_n$. The update instant of the control output produced by $\nu_n$ can then be defined as $t_n = a_n+\sigma_n$. Moreover, the relation $t_{n+1} = t_n + h_n$ trivially holds.
-For each valid control job $\nu_n$ in sequence $S = \{\nu_1,\nu_2,...,\nu_{v}\}$, the state evolution can be calculated as
+The controlled system evolution can be derived by sampling the plant
+only at the update instant of each valid job $\nu_n$, i.e., at the
+time when the control output produced by $\nu_n$ is provided to the
+actuator. With a slight abuse of notation, we will hereafter refer
+to the delay and hold associated with $\nu_n$ as the pair
+$(\sigma_n,h_n)$, and to its activation instant as $a_n$. The update
+instant of the control output produced by $\nu_n$ can then be
+defined as $t_n = a_n+\sigma_n$, and the relation $t_{n+1} = t_n +
+h_n$ trivially holds. For each valid control job $\nu_n$ in the
+sequence $S = \{\nu_1,\nu_2,\dots,\nu_{v}\}$, the state evolution
+can be calculated as
 \begin{equation}
 \label{eq:sampled}
 \mathbf{x}(t_{n+1})= \mathbf{x}(t_n+h_n) = A(h_n) \mathbf{x}(t_n) + B(h_n)\mathbf{u}(t_n) + \mathbf{v}(t_n),
 \end{equation}
-where $\mathbf{x}(t_n)$ is the state measurement sampled at time $t_n$, $\mathbf{u}(t_n)$ the control output released at time $t_n$, and $ \mathbf{v}(t_n)$ a discrete-time model of the plant disturbance. The discrete matrices $A$ and $B$ are sampled from $A_c$ and $B_c$ of \eqref{eq:plant}, respectively, with the step $h_n$. 
-It is worth noting that different matrices $A(h_n)$ and $B(h_n)$ are created, depending on the possible values of $h_n$. In fact, a system described in this way behaves as a \emph{switched-linear} system~\cite{sun2006switched}.
-Computing the matrices can be done with standard procedures for sampled-data systems~\cite{aastrom2013computer}. 
-
-If the timing behavior of all jobs was completely known in advance, we would be able to design, by looking offline at the schedule, an optimal time-varying controller that minimizes the cost function~\eqref{eq:cost}. We call this a \emph{clairvoyant} controller.
-The optimal control signal to be applied in the hold interval $h_n$ is given by
+where $\mathbf{x}(t_n)$ is the state measurement sampled at time
+$t_n$, $\mathbf{u}(t_n)$ the control output released at time $t_n$,
+and $ \mathbf{v}(t_n)$ a discrete-time model of the plant
+disturbance. The discrete matrices $A$ and $B$ are sampled from $A_c$
+and $B_c$ of \eqref{eq:plant}, respectively, with the step $h_n$.
+Note that different matrices $A(h_n)$ and $B(h_n)$ arise, depending
+on the possible values of $h_n$; a system described in this way thus
+behaves as a \emph{switched-linear} system~\cite{sun2006switched}.
+The matrices can be computed with standard procedures for
+sampled-data systems~\cite{aastrom2013computer}.
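The sampled pair $(A(h), B(h))$ can be obtained with the usual augmented-matrix-exponential construction; the following is a minimal sketch (assuming `numpy`/`scipy` are available; the function name is ours, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

def discretize(Ac, Bc, h):
    """Sample (Ac, Bc) with step h:
    A(h) = e^{Ac h},  B(h) = int_0^h e^{Ac s} Bc ds.
    Both blocks are read off a single exponential of the augmented
    matrix [[Ac, Bc], [0, 0]] (Van Loan's construction)."""
    n, m = Bc.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = Ac
    M[:n, n:] = Bc
    Phi = expm(M * h)
    return Phi[:n, :n], Phi[:n, n:]  # A(h), B(h)
```

For the integrator plant used later ($A_c = 0$, $B_c = 1$) this gives $A(h) = 1$ and $B(h) = h$, so one $(A(h_n), B(h_n))$ pair can be precomputed per hold interval occurring in the schedule.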
+
+If the timing behavior of all jobs were completely known in advance,
+we would be able to design, by looking offline at the schedule, an
+optimal time-varying controller that minimizes the cost
+function~\eqref{eq:cost}. We call this a \emph{clairvoyant}
+controller. The optimal control signal to be applied in the hold
+interval $h_n$ is given by
 \begin{equation}
 \label{eq:optfb}
 \mathbf{u}(t_n) = -L_n \mathbf{x}(t_n) ,
 \end{equation}
-where the sequence of feedback gain matrices  $\bigl\{ L_n \bigr\}$  are obtained as the solution to a time-varying Riccati equation involving the sequences $\bigl\{A(h_n)\bigr\}$, $\bigl\{B(h_n)\bigr\}$, and the sampled equivalents of the cost matrices $Q_{1c}$ and $Q_{2c}$. The feedback matrices can be calculated off-line and stored in a table for on-line use.
-
-The control law \eqref{eq:optfb} cannot be implemented as it stands, though. The control action must be computed based on a state measurement that is $\sigma_n$ time units old. Hence the controller must also predict the state from time $t_n-\sigma_n$ to $t_n$. 
-Note however that in the time interval between $t_n-\sigma_n$ and $t_n$, the control actuation may not be constant, thus a slightly different modeling is needed.
-We will refer to the estimate of the state as $\mathbf{\hat x}$, which is computed as
+where the sequence of feedback gain matrices $\bigl\{ L_n \bigr\}$
+is obtained as the solution to a time-varying Riccati equation
+involving the sequences $\bigl\{A(h_n)\bigr\}$,
+$\bigl\{B(h_n)\bigr\}$, and the sampled equivalents of the cost
+matrices $Q_{1c}$ and $Q_{2c}$. The feedback matrices can be
+calculated off-line and stored in a table for on-line use.
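Concretely, the backward recursion over a known schedule can be sketched as follows (a minimal formulation of our own, with the cross cost term $Q_{12}$ omitted for brevity):

```python
import numpy as np

def clairvoyant_gains(AB_seq, Q1, Q2, S_final):
    """Backward time-varying Riccati recursion over a known schedule.
    AB_seq: one (A(h_n), B(h_n)) pair per valid job, in schedule order.
    Returns the gain table {L_n}, computed offline for online lookup."""
    S = S_final
    gains = [None] * len(AB_seq)
    for n in reversed(range(len(AB_seq))):
        A, B = AB_seq[n]
        # L_n = (B' S B + Q2)^{-1} B' S A
        L = np.linalg.solve(B.T @ S @ B + Q2, B.T @ S @ A)
        # Riccati update: S = Q1 + A' S A - A' S B L_n
        S = Q1 + A.T @ S @ A - A.T @ S @ B @ L
        gains[n] = L
    return gains
```

As a sanity check, over a long schedule with a single hold interval the gains converge to the stationary LQR gain of that sampled system.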
+
+The control law \eqref{eq:optfb} cannot be implemented as it stands,
+though. The control action must be computed based on a state
+measurement that is $\sigma_n$ time units old. Hence the controller
+must also predict the state from time $t_n-\sigma_n$ to $t_n$. Note,
+however, that the control actuation may not be constant in the
+interval between $t_n-\sigma_n$ and $t_n$, so a slightly different
+model is needed. We will refer to the estimate of the state as
+$\mathbf{\hat x}$, which is computed as
 \begin{equation}
 \label{eq:pred}
-\mathbf{\hat x}(t_n) = A(\sigma_n) \mathbf{x}(t_n-\sigma_n) + A(\sigma_{1n}) B(\sigma_{2n}) \mathbf{u}(t_{n-2}) +  B(\sigma_{1n}) \mathbf{u}(t_{n-1}).
+\mathbf{\hat x}(t_n) = A(\sigma_n) \mathbf{x}(t_n-\sigma_n) + A(\psi_{1n}) B(\psi_{2n}) \mathbf{u}(t_{n-2}) +  B(\psi_{1n}) \mathbf{u}(t_{n-1}).
 \end{equation}
 
 %%%%%%%%%%%%%%%
@@ -74,104 +101,116 @@ We will refer to the estimate of the state as $\mathbf{\hat x}$, which is comput
   \TaskEndFail{1}{16}
   \TaskEndSuccess{1}{20}
   
-  \Interval[linecolor=blue,labelpos=above]{1}{1.75}{8}{16}{$\sigma_n$}
-  \Interval[linecolor=red,labelpos=below]{1}{-0.75}{8}{12}{$\sigma_{2n}$}
-  \Interval[linecolor=red,labelpos=below]{1}{-0.75}{12}{16}{$\sigma_{1n}$}
+  \Interval[linecolor=blue,labelpos=above]{1}{1.75}{8}{16}{$\sigma_n$}
+  \Interval[linecolor=red,labelpos=below]{1}{-0.75}{8}{12}{$\psi_{2n}$}
+  \Interval[linecolor=red,labelpos=below]{1}{-0.75}{12}{16}{$\psi_{1n}$}
   \end{RTGrid}
-  \caption{Example of $\sigma_{1n}$ and $\sigma_{2n}$.}
+  \caption{Example of $\psi_{1n}$ and $\psi_{2n}$.}
   \label{fig:sigma1nsigma2n}
 \end{figure}
 %%%%%%%%%%%%%%%%%%%%
 
-Here, $\sigma_{1n}$ represents the time interval in $[t_n-\sigma_n, t_n]$ when the control actuation of the previous valid job $\mathbf{u}(t_{n-1})$ is held constant, while $\sigma_{2n}$ is the (possible) interval where $\mathbf{u}(t_{n-2})$ is active. For the sake of clarity, an example is shown in Figure~\ref{fig:sigma1nsigma2n}.
-An operative procedure for computing $\sigma_{1n}$ and $\sigma_{2n}$ is given as follows:
+Here, $\psi_{1n}$ represents the time interval in $[t_n-\sigma_n,
+t_n]$ during which the control actuation of the previous valid job,
+$\mathbf{u}(t_{n-1})$, is held constant, while $\psi_{2n}$ is the
+(possibly empty) interval where $\mathbf{u}(t_{n-2})$ is active. For
+the sake of clarity, an example is shown in
+Figure~\ref{fig:sigma1nsigma2n}. The two intervals can be computed
+as follows:
 \begin{align*}
-\sigma_{1n} &=  a_{n-1} + \sigma_{n-1} - a_n \\
-\sigma_{2n} &= a_n + \sigma_n - (a_{n-1} + \sigma_{n-1}).
+\psi_{1n} &= a_n + \sigma_n - (a_{n-1} + \sigma_{n-1}) \\
+\psi_{2n} &= a_{n-1} + \sigma_{n-1} - a_n.
 \end{align*}
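With the intervals defined as above (the first $\psi_{2n}$ time units of $[t_n-\sigma_n, t_n]$ under $\mathbf{u}(t_{n-2})$, the final $\psi_{1n}$ under $\mathbf{u}(t_{n-1})$), the predictor \eqref{eq:pred} amounts to a few matrix products. A minimal sketch, with names and calling convention of our own choosing:

```python
import numpy as np

def predict_state(x_meas, u_prev, u_prevprev, a, sigma, Ad, Bd):
    """Predict the state from t_n - sigma_n to t_n.
    a, sigma: (a_{n-1}, a_n) and (sigma_{n-1}, sigma_n);
    Ad(t), Bd(t): callables returning the matrices sampled with step t."""
    t_prev = a[0] + sigma[0]          # update instant of the previous valid job
    psi1 = a[1] + sigma[1] - t_prev   # u(t_{n-1}) held over the last psi1 units
    psi2 = t_prev - a[1]              # u(t_{n-2}) active over the first psi2 units
    return (Ad(sigma[1]) @ x_meas
            + Ad(psi1) @ Bd(psi2) @ u_prevprev
            + Bd(psi1) @ u_prev)
```

For the integrator plant ($A(t) = 1$, $B(t) = t$), the prediction reduces to the measured state plus the input contributions accumulated over $\psi_{2n}$ and $\psi_{1n}$.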
 
-%The prediction procedure in \eqref{eq:pred} is optimal since the noise $\mathbf{v}(t)$ is zero-mean and white, and the hold intervals $\sigma_{1n}$ and $\sigma_{2n}$ as well as the old control signals $\mathbf{u_c}(t_{n-2})$ and $\mathbf{u_c}(t_{n-1})$ are known.
-
-
-%
-%
-%
-%Similar to the design presented in \cite{ramanathan1999overload}, but
-%\begin{itemize}
-%	\item we formulate and measure the control performance criterion in continuous time
-%	\item we take into account all possible scenarios of input--output delay due to the various overrun strategies. in particular, we need sometimes to compensate for a delay of two intervals (picture with some different cases could be useful here)
-%\end{itemize}
-%
-%Known hold intervals => time-varying Ricatti equation
-%\[
-%S_n = A^{\T}(\sigma_n) S_{n+1} A(\sigma_n) + Q_1(\sigma_n) + \bigl(\Phi^{\T}(\sigma_n) S_{n+1} B(\sigma_n) + Q_{12}(\sigma_n)\bigr) L_n
-%\]
-%where the optimal feedback gain is given by
-%\[
-%L_n = \bigl(B^{\T}(\sigma_n) S_{n+1} B(\sigma_n) + Q_2(\sigma_n)\bigr)^{-1} \bigl(A^{\T}(\sigma_n) S_{n+1} B(\sigma_n) + Q_{12}(\sigma_n)\bigr)^{\T}
-%\]
-%The state is predicted as
-%\[
-%\hat x_n = \E\{ x(t_n+\sigma_n) \} = A(\sigma_k) x_n + A(\sigma_{1n}) B(\sigma_{2n}) u_{n-2} +  B(\sigma_{1n}) u_{n-1}
-%\]
-%The optimal state feedback is then
-%\[
-%u_n = -L_n \hat x_n
-%\]
-%($\sigma_n = \sigma_{1n} + \sigma_{2n}$ needs to be explained/defined.)
-
-
 \subsection{Robust Controller Synthesis}
 \label{sec:design:synthesis}
 
-The clairvoyant controller has two drawbacks. First of all, it relies on exact knowledge of the execution of the system, ahead of time. This is only possible in very special circumstances. The other drawback is that it is time varying, which is more complicated to implement and requires extra memory to store the time-varying feedback gain and prediction matrices. A more realistic approach is instead to design a fixed, \emph{robust} controller, based on the statistical properties of the system.
-
-Again starting from the sampled system description \eqref{eq:sampled}, we can instead solve a \emph{stochastic} Riccati equation \cite{nilsson1998automatica} based on the possible values of $A(h_n)$ and $B(h_n)$ and their relative frequency in the schedule during the execution of the system. The control law is then
+The clairvoyant controller has two drawbacks. First, it relies on
+exact knowledge of the execution of the system ahead of time, which
+is possible only in very special circumstances. Second, it is time
+varying, which is more complicated to implement and requires extra
+memory to store the time-varying feedback gain and prediction
+matrices. A more realistic approach is instead to design a fixed,
+\emph{robust} controller based on the statistical properties of the
+system.
+
+Again starting from the sampled system description
+\eqref{eq:sampled}, we can instead solve a \emph{stochastic} Riccati
+equation \cite{nilsson1998automatica} based on the possible values of
+$A(h_n)$ and $B(h_n)$ and their relative frequency in the schedule
+during the execution of the system. The control law is then
 \begin{equation}
 \label{eq:optfb_bar}
 \mathbf{u}(t_n) = -\bar L \, \mathbf{x}(t_n) ,
 \end{equation}
-where $\bar L$ is a \emph{fixed} gain matrix obtained from the solution to the stochastic Riccati equation
+where $\bar L$ is a \emph{fixed} gain matrix obtained from the
+solution to the stochastic Riccati equation
 \begin{align*}
 \bar X &= \E \left\{ \begin{bmatrix} A(h_n)^{\T} \\ B(h_n)^{\T} \end{bmatrix} \bar S  \begin{bmatrix} A(h_n)^{\T} \\ B(h_n)^{\T} \end{bmatrix}^{\T} +  \begin{bmatrix} Q_1(h_n) & Q_{12}(h_n) \\ Q_{12}(h_n)^{\T} & Q_2(h_n) \end{bmatrix} \right\} \\
 \bar S &= \bar X_{11}- \bar L^{\T} \bar X_{22} \bar L \\
 \bar L &= \bar X_{22}^{-1} \bar X_{12}^{\T}.
 \end{align*}
-This would be the optimal fixed-gain control law if the matrices $A(h_n)$ and $B(h_n)$ were random and independent from job to job. In reality, there is time dependence between the hold intervals due to the scheduling algorithm, and the control law is hence only sub-optimal.
-
-The predictor \eqref{eq:pred} must also be modified to work with statistics rather than known-ahead values. The state can be predicted using expected value calculations as
+This would be the optimal fixed-gain control law if the matrices
+$A(h_n)$ and $B(h_n)$ were random and independent from job to job. In
+reality, there is time dependence between the hold intervals due to
+the scheduling algorithm, and the control law is hence only
+sub-optimal.
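The stochastic Riccati equation above can be solved by simple fixed-point iteration over the mode statistics. A minimal sketch under our own naming, with each mode given as a probability-weighted tuple:

```python
import numpy as np

def robust_gain(modes, iters=500):
    """Fixed-point iteration for the stochastic Riccati equation.
    modes: list of (p, A, B, Q1, Q12, Q2), probabilities p summing to 1.
    Returns the fixed feedback gain Lbar and the solution Sbar."""
    n = modes[0][1].shape[0]
    S = np.zeros((n, n))
    for _ in range(iters):
        # Expectation blocks of Xbar = E{[A'; B'] S [A'; B']' + [[Q1, Q12], [Q12', Q2]]}
        X11 = sum(p * (A.T @ S @ A + Q1) for p, A, B, Q1, Q12, Q2 in modes)
        X12 = sum(p * (A.T @ S @ B + Q12) for p, A, B, Q1, Q12, Q2 in modes)
        X22 = sum(p * (B.T @ S @ B + Q2) for p, A, B, Q1, Q12, Q2 in modes)
        L = np.linalg.solve(X22, X12.T)   # Lbar = X22^{-1} X12'
        S = X11 - L.T @ X22 @ L           # Sbar = X11 - Lbar' X22 Lbar
    return L, S
```

With a single mode this reduces to the standard LQR iteration; with several modes weighted by their relative frequency in the schedule, it yields the fixed gain $\bar L$ of \eqref{eq:optfb_bar}.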
+
+The predictor \eqref{eq:pred} must also be modified to work with
+statistics rather than known-ahead values. The state can be predicted
+using expected value calculations as
 \begin{equation}
 \label{eq:pred2}
-\mathbf{\hat x}(t_n) = \E \left\{ A(\sigma_n)\right\} \mathbf{x}(t_n- \sigma_n) + \E \left\{A(\sigma_{1n}) B(\sigma_{2n}) \right\}\mathbf{u}(t_{n-2}) +  \E \left\{B(\sigma_{1n}) \right\} \mathbf{u}(t_{n-1}).
+\mathbf{\hat x}(t_n) = \E \left\{ A(\sigma_n)\right\} \mathbf{x}(t_n- \sigma_n) + \E \left\{A(\psi_{1n}) B(\psi_{2n}) \right\}\mathbf{u}(t_{n-2}) +  \E \left\{B(\psi_{1n}) \right\} \mathbf{u}(t_{n-1}).
 \end{equation}
-Again, the predictor will only be sub-optimal due to the time-dependence induced by the scheduling algorithm.
-
-%
-%
-%Stochastic Riccati equation gives $\bar L$:
-%\begin{align*}
-%\bar X &= \E_n \left\{ \begin{bmatrix} A(h_n)^{\T} \\ B(h_n)^{\T} \end{bmatrix}^{\T} \bar S  \begin{bmatrix} A(h_n)^{\T} \\ B(h_n)^{\T} \end{bmatrix} +  \begin{bmatrix} Q_1(h_n) & Q_{12}(h_n) \\ Q_{12}(h_n)^{\T} & Q_2(h_n) \end{bmatrix} \right\} \\
-%\bar S &= \bar X_{11}- \bar L^{\T} \bar X_{22} \bar L \\
-%\bar L &= \bar X_{22}^{-1} \bar X_{12}^{\T}
-%\end{align*}
-%
-%
-%Mean-value calculations give $\bar B_2$ and $\bar B_1$. As before, a predictor is used
-%\[
-%\hat x_n = \E_n \left\{ A(\sigma_n) \right\} x_n + \E_n \left\{ A(\sigma_{1n}) B(\sigma_{2n})\right\}  u_{n-2} +  \E_n\left\{   B(\sigma_{1n})\right\}  u_{n-1}
-%\]
+Again, the predictor will only be sub-optimal due to the
+time-dependence induced by the scheduling algorithm.
 
 \subsection{Controller Synthesis Example}
 \label{sec:design:example}
 
-The synthesis methods presented above are illustrated in a simple 
-control example, which was used to evaluate the performance of a standard (non-deadline-miss-aware) controller under various overrun strategies in \cite{cervin2005analysis}. The plant to be controlled is an integrator process described by the parameters
+The synthesis methods presented above are illustrated in a simple
+control example, which was used to evaluate the performance of a
+standard (non-deadline-miss-aware) controller under various overrun
+strategies in \cite{cervin2005analysis}. The plant to be controlled
+is an integrator process described by the parameters
 \begin{equation} \label{eq:plant_integrator}
 A_c = 0, \quad B_c = 1, \quad Q_{1c} = 1, \quad Q_{2c} = 0.1, \quad R_{c} = 1
 \end{equation}
 
-The plant is controlled by a control task with stochastic execution times, executing alone in a CPU. The execution time may assume value equal to $1 \,$s with probability $0.8$, or uniformly distributed in the interval $(1,2]$ with combined probability $0.2$. 
-For periods ranging between $1$ and $2$, we compare the resulting performance under the Kill, Skip-Next, and Queue(1) strategies in  Figure~\ref{fig:onetask_results}. Since $J_\text{ctl}$ is defined as a cost, lower values in the graph mean better performance. For each configuration, a standard controller (assuming no missed deadlines), a robust controller, and a clairvoyant controller is designed, and the performance of each controller, measured in terms of the cost function~\eqref{eq:cost}, is evaluated using JitterTime~\cite{cervin2019jittertime} in a simulation of 100,000 jobs. It can be noted that there is a strict ordering from the worst performance under standard control to the best performance under clairvoyant control, as expected. As the period is decreased from 2 to lower values, the Kill and Queue(1) strategies initially behave similarly, with decreasing cost. In fact, in the case of a miss followed by a deadline hit, the Kill and Queue(1) strategies have the same behavior (since the output of the late-completed job under Queue(1) is overwritten by the completion of the next one). Skip-Next initially has an increase in cost due to the waste of resources when a very small overrun leads to a whole period being skipped. For smaller task periods, Queue(1) suffers performance degradation and even instability ($J_{\text{ctl}}\to\infty$) due to the lag introduced by the queuing. The Kill and Skip-Next strategies perform the best at $T_d=1$, with very similar results for this example. 
+The plant is controlled by a control task with stochastic execution
+times, executing alone on a CPU. The execution time is equal to
+$1\,$s with probability $0.8$, and uniformly distributed in the
+interval $(1,2]\,$s with total probability $0.2$. For periods
+ranging between $1$ and $2$, we compare the resulting performance
+under the Kill, Skip-Next, and Queue(1) strategies in
+Figure~\ref{fig:onetask_results}. Since $J_\text{ctl}$ is defined as
+a cost, lower values in the graph mean better performance. For each
+configuration, a standard controller (assuming no missed deadlines),
+a robust controller, and a clairvoyant controller are designed, and
+the performance of each controller, measured in terms of the cost
+function~\eqref{eq:cost}, is evaluated using
+JitterTime~\cite{cervin2019jittertime} in a simulation of 100,000
+jobs. It can be noted that there is a strict ordering from the worst
+performance under standard control to the best performance under
+clairvoyant control, as expected. This means that designing control
+strategies that take deadline misses into account is beneficial in
+all cases. The \our\ design does not reach the optimal cost of the
+clairvoyant design, but it systematically beats the classical
+design, even when no deadlines are missed, thanks to its delay and
+hold compensation.
+
+As the period is decreased from 2 to lower values, the Kill and
+Queue(1) strategies initially behave similarly, with decreasing cost.
+In fact, in the case of a miss followed by a deadline hit, the Kill
+and Queue(1) strategies have the same behavior (since the output of
+the late-completed job under Queue(1) is overwritten by the
+completion of the next one). Skip-Next initially has an increase in
+cost due to the waste of resources when a very small overrun leads to
+a whole period being skipped. For smaller task periods, Queue(1)
+suffers performance degradation and even instability
+($J_{\text{ctl}}\to\infty$) due to the lag introduced by the queuing.
+The Kill and Skip-Next strategies perform the best at $T_d=1$, with
+very similar results for this example.
 
 \begin{figure}
 \centerline{\input{figures/fig_simple_example.tex}}
@@ -179,4 +218,9 @@ For periods ranging between $1$ and $2$, we compare the resulting performance un
 \label{fig:onetask_results}
 \end{figure}
 
-It should be noted that the results are problem dependent, and it is hard to judge whether Kill or Skip-Next works the best in general. In all examples, however, we have found that better performance can be achieved by shortening the period and allowing a few deadline misses. Some tests that include higher-priority tasks $\Gamma'$ are presented later in Section \ref{sec:experiments}.
+It should be noted that the results are problem dependent, and it is
+hard to judge whether Kill or Skip-Next works best in general. In
+all examples, however, we have found that better performance can be
+achieved by shortening the period and allowing a few deadline misses.
+Some tests that include higher-priority tasks $\Gamma'$ are presented
+later in Section~\ref{sec:experiments}.