% Commit 72f19c67, authored by Claudio Mandrioli: scenario update camera ready 1
pessimistic~\cite{chen2017probabilistic} or have a high computational
complexity~\cite{von2018efficiently}. This limits the applicability
of these techniques in non-trivial cases. Moreover, there are few
works dealing with joint probabilities of consecutive jobs,
like~\cite{tanasa2015probabilistic}, but they still lack
scalability.
To handle the scalability issue, we adopt a simulation-based
approach, backed up by the \emph{scenario
theory}~\cite{calafiore2006scenario}, that \emph{empirically}
performs the uncertainty characterization, and provides
\emph{formal guarantees} on the robustness of the resulting
estimation. The scenario theory allows us to exploit the fact that
simulating the taskset execution (with statistical significance) is
less computationally expensive than an analytical approach, which
incurs the combinatorial explosion of the different possible
uncertainty realizations. In practice, this means that we: (i) sample
the execution times from the probability distributions specified for
each task, $f_i^{\mathcal{C}}(c)$, (ii) schedule the tasks, checking
the resulting set of sequences $\Omega$, and (iii) find the worst-case
sequence $\omega_*$ based on the chosen cost function.
The probabilities of sequences of hits and misses are
then computed based on this sequence, and used in the design of
the controller to be robust with respect to the sequence. We use the
scenario theory to quantify, according to the number of extracted
samples, the probability $\varepsilon$ of not having extracted the
\emph{true} worst-case sequence, and the confidence $1-\beta$ in the
process.
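Steps (i) and (ii) above can be sketched in a few lines of Python. The model below is an illustrative assumption, not the method of the paper: a single control task with implicit deadlines and no interference, whose job misses its deadline whenever the sampled execution time exceeds the period, with exponentially distributed execution times.

```python
import random

def sample_sequences(n_sim, n_job, period, sample_c, seed=0):
    """Steps (i)-(ii): sample execution times from the task's
    distribution and record, for each of n_sim simulations, the
    sequence of deadline outcomes (True = miss).  Simplified model:
    a job misses when its execution time exceeds the period."""
    rng = random.Random(seed)
    return [[sample_c(rng) > period for _ in range(n_job)]
            for _ in range(n_sim)]

# Hypothetical task: exponential execution times with mean 0.4, period 1.
omega_set = sample_sequences(1000, 50, 1.0,
                             lambda rng: rng.expovariate(1 / 0.4))
```

Each element of `omega_set` plays the role of one sequence $\omega \in \Omega$; step (iii) then reduces this set with the chosen cost function.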
\subsection{Scenario Theory}
\label{sec:analysis:scenario}
for all the possible uncertainty realizations might be achieved
analytically, but is computationally too heavy or results in
pessimistic bounds. The Scenario Theory proposes an empirical method
in which samples are drawn from the possible realizations of
uncertainty. By providing a lower bound on the number of samples to be
drawn from the uncertainty space, it provides statistical guarantees
on the value of the cost function with respect to the general case,
provided that the sources of uncertainty are the same.
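For the special case in which the scenario program simply retains the worst observed sample, the lower bound on the number of samples can be obtained by requiring $(1-\varepsilon)^{n} \leq \beta$, i.e., that with confidence $1-\beta$ the observed worst case exceeds the $(1-\varepsilon)$-quantile of the cost. A sketch under that standard assumption:

```python
import math

def min_scenario_samples(eps, beta):
    """Smallest n such that (1 - eps)**n <= beta: the probability
    that all n i.i.d. samples fall below the (1 - eps)-quantile
    of the cost is at most beta."""
    return math.ceil(math.log(beta) / math.log(1 - eps))
```

For instance, `min_scenario_samples(0.01, 1e-6)` asks for the number of simulations needed so that, with confidence $1-10^{-6}$, at most a fraction $\varepsilon = 0.01$ of realizations is worse than the observed worst case.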
One of the advantages of this approach is that there is no need to
enumerate the uncertainty sources, the only requirement being the
$J_{seq}(\omega)$, that determines when we consider a sequence worse
than another (from the perspective of the controller execution).
Denoting with $\mu_{\text{tot}}(\omega)$ the total number of job
skips and deadline misses that the control task experienced in
$\omega$, and with $\mu_{\text{seq}}(\omega)$ the maximum number of
consecutive deadline misses or skipped jobs in $\omega$, we use as a
cost function
\begin{equation}\label{eq:Jseq}
J_{seq}(\omega) = \mu_{\text{tot}}(\omega)\,\mu_{\text{seq}}(\omega)
\end{equation}
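For illustration, the cost function above can be computed directly from a Boolean sequence in which True marks a deadline miss or skipped job:

```python
def j_seq(omega):
    """Cost of Eq. (eq:Jseq): mu_tot(omega) * mu_seq(omega), where
    mu_tot is the total number of misses/skips in the sequence and
    mu_seq is the longest run of consecutive misses/skips."""
    mu_tot = sum(omega)            # total True entries
    mu_seq = run = 0
    for miss in omega:
        run = run + 1 if miss else 0
        mu_seq = max(mu_seq, run)  # longest consecutive run so far
    return mu_tot * mu_seq

j_seq([False, True, True, False, True])  # 3 misses, longest run 2 -> 6
```

The worst case over a set of simulated sequences is then `max(sequences, key=j_seq)`, mirroring step (iii).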
From the set of simulated sequences $\Omega = \{ \omega_1, \dots,
\omega_{n_{\text{sim}}} \}$, the worst case is $\omega_* =
\text{arg}\,\max\limits_{\omega \in \Omega}J_{seq}(\omega)$. The
number of simulations, $n_{\text{sim}}$, is selected based on the
scenario approach, and provides probabilistic bounds on the
uncertainty realization, giving us formal guarantees on the design
according to the chosen cost function.
The choice of the cost function is, however, not univocal. For
instance, the number of sub-sequences of a given length with at least
a given number of deadline misses, or the length of the shortest
sub-sequence containing more than a given number of deadline misses,
would be other viable choices.
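As an illustration of the first alternative, a window-based cost can be computed as below; the function name and window parameters are hypothetical:

```python
def count_bad_windows(omega, length, k):
    """Alternative cost: number of windows of the given length that
    contain at least k deadline misses (True entries)."""
    return sum(sum(omega[i:i + length]) >= k
               for i in range(len(omega) - length + 1))
```

Like $J_{seq}$, this maps a hit/miss sequence to a scalar, so it can be plugged into the same worst-case selection over $\Omega$.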
\subsection{Formal Guarantees}
\label{sec:analysis:guarantees}
We simulate the system for a number $n_\text{job}$ of executions of
the control task. Clearly, we want to select $n_\text{job}$ to cover
an entire hyperperiod (to achieve complete analysis of the
interferences between the tasks). In practice, we want to be able to
detect cascaded effects that might arise due to the probabilistic
nature of the execution times of the tasks: some samplings could in
fact make the utilization of instances of the taskset greater than
one. For this reason, simulations that include several hyperperiods
should be performed. On top of that, significance with respect to the
controlled physical system is required (since the existence of the
hyperperiod is not always guaranteed), hence the length of the
simulated sequences should cover its dynamics.
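Assuming integer (or rationally related) task periods, so that the hyperperiod exists as their least common multiple, the value of $n_\text{job}$ needed to span a few hyperperiods can be computed as in this sketch; the function name and units are illustrative:

```python
import math

def n_jobs_to_cover(periods, control_period, n_hyper):
    """Number of control-task jobs needed to span n_hyper hyperperiods
    of the taskset (periods assumed integer, e.g. in microseconds)."""
    hyperperiod = math.lcm(*periods)   # requires Python 3.9+
    return n_hyper * hyperperiod // control_period

n_jobs_to_cover([10, 20, 50], 10, 3)  # hyperperiod 100, 3 * 100 / 10 -> 30
```

When the periods are not rationally related, no hyperperiod exists, and the simulation length must instead be driven by the dynamics of the controlled plant, as noted above.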