Martina Maggio / paolo-ecrts19 · Commits · 72f19c67

Commit 72f19c67, authored 6 years ago by Claudio Mandrioli
"scenario update camera ready 1"
Parent: baeb011c
Showing 1 changed file: paper/sec/analysis.tex (+38 additions, -25 deletions)
...
...
@@ -18,29 +18,30 @@
pessimistic~\cite{chen2017probabilistic} or have a high computational
complexity~\cite{von2018efficiently}. This limits the applicability
of these techniques in non-trivial cases. Moreover, there are few
works dealing with joint probabilities of consecutive jobs,
like~\cite{tanasa2015probabilistic}, but they still lack
scalability.
To handle the scalability issue, we adopt a simulation-based
approach, backed up by the \emph{scenario
theory}~\cite{calafiore2006scenario}, that \emph{empirically}
performs the uncertainty characterization, and provides
\emph{formal guarantees} on the robustness of the resulting
estimation. The scenario theory allows us to exploit the fact that
simulating the taskset execution (with statistical significance) is
less computationally expensive than an analytical approach, which
incurs the combinatorial explosion of the different possible
uncertainty realizations. In practice, this means that we: (i)
sample the execution times from the probability distributions
specified for each task, $f_i^{\mathcal{C}}(c)$, (ii) schedule the
tasks, checking the resulting set of sequences $\Omega$, and (iii)
find the worst-case sequence $\omega_*$ based on the chosen cost
function. The probabilities of sequences of hits and misses are
then computed based on this sequence, and used in the design of
the controller to be robust with respect to the sequence. We use the
scenario theory to quantify the probability $\varepsilon$ of not
having extracted the \emph{true} worst-case sequence, and the
confidence $1-\beta$ in the process, according to the number of
extracted samples.
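The three steps above can be sketched as a simple simulation loop. This is only an illustration: `sample_c`, `schedule`, and `cost` are hypothetical placeholders standing in for the per-task distributions $f_i^{\mathcal{C}}(c)$, the scheduler, and the cost function; they are not the authors' implementation.

```python
import random

def worst_case_sequence(tasks, n_sim, n_job, cost, schedule, rng=random):
    """(i) sample execution times, (ii) schedule and record the
    hit/miss sequence omega, (iii) keep the empirically worst one."""
    omega_star, j_star = None, float("-inf")
    for _ in range(n_sim):
        # (i) one execution-time sample per job of each task
        c = {t: [t.sample_c(rng) for _ in range(n_job)] for t in tasks}
        # (ii) the scheduler turns the samples into a hit/miss sequence
        omega = schedule(tasks, c)
        # (iii) track the sequence maximizing the chosen cost
        j = cost(omega)
        if j > j_star:
            omega_star, j_star = omega, j
    return omega_star, j_star
```

The loop never enumerates uncertainty realizations; it only draws `n_sim` of them, which is what keeps the approach scalable.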
\subsection{Scenario Theory}
\label{sec:analysis:scenario}
...
...
@@ -53,8 +54,11 @@
for all the possible uncertainty realizations might be achieved
analytically, but is computationally too heavy or results in
pessimistic bounds. The Scenario Theory proposes an empirical method
in which samples are drawn from the possible realizations of the
uncertainty. By providing a lower bound on the number of samples to
be drawn from the uncertainty space, it provides statistical
guarantees on the value of the cost function with respect to the
general case, provided that the sources of uncertainty are the same.
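As a hedged sketch of such a lower bound: one common closed form for estimating a probabilistic worst case of a scalar cost requires $(1-\varepsilon)^n \le \beta$, i.e. with confidence at least $1-\beta$ the maximum over $n$ i.i.d. samples exceeds the true $(1-\varepsilon)$-quantile. This particular form is an assumption for illustration; the paper's exact bound may differ.

```python
import math

def scenario_samples(eps: float, beta: float) -> int:
    """Smallest n such that (1 - eps)**n <= beta: the probability of
    never sampling a realization worse than the observed maximum is
    at most eps, with confidence at least 1 - beta."""
    return math.ceil(math.log(beta) / math.log(1.0 - eps))

# e.g. eps = 0.01 and beta = 1e-6 require 1375 simulations
```

Note how the sample count grows only logarithmically in $1/\beta$, so very high confidence is cheap compared to tightening $\varepsilon$.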
One of the advantages of this approach is that there is no need to
enumerate the uncertainty sources, the only requirement being the
...
...
@@ -76,9 +80,9 @@
$J_{seq}(\omega)$, that determines when we consider a sequence worse
than another (from the perspective of the controller execution).
Denoting with $\mu_{\text{tot}}(\omega)$ the total number of job
skips and deadline misses that the control task experienced in
$\omega$, and with $\mu_{\text{seq}}(\omega)$ the maximum number of
consecutive deadline misses or skipped jobs in $\omega$, we use as a
cost function
\begin{equation}
\label{eq:Jseq}
J_{seq}(\omega) = \mu_{\text{tot}}(\omega) \, \mu_{\text{seq}}(\omega)
\end{equation}
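A minimal sketch of Equation~\eqref{eq:Jseq} on a boolean hit/miss trace (the function name and trace encoding are hypothetical, not taken from the paper's code):

```python
def j_seq(trace):
    """Cost J_seq of a sequence: trace[k] is True if job k was hit
    (met its deadline and was not skipped), False otherwise."""
    mu_tot = sum(1 for hit in trace if not hit)  # total misses/skips
    mu_seq = run = 0                             # longest miss/skip run
    for hit in trace:
        run = 0 if hit else run + 1
        mu_seq = max(mu_seq, run)
    return mu_tot * mu_seq

# hit, miss, miss, hit, miss -> mu_tot = 3, mu_seq = 2, cost = 6
```

The product penalizes both many misses overall and long bursts of consecutive misses, which is what matters for the controller.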
...
...
@@ -88,8 +92,14 @@ of simulated sequences $\Omega = \{ \omega_1, \dots
\text{arg}\,\max\limits_{\omega \in \Omega} J_{seq}(\omega)$. The
number of simulations, $n_{\text{sim}}$, is selected based on the
scenario approach, and provides probabilistic bounds on the
uncertainty realization, giving us formal guarantees on the design
according to the chosen cost function.
The choice of the cost function is, however, not univocal. For
instance, the number of sub-sequences of a given length with at
least a given number of deadline misses, or the shortest
sub-sequence with more than a given number of deadline misses, would
be other viable choices.
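One of the alternative cost functions mentioned above, the worst number of deadline misses inside any window of a given length, can be sketched as follows (a hypothetical helper for illustration, not part of the paper):

```python
def max_misses_in_window(trace, window):
    """Maximum number of misses (False entries) over any contiguous
    sub-sequence of length `window`, via a prefix-sum of misses."""
    misses = [0]
    for hit in trace:
        misses.append(misses[-1] + (0 if hit else 1))
    return max(misses[k + window] - misses[k]
               for k in range(len(trace) - window + 1))
```

Such a windowed cost would rank sequences by their densest burst of misses rather than by the product used in $J_{seq}$.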
\subsection{Formal Guarantees}
\label{sec:analysis:guarantees}
...
...
@@ -135,7 +145,10 @@
We simulate the system for a number $n_\text{job}$ of executions of
the control task. Clearly, we want to select $n_\text{job}$ to cover
an entire hyperperiod (to achieve a complete analysis of the
interference between the tasks). In practice, we want to be able to
detect cascaded effects that might happen due to the probabilistic
nature of the execution times of the tasks: some samplings could in
fact make the utilization of instances of the taskset greater than
one. For this reason, simulations that include several hyperperiods
should be performed. On top of that, significance with respect to
the controlled physical system is required (since the existence of
the hyperperiod is not always guaranteed), hence the length of the
simulated sequences should cover its dynamics.
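When the task periods are integers, the hyperperiod is their least common multiple, and $n_\text{job}$ can be chosen as a multiple of the number of control jobs per hyperperiod. A small sketch under that assumption (helper names are illustrative):

```python
import math
from functools import reduce

def hyperperiod(periods):
    """Least common multiple of integer task periods."""
    return reduce(math.lcm, periods)

def n_job(control_period, periods, n_hyper=3):
    """Control-task jobs needed to cover n_hyper hyperperiods."""
    return n_hyper * hyperperiod(periods) // control_period

# periods 10, 15, 25 -> hyperperiod 150; 3 hyperperiods of a
# control task with period 10 correspond to 45 jobs
```

When no hyperperiod exists (e.g. non-commensurate periods), this computation does not apply, and the sequence length must instead be driven by the dynamics of the controlled plant, as noted above.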