diff --git a/paper/sec/analysis.tex b/paper/sec/analysis.tex
index 07e0fc2b3dba10b57e83f6e289bd2176e753a585..047c70e0764491f6fbd5bb39c0dea7bd551ed133 100755
--- a/paper/sec/analysis.tex
+++ b/paper/sec/analysis.tex
@@ -20,21 +20,21 @@ of these techniques in non-trivial cases. Moreover, there are few
 works dealing with joint probabilities of consecutive jobs,
 like~\cite{tanasa2015probabilistic}, but they still 
 %suffer from limited 
-\textcolor{red}{lack of} scalability.
+lack scalability.
 
 To handle the scalability issue, we adopt a simulation-based
 approach, backed up by the \emph{scenario
 theory}~\cite{calafiore2006scenario}, that \emph{empirically}
 performs the uncertainty characterization, and provides
 \emph{formal guarantees} on the robustness of the resulting
-estimation. The scenario theory \textcolor{red}{allows us to exploit}
+estimation. The scenario theory allows us to exploit
 %\st{is capable of exploiting} 
 the fact that simulating the taskset 
 execution (with statistical significance) is less computationally 
 expensive than an analytical approach that runs into the problem of combinatorial explosion of the different possible uncertainty 
 realizations. In practice, this means that we: (i) 
 %\st{randomly extract}
-\textcolor{red}{sample the} execution times from the 
+sample the execution times from the 
 probability distributions specified for each 
 task, $f_i^{\mathcal{C}}(c)$, (ii) schedule the tasks, checking the 
 resulting set of sequences $\Omega$, and (iii) find the worst-case 
@@ -42,9 +42,9 @@ sequence $\omega_*$ based on the chosen cost function.
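As a concrete illustration of steps (i)--(iii), the sketch below shows one way the sampling-and-scheduling loop could be organized. It is a minimal Python sketch, not the implementation used in the paper: `exec_time_dists`, `schedule`, and `cost` are hypothetical placeholders standing in for the per-task distributions $f_i^{\mathcal{C}}(c)$, the chosen scheduling policy, and the chosen cost function.

```python
def worst_case_sequence(exec_time_dists, schedule, cost, n_scenarios, n_jobs):
    """Empirical search for the worst-case hit/miss sequence omega_*.

    exec_time_dists: one callable per task returning a sampled execution
                     time (stands in for f_i^C(c)).
    schedule:        callable(traces) -> hit/miss sequence omega observed
                     for the control task under the chosen policy.
    cost:            callable(omega) -> scalar, larger means worse.
    """
    omega_star, worst_cost = None, float("-inf")
    for _ in range(n_scenarios):
        # (i) sample an execution time for every job of every task
        traces = [[draw() for _ in range(n_jobs)] for draw in exec_time_dists]
        # (ii) schedule the task set and record the resulting sequence
        omega = schedule(traces)
        # (iii) keep the sequence that maximizes the chosen cost function
        current = cost(omega)
        if current > worst_cost:
            omega_star, worst_cost = omega, current
    return omega_star, worst_cost
```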
 The probabilities of sequences of hits and misses are
 then computed based on this sequence, and used in the design of
 the controller, making it robust with respect to this sequence. We use the
-scenario theory to quantify\textcolor{red}{, according to the number of extracted samples,} the probability $\varepsilon$ of not having 
+scenario theory to quantify, according to the number of extracted samples, the probability $\varepsilon$ of not having 
 extracted the \emph{true} worst-case sequence and the confidence in the 
-process $1-\beta$. \textcolor{red}{Scenario theory has for example found use in the management of energy storage\cite{darivianakis2017scenarioapplication}.}
+process $1-\beta$. Scenario theory has, for example, found application in the management of energy storage~\cite{darivianakis2017scenarioapplication}.
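To make the meaning of the two parameters explicit in the simplest setting, where $\omega_*$ is the maximum-cost sequence among $N$ independently extracted scenarios and $J$ denotes the chosen cost function, the classical scenario result (our paraphrase, not a formula taken from this paper) states that
\[
\Pr\nolimits^{N}\!\left\{\, \Pr\left\{ J(\omega) > J(\omega_*) \right\} > \varepsilon \,\right\} \;\le\; (1-\varepsilon)^{N} \;\le\; \beta ,
\]
i.e., with confidence at least $1-\beta$ (taken over the $N$ extracted scenarios), the probability of encountering a sequence worse than $\omega_*$ is at most $\varepsilon$.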
 
 \subsection{Scenario Theory}
 \label{sec:analysis:scenario}
@@ -60,9 +60,9 @@ for all the possible uncertainty realization might be achieved
 analytically, but is computationally too heavy or results in
 pessimistic bounds. The scenario theory proposes an empirical method
 in which samples are drawn from the possible realizations of
-uncertainty, \textcolor{red}{finding a lower bound on the number of 
+uncertainty, and gives a lower bound on the required number of 
 samples. It provides statistical 
-guarantees \textcolor{red}{on the value of the cost function} with 
+guarantees on the value of the cost function with 
 respect to the general case, provided that the sources of uncertainty 
 are the same. 
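Under the classical single-level bound $(1-\varepsilon)^N \le \beta$ recalled above, the lower bound on the number of samples follows directly; the snippet below is an illustrative calculation under that assumption, not necessarily the exact bound used in the remainder of the paper.

```python
import math

def min_scenarios(eps: float, beta: float) -> int:
    """Smallest N such that (1 - eps)^N <= beta,
    i.e. N >= ln(beta) / ln(1 - eps)."""
    assert 0.0 < eps < 1.0 and 0.0 < beta < 1.0
    return math.ceil(math.log(beta) / math.log(1.0 - eps))

# Example: eps = 0.01 and beta = 1e-6 require N = 1375 extracted scenarios.
print(min_scenarios(0.01, 1e-6))
```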
 
@@ -102,7 +102,7 @@ of sequences $\Omega = \{ \omega_1, \dots
 %scenario approach, and provides probabilistic bounds on the
 %uncertainty realization, giving formal guarantees on the
 %design according to the chosen cost function.
-\textcolor{red}{
+{
 The choice of the cost function is, however, not unique. For instance, other viable alternatives would be: (i) the 
 number of subsequences of a given length with at least a given number of 
 deadline misses, or (ii) the shortest subsequence with more than a given number of
@@ -153,10 +153,10 @@ We simulate the system for a number $n_\text{job}$ of executions of
 the control task. Clearly, we want to select $n_\text{job}$ to cover
 an entire hyperperiod (to achieve a complete analysis of the
 interference between the tasks). In practice, we want to be able to
-detect cascaded effects \textcolor{red}{that might happen due to the 
+detect cascaded effects that might happen due to the 
 probabilistic nature of the execution times of the tasks. Some sampled
 execution times could in fact make the utilization of some instances of the
 taskset greater than one. For this reason, simulations that include several 
 hyperperiods should be performed. On top of that, significance with 
-respect the controlled of the physical system is required \textcolor{red}{(since the existence of the hyperperiod is not always guaranteed)}, hence 
+respect to the controlled physical system is required (since the existence of the hyperperiod is not always guaranteed); hence 
 the length of the simulated sequences should cover its dynamics.
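As an illustration of how the two requirements above could be combined in practice (several hyperperiods when one exists, plus coverage of the plant dynamics), consider the sketch below. The rule of thumb, the parameter names, and the use of the settling time as a proxy for the plant dynamics are our own illustrative assumptions, not prescriptions from the paper.

```python
import math  # math.lcm requires Python 3.9+

def n_jobs_to_simulate(periods, control_period, settling_time, n_hyperperiods=5):
    """Choose n_job so the simulation spans several hyperperiods and the
    dominant dynamics of the controlled plant (integer periods assumed,
    e.g. expressed in milliseconds)."""
    hyperperiod = math.lcm(*periods)               # lcm of all task periods
    horizon = max(n_hyperperiods * hyperperiod,    # capture cascaded effects
                  settling_time)                   # significance for the plant
    return math.ceil(horizon / control_period)

# Example: periods {10, 20, 50} ms, control period 20 ms, settling time 2 s
# -> hyperperiod 100 ms, horizon 2000 ms, 100 control jobs simulated.
print(n_jobs_to_simulate([10, 20, 50], 20, 2000))
```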
diff --git a/paper/sec/model.tex b/paper/sec/model.tex
index 78fcd18c560da6d4d0bf3513a88de31ac64aba7e..0d35993f210ef0af6e6639588fce9c06954ee1a6 100644
--- a/paper/sec/model.tex
+++ b/paper/sec/model.tex
@@ -46,7 +46,7 @@ Worst Case Execution Time (WCET) $C^{\text{max}}_i$. Furthermore, we
 consider tasks that behave well in most cases, i.e., tasks whose
 probability density functions are skewed towards lower values. 
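For concreteness, a distribution of this kind can be sketched as a two-mode model in which most jobs finish near a typical value and a small fraction approaches the WCET. Everything below (the function name, the triangular/uniform mixture, the 2\% fault probability) is an illustrative assumption, not the $f_i^{\mathcal{C}}(c)$ used in the paper.

```python
import random

def sample_execution_time(c_typ, c_max, p_fault=0.02):
    """Illustrative execution-time model skewed towards lower values:
    most jobs run close to the typical value c_typ, while rare 'faulty'
    jobs stretch towards the WCET c_max."""
    if random.random() < p_fault:
        # occasional faulty condition: execution time close to the WCET
        return random.uniform(0.8 * c_max, c_max)
    # well-behaved case: execution time concentrated around c_typ
    return min(c_max, random.triangular(0.8 * c_typ, 1.5 * c_typ, c_typ))
```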
 In fact,
-\pp{while our approach can be applied 
+while our approach can be applied 
 to systems with generic probability density functions, 
 we want to capture tasks which experience occasional faulty
 conditions. This choice 
@@ -113,7 +113,7 @@ design parameters of the controller. They represent the
 trade-off between regulating $\mathbf{x}(t)$ to zero and the cost of using the control signal $\mathbf{u_c}(t)$. This cost function is used both as a controller design objective and for performance evaluation of the control task.
 
 The plant is connected to the controller via time-triggered sampler and hold devices 
-as shown in Figure~\ref{fig:pandc}. \pp{The behavior of these devices can be modeled as a dedicated task that reads and writes data with zero execution time and highest priority}.
+as shown in Figure~\ref{fig:pandc}. The behavior of these devices can be modeled as a dedicated task that reads and writes data with zero execution time and highest priority.
 %
 \begin{figure}[t]
 	\centering
@@ -142,7 +142,7 @@ as shown in Figure~\ref{fig:pandc}. \pp{The behavior of these devices can be mod
 	\label{fig:pandc}
 \end{figure}
 %
-The plant state is sampled every $T_d$ time units, implying $\mathbf{x}(t_k) = \mathbf{x}(kT_d)$. \pp{The control job $J_{d,k}$ is released at the same instant, i.e. $a_{d,k} = kT_d$, and the sensor data $\mathbf{x}(t_k)$ is immediately available to it.}
+The plant state is sampled every $T_d$ time units, implying $\mathbf{x}(t_k) = \mathbf{x}(kT_d)$. The control job $J_{d,k}$ is released at the same instant, i.e., $a_{d,k} = kT_d$, and the sensor data $\mathbf{x}(t_k)$ is immediately available to it.
 Based on the state measurement, the controller computes the feedback control action $\mathbf{u}(t_{k})$.
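For reference, in the idealized case in which the plant is linear and time-invariant, $\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$ (an assumption for this sketch; the paper's plant model is introduced earlier, and the actual input/output times of the control job may differ from this idealization), the time-triggered sampler and hold give the standard relation
\[
\mathbf{x}\big((k+1)T_d\big) \;=\; e^{A T_d}\,\mathbf{x}(kT_d) \;+\; \int_{0}^{T_d} e^{A s}\,\mathrm{d}s \; B\,\mathbf{u}(t_k),
\]
valid when $\mathbf{u}(t_k)$ is held constant over the whole sampling interval.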
 
 As a hypothesis, our control task $\tau_d$
@@ -168,7 +168,7 @@ output times). We further assume that the execution time properties of the contr
 %We operate under the hypothesis that the execution
 %time of the controller does not change when using different control
 %parameters and different periods. 
-\pp{(since only the values of some parameter are modified but the
+(since only the values of some parameters are modified but the
 operations performed by the control task are the same).
 
 In the paper, $\tau_d$ is not treated as a hard-deadline task. On the
@@ -181,15 +181,15 @@ properly characterize the timing behavior of the controller and its
 synthesis.
 
 \begin{remark}
-\pp{
 In this paper, we work under the assumption that $\tau_d$ is the task 
 with the lowest priority. If other tasks with priority lower
 than $\tau_d$ do exist, the design proposed hereafter is still valid 
 in principle, since those tasks cannot interfere with $\tau_d$.
 However, if this is the case, the range of possible values of $T_d$ 
-should be tied with schedulability guarantees for the lower 
+should be tied to the schedulability guarantees for the lower 
 priority tasks. 
-Due to space constraints, we reserve to analyze 
+We leave the analysis of 
 this more general case as future work.
 \end{remark}