Robust optimization and deep uncertainty

In this post, I describe how deeply uncertain parametric data can be handled in an optimization model to obtain conservative solutions and decisions. The robust optimization approach investigated here can produce least, partially, or fully conservative solutions: we pay the price of deviating from optimality for the sake of decisions that remain feasible even under worst-case scenarios.

Consider a single objective, single constraint, and single variable optimization model as follows:

$$\begin{aligned}
\max\quad & c\,x \\
\text{s.t.}\quad & \tilde a\,x \le b \\
& l \le x \le u
\end{aligned}$$
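For concreteness, here is a minimal sketch of this model at the nominal value of $\tilde a$, written with the open-source PuLP library (my choice of tool, not one the post prescribes); all numeric values ($c = 3$, $\bar a = 2$, $b = 10$, $l = 0$, $u = 6$) are hypothetical:

```python
# A minimal sketch of the nominal model; all numbers are hypothetical.
from pulp import LpProblem, LpMaximize, LpVariable, value

c, a_bar, b, l, u = 3.0, 2.0, 10.0, 0.0, 6.0

prob = LpProblem("nominal", LpMaximize)
x = LpVariable("x", lowBound=l, upBound=u)
prob += c * x           # objective: maximize c x
prob += a_bar * x <= b  # constraint at the nominal value: a_bar x <= b
prob.solve()
print(value(x), value(prob.objective))  # x = 5.0, objective = 15.0
```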

Assume that no information is available about the uncertain parameter $\tilde a$ (the case of deep uncertainty) except a range (or interval) $[\bar a - \hat a, \bar a + \hat a]$, where $\bar a$ is the nominal value (the centroid of the interval) that the uncertain parameter $\tilde a$ can take and $\hat a$ is its maximum deviation. Using a simplified version of the framework suggested by Bertsimas and Sim (2004), the robust counterpart of the proposed model is as follows:

$$\begin{aligned}
\max\quad & c\,x \\
\text{s.t.}\quad & \bar a\,x + s + t \le b \\
& s \ge \Gamma(\hat a\,y - t) \\
& -y \le x \le y \\
& l \le x \le u \\
& y, s, t \ge 0
\end{aligned}$$

The parameter $\Gamma$, called the robustness controller, can be real or integer. It controls the degree of conservatism of the achieved solutions by forcing the uncertain parameters of a constraint (here only $\tilde a$) to deviate from their nominal values; accordingly, the size of the feasible region is reduced. Notably, if $l$ (the lower bound of variable $x$) is greater than or equal to zero, the constraint $-y \leq x \leq y$ and the variable $y$ are not required, and the model reduces to the following one:

$$\begin{aligned}
\max\quad & c\,x \\
\text{s.t.}\quad & \bar a\,x + s + t \le b \\
& s \ge \Gamma(\hat a\,x - t) \\
& l \le x \le u \\
& s, t \ge 0
\end{aligned}$$
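Continuing the hypothetical numbers from the sketch above, with a deviation of $\hat a = 0.5$, this simplified robust counterpart can be coded as follows; the function `robust_lp` and all default values are my own illustration:

```python
from pulp import LpProblem, LpMaximize, LpVariable, value

def robust_lp(gamma, c=3.0, a_bar=2.0, a_hat=0.5, b=10.0, l=0.0, u=6.0):
    """Robust counterpart for the case l >= 0 (no auxiliary y needed)."""
    prob = LpProblem("robust", LpMaximize)
    x = LpVariable("x", lowBound=l, upBound=u)
    s = LpVariable("s", lowBound=0)  # protection term on the left-hand side
    t = LpVariable("t", lowBound=0)
    prob += c * x                         # objective: maximize c x
    prob += a_bar * x + s + t <= b        # protected main constraint
    prob += s >= gamma * (a_hat * x - t)  # Gamma controls the protection level
    prob.solve()
    return value(x), value(prob.objective)

print(robust_lp(0.0))  # nominal optimum: (5.0, 15.0)
print(robust_lp(1.0))  # Soyster worst case: (4.0, 12.0)
```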

Now, considering possible values of $\Gamma$, some properties of the model are as follows (the numerical sweep after this list illustrates them):

  • If $\Gamma = 0$ then $s$ and $t$ take values equal to zero, as the maximizing model prefers not to reduce its set of feasible solutions; in this case, uncertainty is ignored and the nominal model is recovered.
  • If $\Gamma = 1$ then the model assigns the deviation $\hat a$ to one of the variables $s$ or $t$, or to their sum, such that $s+t \ge \hat a x$. In this case, the worst case proposed by Soyster (1973) is reached, as the main constraint of the model effectively becomes $\left( {\bar a + \hat a} \right)x \le b$.
  • If $0 < \Gamma < 1$ then, since $s + t \ge \Gamma \hat a x + (1-\Gamma)t$, the model sets $t = 0$ and $s = \Gamma \hat a x$, so the main constraint effectively becomes $(\bar a + \Gamma \hat a)x \le b$ and a partially conservative solution is obtained. Setting $t=\hat a x$ (which would allow $s=0$) is not desirable, as the model would then reach the worst case.
  • If $\Gamma > 1$ then the model remains at its worst case: increasing $\Gamma$ beyond one adds no further protection, since $s + t \ge \hat a x$ already covers the full deviation.
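Using the hypothetical `robust_lp` sketch from above, a short sweep over $\Gamma$ reproduces all four regimes: the objective degrades as $\Gamma$ grows from 0 to 1, then flattens at the Soyster worst case:

```python
# Sweep Gamma; with a single uncertain parameter, values beyond 1 add no
# further protection, so the objective flattens at the worst case.
for gamma in [0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0]:
    x_opt, obj = robust_lp(gamma)
    print(f"Gamma={gamma:4.2f}  x={x_opt:.3f}  objective={obj:.3f}")
```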

The general form of the proposed robust optimization model is as follows:

$$\begin{aligned}
\max\quad & c^T x \\
\text{s.t.}\quad & \sum_j \bar a_{ij} x_j + s_i + \sum_j t_{ij} \le b_i && \forall i \\
& s_i \ge \Gamma_i \left( \hat a_{ij} y_j - t_{ij} \right) && \forall i, j \\
& -y_j \le x_j \le y_j && \forall j \\
& l_j \le x_j \le u_j && \forall j \\
& y_j, s_i, t_{ij} \ge 0 && \forall i, j
\end{aligned}$$
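A sketch of this general form, again with PuLP and hypothetical data (two constraints, two variables, and lower bounds below zero so that the $y_j$ variables are actually required):

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

# Hypothetical instance: |I| = |J| = 2; all numbers are made up.
I, J = range(2), range(2)
c     = [3.0, 2.0]
a_bar = [[2.0, 1.0], [1.0, 3.0]]
a_hat = [[0.5, 0.2], [0.1, 0.6]]   # deviations of the uncertain a_ij
b     = [10.0, 12.0]
lo, up = [-1.0, -1.0], [6.0, 6.0]  # lo_j < 0, so y_j is needed
Gamma = [1.0, 2.0]                 # Gamma_i in [0, |J|]

prob = LpProblem("robust_general", LpMaximize)
x = [LpVariable(f"x{j}", lowBound=lo[j], upBound=up[j]) for j in J]
y = [LpVariable(f"y{j}", lowBound=0) for j in J]
s = [LpVariable(f"s{i}", lowBound=0) for i in I]
t = [[LpVariable(f"t{i}_{j}", lowBound=0) for j in J] for i in I]

prob += lpSum(c[j] * x[j] for j in J)  # objective
for i in I:
    # protected constraint i
    prob += lpSum(a_bar[i][j] * x[j] for j in J) + s[i] + lpSum(t[i]) <= b[i]
    for j in J:
        prob += s[i] >= Gamma[i] * (a_hat[i][j] * y[j] - t[i][j])
for j in J:
    prob += -y[j] <= x[j]  # y_j bounds the magnitude of x_j
    prob += x[j] <= y[j]
prob.solve()
print([value(v) for v in x], value(prob.objective))
```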

The fastest way to define the robust counterpart of an optimization model is as follows:

  • For each separate uncertain parameter, whether in the same or different constraints (or objective functions), introduce a variable $s$ indexed like that constraint, and a variable $t$ indexed like the constraint plus the indices over which the uncertain parameter is summed.
  • The parameter $\Gamma$ is indexed like $s$ and takes values in the range $[0, U]$, where $U$ is the product of the cardinalities of the index sets over which the uncertain parameter is summed (here, $|J|$).
  • The variables $s$ and $t$ are always non-negative. They are added to the left-hand side of less-than-or-equal constraints or minimization objectives, or subtracted from the right-hand side of greater-than-or-equal constraints or maximization objectives, depending on where the deeply uncertain parameter appears. In all these cases, the form of the additional constraint (i.e., $s \geq \Gamma(\hat a y - t)$) stays the same; the worked example after this list applies the recipe to an uncertain objective.
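As a hypothetical illustration of this recipe, suppose the objective coefficients are deeply uncertain, $\tilde c_j \in [\bar c_j - \hat c_j, \bar c_j + \hat c_j]$. The objective is a single "constraint", so $s$ carries no index, $t_j$ is indexed by the summation index $j$, $\Gamma$ takes values in $[0, |J|]$, and $s$ and $\sum_j t_j$ are subtracted from the maximization objective:

$$\begin{aligned}
\max\quad & \sum_j \bar c_j x_j - s - \sum_j t_j \\
\text{s.t.}\quad & s \ge \Gamma \left( \hat c_j y_j - t_j \right) && \forall j \\
& -y_j \le x_j \le y_j && \forall j \\
& l_j \le x_j \le u_j && \forall j \\
& y_j, s, t_j \ge 0 && \forall j
\end{aligned}$$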

The figure below illustrates the mechanism of this robust optimization approach as the parameter $\Gamma$ increases. Notably, all constraints here have their uncertain parameters as coefficients of the decision variables (the constraints only rotate around a point rather than shifting in the direction of the black arrows).

Concept of Robust Optimization (the case where constraints contain deeply uncertain parametric data).

As the feasible region shrinks, the value of the objective function may remain steady or become worse, as in the following figure. This degradation is called paying the price of robustness. It is accepted in order to obtain conservative solutions or decisions for real-world problems whenever information is missing from our dataset or we face deep uncertainty; the aim is to reduce the worst-case impacts on a system or model encountered in prescriptive or predictive analytics.

The decreasing trend of the objective value (in the maximization form).
