# Debugging Models

A number of errors and warnings may be raised when attempting to solve a model. A model may be primal infeasible: there is no possible solution that satisfies all constraints. A model may be dual infeasible: the optimal value of one or more variables is 0 or infinity (negative and positive infinity in logspace).

For a GP model that does not solve, solvers may be able to prove its primal or dual infeasibility, or may return an unknown status.

GPkit contains several tools for diagnosing which constraints and variables might be causing infeasibility. The first thing to do with a model `m` that won't solve is to run `m.debug()`, which will search for changes that would make the model feasible:

```
"Debug examples"
from gpkit import Variable, Model, units
x = Variable("x", "ft")
x_min = Variable("x_min", 2, "ft")
x_max = Variable("x_max", 1, "ft")
y = Variable("y", "volts")
m = Model(x/y, [x <= x_max, x >= x_min])
m.debug()
print("# Now let's try a model unsolvable with relaxed constants\n")
m2 = Model(x, [x <= units("inch"), x >= units("yard")])
m2.debug()
print("# And one that's only unbounded\n")
# the value of x_min was used up in the previous model!
x_min = Variable("x_min", 2, "ft")
m3 = Model(x/y, [x >= x_min])
m3.debug()
```

```
< DEBUGGING >
> Trying with bounded variables and relaxed constants:

Solves with these variables bounded:
  sensitive to upper bound: y
     value near upper bound: y

and these constants relaxed:
  x_min [ft]: relaxed from 2 to 1

>> Success!

# Now let's try a model unsolvable with relaxed constants

< DEBUGGING >
> Trying with bounded variables and relaxed constants:
>> Failure.
> Trying with relaxed constraints:

Solves with these constraints relaxed:
  1: 3500% relaxed, from x [ft] >= 1 [yd]
                      to 36·x [ft] >= 1 [yd]

>> Success!

# And one that's only unbounded

< DEBUGGING >
> Trying with bounded variables and relaxed constants:

Solves with these variables bounded:
  sensitive to upper bound: y
     value near upper bound: y

>> Success!
```

Note that certain modeling errors (such as omitting or forgetting a constraint) may be difficult to diagnose from this output.
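The 3500% relaxation reported for `m2` above is just the inch-to-yard conversion in disguise. A quick sanity check in plain Python (no GPkit required) confirms the figure:

```
# m2 requires x <= 1 inch and x >= 1 yard simultaneously.
# Since 1 yard = 36 inches, the tighter constraint must be
# loosened by a factor of 36 before any solution exists.
inches_per_yard = 36
relaxation_factor = inches_per_yard / 1
percent_relaxed = (relaxation_factor - 1) * 100

print(relaxation_factor, percent_relaxed)  # 36.0 3500.0
```

A relaxation factor of 36 corresponds to the "3500% relaxed" line in the debug output, since a factor of 1 would mean no relaxation at all.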

## Potential errors and warnings

`RuntimeWarning: final status of solver 'mosek' was 'DUAL_INFEAS_CER', not 'optimal'`

- The solver found a certificate of dual infeasibility: the optimal value of one or more variables is 0 or infinity. See *Dual Infeasibility* below for debugging advice.

`RuntimeWarning: final status of solver 'mosek' was 'PRIM_INFEAS_CER', not 'optimal'`

- The solver found a certificate of primal infeasibility: no possible solution satisfies all constraints. See *Primal Infeasibility* below for debugging advice.

`RuntimeWarning: final status of solver 'cvxopt' was 'unknown', not 'optimal'`

or `RuntimeWarning: final status of solver 'mosek' was 'UNKNOWN', not 'optimal'`

- The solver could not solve the model or find a certificate of infeasibility. This may indicate a dual infeasible model, a primal infeasible model, or other numerical issues. Try debugging with the techniques in *Dual Infeasibility* and *Primal Infeasibility* below.

`RuntimeWarning: Primal solution violates constraint: 1.0000149786 is greater than 1`

- This warning indicates that the solver-returned solution violates a constraint of the model, likely because the solver's tolerance for a final solution exceeds GPkit's tolerance during solution checking. This is sometimes seen in dual infeasible models; see *Dual Infeasibility* below. If you run into this, please note your solver and operating system on this GitHub issue.

`RuntimeWarning: Dual cost nan does not match primal cost 1.00122315152`

- Similarly to the above, this warning may be seen in dual infeasible models; see *Dual Infeasibility* below.

## Dual Infeasibility

In some cases a model will not solve because the optimal value of one or more variables is 0 or infinity (negative or positive infinity in logspace). Such a problem is dual infeasible because the GP's dual problem, which determines the optimal values of the sensitivities, does not have any feasible solution. If the solver can prove that the dual is infeasible, it will return a dual infeasibility certificate. Otherwise, it may finish with a solution status of `unknown`.

`gpkit.constraints.bounded.Bounded` is a simple tool that can be used to detect unbounded variables and get dual infeasible models to solve by adding extremely large upper bounds and extremely small lower bounds to all variables in a ConstraintSet.

When a model with a Bounded ConstraintSet is solved, it checks whether any variables slid off to the bounds, notes this in the solution dictionary and prints a warning (if verbosity is greater than 0).

For example, Mosek returns `DUAL_INFEAS_CER` when attempting to solve the following model:

```
"Demonstrate a trivial unbounded variable"
from gpkit import Variable, Model
from gpkit.constraints.bounded import Bounded
x = Variable("x")
constraints = [x >= 1]
m = Model(1/x, constraints) # MOSEK returns DUAL_INFEAS_CER on .solve()
m = Model(1/x, Bounded(constraints))
# by default, prints bounds warning during solve
sol = m.solve(verbosity=0)
print(sol.summary())
# but they can also be accessed from the solution:
assert (sol["boundedness"]["value near upper bound"]
        == sol["boundedness"]["sensitive to upper bound"])
```

Upon viewing the printed output,

```
Solves with these variables bounded:
  sensitive to upper bound: x
     value near upper bound: x

Optimal Cost
------------
 1e-30

Free Variables
--------------
x : 1e+30

Most Sensitive Constraints
--------------------------
 +1 : x ≤ 1e+30
```

The problem, unsurprisingly, is that the cost `1/x` has no lower bound because `x` has no upper bound.

For details read the Bounded docstring.
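To see why this model is unbounded in logspace, note that as `x` grows the cost `1/x` approaches 0, so the log of the cost decreases without bound. An illustrative check in plain Python (independent of GPkit):

```
import math

# As x grows, the cost 1/x approaches 0, so log(cost)
# decreases without bound -- there is no finite optimum.
costs = [1 / x for x in (1e1, 1e10, 1e30)]
log_costs = [math.log(c) for c in costs]

# Each step drives the log-cost further toward negative infinity.
assert log_costs[0] > log_costs[1] > log_costs[2]
print(log_costs)
```

This is exactly the behavior `Bounded` catches: once an artificial upper bound of 1e30 is added, the solver stops at it and the solution dictionary flags `x` as sensitive to that bound.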

## Primal Infeasibility

A model is primal infeasible when there is no possible solution that satisfies all constraints. A simple example is presented below.

```
"A simple primal infeasible example"
from gpkit import Variable, Model
x = Variable("x")
y = Variable("y")
m = Model(x*y, [
    x >= 1,
    y >= 2,
    x*y >= 0.5,
    x*y <= 1.5
])
# raises UnknownInfeasible on cvxopt, PrimalInfeasible on mosek
# m.solve()
```

It is not possible for `x*y` to be less than 1.5 while `x` is greater than 1 and `y` is greater than 2.
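The contradiction can be confirmed by hand with a one-line bound check (plain Python, no solver needed):

```
# Lower bounds from the constraints: x >= 1, y >= 2.
x_lo, y_lo = 1.0, 2.0

# x*y is increasing in both variables, so the smallest achievable
# product is the product of the lower bounds.
min_xy = x_lo * y_lo  # 2.0

# The constraint x*y <= 1.5 can therefore never be satisfied.
assert min_xy > 1.5
```

Tracing infeasibility back to a pair of incompatible bounds like this is usually the goal of primal-infeasibility debugging; the relaxation tools below automate the search in larger models.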

A common bug in large models that use `substitutions` is to substitute overly constraining values in for variables, making the model primal infeasible. An example of this is given below.

```
"Another simple primal infeasible example"
from gpkit import Variable, Model
x = Variable("x")
y = Variable("y", 2)
constraints = [
    x >= 1,
    0.5 <= x*y,
    x*y <= 1.5
]
objective = x*y
m = Model(objective, constraints)
# raises UnknownInfeasible on cvxopt and PrimalInfeasible on mosek
# m.solve()
```

Since `y` is now set to 2 and `x` can be no less than 1, it is again impossible for `x*y` to be less than 1.5, and the model is primal infeasible. If `y` were instead set to 1, the model would be feasible and the cost would be 1.
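A substituted value can be screened before solving with the same bound check as above. A minimal sketch (plain Python; the helper `feasible` is illustrative, not part of GPkit):

```
def feasible(y, x_lo=1.0):
    """Can some x >= x_lo satisfy 0.5 <= x*y <= 1.5 for this y?

    x*y is increasing in x, so if the product at the smallest
    allowed x already exceeds 1.5, no larger x can help.
    """
    min_xy = x_lo * y
    return 0.5 <= min_xy <= 1.5

print(feasible(2), feasible(1))  # False True
```

With `y = 2` the minimum product is already 2 > 1.5, reproducing the infeasibility; with `y = 1` the point `x = 1` satisfies every constraint.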

### Relaxation

If you suspect your model is primal infeasible, you can find the nearest primal feasible version of it by relaxing constraints: relaxing all constraints by the smallest single factor possible (that is, dividing the less-than side of every constraint by the same number); relaxing each constraint by its own factor and minimizing the product of those factors; or changing each constant by the smallest total percentage possible.

```
"Relaxation examples"
from gpkit import Variable, Model
x = Variable("x")
x_min = Variable("x_min", 2)
x_max = Variable("x_max", 1)
m = Model(x, [x <= x_max, x >= x_min])
print("Original model")
print("==============")
print(m)
print("")
# m.solve() # raises a RuntimeWarning!
print("With constraints relaxed equally")
print("================================")
from gpkit.constraints.relax import ConstraintsRelaxedEqually
allrelaxed = ConstraintsRelaxedEqually(m)
mr1 = Model(allrelaxed.relaxvar, allrelaxed)
print(mr1)
print(mr1.solve(verbosity=0).table()) # solves with an x of 1.414
print("")
print("With constraints relaxed individually")
print("=====================================")
from gpkit.constraints.relax import ConstraintsRelaxed
constraintsrelaxed = ConstraintsRelaxed(m)
mr2 = Model(constraintsrelaxed.relaxvars.prod() * m.cost**0.01,
            # add a bit of the original cost in
            constraintsrelaxed)
print(mr2)
print(mr2.solve(verbosity=0).table()) # solves with an x of 1.0
print("")
print("With constants relaxed individually")
print("===================================")
from gpkit.constraints.relax import ConstantsRelaxed
constantsrelaxed = ConstantsRelaxed(m)
mr3 = Model(constantsrelaxed.relaxvars.prod() * m.cost**0.01,
            # add a bit of the original cost in
            constantsrelaxed)
print(mr3)
print(mr3.solve(verbosity=0).table()) # brings x_min down to 1.0
print("")
```

```
Original model
==============

Cost
----
 x

Constraints
-----------
 x ≤ x_max
 x ≥ x_min

With constraints relaxed equally
================================

Cost
----
 C

Constraints
-----------
 "minimum relaxation":
   C ≥ 1
 "relaxed constraints":
   x ≤ C·x_max
   x_min ≤ C·x

Optimal Cost
------------
 1.414

Free Variables
--------------
x : 1.414

  | Relax
C : 1.414

Fixed Variables
---------------
x_max : 1
x_min : 2

Variable Sensitivities
----------------------
x_max : -0.5
x_min : +0.5

Most Sensitive Constraints
--------------------------
  +0.5 : x ≤ C·x_max
  +0.5 : x_min ≤ C·x

With constraints relaxed individually
=====================================

Cost
----
 C[:].prod()·x^0.01

Constraints
-----------
 "minimum relaxation":
   C[:] ≥ 1
 "relaxed constraints":
   x ≤ C[0]·x_max
   x_min ≤ C[1]·x

Optimal Cost
------------
 2

Free Variables
--------------
x : 1

  | Relax1
C : [ 1  2 ]

Fixed Variables
---------------
x_max : 1
x_min : 2

Variable Sensitivities
----------------------
x_min : +1
x_max : -0.99

Most Sensitive Constraints
--------------------------
   +1 : x_min ≤ C[1]·x
+0.99 : x ≤ C[0]·x_max
+0.01 : C[0] ≥ 1

With constants relaxed individually
===================================

Cost
----
 [Relax2.x_max, Relax2.x_min].prod()·x^0.01

Constraints
-----------
 Relax2
  "original constraints":
    x ≤ x_max
    x ≥ x_min
  "relaxation constraints":
    "x_max":
      Relax2.x_max ≥ 1
      x_max ≥ Relax2.OriginalValues.x_max/Relax2.x_max
      x_max ≤ Relax2.OriginalValues.x_max·Relax2.x_max
    "x_min":
      Relax2.x_min ≥ 1
      x_min ≥ Relax2.OriginalValues.x_min/Relax2.x_min
      x_min ≤ Relax2.OriginalValues.x_min·Relax2.x_min

Optimal Cost
------------
 2

Free Variables
--------------
    x : 1
x_max : 1
x_min : 1

      | Relax2
x_max : 1
x_min : 2

Fixed Variables
---------------
      | Relax2.OriginalValues
x_max : 1
x_min : 2

Variable Sensitivities
----------------------
x_min : +1
x_max : -0.99

Most Sensitive Constraints
--------------------------
   +1 : x ≥ x_min
   +1 : x_min ≥ Relax2.OriginalValues.x_min/Relax2.x_min
+0.99 : x ≤ x_max
+0.99 : x_max ≤ Relax2.OriginalValues.x_max·Relax2.x_max
```
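The equally-relaxed cost of 1.414 above can be verified by hand: the relaxed constraints are `x ≤ C·x_max` and `x_min ≤ C·x`, so chaining them gives `x_min ≤ C²·x_max`, and the smallest feasible relaxation is `C = sqrt(x_min/x_max)`. A plain-Python check (no solver required):

```
import math

x_max, x_min = 1.0, 2.0

# Relaxing both constraints by the same factor C gives
#   x <= C*x_max  and  x_min <= C*x,
# so x_min <= C**2 * x_max, hence C >= sqrt(x_min/x_max).
C = math.sqrt(x_min / x_max)
x = C * x_max  # x sits at its relaxed upper bound

print(round(C, 3), round(x, 3))  # 1.414 1.414
```

This matches both the optimal cost and the value of `x` in the "With constraints relaxed equally" table, and explains why each relaxed constraint carries a sensitivity of +0.5: each contributes one factor of C.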