
Nonlinear Relaxation for Convex MINLPs

For the purposes of this article, we define a convex MINLP as a mixed-integer nonlinear problem which becomes convex when the integrality requirements are relaxed. For example, consider the MINLP problem

\(\min\{f(x,y) : g_i(x,y) \leq 0,\ i\in I,\ x\in[x_l, x_u],\ y \in \{0,1\}\}\).

If relaxing the integrality requirements on \(y\), i.e. solving

\(\min\{f(x,y) : g_i(x,y) \leq 0,\ i\in I,\ x\in[x_l, x_u],\ y \in [0,1]\}\),

yields a convex NLP, then, in a slight abuse of terminology, the original MINLP is characterised as convex.
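
For example (a small illustrative instance, not taken from the engine's documentation), the problem

\(\min\{(x-2)^2 + 3y : 3 - x - 2y \leq 0,\ x\in[0,3],\ y \in \{0,1\}\}\)

is a convex MINLP: both the objective and the constraint are convex, so relaxing \(y\) to \([0,1]\) yields a convex NLP. This toy problem is reused in the sketches below.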

Nonlinear Relaxations for convex problems

Octeract Engine will automatically identify convex MINLP problems or create them through reformulations. For those problems, there are two main classes of methods to derive a relaxation.

The first method is the same as for non-convex MINLPs: factorable relaxations. The problem is decomposed into factorable form, its nonlinear terms are linearised using guaranteed underestimators, and the lower bounding problem becomes an (MI)LP.
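
As a minimal sketch of the underestimation idea, consider a single convex term \(x^2\): its tangent at any reference point is a guaranteed linear underestimator. The snippet below is illustrative only (the engine's actual factorable decomposition is internal and not shown here); it simply verifies the property numerically.

```python
import numpy as np

# Tangent of x**2 at x_ref: by convexity, it never overestimates x**2.
def tangent_underestimator(x_ref):
    slope = 2.0 * x_ref
    intercept = x_ref ** 2 - slope * x_ref
    return lambda x: slope * x + intercept

under = tangent_underestimator(1.5)
xs = np.linspace(-3.0, 3.0, 13)
assert np.all(under(xs) <= xs ** 2 + 1e-12)  # a guaranteed linear underestimator
```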

The second method is specific to convex MINLPs: relaxing the integrality requirements yields a convex NLP, whose solution is, by definition, a lower bound on the solution of the original MINLP.
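
To illustrate, the sketch below solves the nonlinear relaxation of the toy problem above with SciPy (the solver choice is an assumption made purely for illustration; any convex NLP solver would do) and compares it against the MINLP optimum obtained by fixing \(y\) to each of its integer values.

```python
from scipy.optimize import minimize

# Toy convex MINLP from above:
#   min (x - 2)**2 + 3*y   s.t.  3 - x - 2*y <= 0,  x in [0, 3],  y in {0, 1}
f = lambda v: (v[0] - 2.0) ** 2 + 3.0 * v[1]
con = [{"type": "ineq", "fun": lambda v: v[0] + 2.0 * v[1] - 3.0}]  # g(x,y) <= 0

def best_objective(y_lo, y_hi):
    """Minimise f over x in [0, 3] with y restricted to [y_lo, y_hi]."""
    res = minimize(f, x0=[1.0, 0.5 * (y_lo + y_hi)],
                   bounds=[(0.0, 3.0), (y_lo, y_hi)],
                   constraints=con)
    return res.fun

relaxed = best_objective(0.0, 1.0)     # nonlinear (convex NLP) relaxation
minlp = min(best_objective(0.0, 0.0),  # y fixed to 0
            best_objective(1.0, 1.0))  # y fixed to 1
print(relaxed, minlp)                  # approx. 0.9375 <= 1.0
assert relaxed <= minlp + 1e-6         # the relaxation is a valid lower bound
```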

What’s good about a nonlinear relaxation?

The solution of the nonlinear relaxation can be significantly tighter, i.e. much closer to the optimal objective of the original problem. Therefore, the LLB solution can be very good from the beginning of the solution process. Linear relaxations can also achieve this given enough good cuts, so there is a trade-off: sometimes the NLP relaxation is fast enough to solve and converges well, and sometimes it is not. The only way to really know is to try both methods.

There are two challenges with this approach:

  1. The NLP corresponding to the relaxation of the MINLP requires a convex nonlinear solver. This is a challenge, especially for large problems, as nonlinear solvers may need a large amount of time to find a solution. Furthermore, if the solver is given a time limit, the returned solution may be (severely) suboptimal. Typically, MILP and LP solvers handle large problems better than NLP solvers.
  2. Unlike the MILP relaxation of a convex MINLP, the nonlinear relaxation cannot provide any information about the integer values of the original problem. If a linear relaxation is used instead, the solution of the MILP relaxation can be used to hot-start the solution of the MINLP upper bounding problem, which may reduce the computational time to the global solution (see the sketch after this list).
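
To make the hot-start idea concrete, the sketch below fixes \(y\) to a hypothetical integer assignment returned by a MILP relaxation of the toy problem and solves the resulting upper bounding NLP from that starting point (the values and the solver are assumptions, for illustration only).

```python
from scipy.optimize import minimize

# Suppose the MILP relaxation returned the integer-feasible assignment y = 0
# with x close to 3 (hypothetical values, just to illustrate the mechanism).
y_milp, x_milp = 0.0, 3.0

f_fixed = lambda v: (v[0] - 2.0) ** 2 + 3.0 * y_milp           # y held fixed
con = [{"type": "ineq", "fun": lambda v: v[0] + 2.0 * y_milp - 3.0}]

upper = minimize(f_fixed, x0=[x_milp], bounds=[(0.0, 3.0)], constraints=con)
print(upper.fun)  # a valid upper bound, since y is integer-feasible by construction
```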

If your problem is a convex MINLP, it is worth attempting to solve the nonlinear relaxation, especially if the (MI)LP one does not work very well. This can be particularly beneficial for problems that are already very large, since the resulting bounding (MI)LPs can be gigantic.

Associated options