Convex optimization

Optimization problems
In general, a (constrained) optimization problem (in standard form) can be written as: $$\text{minimize } f_0(x)$$ $$\begin{matrix}\text{subject to} & f_i(x) \le 0, \,\,\,\, i=1,\dots,m \\ & h_i(x) =0, \,\,\,\, i=1,\dots,p\end{matrix}$$

where $$f_0:\mathbb R^n \to \mathbb R$$ is the objective function, $$f_i:\mathbb R^n \to \mathbb R, \,\, i=1,\dots,m$$ are the inequality constraint functions, and $$h_i:\mathbb R^n\to \mathbb R, \,\, i=1,\dots,p$$ are the equality constraint functions.
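
For concreteness, a tiny instance written in standard form (the numbers are arbitrary): to minimize $$x_1^2+x_2^2$$ over the half-plane $$x_1+x_2\ge 1$$, take $$f_0(x)=x_1^2+x_2^2$$ and rearrange the constraint so that it reads $$f_1(x)=1-x_1-x_2\le 0$$ (so $$m=1$$ and $$p=0$$).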

Convex optimization
A problem is a convex optimization problem if the functions $$f_0, f_i$$ are convex functions and the functions $$h_i$$ are affine functions.
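
As an illustration, here is a minimal sketch of a convex problem in Python using the cvxpy modeling library (the choice of library and the problem data are my own assumptions, not something the text prescribes):

```python
import cvxpy as cp
import numpy as np

# Arbitrary problem data, chosen only for illustration.
np.random.seed(0)
A = np.random.randn(5, 3)
b = np.random.randn(5)

x = cp.Variable(3)
objective = cp.Minimize(cp.sum_squares(A @ x - b))  # convex f_0
constraints = [
    cp.norm(x, 1) - 1 <= 0,  # convex inequality f_1(x) <= 0
    cp.sum(x) == 1,          # affine equality h_1(x) = 0
]
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value, x.value)
```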

Quasiconvex optimization
The definition of a quasiconvex optimization problem is very similar to that of a convex optimization problem; the only difference is that $$f_0$$ is allowed to be a quasiconvex function rather than required to be convex. Note that quasiconvex inequality constraint functions can always be converted to equivalent convex constraint functions (see page 144).
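
A classic example of a quasiconvex (in fact quasilinear) objective is the linear-fractional function $$f_0(x)=\frac{a^Tx+b}{c^Tx+d}, \quad \text{dom } f_0=\{x : c^Tx+d>0\}$$: each sublevel set $$\{x : f_0(x)\le t\}=\{x : a^Tx+b\le t\,(c^Tx+d),\ c^Tx+d>0\}$$ is an intersection of halfspaces, hence convex, for every fixed $$t$$.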

Convex feasibility problem
A problem is called a convex feasibility problem if the goal is simply to find any $$x$$ that is feasible:

$$\text{find } x$$ $$\begin{matrix}\text{subject to} & f_i(x) \le 0, \,\,\,\, i=1,\dots,m \\ & h_i(x) =0, \,\,\,\, i=1,\dots,p\end{matrix}$$

Convex feasibility problems are used for solving quasiconvex problems; see the bisection method for quasiconvex problems.
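
A minimal sketch of that bisection idea in Python, assuming a hypothetical helper $$\text{is\_feasible}(t)$$ that solves the convex feasibility problem $$\{x : f_0(x)\le t,\ f_i(x)\le 0,\ h_i(x)=0\}$$ and returns a feasible point or None (the helper and the bracketing interval are assumptions of this sketch):

```python
def bisect_quasiconvex(is_feasible, lo, hi, tol=1e-6):
    """Bisection for quasiconvex minimization.

    is_feasible(t) is assumed to solve the convex feasibility problem
    {x : f_0(x) <= t, f_i(x) <= 0, h_i(x) = 0} and to return a
    feasible x, or None when no such x exists.  [lo, hi] must be
    known to bracket the optimal value p*.
    """
    x_best = None
    while hi - lo > tol:
        t = (lo + hi) / 2.0
        x = is_feasible(t)       # one convex feasibility problem per step
        if x is not None:        # p* <= t: shrink the upper end
            hi, x_best = t, x
        else:                    # p* > t: shrink the lower end
            lo = t
    return x_best, hi
```

Each iteration halves the interval, so roughly $$\log_2((hi-lo)/tol)$$ convex feasibility problems are solved in total.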

Geometric program
An optimization problem of the form $$\text{minimize } f_0(x)$$ $$\begin{matrix}\text{subject to} & f_i(x) \le 1, \,\,\,\, i=1,\dots,m \\ & h_i(x) = 1, \,\,\,\, i=1,\dots,p\end{matrix}$$

where $$f_0, f_i$$ are posynomials and $$h_i$$ are monomials is called a geometric program (GP); GPs can be transformed into convex optimization problems.
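
As a sketch, cvxpy can solve GPs directly via its gp=True flag, which performs the log transformation to a convex problem internally; the specific objective and constraints below are made up for this example:

```python
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)

# Posynomial objective, posynomial inequality, monomial equality;
# the particular functions are arbitrary stand-ins.
objective = cp.Minimize(x / y + 2 * y)
constraints = [x * y <= 1,       # f_1(x, y) <= 1
               x * y**0.5 == 1]  # h_1(x, y) = 1 (a monomial)
prob = cp.Problem(objective, constraints)
prob.solve(gp=True)  # log-transforms the GP into a convex problem
print(prob.value, x.value, y.value)
```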

Equivalent convex problems
Problems can often be recast in equivalent forms that are easier to solve. The recast problem is not the same problem, but it is equivalent, (roughly) meaning that it has the same solution. For example, the problem $$\text{minimize } ||x||_2$$ is not the same as the problem $$\text{minimize } ||x||_2^2$$, but they have the same solution.

A convex optimization problem is very often solved by converting it into an equivalent problem; in fact, in terms of implementation, this seems to be almost always the case (see also How are optimization problems solved?). This is understandable because, for example, one can convert a problem with a non-differentiable objective function into an equivalent problem with a differentiable one.

Beyond facilitating the solution, converting a problem into an equivalent one may also make it easier to understand. Furthermore, the dual problems of equivalent problems are in general not the same. This matters because the dual of a converted problem can have a very simple and useful form even when the dual of the original problem does not.

Some standard conversions
Below are some standard problem conversions.
 * Eliminating equality constraints: one can eliminate the constraint $$Ax=b$$ by parametrizing its solution set as $$x=Fz+\hat x$$, where $$A\hat x=b$$ and the columns of $$F$$ span the null space of $$A$$, and then minimizing the objective over $$z$$ instead of $$x$$.
 * Introducing equality constraints: conversely, a problem may become easier to solve after adding new variables together with (affine) equality constraints; for example, $$\text{minimize } f_0(Ax+b)$$ can be rewritten as $$\text{minimize } f_0(y)$$ subject to $$y=Ax+b$$.
 * Introducing slack variables: e.g. converting the inequality constraint $$a^Tx\le b$$ into two constraints $$a^T x+s=b$$ and $$s\ge 0$$. Here $$s$$ is called the slack variable.
 * Minimizing over some variables: the problem $$\text{minimize } f_0(x_1,x_2)$$ is equivalent to $$\text{minimize } \tilde{f}_0 (x_1)$$ where $$\tilde{f}_0(x_1)=\inf_{x_2}f_0(x_1,x_2)$$.
 * The epigraph form or the epigraph trick is one of the most common and useful problem conversion techniques. It goes like this: $$\text{minimize (over } x\text{) } f_0(x)$$ is equivalent to $$\text{minimize (over }x,t\text{) }t \text{ subject to } f_0(x)-t\le 0$$ (see the sketch after this list).
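
A minimal numerical check of the epigraph trick, again in cvxpy (the library choice and the data are assumptions of this sketch); both formulations should report the same optimal value:

```python
import cvxpy as cp
import numpy as np

np.random.seed(1)
A, b = np.random.randn(4, 2), np.random.randn(4)
x = cp.Variable(2)

# Original form: minimize f_0(x) directly.
p1 = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 1)))
p1.solve()

# Epigraph form: minimize t subject to f_0(x) - t <= 0.
t = cp.Variable()
p2 = cp.Problem(cp.Minimize(t), [cp.norm(A @ x - b, 1) - t <= 0])
p2.solve()

print(p1.value, p2.value)  # the two optimal values coincide
```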

How are convex optimization problems actually solved?
A curious question is how problems involving possibly non-differentiable functions and inequality constraints are actually solved; after all, the only thing we know how to minimize analytically (without iterating) is an unconstrained quadratic function.

The page How are optimization problems solved? tries to answer this question, if only superficially, but here is the idea: we solve such problems by converting them into a succession of problems that we do know how to solve, namely unconstrained convex optimization problems with quadratic objective functions.
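
A minimal sketch of that inner building block in Python: Newton's method minimizes a smooth convex function by repeatedly minimizing its local quadratic model, which amounts to solving one linear system per step (the test function below is an arbitrary smooth convex example, and a practical implementation would add a line search):

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-8, max_iter=50):
    """Pure Newton's method for smooth unconstrained convex minimization.

    Each iteration minimizes the local quadratic model of f, i.e.
    solves the linear system hess(x) dx = -grad(x).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x + np.linalg.solve(hess(x), -g)
    return x

# Arbitrary smooth convex test function: f(x) = sum_i exp(a_i^T x).
A = np.array([[1.0, 2.0], [-1.0, 0.5], [0.0, -1.0]])
grad = lambda x: A.T @ np.exp(A @ x)
hess = lambda x: A.T @ (np.exp(A @ x)[:, None] * A)
print(newton(grad, hess, np.zeros(2)))
```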