How are optimization problems solved?

To me it was always something of a mystery how optimization problems are actually solved. After all, the only way we can compute an unknown is by reducing the problem to a linear equation; in the context of minimization, the linear equation that characterizes the minimum always comes from taking some derivative. I didn't know anything beyond this to actually do a computation -- without derivatives, I thought we would be completely lost. So how can we find the minimum in all sorts of problems, including those with non-differentiable objective or constraint functions, or, worse yet, with inequality constraints?

It turns out that we somehow convert such problems into differentiable ones, no matter how far they initially are from being differentiable.
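For a first taste of what such a conversion looks like, here is a minimal sketch in Python. The surrogate $\sqrt{x^2 + \varepsilon}$ and the parameter $\varepsilon$ are my illustrative choices (one common smoothing among several): $|x|$ is not differentiable at $0$, but the surrogate is differentiable everywhere and approaches $|x|$ as $\varepsilon \to 0$.

```python
import math

# A sketch of smoothing a non-differentiable function:
# |x| has a kink at 0, but sqrt(x^2 + eps) is smooth everywhere
# and approximates |x| as the (illustrative) parameter eps -> 0.
def smooth_abs(x, eps=1e-6):
    return math.sqrt(x * x + eps)

def smooth_abs_grad(x, eps=1e-6):
    # Derivative exists even at x = 0, unlike for |x| itself.
    return x / math.sqrt(x * x + eps)

print(smooth_abs(0.0), smooth_abs_grad(0.0))  # ~0.001, 0.0
print(smooth_abs(2.0), smooth_abs_grad(2.0))  # ~2.0, ~1.0
```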

The chain of reductions goes like this:

$$ \begin{matrix} \text{Non-differentiable objective function, inequality constraints} \\ \downarrow \\ \text{Smooth objective function, inequality constraints} \\ \downarrow \\ \text{Smooth unconstrained} \\ \downarrow \\ \text{Quadratic unconstrained} \\ \downarrow \\ \text{Linear equation} \end{matrix} $$
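To make the later links of this chain concrete, here is a sketch on a toy one-dimensional problem of my own choosing. The inequality constraint is absorbed with a log-barrier (one standard choice; penalties are another), giving a smooth unconstrained function, and each Newton step minimizes a local quadratic model of it. In $n$ dimensions that last step means solving the linear system $\nabla^2 B \, s = -\nabla B$; here it is a scalar equation.

```python
import math

# Toy problem:  minimize (x + 2)^2  subject to  x >= 0
# (smooth objective, one inequality constraint; the minimizer is x = 0).
# Log-barrier reduction:  B(x) = (x + 2)^2 - mu * log(x)  is smooth and
# unconstrained on x > 0; as mu -> 0 its minimizer approaches x = 0.
def B_grad(x, mu):
    return 2.0 * (x + 2.0) - mu / x

def B_hess(x, mu):
    return 2.0 + mu / x ** 2

x, mu = 1.0, 1.0
while mu > 1e-8:
    for _ in range(50):                      # Newton on the barrier problem
        # Quadratic model around x; its minimizer solves the
        # linear equation  B''(x) * s = -B'(x)  (scalar here).
        s = -B_grad(x, mu) / B_hess(x, mu)
        if x + s <= 0.0:                     # stay inside the domain of log
            s = -0.5 * x
        x += s
        if abs(B_grad(x, mu)) < 1e-12:
            break
    mu *= 0.1                                # tighten the barrier

print(x)  # approaches the constrained minimizer x = 0
```

Note how every layer of the chain appears: the constraint disappears into the barrier, the smooth problem is approximated by a quadratic, and the quadratic is minimized by solving a linear equation, exactly the one computation we know how to do.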