The field of optimization studies formal methods for solving programs of the form
\[ \begin{align*} \min_x & f(x) \\ \text{subject to } & g_i(x) \le 0 \\ & h_j(x) = 0 \end{align*} \]
where \( f : \mathbb{R}^n \rightarrow \mathbb{R} \) is a scalar objective function and \( g_i, h_j : \mathbb{R}^n \rightarrow \mathbb{R} \) are scalar constraint functions. In general, finding the global minimizer \( x^\star \) is NP-hard. However, there exist efficient (read: polynomial-time) algorithms for two "easier" versions of the problem.
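As a concrete instance of this form (an illustrative example, not from the text), take \( n = 2 \), one inequality constraint, and no equality constraints:

\[ \begin{align*} \min_{x \in \mathbb{R}^2} \; & (x_1 - 2)^2 + (x_2 - 1)^2 \\ \text{subject to } & x_1^2 + x_2^2 - 1 \le 0 \end{align*} \]

Here \( f(x) = (x_1 - 2)^2 + (x_2 - 1)^2 \) and \( g_1(x) = x_1^2 + x_2^2 - 1 \); the feasible set is the unit disk, and the minimizer is the feasible point closest to \( (2, 1) \), namely \( x^\star = (2, 1)/\sqrt{5} \).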
First, if \( f \) is a convex function (equivalently, a function with a convex epigraph) and the constraints \( g_i(x) \le 0 \), \( h_j(x) = 0 \) define a convex feasible set (in particular, when each \( g_i \) is convex and each \( h_j \) is affine), the problem is called a convex program and can be solved to global optimality in polynomial time.
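Linear programs are the simplest family of convex programs. A minimal sketch of solving one with SciPy's `linprog` (the problem below is an illustrative assumption, not taken from the text):

```python
from scipy.optimize import linprog

# Maximize x + 2y (i.e., minimize -x - 2y)
# subject to x + y <= 4, 0 <= x <= 3, 0 <= y.
res = linprog(
    c=[-1, -2],            # objective coefficients (linprog minimizes)
    A_ub=[[1, 1]],         # inequality constraint: x + y <= 4
    b_ub=[4],
    bounds=[(0, 3), (0, None)],
)
print(res.x, res.fun)      # optimal at (x, y) = (0, 4) with objective -8
```

Because the program is convex, the returned point is the global optimum, not merely a local one.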
Second, if the program is not convex, there still exist efficient algorithms for finding local minimizers of \( f \). A local minimizer may or may not be globally optimal, yet in many applications a local minimum is perfectly acceptable.
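A minimal sketch of local constrained minimization with SciPy's SLSQP method (a sequential quadratic programming solver, not an interior point method; the problem is the illustrative disk-constrained example, an assumption for demonstration). Note SciPy's `"ineq"` convention is \( \text{fun}(x) \ge 0 \), so we pass \( -g \):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2
# subject to g(x) = x1^2 + x2^2 - 1 <= 0.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
cons = [{"type": "ineq", "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2}]

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=cons)
print(res.x)  # approximately (2, 1)/sqrt(5), the closest feasible point to (2, 1)
```

For this particular problem the objective and constraint are convex, so the local minimizer found here happens to be global; in general SLSQP guarantees only a local solution.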
The Karush-Kuhn-Tucker (KKT) conditions are necessary conditions for local optimality (under mild constraint qualifications). For convex programs, they are necessary and sufficient conditions for global optimality!
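For the standard form above, with Lagrange multipliers \( \lambda_i \) for the inequality constraints and \( \nu_j \) for the equality constraints, the KKT conditions at a candidate point \( x^\star \) read:

\[ \begin{align*} \nabla f(x^\star) + \sum_i \lambda_i \nabla g_i(x^\star) + \sum_j \nu_j \nabla h_j(x^\star) &= 0 && \text{(stationarity)} \\ g_i(x^\star) \le 0, \quad h_j(x^\star) &= 0 && \text{(primal feasibility)} \\ \lambda_i &\ge 0 && \text{(dual feasibility)} \\ \lambda_i \, g_i(x^\star) &= 0 && \text{(complementary slackness)} \end{align*} \]

Complementary slackness says that each multiplier \( \lambda_i \) can be nonzero only when its constraint is active, i.e., \( g_i(x^\star) = 0 \).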
Reference: Karush-Kuhn-Tucker Conditions
The KKT conditions suggest an excellent way to solve nonlinear programs: search directly for points that satisfy them. This is the idea behind the Primal-Dual Interior Point Method discussed below!
The primal-dual interior point method (PDIPM) is a highly efficient algorithm for constrained optimization.
Reference: Primal-Dual Interior Point Method
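To illustrate the core idea, here is a toy sketch of a primal-dual iteration on a one-dimensional problem (the problem, starting point, and parameter schedule are all illustrative assumptions). PDIPM replaces complementary slackness \( \lambda g = 0 \) with the perturbed condition \( \lambda \cdot (-g) = \mu \) for a barrier parameter \( \mu > 0 \), applies Newton's method to the resulting system, and drives \( \mu \to 0 \):

```python
import numpy as np

# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# Perturbed KKT system in (x, lam), with slack x - 1 > 0 and lam > 0:
#   stationarity:            2x - lam        = 0
#   perturbed complement.:   lam*(x - 1)     = mu
x, lam = 2.0, 1.0              # strictly feasible, strictly dual-positive start
mu = 1.0
while mu > 1e-10:
    for _ in range(50):        # Newton iterations for the current mu
        F = np.array([2 * x - lam, lam * (x - 1) - mu])
        if np.linalg.norm(F) < 1e-12:
            break
        J = np.array([[2.0, -1.0], [lam, x - 1]])  # Jacobian of F
        dx, dlam = np.linalg.solve(J, -F)
        # Fraction-to-boundary rule: keep x - 1 > 0 and lam > 0.
        alpha = 1.0
        if dx < 0:
            alpha = min(alpha, 0.995 * (x - 1) / -dx)
        if dlam < 0:
            alpha = min(alpha, 0.995 * lam / -dlam)
        x, lam = x + alpha * dx, lam + alpha * dlam
    mu *= 0.2                  # shrink the barrier parameter
print(x, lam)                  # converges to the KKT point x* = 1, lam* = 2
```

Real PDIPM solvers handle many variables and constraints, exploit sparsity in the Newton system, and use adaptive rules for updating \( \mu \) and the step length, but the skeleton is the same: Newton steps on the perturbed KKT system while \( \mu \to 0 \).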
The open-source solver IPOPT (pronounced "eye-pea-opt") efficiently solves large-scale constrained nonlinear, nonconvex programs. It is widely considered the standard PDIPM solver in both academia and industry.