Continuous Optimization
Foundational continuous optimization methods forming the computational backbone of operations research. Linear programming via simplex, interior-point, and dual simplex methods with full sensitivity analysis (shadow prices, reduced costs, allowable ranges). Quadratic programming via SLSQP and trust-region interior point for convex optimization. These algorithms underpin nearly every other problem family in the repository.
Two fundamental continuous optimization paradigms
Linear Programming
Optimize a linear objective function subject to linear equality and inequality constraints. The workhorse of operations research: solvable in polynomial time (Khachiyan, 1979; Karmarkar, 1984) and practically efficient via the simplex method (Dantzig, 1947). The implementation includes sensitivity analysis: shadow prices (dual variables), reduced costs, and allowable ranges.
min cᵀx (linear objective; SciPy: c)
s.t. Ax ≤ b (inequality constraints; SciPy: A_ub, b_ub)
Gx = h (equality constraints; SciPy: A_eq, b_eq)
l ≤ x ≤ u (variable bounds; SciPy: bounds)
| Algorithm | Type | Complexity | Details |
|---|---|---|---|
| Revised Simplex Method | Exact | Exp. worst-case | Dantzig (1947). Pivots along vertices of the feasible polytope. Exponential worst-case but typically very fast. Via HiGHS highs-ds |
| Interior-Point Method | Exact | O(n^3.5 L) | Karmarkar (1984). Traverses the interior of the feasible region. Polynomial-time guaranteed. Via HiGHS highs-ipm |
| Dual Simplex | Exact | Exp. worst-case | Lemke (1954). Works in dual space; key for sensitivity analysis and branch-and-bound re-optimization. Via HiGHS |
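The table above maps each LP algorithm to a HiGHS method string in SciPy. A minimal sketch of invoking the dual simplex and reading back the sensitivity information, on an illustrative two-variable cost-minimization LP (the problem data is hypothetical, not from the repository's test set):

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP:  min 2*x1 + 3*x2
#          s.t. x1 + x2 >= 10   (demand; rewritten as -x1 - x2 <= -10)
#               x1      <= 6    (capacity on the cheaper activity)
#               x1, x2  >= 0
c = np.array([2.0, 3.0])
A_ub = np.array([[-1.0, -1.0],   # -x1 - x2 <= -10
                 [ 1.0,  0.0]])  #  x1      <= 6
b_ub = np.array([-10.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs-ds")  # dual simplex, as in the table above

print("x* =", res.x, " objective =", res.fun)  # x* = [6, 4], objective = 24
# Sensitivity analysis comes back with the HiGHS solution:
print("shadow prices:", res.ineqlin.marginals)  # duals of the <= rows
print("reduced costs:", res.lower.marginals)    # duals of the lower bounds
```

Binding rows carry nonzero shadow prices; the slack capacity row would report zero, matching the complementary-slackness conditions.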
Quadratic Programming
Optimize a quadratic objective subject to linear constraints. Generalizes LP by adding a quadratic term (LP is the special case Q = 0). When Q is positive semidefinite the problem is convex, so every local optimum is global (and unique when Q is positive definite); when Q is indefinite the problem is NP-hard in general. The implementation provides two solver methods: SLSQP (Kraft, 1988) and trust-region interior point (Nocedal & Wright, 2006).
min ½xᵀQx + cᵀx (quadratic objective)
s.t. Ax ≤ b (inequality constraints)
Gx = h (equality constraints)
l ≤ x ≤ u
Q ⪰ 0 (positive semidefinite ⇒ convex)
| Algorithm | Type | Complexity | Details |
|---|---|---|---|
| SLSQP (Sequential Least Squares QP) | Exact | Superlinear | Kraft (1988). General NLP solver using QP subproblems. Superlinear convergence; no polynomial-time guarantee. Via scipy.optimize.minimize |
| Trust-Region Interior Point | Exact | Polynomial (convex) | Nocedal & Wright (2006). Exploits explicit Hessian Q; polynomial for convex QP. Via trust-constr |
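Both QP solvers are reached through scipy.optimize.minimize; the trust-region path exploits the explicit Hessian Q, as the table notes. A sketch on a toy convex QP (the Q, c, and constraint data are illustrative; only the method string comes from the table above):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Toy convex QP:  min 1/2 x^T Q x + c^T x   s.t.  x1 + x2 = 1,  x >= 0
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # symmetric positive definite => convex
c = np.array([-1.0, -1.0])

fun  = lambda x: 0.5 * x @ Q @ x + c @ x
jac  = lambda x: Q @ x + c      # gradient
hess = lambda x: Q              # constant Hessian, used by trust-constr

budget = LinearConstraint(np.ones((1, 2)), 1.0, 1.0)  # equality: x1 + x2 = 1
res = minimize(fun, x0=np.array([0.5, 0.5]), jac=jac, hess=hess,
               method="trust-constr", constraints=[budget],
               bounds=[(0.0, None)] * 2)

print("x* =", res.x)              # analytic solution: [0.25, 0.75]
print("KKT multipliers:", res.v)  # Lagrange multipliers, cf. sensitivity row
```

Swapping `method="trust-constr"` for `method="SLSQP"` (and dropping the `hess` argument, which SLSQP does not use) exercises the other solver on the same problem.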
Method Comparison
Linear vs. Quadratic Programming at a glance
| Aspect | Linear Programming | Quadratic Programming |
|---|---|---|
| Objective | Linear: cᵀx | Quadratic: ½xᵀQx + cᵀx |
| Relationship | Special case of QP (Q = 0) | Generalizes LP with quadratic term |
| Constraints | Linear equalities & inequalities | Linear equalities & inequalities |
| Algorithms | Revised Simplex, Interior-Point, Dual Simplex (via HiGHS) | SLSQP, Trust-Region Interior Point (via SciPy) |
| Complexity | Poly (IPM) / Exp worst-case (Simplex) | Poly (convex IPM) / Superlinear (SLSQP) |
| Sensitivity | Shadow prices (dual variables), reduced costs, allowable ranges | Via KKT multipliers |
| When to Use | Objective and all constraints are linear | Quadratic penalty or risk term needed (e.g., variance, regularization) |
| Tests | 11 tests | 11 tests |
Try It Yourself
2D Linear Programming — visualize the feasible region and find the optimal vertex
Note: The solver below uses maximization (max cᵀx). This is equivalent to the standard minimization form: max cᵀx = −min (−c)ᵀx. The code internally converts to minimization by negating the coefficients.
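The conversion the note describes is one line with scipy.optimize.linprog. A toy maximization (not the widget's actual problem):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize 3*x1 + 5*x2 subject to x1 + x2 <= 10, x >= 0.
# linprog minimizes, so pass -c and negate the optimal value afterwards.
c = np.array([3.0, 5.0])
res = linprog(-c, A_ub=[[1.0, 1.0]], b_ub=[10.0],
              bounds=[(0, None)] * 2, method="highs")
x_star, max_value = res.x, -res.fun
print(x_star, max_value)  # all weight on the larger coefficient: [0, 10], 50
```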
Key References
Foundational works in continuous optimization
Linear Programming
Dantzig, G.B. (1947). Maximization of a linear function of variables subject to linear inequalities. In Activity Analysis of Production and Allocation, Wiley.
Khachiyan, L.G. (1979). A polynomial algorithm in linear programming. Soviet Mathematics Doklady, 20, 191–194.
Karmarkar, N. (1984). A new polynomial-time algorithm for linear programming. Combinatorica, 4(4), 373–395.
Lemke, C.E. (1954). The dual method of solving the linear programming problem. Naval Research Logistics, 1(1), 36–47.
Quadratic Programming
Markowitz, H. (1952). Portfolio selection. The Journal of Finance, 7(1), 77–91.
Kraft, D. (1988). A software package for sequential quadratic programming. DFVLR-FB 88-28.
Nocedal, J. & Wright, S.J. (2006). Numerical Optimization. 2nd ed., Springer.
Optimality Theory
Kuhn, H.W. & Tucker, A.W. (1951). Nonlinear programming. In Proc. 2nd Berkeley Symposium on Mathematical Statistics and Probability, 481–492.
Bertsimas, D. & Tsitsiklis, J.N. (1997). Introduction to Linear Optimization. Athena Scientific.
Solvers & Benchmarks
Huangfu, Q. & Hall, J.A.J. (2018). Parallelizing the dual revised simplex method. Math. Prog. Comp., 10, 119–142. (HiGHS)
Netlib LP Test Problems: netlib.org/lp
QPLIB — QP Instance Library: qplib.zib.de
Explore the full repository with 57 problems across 10 families