The simplex algorithm
The simplex algorithm (or “method”) is an iterative technique for finding an optimal solution of the linear programming problem of eqs. [eq:constrained_linear_optimization_1] and [eq:constrained_linear_optimization_2]. The details of the algorithm are somewhat involved, but the basic idea is to start at a vertex of the feasible solution space \(S\) and traverse an edge of the polytope that leads to another vertex with a greater value of \(f\). Then, repeat this process until there is no neighboring vertex with a greater value of \(f\), at which point the solution is guaranteed to be optimal.
Rather than present the details of the algorithm, we choose to show an example using Python. Several improvements on the original algorithm have been implemented in many standard software packages, including the scipy.optimize module of Python's scipy package (Virtanen et al. 2019).
Maximize the objective function $$\begin{align} \label{eq:simplex_example_01_f} f(\bm{x}) = \bm{c}\cdot\bm{x} \end{align}$$ for \(\bm{x} \in \mathbb{R}^2\) and $$\begin{align} \bm{c} &= \begin{bmatrix} 5 & 2 \end{bmatrix}^\top \end{align}$$ subject to the constraints $$\begin{align}\label{eq:simplex_example_01_constraints} 0 \le x_1 &\le 10 \\ -5 \le x_2 &\le 15 \\ 4 x_1 + x_2 &\le 40 \\ x_1 + 3 x_2 &\le 35 \\ -8 x_1 - x_2 &\ge -75. \end{align}$$
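Before handing the problem to a solver, it can help to see the vertex idea from above concretely. As a sketch (not part of the solution that follows), the following brute-force check enumerates every intersection of two constraint boundaries, keeps the feasible intersections (the vertices of \(S\)), and picks the vertex with the greatest \(f\). The matrix G and vector h are our own encoding of the bounds and inequalities above, with the ≥ constraint flipped to ≤.

```python
import itertools
import numpy as np

# Encode all constraints (bounds included) as G x <= h.
G = np.array([
    [-1.0, 0.0],   # -x1 <= 0   (x1 >= 0)
    [ 1.0, 0.0],   #  x1 <= 10
    [ 0.0, -1.0],  # -x2 <= 5   (x2 >= -5)
    [ 0.0, 1.0],   #  x2 <= 15
    [ 4.0, 1.0],   # 4 x1 + x2 <= 40
    [ 1.0, 3.0],   # x1 + 3 x2 <= 35
    [ 8.0, 1.0],   # 8 x1 + x2 <= 75 (flipped from -8 x1 - x2 >= -75)
])
h = np.array([0.0, 10.0, 5.0, 15.0, 40.0, 35.0, 75.0])
c = np.array([5.0, 2.0])

best_x, best_f = None, -np.inf
for i, j in itertools.combinations(range(len(h)), 2):
    M = G[[i, j], :]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # boundaries are parallel; no intersection point
    v = np.linalg.solve(M, h[[i, j]])
    if np.all(G @ v <= h + 1e-9):  # feasible, so a vertex of S
        if c @ v > best_f:
            best_x, best_f = v, c @ v

print(best_x, best_f)  # the optimum lies at a vertex
```

This exhaustive search scales terribly with dimension, which is exactly why the simplex algorithm visits only a path of adjacent vertices instead.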
First, load some Python packages.
from scipy.optimize import linprog
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
Encoding the Problem
Before we can use linprog, we must first encode eqs. [eq:simplex_example_01_f] and [eq:simplex_example_01_constraints] into a form linprog will recognize. We begin with f, which we can write as c ⋅ x with the coefficients of c as follows.
c = [-5, -2] # negative to find max
We've negated each constant because linprog minimizes f and we want to maximize f. Now let's encode the
inequality constraints. We will write the left-hand side coefficients in
the matrix A and the
right-hand-side values in vector a such that $$\begin{align}
    A \bm{x} \le \bm{a}.
\end{align}$$ Notice that one of our constraint inequalities is ≥ instead of ≤. We can flip this by multiplying the
inequality by  − 1. We use simple lists
to encode A and a.
A = [
  [4, 1], 
  [1, 3],
  [8, 1]
]
a = [40, 35, 75]
Now we need to define the lower l and upper u bounds of x. The function linprog expects these to be in a single list
of lower- and upper-bounds of each xi.
lu = [
  (0, 10),
  (-5, 15),
]
We want to keep track of each step linprog takes. We can access these by defining
a function callback, to be passed to
linprog.
x = [] # for storing the steps
def callback(res): # called at each step
  global x
  print(f"nit = {res.nit}, x_k = {res.x}")
  x.append(res.x.copy()) # store
Now we need to call linprog. We don't
have any equality constraints, so we need only use the keyword arguments
A_ub=A and b_ub=a. For demonstration purposes, we tell it to use the 'simplex' method; linprog's other methods use improved algorithms based on the simplex and are generally preferred.
res = linprog(
  c, 
  A_ub=A, 
  b_ub=a, 
  bounds=lu,
  method='simplex',
  callback=callback
)
x = np.array(x)
nit = 0, x_k = [ 0. -5.]
nit = 1, x_k = [10. -5.]
nit = 2, x_k = [8.75 5.  ]
nit = 3, x_k = [7.72727273 9.09090909]
nit = 4, x_k = [7.72727273 9.09090909]
nit = 5, x_k = [7.72727273 9.09090909]
nit = 5, x_k = [7.72727273 9.09090909]
So the optimal solution (x,f(x)) is as follows.
print(f"optimum x: {res.x}")
print(f"optimum f(x): {-res.fun}")
optimum x: [7.72727273 9.09090909]
optimum f(x): 56.81818181818182
The last point was repeated:
- once because there was no adjacent vertex with greater f(x), and
- twice because the algorithm calls callback twice on the last step.
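As a quick sanity check (ours, not part of the original walkthrough), we can re-solve the same problem with linprog's 'highs' method, the recommended backend in recent scipy versions (available since scipy 1.6), and confirm that it agrees with the simplex result and that the optimum satisfies every inequality constraint.

```python
import numpy as np
from scipy.optimize import linprog

# same encoding as above; c is negated because linprog minimizes
c = [-5, -2]
A = [[4, 1], [1, 3], [8, 1]]
a = [40, 35, 75]
lu = [(0, 10), (-5, 15)]

res = linprog(c, A_ub=A, b_ub=a, bounds=lu, method='highs')
print(res.x, -res.fun)  # should match the simplex optimum

# the reported optimum must satisfy A x <= a
assert np.all(np.array(A) @ res.x <= np.array(a) + 1e-9)
```

Note that the 'highs' backend does not report intermediate vertices the way our callback captured them, so it is less useful for the plot below.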
Plotting
When the solution space is in ℝ2, it is helpful to graphically represent the solution space, constraints, and the progression of the algorithm. We begin by defining the inequality lines from A and a over the bounds of x1.
n = len(c) # number of variables x
m = np.shape(A)[0] # number of inequality constraints
x2 = np.empty([m,2])
for i in range(0,m):
  x2[i,:] = -A[i][0]/A[i][1]*np.array(lu[0]) + a[i]/A[i][1]
Now we plot a contour plot of f over the bounds of x1 and x2 and overlay the
inequality constraints and the steps of the algorithm stored in x.
lu_array = np.array(lu)
fig, ax = plt.subplots()
mpl.rcParams['lines.linewidth'] = 3
# contour plot
X1 = np.linspace(*lu_array[0],100)
X2 = np.linspace(*lu_array[1],100)
X1, X2 = np.meshgrid(X1,X2)
F2 = -c[0]*X1 + -c[1]*X2 # negative because max hack
con = ax.contourf(X1,X2,F2)
cbar = fig.colorbar(con,ax=ax)
cbar.ax.set_ylabel('objective function')
# bounds on x
un = np.array([1,1])
opts = {'c':'w','label':None,'linewidth':6}
plt.plot(lu_array[0],lu_array[1,0]*un,**opts)
plt.plot(lu_array[0],lu_array[1,1]*un,**opts)
plt.plot(lu_array[0,0]*un,lu_array[1],**opts)
plt.plot(lu_array[0,1]*un,lu_array[1],**opts)
# inequality constraints
for i in range(0,m):
  p, = plt.plot(lu[0],x2[i,:],c='w')
p.set_label('constraint')
# steps
plt.plot(
  x[:,0], x[:,1], '-o', c='r', 
  clip_on=False, zorder=20, label='simplex'
)
plt.ylim(lu_array[1])
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.legend()
plt.show()
Online Resources for Section 10.3
No online resources.