The Use of the Duality Principle to Solve Optimization Problems

The duality principle provides that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. The solution to the dual problem provides a lower bound on the solution of the primal (minimization) problem. However, the optimal values of the primal and dual problems need not be equal; their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. In other words, given any linear program, there is another related linear program called the dual. In this paper, an understanding of the dual linear program will be developed. This understanding will give important insights into the algorithms and solutions of optimization problems in linear programming. The main concepts of duality will thus be explored through the solution of simple optimization problems.


Introduction
The duality theorem for linear optimization was conjectured by John von Neumann immediately after Dantzig presented the linear programming problem. Von Neumann drew on his work in game theory and conjectured that the two-person zero-sum matrix game was equivalent to linear programming. Rigorous proofs were first published in 1948 (Evar et al., 1993).
Linear programming problems are optimization problems in which the objective function and the constraints are all linear. In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables. The goal is to maximize the value of the objective function subject to the constraints. A solution is a vector (a list) of n values that achieves the maximum value for the objective function (Wikipedia, 2016).
In the dual problem, the objective function is a linear combination of the m values that are the limits in the m constraints from the primal problem. There are n dual constraints, each of which places a lower bound on a linear combination of m dual variables (Wikipedia, 2016).

Some Uses of Duality in Linear Programming
On first encountering duality in linear programming, one's first reaction is to question its relevance. Duality in linear programming is analogous to eigenvalues and eigenvectors in linear algebra: at first glance the concept does not show its relevance immediately, but as one learns further one realizes how fundamental it is to the entire subject (Wordpress, 2012). The uses of duality in linear programming include, but are not limited to, the following (Wordpress, 2012).
• Any feasible solution to the dual problem gives a bound on the optimal objective function value in the primal problem. The formal statement of this fact is the weak duality theorem.
• Understanding the dual problem leads to specialized algorithms for some important classes of linear programming problems. Examples include the transportation simplex method, the Hungarian algorithm for the assignment problem and the network simplex method. Column generation also relies partly on duality.
• The dual can be helpful in sensitivity analysis. Changing the right-hand side constraint vector of the primal or adding a new constraint to it can make the original primal optimal solution infeasible. However, this operation changes only the objective function or adds a new variable to the dual, respectively. The changes notwithstanding, the original dual optimal solution is still feasible (and is usually not far from the new dual optimal solution).
• It is sometimes easier to find an initial feasible solution to the dual than to the primal.
• The dual variables give the shadow prices for the primal constraints. For instance, if there exists a profit maximization problem with a resource constraint i, then the value yi of the corresponding dual variable in the optimal solution indicates an increase of yi in the maximum profit for each unit increase in the amount of resource i (absent degeneracy and for small increases in resource i).
• Sometimes the dual is easier to solve. A primal problem with many constraints and few variables can be converted into a dual problem with few constraints and many variables. Fewer constraints are desirable in linear programs because the basis matrix is an m by m matrix, where m is the number of constraints. Thus the fewer the constraints, the smaller the basis matrix, and the fewer the computations required in each iteration of the simplex method.
• The dual can be used to detect primal infeasibility. This is a consequence of weak duality: if the dual is a minimization problem whose objective function value can be made arbitrarily small (i.e., the dual is unbounded below), and any feasible solution to the dual gives an upper bound on the optimal objective function value in the primal, then the primal problem cannot have any feasible solution.
• Duality in linear programming has certain far-reaching consequences of an economic nature. This may assist managers in arriving at alternative courses of action and their relative values.
• Calculation of the dual checks the accuracy of the primal solution.
• Duality in linear programming shows that each linear programme is equivalent to a two-person zero-sum game. It also indicates a fairly close relationship between linear programming and the theory of games.
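The weak duality bound in the first item above can be illustrated numerically. The following sketch uses hypothetical data (the matrix A and vectors b, c below are illustrative, not taken from the paper): for a primal "min c·x subject to Ax ≥ b, x ≥ 0" and its dual "max b·y subject to Aᵀy ≤ c, y ≥ 0", any feasible pair must satisfy b·y ≤ c·x.

```python
import numpy as np

# Hypothetical primal data:  min c.x  s.t.  A x >= b,  x >= 0
# Corresponding dual:        max b.y  s.t.  A^T y <= c,  y >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([5.0, 4.0])

x = np.array([2.0, 1.0])   # primal-feasible: A @ x = [4, 7] >= b
y = np.array([1.0, 1.0])   # dual-feasible:   A.T @ y = [4, 3] <= c

# Verify feasibility of both points.
assert np.all(A @ x >= b) and np.all(x >= 0)
assert np.all(A.T @ y <= c) and np.all(y >= 0)

# Weak duality: the dual objective bounds the primal objective from below.
print(b @ y, "<=", c @ x)
```

Here the dual-feasible point certifies that no primal-feasible solution can cost less than b·y, without solving the primal at all.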

Methodology
From each sub-optimal point that satisfies all the constraints in the primal problem of linear programming, there is a direction or subspace of directions in which movement increases the objective function. Moving in such a direction is said to remove slack between the candidate solution and one or more constraints. An infeasible value of the candidate solution is one that violates one or more of the constraints.
In the dual problem, the dual vector multiplies the constraints that determine the positions of the constraints in the primal. Varying the dual vector in the dual problem is equivalent to revising the upper bounds in the primal problem. The lowest upper bound is sought. In other words, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. An infeasible value of the dual vector is one that is too low; it sets the candidate positions of one or more of the constraints in a position that excludes the actual optimum.

The above table can be interpreted both from left to right and from right to left. Below is an optimization model to consider as the primal problem together with its dual problem. This is a case of a primal minimization problem. To define the respective dual problem, the table summarizing the relationships of duality is read from left to right. Consequently, the dual problem will be one of maximization. Additionally, the first and second constraints of the primal problem will define the decision variables (dual variables) in the dual problem (Y1 and Y2 respectively), with the right-hand sides of the primal constraints becoming the coefficients of the dual objective function. The dual problem will then have one constraint for each variable in the primal problem; in this case the primal variables X1, X2 and X3 define the structure of constraints 1, 2 and 3 of the dual problem. For example, the first constraint of the dual problem (associated with the primal variable X1) will be 2Y1 + 2Y2 ≤ 160. Note that the coefficients of the dual variables are the coefficients of X1 in the first and second primal constraints, respectively. The dual constraint is of the ≤ type because the variable X1 in the primal minimization problem has non-negativity status (X1 ≥ 0). Finally, the right-hand side of the dual constraint is the coefficient of X1 in the objective function of the primal problem. Following this procedure, the remaining constraints of the dual problem are obtained. Finally, because the first two constraints of the primal minimization problem are of the ≥ type, the associated dual variables in the maximization problem will be non-negative.
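Since the full table and model are not reproduced above, the dualization rule just described can be sketched with hypothetical coefficients; only the first dual constraint (2Y1 + 2Y2 ≤ 160) echoes numbers mentioned in the text, and the remaining entries of A, b and c are made up for illustration.

```python
import numpy as np

# Hypothetical primal:  min c.x  s.t.  A x >= b,  x >= 0  (three variables X1..X3).
A = np.array([[2.0, 3.0, 1.0],    # primal constraint 1 -> dual variable Y1
              [2.0, 1.0, 4.0]])   # primal constraint 2 -> dual variable Y2
b = np.array([10.0, 8.0])         # primal right-hand sides -> dual objective coefficients
c = np.array([160.0, 120.0, 90.0])  # primal objective coefficients -> dual right-hand sides

# One dual constraint per primal variable: column j of A gives constraint j,
# of the <= type because each primal variable is non-negative.
dual_constraints = []
for j in range(A.shape[1]):
    lhs = " + ".join(f"{A[i, j]:g}Y{i + 1}" for i in range(A.shape[0]))
    dual_constraints.append(f"{lhs} <= {c[j]:g}")

dual_objective = "maximize " + " + ".join(f"{b[i]:g}Y{i + 1}" for i in range(len(b)))
print(dual_objective)
print("\n".join(dual_constraints))
```

The first printed constraint is 2Y1 + 2Y2 <= 160, matching the example worked in the text.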

Worked Examples
Two examples will be presented to illustrate the relationship between primal and dual linear programs, showing that linear programming problems can be solved from two different perspectives.

Example 1.
A university staff contemplated what to purchase for his family for lunch in the bakery of the school cafeteria. He came a bit late and there were just two choices of food left, namely wheat bread which costs N12 each and chocolate bread which costs N20 each. The bakery was service-oriented and was happy to let the staff purchase a fraction of an item if he wished. The bakery required 7 ounces of chocolate to make each wheat bread and 12 ounces of chocolate for each chocolate bread. In addition, 6 ounces of sugar were needed for each wheat bread and 8 ounces of sugar for each chocolate bread. The staff had decided that he needed at least 100 ounces of sugar and 120 ounces of chocolate. He wished to optimize his purchase by finding the least expensive combination of wheat and chocolate bread to meet these requirements.
Solution
Primal: Let x1 and x2 be the numbers of wheat breads and chocolate breads respectively. Then the optimization problem can be formulated as follows:

Minimize Z = 12x1 + 20x2
Subject to:
6x1 + 8x2 ≥ 100
7x1 + 12x2 ≥ 120
x1, x2 ≥ 0

Expressing this in standard form (with surplus variables s1 and s2) we have:

Minimize Z = 12x1 + 20x2 + 0s1 + 0s2
Subject to:
6x1 + 8x2 - s1 = 100
7x1 + 12x2 - s2 = 120
x1, x2, s1, s2 ≥ 0

The optimal solution is x1 = 15, x2 = 5/4; Zmin = N205. Hence 15 loaves of wheat bread and 1¼ loaves of chocolate bread will minimize his cost while also giving him the required amounts of sugar and chocolate.

Dual: We now adopt the perspective of the wholesaler who supplies the baker with the chocolate and sugar needed to bake the bread. The baker told the supplier all he needed and also showed him the list drafted from the university staff's demand. The supplier then solved the following optimization problem: how can I set the prices per ounce of chocolate and sugar so that the baker will still buy from me while my revenue is maximized? The baker will buy only if the total cost of raw materials for a wheat bread is at most N12 (and likewise at most N20 for a chocolate bread); otherwise he runs the risk of making a loss if the staff opts to buy that bread. This restriction imposes the following constraints on the prices.
Here we use the duality technique to form a new model from the primal problem model. Then we have:

Maximize Y = 100u1 + 120u2
Subject to:
6u1 + 7u2 ≤ 12
8u1 + 12u2 ≤ 20
u1, u2 ≥ 0

where u1 and u2 are the prices per ounce of sugar and chocolate respectively. Optimizing with the simplex table, the wholesaler's optimal prices for an ounce of sugar and an ounce of chocolate are N0.25 and N1.5 respectively. Looking closely we discover that this gives the same optimal amount of N205. This clearly indicates that solving the dual problem also solves the primal problem.
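Both models in Example 1 can be checked numerically. A possible sketch uses scipy.optimize.linprog (assuming SciPy is available): linprog minimizes subject to A_ub x ≤ b_ub, so the primal's ≥ rows are negated into ≤ form and the dual maximization is expressed by minimizing the negated objective.

```python
from scipy.optimize import linprog

# Primal: min 12x1 + 20x2  s.t.  6x1 + 8x2 >= 100,  7x1 + 12x2 >= 120,  x >= 0.
# The >= rows are negated to fit linprog's A_ub @ x <= b_ub convention.
primal = linprog(c=[12, 20],
                 A_ub=[[-6, -8], [-7, -12]],
                 b_ub=[-100, -120],
                 bounds=[(0, None), (0, None)])

# Dual: max 100u1 + 120u2  s.t.  6u1 + 7u2 <= 12,  8u1 + 12u2 <= 20,  u >= 0.
# Maximization is expressed by minimizing the negated objective.
dual = linprog(c=[-100, -120],
               A_ub=[[6, 7], [8, 12]],
               b_ub=[12, 20],
               bounds=[(0, None), (0, None)])

print(primal.x, primal.fun)   # x1 = 15, x2 = 1.25, Zmin = 205
print(dual.x, -dual.fun)      # u1 = 0.25, u2 = 1.5, Ymax = 205
```

Both solvers return the common optimal value N205, confirming a zero duality gap for this example.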

Example 2
The simplex method is employed as the solution method as follows:
• Write the problem in matrix form, starting with the first constraint as the first row and the objective function as the last row.
• Find the transpose of the primal matrix to get the dual matrix.
• The first rows of the transpose translate to the dual constraints and the last row becomes the new objective function.
• Observe that the inequality signs of the constraints change after dualization, and the objective changes from maximization to minimization or vice versa, depending on the original objective.
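The steps above can be sketched on a small hypothetical maximization problem (the coefficients below are illustrative, not from the paper): constraints are written as rows with the right-hand sides in the last column and the objective in the last row, and transposing this matrix produces the dual.

```python
import numpy as np

# Hypothetical primal: maximize 3x1 + 5x2
#   subject to  x1 + 2x2 <= 8,  2x1 + x2 <= 10,  x >= 0.
# Matrix form: one row per constraint, objective last row, RHS last column.
M = np.array([[1, 2,  8],
              [2, 1, 10],
              [3, 5,  0]])

D = M.T  # dualization step: transpose the primal matrix

# The first rows of the transpose are the dual constraints (signs flip to >=),
# and its last row gives the new (minimization) objective.
dual_rows = []
for row in D[:-1]:
    lhs = " + ".join(f"{int(a)}y{i + 1}" for i, a in enumerate(row[:-1]))
    dual_rows.append(f"{lhs} >= {int(row[-1])}")

dual_objective = "minimize " + " + ".join(f"{int(a)}y{i + 1}" for i, a in enumerate(D[-1][:-1]))
print(dual_objective)
print("\n".join(dual_rows))
```

So the hypothetical primal's dual reads: minimize 8y1 + 10y2 subject to y1 + 2y2 ≥ 3 and 2y1 + y2 ≥ 5, with y ≥ 0, exactly as the listed steps prescribe.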

Discussion
The examples show that the optimization problem can be viewed from two perspectives namely the primal problem and the dual problem.
The examples also show that the duality gap is zero. This is expected, since linear programs are convex optimization problems; in other words, the optimal values of the primal and dual problems in the examples coincide.
Sometimes it is easier to solve the dual, and the dual can also be used to detect primal infeasibility as a consequence of weak duality. Solving the dual is easier, in particular, when the primal problem has many constraints and few variables, since it can be converted into a dual problem with few constraints and many variables. Fewer constraints are preferred in linear programs because the basis matrix is an m by m matrix, where m is the number of constraints. Thus the fewer the constraints, the smaller the basis matrix, and the fewer the computations required in each iteration of the simplex method.

Conclusion
There are two approaches to an optimization problem, namely the primal problem and the dual problem. The latter can be used to check the accuracy of the former, especially in the case of a zero duality gap.
Duality in linear programming has far-reaching consequences of an economic nature, which assist managers in arriving at alternative courses of action and their relative values. It therefore provides an efficient algebraic technique that enhances the study of the dynamic behavior of optimization problems.