Chapter 1 - Introduction
Perspective
Formulation of Optimization Problems
Topics in Optimization
Method of Attack
Summary
Perspective
The objective of optimization is to select the best possible decision for a given set of circumstances without having to enumerate all of the possibilities. From experience, designers learn to recognize good proportions and critical restrictions, so their preliminary work will not require significant modification and improvement. The subject that formalizes and explains this talent is a branch of applied mathematics known as optimization theory, a science that studies the best (1). In recent years the subject of optimization has matured and is widely used in numerous applications, e.g., petroleum refining operations, routes for commercial aircraft, livestock feed blending, and missile trajectories. Optimization methods take advantage of the mathematical structure of the problem to find the best values efficiently, and the size of the problems being solved has grown with computer capability, especially in the case of linear programming.
Scientists, especially mathematicians, have always been occupied with questions of optimization, i.e., finding extreme points (maxima and minima). Euclid, in 300 B.C., was associated with the problem of finding the shortest distance that may be drawn from a point to a line, and Heron of Alexandria, in the first century A.D., studied the optimization problem of light traveling between two points by the shortest path. It was Fermat in 1657 who developed the more general principle that light travels between two points in minimum time. In the 1870's Gibbs developed the law which states that a system is in chemical equilibrium if the free energy is a minimum (2). This result is routinely used today to compute the equilibrium composition of a multicomponent mixture of gases, liquids and solids.
The development of mathematical theory for optimization closely followed the development of calculus, as pointed out by Hancock (3). In fact, Hancock wrote the first modern book on the subject, entitled Theory of Maxima and Minima, which was published in 1917. This definitive work serves even today as an authoritative source.
In the late 1930's there was a spurt of interest in the subject of the calculus of
variations, but the real impetus to optimization came with World War II and the development
of the digital computer. In the 1940's Dantzig (4) recognized the mathematical structure
of some military logistics problems and developed the Simplex Method of linear programming.
Linear programming has moved from an interesting mathematical topic to probably the
most important and widely applied optimization procedure. Its development has closely followed the continually increasing capabilities of the digital computer. The ability
to solve large sets of linear equations with the computer has permitted the application
of linear programming to industrial problems, such as the optimization of a large
petroleum refinery.
In the 1950's optimization received another boost with the advent of the space age. The optimal trajectory for a missile was one of a number of problems for which the methods of dynamic programming and the maximum principle were developed in this country and in the U.S.S.R. These methods are now being used in areas that are not space-related.
Formulation of Optimization Problems
Three basic components are required to optimize an industrial process. First, the
process or a mathematical model of the process must be available, and the process
variables which can be manipulated and controlled must be known. Often, obtaining
a satisfactory process model with known control variables is the most difficult task.
Second, an economic model of the process is required. This is an equation that represents the profit made from the sale of products and the costs associated with their production, such as raw materials, operating costs, fixed costs, taxes, etc. Finally, an optimization procedure must be selected that locates the values of the independent variables of the process which produce the maximum profit or minimum cost, as measured by the economic model. Also, the constraints on materials, process equipment, manpower, etc. must be satisfied as specified in the process model.
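As an illustration of these three components, the sketch below defines a process model, an economic model, and the role of the optimization procedure for a hypothetical process unit; every function, coefficient, and limit in it is an assumption made only for this example.

```python
# A minimal sketch of the three components for a hypothetical process unit;
# all coefficients and limits are illustrative assumptions, not plant data.

def process_model(T, P):
    """Process model: predicted product yield (fraction) as a function of the
    manipulated variables, temperature T (K) and pressure P (bar)."""
    return 0.9 - 1.0e-4 * (T - 600.0) ** 2 - 5.0e-3 * (P - 20.0) ** 2

def economic_model(T, P):
    """Economic model: profit = product revenue minus raw material and
    utility costs, in arbitrary dollars per hour."""
    revenue = 500.0 * process_model(T, P)        # sales of product
    feed_cost = 120.0                            # raw material
    utility_cost = 0.05 * (T - 300.0) + 1.5 * P  # heating and compression
    return revenue - feed_cost - utility_cost

# Optimization procedure: any of the methods in this book that searches over
# the independent variables (T, P), subject to equipment limits such as
# 500 <= T <= 700 and 1 <= P <= 40, for the values that maximize economic_model.
print(economic_model(600.0, 20.0))
```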
Figure 1-1 is a diagram that helps place industrial practice in perspective by relating process and economic models to the two levels of optimization. Plant optimization finds the best operating conditions for a plant made up of process units manufacturing specified amounts of various products, with the goal of maximizing the company's profits within the constraints set by the available raw materials and by how these raw materials can be transformed in the plant. Plant optimization usually approximates the individual process units in a relatively simple manner to obtain a satisfactory answer in a reasonable time. This requires that the optimal operating conditions of the individual process units be known and that these results be used in the plant optimization so the plant operates at maximum profit. Also, because of the complexity of large industrial plants, individual process models are usually simplified by using simulation equations to keep the computer programming effort and computer costs within reason. However, with individual process units it is feasible to use more detailed models to determine more precisely the optimal operating conditions, e.g., temperatures, pressures, recycle rates, etc., so that the minimum operating cost is known as a function of these variables.
As shown in Figure 1-1, simulation equations are obtained from process models. The procedure is to develop precise process models based on the fundamentals of thermodynamics, kinetics and transport phenomena. This usually leads to process models which accurately represent the physical and chemical changes taking place over a wide range of conditions. However, these models are usually more complicated in mathematical form and may require the solution of differential equations. Consequently, these process models are usually exercised over the range of operation of the process, and simulation (regression) equations of a simplified mathematical form are developed, which are then used with the optimization method for the plant optimization. However, it may not be necessary to go through the simulation-equation step if the equations that describe the key variables, i.e., the ones that affect the economic performance of the process or plant, are not complicated.
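The sketch below illustrates this step under assumed conditions: a stand-in "detailed" reactor model is exercised over an operating range and a simple polynomial simulation equation is fitted to the results. The model, the range, and the polynomial order are all assumptions chosen only for illustration.

```python
import numpy as np

def detailed_model(T):
    """Stand-in for a rigorous process model: conversion as a function of
    reactor temperature (K), imagined to come from solving the kinetic and
    energy-balance equations."""
    return 1.0 - np.exp(-0.01 * (T - 500.0))

# Exercise the detailed model over the operating range of interest.
T_grid = np.linspace(500.0, 700.0, 21)
conversion = detailed_model(T_grid)

# Fit a quadratic simulation (regression) equation to the results; this
# simplified form is what would be passed to the plant-level optimization.
simulation_eq = np.poly1d(np.polyfit(T_grid, conversion, deg=2))

print("conversion at 600 K: detailed model =", detailed_model(600.0))
print("                     simulation eq. =", simulation_eq(600.0))
```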
Topics in Optimization
The two areas of optimization theory are mathematical programming and variational methods, as shown in Figure 1-2. Also, a number of techniques are listed under each of these areas. In mathematical programming, the objective is to locate the best point x = (x1, x2, ..., xn) which optimizes (maximizes or minimizes) the economic model of the process. In variational methods, the objective is to locate the optimal function which maximizes or minimizes the economic model. An example of an optimization problem for each division is given in the figure. Generally, mathematical programming methods are applicable to steady-state problems, and variational methods are applicable to dynamic problems.
Mathematical programming methods are of two types and are referred to as direct or indirect methods. Direct methods, such as multivariable search methods and linear programming, move from a starting point through consistently improved values of the economic model to arrive at the optimum. Indirect methods, such as analytical methods and geometric programming, solve a set of algebraic equations, and the solution to the set of equations may be the optimum of the economic model. For example, in analytical methods the algebraic equation set is obtained by differentiating the economic model with respect to each independent variable and setting the resulting equations equal to zero.
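A small sketch of the indirect approach is given below, using an assumed one-variable cost model: the derivative of the economic model is set equal to zero, and the resulting algebraic equation is solved for the candidate optimum.

```python
import sympy as sp

# Indirect (analytical) method on an assumed one-variable economic model:
# differentiate, set the derivative to zero, and solve the algebraic equation.
x = sp.symbols('x', positive=True)
cost = 2 * x**2 - 12 * x + 30                # illustrative cost model

stationary = sp.solve(sp.diff(cost, x), x)   # d(cost)/dx = 4x - 12 = 0
print("stationary point:", stationary)                       # x = 3
print("cost at that point:", cost.subs(x, stationary[0]))    # 12
```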
In this book, the first seven mathematical programming methods listed in Figure 1-2 will be discussed, as will the topic of the calculus of variations under variational methods. These were selected because they are the most widely used in industrial practice. A bibliography of texts on each of these subjects is given at the end of each chapter.
To briefly describe the topics given in Figure 1-2, analytical methods, also called the classical theory of maxima and minima, are concerned with finding the extreme points of a function. This topic is discussed in Chapter 2 for both unconstrained and constrained optimization problems. Geometric programming may be considered an extension of analytical methods in which the economic model and constraints are polynomials; a dual problem is constructed that may be significantly easier to optimize than the original (primal) problem, as described in Chapter 3.
Optimization

Mathematical Programming
  Objective: Find the best point that optimizes the economic model.
  Example: Optimum operating conditions for a petroleum refinery.
  Mathematical Formulation:
    Optimize:   y(x),  where x = (x1, x2, ..., xn)
    Subject to: fi(x) ≥ 0,  i = 1, 2, ..., m
  Methods: Analytical Methods, Geometric Programming, Linear Programming, Quadratic Programming, Convex Programming, Dynamic Programming (Discrete), Nonlinear Programming or Multivariable Search Methods, Integer Programming, Separable Programming, Goal Programming or Multicriterion Optimization, Combinatorial Programming, Maximum Principle (Discrete), Heuristic Programming

Variational Methods
  Objective: Find the best function that optimizes the economic model.
  Example: Best temperature profile that maximizes the conversion in a tubular chemical reactor.
  Mathematical Formulation:
    Optimize:   I[y(x)] = ∫ F[y(x), y'(x)] dx
    Subject to: Algebraic, integral or differential equation constraints.
  Methods: Calculus of Variations, Dynamic Programming (Continuous), Maximum Principle (Continuous)

Figure 1-2 Areas and Topics in Optimization
Linear programming requires that both the economic model and the set of constraint
equations be linear, and the Simplex Method is the algorithm that locates the optimum by beginning at a feasible starting point (an initial feasible basis), as discussed in Chapter 4. In quadratic programming, the economic model is a quadratic equation
and the constraint equations are linear. Using analytical methods, this problem can
be converted to a linear programming problem and solved by the Simplex Method as shown
in Chapter 6. For convex programming, the economic model is a concave function, and
the constraint equations are convex functions. The details of this procedure are given in Chapter 2 as part of the general analytical methods, where it is shown that a global optimum will be located. Dynamic programming uses a series of partial optimizations by taking
advantage of the stage structure in the problem and is effective for resource allocation
and optimization through time as discussed in Chapter 7. Nonlinear programming or
multivariable search methods, as the theory and algorithms are called, must begin
at a feasible starting point and move toward the optimum in steps of improved values
of the economic model. The algorithms described in Chapter 6 have been effective for
optimization of industrial processes, and they are based on the theory of Chapter
2.
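As a small illustration of the linear programming form, the sketch below solves an assumed two-variable problem with SciPy; the numbers are made up, and SciPy's solver (the HiGHS code) is used here simply as a convenient stand-in for the Simplex Method discussed in Chapter 4.

```python
from scipy.optimize import linprog

# Illustrative linear program (all coefficients assumed):
#   maximize 3*x1 + 5*x2
#   subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x1, x2 >= 0.
# linprog minimizes, so the objective coefficients are negated.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0],
        [0.0, 2.0],
        [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print("optimum x =", result.x, "maximum objective =", -result.fun)
```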
Integer programming is an extension of linear programming in which the variables must take on discrete values; a text on this topic is by Taha (5). Separable programming is an extension of linear programming in which a small number of nonlinear constraints are approximated by piecewise linear functions. However, the nonlinear functions must have a form such that they can be separated into sums and differences of nonlinear functions of one variable, and the IBM MPSX code (6) is capable of solving these problems. Goal programming is also an extension of linear programming, in which multiple, conflicting objectives (or goals) are optimized using weights or rankings, for example; this technique is described by Ignizio (7). Combinatorial programming has been described by Papadimitriou and Steiglitz (8) as a body of mathematical programming knowledge including linear and integer programming, graph and network flows, dynamic programming and related topics. The maximum principle is comparable to dynamic programming in using the stage structure of the system, but it uses constrained derivatives that require piecewise continuously differentiable functions and successive approximations (2). Finally, the term heuristic programming has been used to describe rules of thumb that can be used as approximations to optimization.
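To make the separable programming idea concrete, the sketch below approximates an assumed nonlinear term of one variable by a piecewise linear function through a few breakpoints; the function and the breakpoints are chosen only for illustration. In a separable program, these linear pieces replace the nonlinear term so the problem can be handled with linear programming machinery.

```python
import numpy as np

def nonlinear_term(x):
    """An assumed separable nonlinear term, f(x) = x**2."""
    return x ** 2

# Breakpoints over the expected range of x and the function values there.
breakpoints = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
values = nonlinear_term(breakpoints)

# np.interp evaluates the piecewise linear interpolant through the breakpoints,
# which is the approximation a separable program carries in place of f(x).
x_test = 2.5
print("true value:            ", nonlinear_term(x_test))                  # 6.25
print("piecewise linear value:", np.interp(x_test, breakpoints, values))  # 6.5
```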
In discussing the various topics in optimization, the economic model has been given several different names. These names arose in the literature as the optimization procedures were being developed. Regardless of the name, the economic model is the equation which expresses the economic return from the process for specified values of the control (manipulated, decision or independent) variables. The two most common names are profit function and cost function. However, in linear programming the term objective function is used, and in dynamic programming the term return function is employed. Other synonymous names are: benefit function, criterion, measure of effectiveness and response surface.
Method of Attack
In solving an optimization problem, the structure and complexity of the equations for the economic model and process or plant constraints are very important, since most mathematical programming procedures take advantage of the mathematical form of these models. Examples are linear programming, where all of the equations must be linear, and geometric programming, where all of the equations must be polynomials. Consequently, it is extremely important to have the capabilities of the various optimization techniques in mind when the economic and process models are being formulated. For example, if a satisfactory representation of the economics and process performance can be obtained using all linear equations, the powerful techniques of linear programming can be applied, and this method guarantees that a global optimum is found. However, if one has to resort to nonlinear equations to represent the economics and process performance, it may be necessary to use a multivariable search method to locate the optimum. Unfortunately, these search techniques only find points that are better than the starting point, and they do not carry any guarantee that a global or a local maximum or minimum has been found.
Figure 1-3 shows a simplified approach to attacking an optimization problem, and it incorporates some thoughts which should be remembered as the particular optimization techniques are studied. Also, it gives some reasons for the order in which the techniques are presented. At the start, it is necessary to determine whether the problem requires an optimal point or an optimal function. If it is a point, mathematical programming is applicable; if an optimal function, variational methods. Let us follow through with mathematical programming. If the equation for the economic model is relatively simple and there are no applicable constraints (process model), it is possible to locate the optimum by differentiating the economic model with respect to the independent variables, setting the resulting equations equal to zero, and solving for the optimum. However, if there are constraints, and there usually are, but the equations are relatively simple, the method of Lagrange multipliers may be used. This converts the constrained problem to an unconstrained problem, and the previous procedure for unconstrained problems is applied.
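A brief sketch of the Lagrange multiplier step is given below for an assumed small problem, minimize x^2 + y^2 subject to x + y = 10: the constrained problem is converted to the stationarity conditions of the Lagrangian, which are then solved just as in the unconstrained case.

```python
import sympy as sp

# Assumed example: minimize x**2 + y**2 subject to x + y = 10.
x, y, lam = sp.symbols('x y lam')
economic_model = x**2 + y**2
constraint = x + y - 10

# Form the Lagrangian and set its partial derivatives to zero; this is the
# unconstrained stationarity problem that replaces the constrained one.
L = economic_model + lam * constraint
solution = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)
print(solution)   # x = 5, y = 5, lam = -10
```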
Now, if the problem has a large number of independent variables and the precision needed for the economic and process models can be obtained with linear equations, then linear programming may be used. However, if nonlinear equations are required and polynomials will suffice, it may be possible to determine the optimum rapidly and easily using geometric programming (1).
If the problem has not yielded to these methods, it may be feasible to take advantage of the stage structure of the problem and apply dynamic programming with a series of partial optimizations. However, if this is not successful, it will be necessary to resort to multivariable search techniques and seek better values without having a guarantee of finding the global optimum.
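The sketch below illustrates dynamic programming's partial optimizations on an assumed resource-allocation problem: six units of a resource are split among three activities, each with its own illustrative return table, and the best total return is built up one stage at a time.

```python
# Assumed returns (arbitrary units) from allocating 0..6 units of a resource
# to each of three activities; the numbers are illustrative only.
returns = [
    [0, 5, 9, 12, 14, 15, 15],   # activity 1
    [0, 4, 8, 11, 13, 14, 14],   # activity 2
    [0, 6, 10, 12, 13, 13, 13],  # activity 3
]
TOTAL = 6

# best[s] = best return obtainable from the stages already folded in (working
# backward from the last stage), given s units; each pass over one stage is
# one partial optimization.
best = [0] * (TOTAL + 1)
for stage in reversed(returns):
    best = [max(stage[a] + best[s - a] for a in range(s + 1))
            for s in range(TOTAL + 1)]

print("maximum total return with", TOTAL, "units:", best[TOTAL])
```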
Summary
This chapter presented a brief discussion of the historical background of optimization. Then the relation of process and plant optimization was described in terms of using simulation equations and problem size. The topics in the areas of mathematical programming and variational methods were diagrammed, and a simplified method of attack was described to give some perspective about the different methods as they are studied. The next chapter reviews analytical methods and sets the stage for more modern techniques.
References
1. Wilde, D.J., Globally Optimal Design, John Wiley and Sons, Inc., New York (1978).
2. Wilde, D.J. and C.S. Beightler, Foundations of Optimization, Prentice-Hall, Inc., Englewood Cliffs, New Jersey (1967).
3. Hancock, Harris, Theory of Maxima and Minima, Dover Publications, Inc., New York (1960).
4. Dantzig, G.B., Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey (1963).
5. Taha, H.A., Integer Programming: Theory, Applications and Computations, Academic Press, New York (1975).
6. IBM Mathemcatical Programming Systems Extended/370 (MPSX/370) Program Reference Manual, Fourth Edition, SH19-1095-3, IBM Corporation, White Plains, New York (December, 1979).
7. Ignizio, J.P., Linear Programming in Single- and Multiple-Objective Systems, Prentice-Hall, Inc., Englewood Cliffs, New Jersey (1982).
8. Papadimitriou, C.H., and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Inc., Englewood Cliffs, New Jersey (1982).