Page 1

AN INTRODUCTION TO
OPTIMIZATION

SOLUTIONS MANUAL

Fourth Edition

Edwin K. P. Chong and Stanislaw H. Żak

A JOHN WILEY & SONS, INC., PUBLICATION

Page 2

1. Methods of Proof and Some Notation

1.1

A   B   not A   not B   A ⇒ B   (not B) ⇒ (not A)
F   F     T       T       T             T
F   T     T       F       T             T
T   F     F       T       F             F
T   T     F       F       T             T

1.2

A   B   not A   not B   A ⇒ B   not (A and (not B))
F   F     T       T       T              T
F   T     T       F       T              T
T   F     F       T       F              F
T   T     F       F       T              T

1.3

A   B   not (A and B)   not A   not B   (not A) or (not B)
F   F         T           T       T             T
F   T         T           T       F             T
T   F         T           F       T             T
T   T         F           F       F             F

1.4

A   B   A and B   A and (not B)   (A and B) or (A and (not B))
F   F      F            F                      F
F   T      F            F                      F
T   F      F            T                      T
T   T      T            F                      T
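The four truth-table identities in Exercises 1.1–1.4 can also be verified mechanically. A short Python sketch (Python is used here for illustration; the manual's own code examples are in MATLAB) enumerates all truth assignments:

```python
from itertools import product

def implies(a, b):
    # A => B is false only when A is true and B is false
    return (not a) or b

for a, b in product([False, True], repeat=2):
    # 1.1: A => B is equivalent to (not B) => (not A)  (contrapositive)
    assert implies(a, b) == implies(not b, not a)
    # 1.2: A => B is equivalent to not (A and (not B))
    assert implies(a, b) == (not (a and (not b)))
    # 1.3: not (A and B) is equivalent to (not A) or (not B)  (De Morgan)
    assert (not (a and b)) == ((not a) or (not b))
    # 1.4: (A and B) or (A and (not B)) is equivalent to A
    assert ((a and b) or (a and (not b))) == a

print("all four equivalences hold")
```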

1.5
The cards that you should turn over are 3 and A. The remaining cards are irrelevant to ascertaining the
truth or falsity of the rule. The card with S is irrelevant because S is not a vowel. The card with 8 is not
relevant because the rule does not say that if a card has an even number on one side, then it has a vowel on
the other side.

Turning over the A card directly verifies the rule, while turning over the 3 card verifies the contrapositive.
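The selection argument can also be brute-force checked: for each visible face, enumerate every possible hidden face and see whether any of them could violate "a vowel on one side implies an even number on the other." A Python sketch (the set of candidate hidden faces is illustrative):

```python
# Visible faces of the four cards; each hidden face could be any letter or digit.
visible = ['S', 'A', '3', '8']
vowels = set('AEIOU')

def violates(face1, face2):
    # The rule is violated exactly when a vowel is paired with an odd number.
    for x, y in ((face1, face2), (face2, face1)):
        if x in vowels and y.isdigit() and int(y) % 2 == 1:
            return True
    return False

must_turn = []
for card in visible:
    # A card must be turned over iff some possible hidden face would violate the rule.
    hidden_candidates = list('ABCESU') + [str(d) for d in range(10)]
    if any(violates(card, h) for h in hidden_candidates):
        must_turn.append(card)

print(must_turn)  # ['A', '3']
```

Only the vowel card and the odd-number card can reveal a violation, matching the answer above.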

2. Vector Spaces and Matrices

2.1
We show this by contradiction. Suppose n < m. Then, the number of columns of A is n. Since rank A is
the maximum number of linearly independent columns of A, rank A cannot be greater than n < m,
which contradicts the assumption that rank A = m.
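The fact used here, that rank A is bounded by the number of columns (and rows), can be illustrated numerically with NumPy (the random matrix below is illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3                      # more rows than columns
A = rng.standard_normal((m, n))

# rank A <= min(m, n): at most n columns can be linearly independent
r = np.linalg.matrix_rank(A)
assert r <= min(m, n)
print(r)  # a generic random 5x3 matrix has full column rank, i.e. 3
```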

2.2

⇒: Since there exists a solution, then by Theorem 2.1, rank A = rank [A b]. So, it remains to prove that
rank A = n. For this, suppose that rank A < n (note that it is impossible for rank A > n since A has
only n columns). Hence, there exists y ∈ R^n, y ≠ 0, such that Ay = 0 (this is because the columns of

Page 110

A plot of the objective function values (best, average, and poorest) is shown below.

14.4
a. Expanding the right-hand side of the second expression gives the desired result.

b. Applying the algorithm, we get a binary representation of 11111001011, i.e.,

    1995 = 2^10 + 2^9 + 2^8 + 2^7 + 2^6 + 2^3 + 2^1 + 2^0.

c. Applying the algorithm, we get a binary representation of 0.1011101, i.e.,

    0.7265625 = 2^−1 + 2^−3 + 2^−4 + 2^−5 + 2^−7.

d. We have 19 = 2^4 + 2^1 + 2^0, i.e., the binary representation for 19 is 10011. For the fractional part, we
need at least 7 bits to keep at least the same accuracy. We have 0.95 = 2^−1 + 2^−2 + 2^−3 + 2^−4 + 2^−7 + · · · ,
i.e., the binary representation is 0.1111001 · · · . Therefore, the binary representation of 19.95 with at least
the same degree of accuracy is 10011.1111001.
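The expansions in parts b–d can be checked directly; a short Python sketch:

```python
# Part b: 1995 as a sum of powers of 2
assert sum(2**k for k in [10, 9, 8, 7, 6, 3, 1, 0]) == 1995
assert bin(1995) == '0b11111001011'

# Part c: 0.7265625 as a sum of negative powers of 2 (exact in binary)
assert sum(2.0**(-k) for k in [1, 3, 4, 5, 7]) == 0.7265625

# Part d: integer part of 19.95
assert bin(19) == '0b10011'
# 0.1111001 in binary approximates 0.95 from below to within 2^-7
approx = sum(2.0**(-k) for k in [1, 2, 3, 4, 7])
assert 0 <= 0.95 - approx < 2**-7

print("all expansions check out")
```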

14.5
It suffices to prove the result for the case where only one symbol is swapped, since the general case is
obtained by repeating the argument. We have two scenarios. First, suppose the symbol swapped is at a
position corresponding to a don’t care symbol in H. Clearly, after the swap, both chromosomes will still be
in H. Second, suppose the symbol swapped is at a position corresponding to a fixed symbol in H. Since
both chromosomes are in H, their symbols at that position must be identical. Hence, the swap does not
change the chromosomes. This completes the proof.

14.6
Consider a given chromosome in M(k) ∩ H. The probability that it is chosen for crossover is q_c. If neither
of its offspring is in H, then at least one of the crossover points must lie between the corresponding first
and last fixed symbols of H. The probability of this is 1 − (1 − δ(H)/(L − 1))^2. To see this, note that
the probability that each crossover point is not between the corresponding first and last fixed symbols is
1 − δ(H)/(L − 1), and thus the probability that both crossover points are not between the corresponding
first and last fixed symbols of H is (1 − δ(H)/(L − 1))^2. Hence, the probability that the given chromosome
is chosen for crossover and neither of its offspring is in H is bounded above by

    q_c (1 − (1 − δ(H)/(L − 1))^2).
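The disruption probability 1 − (1 − δ(H)/(L − 1))^2 can be sanity-checked by Monte Carlo simulation of two independent crossover points. A Python sketch (the chromosome length, defining length, and position of the fixed region below are illustrative choices, not from the text):

```python
import random

random.seed(1)
L = 20           # chromosome length (illustrative)
delta = 6        # defining length delta(H) of the schema (illustrative)
first_fixed = 5  # so the fixed region covers delta of the L-1 cut positions

def between(c):
    # cut point c lies between the first and last fixed symbols of H
    return first_fixed <= c < first_fixed + delta

trials = 200_000
hits = 0
for _ in range(trials):
    # two independent crossover points, each uniform over the L-1 cut positions
    c1 = random.randrange(1, L)
    c2 = random.randrange(1, L)
    if between(c1) or between(c2):
        hits += 1

estimate = hits / trials
exact = 1 - (1 - delta / (L - 1))**2
assert abs(estimate - exact) < 0.01
print(estimate, exact)
```

With δ(H)/(L − 1) = 6/19, the exact value is 1 − (13/19)^2 ≈ 0.532, and the simulation agrees.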


Page 111

14.7
As for two-point crossover, the n-point crossover operation is a composition of n one-point crossover opera-
tions (i.e., n one-point crossover operations in succession). The required result for this case is as follows.

Lemma: Given a chromosome in M(k) ∩ H, the probability that it is chosen for crossover and neither of its
offspring is in H is bounded above by

    q_c (1 − (1 − δ(H)/(L − 1))^n).

For the proof, proceed as in the solution of Exercise 14.6, replacing 2 by n.

14.8

function M = roulette_wheel(fitness)
%function M = roulette_wheel(fitness)
%fitness = vector of fitness values of chromosomes in population
%M = vector of indices indicating which chromosome in the
%    given population should appear in the mating pool
M = [];
fitness = fitness - min(fitness);  % shift so all fitness values are nonnegative
if sum(fitness) == 0
    disp('Population has identical chromosomes -- STOP');
    return;  % `break` is only valid inside a loop; exit the function instead
else
    fitness = fitness/sum(fitness);
end
cum_fitness = cumsum(fitness);
M = zeros(1, length(fitness));  % preallocate the index vector
for i = 1:length(fitness)
    tmp = find(cum_fitness - rand > 0);
    M(i) = tmp(1);
end
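For testing the selection logic outside MATLAB, here is a Python sketch of the same fitness-proportional (roulette-wheel) selection; the function name is mine, and raising an exception in place of the STOP message is a design choice for this sketch:

```python
import random

def roulette_wheel(fitness, rng=random):
    """Return one selected index per population member, with probability
    proportional to min-shifted fitness (mirrors the MATLAB routine)."""
    shifted = [f - min(fitness) for f in fitness]  # nonnegative values
    total = sum(shifted)
    if total == 0:
        raise ValueError("Population has identical chromosomes -- STOP")
    probs = [f / total for f in shifted]
    # cumulative distribution for inverse-transform sampling
    cum, c = [], 0.0
    for p in probs:
        c += p
        cum.append(c)
    M = []
    for _ in range(len(fitness)):
        r = rng.random()
        # first index whose cumulative fitness exceeds r
        # (default guards against cum[-1] falling just below 1.0 in floats)
        M.append(next((i for i, cf in enumerate(cum) if cf > r),
                      len(cum) - 1))
    return M

random.seed(0)
print(roulette_wheel([1.0, 2.0, 5.0, 2.0]))  # fitter indices appear more often
```

Note that, as in the MATLAB code, the min-shift gives the worst chromosome zero selection probability.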

14.9

% parent1, parent2 = two binary parent chromosomes (row vectors)
L = length(parent1);
crossover_pt = ceil(rand*(L-1));
offspring1 = [parent1(1:crossover_pt) parent2(crossover_pt+1:L)];
offspring2 = [parent2(1:crossover_pt) parent1(crossover_pt+1:L)];
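A Python sketch of the same one-point crossover, for checking the splice logic (the function name is mine):

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    """Single-point crossover on two equal-length binary lists,
    mirroring the MATLAB fragment above."""
    L = len(parent1)
    pt = rng.randint(1, L - 1)  # crossover point in 1..L-1
    offspring1 = parent1[:pt] + parent2[pt:]
    offspring2 = parent2[:pt] + parent1[pt:]
    return offspring1, offspring2

random.seed(0)
o1, o2 = one_point_crossover([0, 0, 0, 0, 0], [1, 1, 1, 1, 1])
print(o1, o2)  # the two offspring swap tails at the same cut point
```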

14.10

% mating_pool = matrix of 0-1 elements; each row represents a chromosome
% p_m = probability of mutation
N = size(mating_pool,1);
L = size(mating_pool,2);
mutation_points = rand(N,L) < p_m;
new_population = xor(mating_pool,mutation_points);
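The same mutation operator in Python, flipping each bit independently with probability p_m just as the MATLAB `xor` with a random 0-1 mask does (the function name is mine):

```python
import random

def mutate(mating_pool, p_m, rng=random):
    """Flip each bit independently with probability p_m,
    mirroring the MATLAB xor-with-random-mask fragment above."""
    return [[bit ^ (rng.random() < p_m) for bit in chrom]
            for chrom in mating_pool]

random.seed(0)
pool = [[0, 1, 0, 1], [1, 1, 0, 0]]
print(mutate(pool, 0.0))  # p_m = 0 leaves the pool unchanged
print(mutate(pool, 1.0))  # p_m = 1 flips every bit
```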

14.11
A MATLAB routine for a genetic algorithm with binary encoding is:


Page 219

and for some j, f_j(x̂) < f_j(x*). Since c_i* > 0 for all i = 1, 2, . . . , ℓ, we must have

    ∑_{i=1}^ℓ c_i* f_i(x*) > ∑_{i=1}^ℓ c_i* f_i(x̂),

which contradicts the above condition, ∑_{i=1}^ℓ c_i* f_i(x*) ≤ ∑_{i=1}^ℓ c_i* f_i(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)

24.5
Let

    f(x) = c*^T f(x),

where f = [f_1, . . . , f_ℓ]^T and x ∈ Ω = {x : g(x) ≤ 0}. The scalar-valued function f is convex because all
the functions f_i are convex and c_i* > 0, i = 1, 2, . . . , ℓ. We can represent the given KKT condition in the form

    μ* ≥ 0
    Df(x*) + μ*^T Dg(x*) = 0^T
    μ*^T g(x*) = 0
    g(x*) ≤ 0.

By Theorem 22.9, the point x* is a global minimizer of f over Ω. Therefore,

    f(x*) ≤ f(x) for all x ∈ Ω.

That is,

    ∑_{i=1}^ℓ c_i* f_i(x*) ≤ ∑_{i=1}^ℓ c_i* f_i(x) for all x ∈ Ω.

To finish the proof, we now assume that x* is not Pareto optimal and that the above condition holds; we then
proceed by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a
point x̂ ∈ Ω such that

    f_i(x̂) ≤ f_i(x*) for all i = 1, 2, . . . , ℓ

and, for some j, f_j(x̂) < f_j(x*). Since c_i* > 0 for all i = 1, 2, . . . , ℓ, we must have

    ∑_{i=1}^ℓ c_i* f_i(x*) > ∑_{i=1}^ℓ c_i* f_i(x̂),

which contradicts the above condition, ∑_{i=1}^ℓ c_i* f_i(x*) ≤ ∑_{i=1}^ℓ c_i* f_i(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)

24.6
Let

    f(x) = c*^T f(x),

where f = [f_1, . . . , f_ℓ]^T and x ∈ Ω = {x : h(x) = 0, g(x) ≤ 0}. The scalar-valued function f is convex
because all the functions f_i are convex and c_i* > 0, i = 1, 2, . . . , ℓ. We can represent the given KKT-type
condition in the form

    μ* ≥ 0
    Df(x*) + λ*^T Dh(x*) + μ*^T Dg(x*) = 0^T
    μ*^T g(x*) = 0
    h(x*) = 0
    g(x*) ≤ 0.

By Theorem 22.9, the point x* is a global minimizer of f over Ω. Therefore,

    f(x*) ≤ f(x) for all x ∈ Ω.


Page 220

That is,

    ∑_{i=1}^ℓ c_i* f_i(x*) ≤ ∑_{i=1}^ℓ c_i* f_i(x) for all x ∈ Ω.

To finish the proof, we now assume that x* is not Pareto optimal and that the above condition holds; we then
proceed by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a
point x̂ ∈ Ω such that

    f_i(x̂) ≤ f_i(x*) for all i = 1, 2, . . . , ℓ

and, for some j, f_j(x̂) < f_j(x*). Since c_i* > 0 for all i = 1, 2, . . . , ℓ, we must have

    ∑_{i=1}^ℓ c_i* f_i(x*) > ∑_{i=1}^ℓ c_i* f_i(x̂),

which contradicts the above condition, ∑_{i=1}^ℓ c_i* f_i(x*) ≤ ∑_{i=1}^ℓ c_i* f_i(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)

24.7
The given minimax problem is equivalent to the problem given in the hint:

    minimize    z
    subject to  f_i(x) − z ≤ 0,  i = 1, 2.

Suppose [x*^T, z*]^T is a local minimizer for the above problem (which is equivalent to x* being a local
minimizer for the original problem). Then, by the KKT Theorem, there exists μ* ≥ 0, where μ* ∈ R^2,
such that

    [0^T, 1] + μ*^T [ ∇f_1(x*)^T  −1
                      ∇f_2(x*)^T  −1 ] = 0^T

    μ*^T [ f_1(x*) − z*
           f_2(x*) − z* ] = 0.

Rewriting the first equation above, we get

    μ_1* ∇f_1(x*) + μ_2* ∇f_2(x*) = 0,   μ_1* + μ_2* = 1.

Rewriting the second equation, we get

    μ_i* (f_i(x*) − z*) = 0,  i = 1, 2.

Suppose f_i(x*) < max{f_1(x*), f_2(x*)}, where i ∈ {1, 2}. Then, z* > f_i(x*). Hence, by the above equation
we conclude that μ_i* = 0.
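As a concrete numerical illustration of these conditions (the functions below are my own example, not from the text), take f_1(x) = x^2 and f_2(x) = (x − 1)^2. The minimax point is x* = 1/2, where both functions are active, with μ_1* = μ_2* = 1/2:

```python
# Illustrative check of the KKT conditions derived above for
# minimize max{f1(x), f2(x)} with f1(x) = x^2, f2(x) = (x-1)^2.
# (Example chosen so that both constraints are active at the solution.)

def f1(x): return x**2
def f2(x): return (x - 1)**2
def df1(x): return 2*x
def df2(x): return 2*(x - 1)

x_star = 0.5                        # minimizer of max{f1, f2}
z_star = max(f1(x_star), f2(x_star))
mu = (0.5, 0.5)                     # KKT multipliers

# mu1*grad f1 + mu2*grad f2 = 0 and mu1 + mu2 = 1
assert abs(mu[0]*df1(x_star) + mu[1]*df2(x_star)) < 1e-12
assert mu[0] + mu[1] == 1
# complementary slackness: mu_i (f_i(x*) - z*) = 0
assert mu[0]*(f1(x_star) - z_star) == 0
assert mu[1]*(f2(x_star) - z_star) == 0
print(x_star, z_star)  # 0.5 0.25
```

Here neither multiplier vanishes because neither f_i is below the max at x*, consistent with the last paragraph above.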
