Dynamic Programming
2
Weighted Activity Selection
Weighted activity selection problem (generalization of CLR 17.1).
• Job requests 1, 2, …, N.
• Job j starts at s_j, finishes at f_j, and has weight w_j.
• Two jobs are compatible if they don't overlap.
• Goal: find a maximum-weight subset of mutually compatible jobs.
[Figure: timeline from 0 to 11 showing jobs A–H as intervals.]
3
Activity Selection: Greedy Algorithm
Recall that the greedy algorithm works if all weights are 1.

Sort jobs by increasing finish times so that f_1 ≤ f_2 ≤ ... ≤ f_N.

S = ∅
FOR j = 1 to N
   IF (job j compatible with S)
      S ← S ∪ {j}
RETURN S
Greedy Activity Selection Algorithm
S = jobs selected.
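As a concrete reference point, here is a minimal Python sketch of the greedy rule above; the function name greedy_select and the (start, finish) input format are our own choices, not from the slides.

def greedy_select(jobs):
    # jobs: list of (start, finish) pairs; returns the 1-based positions
    # (in the input list) of a maximum-size set of compatible jobs.
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][1])   # by finish time
    selected, last_finish = [], float("-inf")
    for i in order:
        start, finish = jobs[i]
        if start >= last_finish:     # compatible with every job chosen so far
            selected.append(i + 1)
            last_finish = finish
    return selected

# Example run on 8 hypothetical jobs:
print(greedy_select([(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]))
# -> [1, 4, 8]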
4
Weighted Activity Selection
Notation.
• Label jobs by finishing time: f_1 ≤ f_2 ≤ . . . ≤ f_N.
• Define q_j = largest index i < j such that job i is compatible with j.
  – Example: q_7 = 3, q_2 = 0.
[Figure: timeline from 0 to 11 showing jobs 1–8, labeled by finishing time.]
5
Weighted Activity Selection: Structure
Let OPT(j) = value of optimal solution to the problem consisting of
job requests {1, 2, . . . , j}.
• Case 1: OPT selects job j.
  – can't use incompatible jobs { q_j + 1, q_j + 2, . . . , j - 1 }
  – must include optimal solution to problem consisting of remaining
    compatible jobs { 1, 2, . . . , q_j }
• Case 2: OPT does not select job j.
  – must include optimal solution to problem consisting of remaining
    compatible jobs { 1, 2, . . . , j - 1 }
    OPT(j) = 0                                     if j = 0
             max{ w_j + OPT(q_j), OPT(j - 1) }     otherwise
6
Weighted Activity Selection: Brute Force
INPUT: N, s_1,…,s_N, f_1,…,f_N, w_1,…,w_N

Sort jobs by increasing finish times so that f_1 ≤ f_2 ≤ ... ≤ f_N.
Compute q_1, q_2, ..., q_N

r-compute(j) {
   IF (j = 0)
      RETURN 0
   ELSE
      RETURN max(w_j + r-compute(q_j), r-compute(j-1))
}
Recursive Activity Selection
7
Dynamic Programming Subproblems
Spectacularly redundant subproblems ⇒ exponential algorithms.
[Figure: recursion tree of subproblems for r-compute on {1, ..., 8} — the root spawns {1, ..., 7} and {1, ..., 5}, and the same subproblems recur many times down the tree.]
8
Divide-and-Conquer Subproblems
Independent subproblems ⇒ efficient algorithms.
[Figure: balanced divide-and-conquer recursion tree — {1, ..., 8} splits into {1, ..., 4} and {5, ..., 8}, and so on; no subproblem is repeated.]
9
Weighted Activity Selection: Memoization
INPUT: N, s_1,…,s_N, f_1,…,f_N, w_1,…,w_N

Sort jobs by increasing finish times so that f_1 ≤ f_2 ≤ ... ≤ f_N.
Compute q_1, q_2, ..., q_N

Global array OPT[0..N]
FOR j = 0 to N
   OPT[j] = "empty"

m-compute(j) {
   IF (j = 0)
      OPT[0] = 0
   ELSE IF (OPT[j] = "empty")
      OPT[j] = max(w_j + m-compute(q_j), m-compute(j-1))
   RETURN OPT[j]
}
Memoized Activity Selection
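A hedged Python rendering of the memoized scheme, using functools.lru_cache in place of the explicit OPT[] array (the function names are ours; w and q are 1-indexed lists with index 0 unused):

from functools import lru_cache

def max_weight(w, q):
    # w[1..N]: weights, q[1..N]: as defined on the earlier slide (index 0 unused).
    # Returns the optimal value OPT(N).  Deep recursion may need a larger
    # recursion limit for big N; this is only a sketch.
    N = len(w) - 1

    @lru_cache(maxsize=None)
    def opt(j):
        if j == 0:
            return 0
        return max(w[j] + opt(q[j]), opt(j - 1))

    return opt(N)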
10
Weighted Activity Selection: Running Time
Claim: the memoized version of the algorithm takes O(N log N) time.
• Ordering by finish time: O(N log N).
• Computing q_j: O(N log N) via binary search (see the sketch below).
• m-compute(j): each invocation takes O(1) time and either
  – (i) returns an existing value of OPT[], or
  – (ii) fills in one new entry of OPT[] and makes two recursive calls.
• Progress measure Φ = # nonempty entries of OPT[].
  – Initially Φ = 0; throughout, Φ ≤ N.
  – (ii) increases Φ by 1 ⇒ at most 2N recursive calls.
• Overall running time of m-compute(N) is O(N).
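The binary-search step for the q_j values is not spelled out in the slides; one plausible sketch in Python, using the standard bisect module (the helper name compute_q is ours):

import bisect

def compute_q(starts, finishes):
    # Jobs are assumed sorted by finish time; starts/finishes are 0-indexed lists.
    # Returns q[0..N] with q[j] = largest index i < j whose finish time is at most
    # the start time of job j (0 if there is none); q[0] is unused.
    N = len(starts)
    q = [0] * (N + 1)
    for j in range(1, N + 1):
        # number of jobs among 1..j-1 with finish time <= start of job j
        q[j] = bisect.bisect_right(finishes, starts[j - 1], 0, j - 1)
    return q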
11
Weighted Activity Selection: Finding a Solution
m-compute(N) determines the value of the optimal solution.
• Modify to obtain the optimal solution itself.
• # of recursive calls ≤ N ⇒ O(N).
ARRAY: OPT[0..N]

Run m-compute(N)

find-sol(j) {
   IF (j = 0)
      output nothing
   ELSE IF (w_j + OPT[q_j] > OPT[j-1])
      print j
      find-sol(q_j)
   ELSE
      find-sol(j-1)
}
Finding an Optimal Set of Activities
12
Weighted Activity Selection: Bottom-Up
Unwind recursion in memoized algorithm.
INPUT: N, s_1,…,s_N, f_1,…,f_N, w_1,…,w_N

Sort jobs by increasing finish times so that f_1 ≤ f_2 ≤ ... ≤ f_N.
Compute q_1, q_2, ..., q_N

ARRAY: OPT[0..N]
OPT[0] = 0
FOR j = 1 to N
   OPT[j] = max(w_j + OPT[q_j], OPT[j-1])
Bottom-Up Activity Selection
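Putting the pieces together, a sketch of the full bottom-up pipeline in Python, reusing compute_q from the earlier sketch; the traceback mirrors find-sol. The function name and input format are our own assumptions.

def weighted_activity_selection(jobs):
    # jobs: list of (start, finish, weight) triples.
    # Returns (optimal value, chosen jobs as triples).
    jobs = sorted(jobs, key=lambda t: t[1])            # order by finish time
    starts = [s for s, f, wt in jobs]
    finishes = [f for s, f, wt in jobs]
    w = [0] + [wt for s, f, wt in jobs]                # 1-based weights
    q = compute_q(starts, finishes)                    # from the earlier sketch

    N = len(jobs)
    OPT = [0] * (N + 1)
    for j in range(1, N + 1):
        OPT[j] = max(w[j] + OPT[q[j]], OPT[j - 1])

    # Iterative version of find-sol: trace back the choices.
    chosen, j = [], N
    while j > 0:
        if w[j] + OPT[q[j]] > OPT[j - 1]:
            chosen.append(jobs[j - 1])
            j = q[j]
        else:
            j -= 1
    return OPT[N], chosen[::-1]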
13
Dynamic Programming Overview
Dynamic programming.
• Similar to divide-and-conquer.
  – solves problems by combining solutions to sub-problems
• Different from divide-and-conquer.
  – sub-problems are not independent
  – save solutions to repeated sub-problems in a table

Recipe.
• Characterize structure of problem.
  – optimal substructure property
• Recursively define value of optimal solution.
• Compute value of optimal solution.
• Construct optimal solution from computed information.

Top-down vs. bottom-up.
• Different people have different intuitions.
14
Least Squares
Least squares.
• Foundational problem in statistics and numerical analysis.
• Given N points in the plane { (x_1, y_1), (x_2, y_2), . . . , (x_N, y_N) },
  find a line y = ax + b that minimizes the sum of the squared errors:
    SSE = Σ_{i=1}^{N} (y_i - a x_i - b)^2

• Calculus ⇒ the minimum error is achieved when

    a = ( N Σ_i x_i y_i - (Σ_i x_i)(Σ_i y_i) ) / ( N Σ_i x_i^2 - (Σ_i x_i)^2 ),
    b = ( Σ_i y_i - a Σ_i x_i ) / N
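The closed-form coefficients translate directly into code; a small sketch in plain Python (the function name fit_line is ours; it assumes the x_i are not all equal):

def fit_line(points):
    # points: list of (x, y) pairs with not all x equal.
    # Returns (a, b) minimizing the sum of (y_i - a*x_i - b)^2.
    n = len(points)
    sx = sum(x for x, y in points)
    sy = sum(y for x, y in points)
    sxx = sum(x * x for x, y in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b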
15
Segmented Least Squares
Segmented least squares.
• Points lie roughly on a sequence of 3 lines.
• Given N points in the plane p_1, p_2, . . . , p_N, find a sequence of
  lines that minimizes:
  – the sum of the sums of the squared errors E in each segment
  – the number of lines L
• Tradeoff function: E + cL, for some constant c > 0.
[Figure: three lines fit the points better than one.]
16
Segmented Least Squares: Structure
Notation.
• OPT(j) = minimum cost for points p_1, p_2, . . . , p_j.
• e(i, j) = minimum sum of squares for points p_i, p_{i+1}, . . . , p_j.

Optimal solution:
• Last segment uses points p_i, p_{i+1}, . . . , p_j for some i.
• Cost = e(i, j) + c + OPT(i - 1).

New dynamic programming technique.
• Weighted activity selection: binary choice.
• Segmented least squares: multi-way choice.
    OPT(j) = 0                                              if j = 0
             min_{1 ≤ i ≤ j} { e(i, j) + c + OPT(i - 1) }   otherwise
17
Segmented Least Squares: Algorithm
Running time:
• Bottleneck = computing e(i, j) for O(N^2) pairs, O(N) per pair using the
  previous formula.
• O(N^3) overall.
INPUT: N, p_1,…,p_N, c

ARRAY: OPT[0..N]
OPT[0] = 0
FOR j = 1 to N
   FOR i = 1 to j
      compute the least squares error e[i,j] for the segment p_i, ..., p_j
   OPT[j] = min_{1 ≤ i ≤ j} (e[i,j] + c + OPT[i-1])
RETURN OPT[N]
Bottom-Up Segmented Least Squares
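A Python sketch of this O(N^3) bottom-up version, reusing fit_line from the least-squares sketch to evaluate each e(i, j) naively (names and input format are ours; points are assumed to have distinct x-coordinates and to be given as (x, y) pairs):

def segmented_least_squares(points, c):
    # points: list of (x, y) pairs with distinct x; c: cost per segment.
    # Returns the optimal value OPT[N] (segment recovery omitted for brevity).
    points = sorted(points)                    # order by x-coordinate
    N = len(points)

    def err(i, j):                             # e(i, j), 1-based and inclusive
        seg = points[i - 1:j]
        if len(seg) < 2:
            return 0.0
        a, b = fit_line(seg)
        return sum((y - a * x - b) ** 2 for x, y in seg)

    OPT = [0.0] * (N + 1)
    for j in range(1, N + 1):
        OPT[j] = min(err(i, j) + c + OPT[i - 1] for i in range(1, j + 1))
    return OPT[N]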
18
Segmented Least Squares: Improved Algorithm
A quadratic algorithm.
• Bottleneck = computing e(i, j).
• O(N^2) preprocessing + O(1) per computation.
Preprocessing: compute cumulative sums

    xs_i  = Σ_{k=1}^{i} x_k        ys_i  = Σ_{k=1}^{i} y_k
    xxs_i = Σ_{k=1}^{i} x_k^2      yys_i = Σ_{k=1}^{i} y_k^2
    xys_i = Σ_{k=1}^{i} x_k y_k

so that any segment sum is a difference of two prefixes, e.g.

    Σ_{k=i}^{j} x_k = xs_j - xs_{i-1},    n_ij = j - i + 1.

Then for the segment p_i, . . . , p_j:

    a_ij = ( n_ij Σ_{k=i}^{j} x_k y_k - (Σ_{k=i}^{j} x_k)(Σ_{k=i}^{j} y_k) )
           / ( n_ij Σ_{k=i}^{j} x_k^2 - (Σ_{k=i}^{j} x_k)^2 )

    b_ij = ( Σ_{k=i}^{j} y_k - a_ij Σ_{k=i}^{j} x_k ) / n_ij

    e(i, j) = Σ_{k=i}^{j} (y_k - a_ij x_k - b_ij)^2

and e(i, j) expands in terms of the cumulative sums, so each pair takes O(1)
time after the preprocessing.
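One way to realize the O(1)-per-pair evaluation is with cumulative sums, as in the following sketch; the prefix-sum names mirror the slide's xs, xxs, xys, ys, yys, but the code itself is our reconstruction, not the original.

def precompute_errors(points):
    # points: list of (x, y) pairs sorted by x.  Builds cumulative sums and
    # returns a function e(i, j) (1-based, inclusive) that runs in O(1).
    N = len(points)
    xs = [0.0] * (N + 1); ys = [0.0] * (N + 1)
    xxs = [0.0] * (N + 1); yys = [0.0] * (N + 1); xys = [0.0] * (N + 1)
    for k, (x, y) in enumerate(points, start=1):
        xs[k] = xs[k - 1] + x
        ys[k] = ys[k - 1] + y
        xxs[k] = xxs[k - 1] + x * x
        yys[k] = yys[k - 1] + y * y
        xys[k] = xys[k - 1] + x * y

    def e(i, j):
        n = j - i + 1
        sx = xs[j] - xs[i - 1]; sy = ys[j] - ys[i - 1]
        sxx = xxs[j] - xxs[i - 1]; syy = yys[j] - yys[i - 1]
        sxy = xys[j] - xys[i - 1]
        denom = n * sxx - sx * sx
        if denom == 0:
            # degenerate segment (single point, or all x equal): best fit is the mean of y
            return syy - sy * sy / n
        a = (n * sxy - sx * sy) / denom
        b = (sy - a * sx) / n
        # expansion of sum (y_k - a*x_k - b)^2 in terms of the cumulative sums
        return (syy + a * a * sxx + n * b * b
                - 2 * a * sxy - 2 * b * sy + 2 * a * b * sx)
    return e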
19
Knapsack Problem
Knapsack problem.
• Given N objects and a "knapsack."
• Item i weighs w_i > 0 Newtons and has value v_i > 0.
• Knapsack can carry weight up to W Newtons.
• Goal: fill knapsack so as to maximize total value.
Item   Value   Weight
 1        1       1
 2        6       2
 3       18       5
 4       22       6
 5       28       7

W = 11
OPT value = 40: { 3, 4 }
Greedy (by v_i / w_i) = 35: { 5, 2, 1 }
20
Knapsack Problem: Structure
OPT(n, w) = max-profit subset of items {1, . . . , n} with weight limit w.
• Case 1: OPT selects item n.
  – new weight limit = w – w_n
  – OPT selects best of {1, 2, . . . , n – 1} using this new weight limit
• Case 2: OPT does not select item n.
  – OPT selects best of {1, 2, . . . , n – 1} using weight limit w

New dynamic programming technique.
• Weighted activity selection: binary choice.
• Segmented least squares: multi-way choice.
• Knapsack: adding a new variable.
    OPT(n, w) = 0                                                  if n = 0
                OPT(n - 1, w)                                      if w_n > w
                max{ OPT(n - 1, w), v_n + OPT(n - 1, w - w_n) }    otherwise
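The recurrence fills an (N+1) × (W+1) table; a sketch in Python, exercised on the example items from the table above (integer weights assumed; the function name knapsack is ours):

def knapsack(values, weights, W):
    # 0/1 knapsack.  values[i], weights[i] describe item i+1; W is the integer
    # capacity.  Returns (optimal value, list of chosen item numbers).
    N = len(values)
    OPT = [[0] * (W + 1) for _ in range(N + 1)]
    for n in range(1, N + 1):
        v, wt = values[n - 1], weights[n - 1]
        for w in range(W + 1):
            if wt > w:
                OPT[n][w] = OPT[n - 1][w]
            else:
                OPT[n][w] = max(OPT[n - 1][w], v + OPT[n - 1][w - wt])
    # Trace back the chosen items.
    chosen, w = [], W
    for n in range(N, 0, -1):
        if OPT[n][w] != OPT[n - 1][w]:
            chosen.append(n)
            w -= weights[n - 1]
    return OPT[N][W], sorted(chosen)

# Example from the table above:
print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))
# -> (40, [3, 4])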