FE501 – 1207 – NLP Week 2

Contents:
- Portfolio Optimization Problem
- Generic types of portfolio optimization problems
- Statistical supplementary knowledge about the problem
- Calculations
- Solving portfolio optimization problems with Excel Solver
- Sensitivity report
- Generalization of Nonlinear Programming
- Convex and concave functions
- Properties
- Determining convexity (concavity) of a function using the slope
- Determining convexity (concavity) of a single-variable function with the second-derivative check
- Single-variable examples: determine if a function is convex or concave
- Convexity of the feasible set
- Example: check convexity
The problem was first introduced by Markowitz in 1952 (in Modern Portfolio Theory).
More information about this topic can be found here.
Expectation: $E[X] = \sum_x x\,P(X = x)$ (for a sample of equally likely observations, the average).
Variance: $\operatorname{Var}(X) = E\big[(X - E[X])^2\big] = E[X^2] - (E[X])^2$
Covariance: $\operatorname{Cov}(X, Y) = E\big[(X - E[X])(Y - E[Y])\big]$
Note: also check the supplementary PDF file from the instructor (Hocam) here.
Year | Stocks | Gold | T-Bills |
---|---|---|---|
1968 | 11 | 11 | 5 |
1969 | 9 | 8 | 7 |
1970 | 4 | 14 | 7 |
1971 | 14 | 14 | 4 |
1972 | 19 | 44 | 4 |
1973 | 15 | 66 | 7 |
1974 | 27 | 64 | 8 |
1975 | 37 | 0 | 6 |
1976 | 24 | 22 | 5 |
1977 | 7 | 18 | 5 |
1978 | 7 | 31 | 7 |
1979 | 19 | 59 | 10 |
1980 | 33 | 99 | 11 |
1981 | 5 | 25 | 15 |
1982 | 22 | 4 | 11 |
1983 | 23 | 11 | 9 |
1984 | 6 | 15 | 10 |
1985 | 32 | 12 | 8 |
1986 | 19 | 16 | 6 |
1987 | 5 | 22 | 5 |
1988 | 17 | 2 | 6 |
NOTE:
For random variables $X$ and $Y$, the formula for covariance is:
$$\operatorname{Cov}(X, Y) = E\big[(X - E[X])(Y - E[Y])\big] = E[XY] - E[X]\,E[Y]$$
The covariance matrix collects these pairwise values: its $(i, j)$th entry is $\operatorname{Cov}(R_i, R_j)$, and its diagonal entries are the variances. So that for the original problem, we have:
Notice: my Excel file did not work on Mac, so I've attached the original Excel file from the instructor (Hocam).
Excel: here
Notice: variance is always non-negative (because of the square in the formula), while covariance can be negative.
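As a sketch of the calculation, the mean returns and the covariance matrix can be computed from the table above with numpy, using the returns exactly as listed (the column names come from the table):

```python
import numpy as np

# Annual returns (%) for Stocks, Gold, and T-Bills, 1968-1988,
# exactly as listed in the table above.
stocks = [11, 9, 4, 14, 19, 15, 27, 37, 24, 7, 7, 19, 33, 5, 22, 23, 6, 32, 19, 5, 17]
gold   = [11, 8, 14, 14, 44, 66, 64, 0, 22, 18, 31, 59, 99, 25, 4, 11, 15, 12, 16, 22, 2]
tbills = [5, 7, 7, 4, 4, 7, 8, 6, 5, 5, 7, 10, 11, 15, 11, 9, 10, 8, 6, 5, 6]

data = np.array([stocks, gold, tbills])

# Sample covariance matrix: entry (i, j) is Cov(asset_i, asset_j);
# the diagonal holds the variances.  np.cov uses the unbiased (n-1) estimator.
cov = np.cov(data)
means = data.mean(axis=1)   # expected (average) returns

print("mean returns:", means)
print("covariance matrix:\n", cov)
```

The diagonal of `cov` gives each asset's variance, and the off-diagonal entries can indeed be negative, as the note above says.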
For an NLP we can also obtain a sensitivity report, but there are several differences. For an NLP we have a Lagrange multiplier; it is similar to the shadow price in LP: an estimated change in the objective value for a small change in the RHS (not per 1 unit, because the function is not linear; it is a curve). When we change the RHS of the constraint from 1000 to 1001, the objective value will become smaller, but not by exactly the "180.95"; it will only be close to it, because of the nonlinearity.
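The course's Excel model is not reproduced in these notes, so here is a hypothetical toy NLP (not the portfolio problem) showing the same effect: the Lagrange multiplier predicts the objective change for a unit RHS change only approximately, because the objective is curved.

```python
from scipy.optimize import minimize

# Toy NLP (hypothetical, for illustration only):
#   minimize  x1^2 + x2^2   subject to  x1 + x2 = b
# Analytically: optimum at x1 = x2 = b/2, objective b^2/2,
# and the Lagrange multiplier at the optimum is b (= d(b^2/2)/db).

def solve(b):
    res = minimize(
        lambda x: x[0] ** 2 + x[1] ** 2,
        x0=[0.0, 0.0],
        constraints=[{"type": "eq", "fun": lambda x: x[0] + x[1] - b}],
    )
    return res.fun

obj_10 = solve(10.0)   # optimal objective at b = 10  -> 50
obj_11 = solve(11.0)   # optimal objective at b = 11  -> 60.5
lam = 10.0             # Lagrange multiplier at b = 10

# The multiplier predicts obj_10 + lam = 60 for b = 11;
# the true value is 60.5 -- close, but not exact, due to nonlinearity.
print(obj_10, obj_11, obj_10 + lam)
```

The gap between the predicted 60 and the true 60.5 is exactly the "close but not exact" behaviour described above.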
For an NLP (minimization): if the objective function is a convex function and the feasible region is a convex set, then any local optimal solution is the global optimal point.

For an NLP (maximization): if the objective function is a concave function and the feasible region is a convex set, then any local optimal solution is the global optimal point.
A feasible region can be:
- Convex
- Non-convex

An objective function can be:
- Convex
- Concave
- Neither convex nor concave
A function $f(x)$ is a convex function on a convex set $S$ ($S$ is the domain of the function) if for any $x'$ and $x''$ in $S$,
$$f\big(\lambda x' + (1 - \lambda)x''\big) \le \lambda f(x') + (1 - \lambda) f(x'')$$
holds for $0 \le \lambda \le 1$.

A function $f(x)$ is a concave function on a convex set $S$ ($S$ is the domain of the function) if for any $x'$ and $x''$ in $S$,
$$f\big(\lambda x' + (1 - \lambda)x''\big) \ge \lambda f(x') + (1 - \lambda) f(x'')$$
holds for $0 \le \lambda \le 1$.
Explanation: take any two points from the domain of the function. At any point between them, the function value always lies on or below (convex case) or on or above (concave case) the line segment connecting the function values at those two points.
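A quick numeric check of the defining inequality (convex case) for the standard example $f(x) = x^2$:

```python
import random

# Check the convexity inequality
#   f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2)
# numerically for f(x) = x^2 at random points and weights.
f = lambda x: x * x

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-10, 10), random.uniform(-10, 10)
    lam = random.random()
    lhs = f(lam * x1 + (1 - lam) * x2)       # point on the curve
    rhs = lam * f(x1) + (1 - lam) * f(x2)    # point on the chord
    assert lhs <= rhs + 1e-9, "convexity violated"
print("f(x) = x^2 passes the convexity check")
```

Flipping the inequality (curve on or above the chord) gives the corresponding check for concavity.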
This is a convex function.
This is a concave function.
This function is neither convex nor concave.
Question: determine the type (convex/concave) of the function $f(x) = \ln x$.

Answer: $f''(x) = -\frac{1}{x^2} \le 0$ for all $x > 0$, so $f(x) = \ln x$ is a concave function.

Also: $f(x)$ is a convex function if $-f(x)$ is a concave function (so $-\ln x$ is convex).
Recall that if f(x) is a convex function of a single variable, the line joining any two points on y = f(x) is never below the curve y = f(x).
Theorem: suppose $f''(x)$ exists for all $x$ in a convex set $S$. Then $f(x)$ is a convex function on $S$ if and only if $f''(x) \ge 0$ for all $x$ in $S$.

Theorem: suppose $f''(x)$ exists for all $x$ in a convex set $S$. Then $f(x)$ is a concave function on $S$ if and only if $f''(x) \le 0$ for all $x$ in $S$.
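The two theorems can be tried numerically with a central-difference estimate of $f''$ (a sketch; the functions and sample points here are chosen arbitrarily for illustration):

```python
import math

def second_derivative(f, x, h=1e-5):
    # Central finite-difference approximation of f''(x).
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# f(x) = x^2: f''(x) = 2 >= 0 everywhere -> convex.
assert all(second_derivative(lambda x: x * x, x) > 0 for x in [-3, 0.5, 7])

# f(x) = ln(x): f''(x) = -1/x^2 <= 0 on x > 0 -> concave.
assert all(second_derivative(math.log, x) < 0 for x in [0.5, 1, 4])

print("x^2 looks convex, ln(x) looks concave")
```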
A set $S$ of points in $R^n$ is a convex set if the line segment joining any pair of points in $S$ is wholly contained in $S$.
In other words: take any two points from the set and connect them with a segment; if all points of the segment are inside the set, then the set is convex.
This is a convex set.
A problem is a convex optimization problem when the objective function is convex and the feasible region is a convex set (for minimization),
OR,
when the objective function is concave and the feasible region is a convex set (for maximization).
Source: https://www.youtube.com/watch?v=BLkz5LGWihw
A Taylor series is an approximation of a given complicated function.
The Hessian matrix is a way of packing (organizing) all the second partial derivative information of a multivariable function.
The basic formula for quadratic approximation with base point $x_0 = 0$ is:
$$f(x) \approx f(0) + f'(0)\,x + \frac{f''(0)}{2}\,x^2$$
This works for values of x near 0; it is just the formula for linear approximation with one new term.
Where did that extra term come from, and what is the $\frac{1}{2}$ doing there?
Ideally, the quadratic approximation of a quadratic function should be identical to the original function.
For instance, consider the quadratic $f(x) = a + bx + cx^2$ and set the base point $x_0 = 0$. Then $f(0) = a$, $f'(0) = b$, and $f''(0) = 2c$, so the approximation formula gives $a + bx + \frac{2c}{2}x^2 = a + bx + cx^2$, exactly the original function (note that $a$ is the constant term because $f(0) = a$). This tells us what the coefficients of the quadratic approximation formula must be in order for the quadratic approximation of a quadratic function to equal that function; in particular, the $\frac{1}{2}$ cancels the factor of 2 produced by differentiating $x^2$ twice.
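As a sketch, here is the quadratic approximation at base point 0 of $e^x$ (so $f(0) = f'(0) = f''(0) = 1$), a function chosen here purely for illustration, compared against the true values near 0:

```python
import math

# Quadratic approximation of f(x) = e^x at base point x0 = 0:
#   f(x) ~= f(0) + f'(0)*x + (f''(0)/2)*x^2 = 1 + x + x^2/2
def quad_approx(x):
    return 1 + x + 0.5 * x ** 2

for x in [0.0, 0.1, 0.2]:
    exact = math.exp(x)
    approx = quad_approx(x)
    # The error shrinks like x^3 as x -> 0.
    print(f"x={x:.1f}  exact={exact:.6f}  quad={approx:.6f}  err={exact - approx:.2e}")
```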
The Hessian of $f(x_1, \dots, x_n)$ is the matrix $H$ whose $(i, j)$th entry is:
$$H_{ij} = \frac{\partial^2 f}{\partial x_i \, \partial x_j}$$
We let
then:
And we have:
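As a concrete sketch (the function below is a hypothetical example, not one from the lecture), a Hessian can be estimated numerically with central differences; for $f(x, y) = x^2 + xy + y^2$ the exact Hessian is $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$:

```python
import numpy as np

def hessian(f, x, h=1e-4):
    # Finite-difference Hessian: H[i, j] ~= d^2 f / (dx_i dx_j).
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

# Hypothetical example: f(x, y) = x^2 + x*y + y^2, exact Hessian [[2, 1], [1, 2]].
f = lambda v: v[0] ** 2 + v[0] * v[1] + v[1] ** 2
H = hessian(f, [1.0, 2.0])
print(np.round(H, 4))
```

For a quadratic function the Hessian is the same at every point, which is why the evaluation point `[1.0, 2.0]` is arbitrary here.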
An $i$th principal minor of an $n \times n$ matrix is the determinant of any $i \times i$ matrix obtained by deleting $n - i$ rows and the corresponding $n - i$ columns of the matrix.
Thus, for the matrix
$$\begin{pmatrix} -2 & -1 \\ -1 & -4 \end{pmatrix}$$
the first principal minors are $-2$ and $-4$, and the second principal minor is $(-2)(-4) - (-1)(-1) = 7$.
For any matrix, the first principal minors are just the diagonal entries of the matrix.
The $k$th leading principal minor of an $n \times n$ matrix is the determinant of the $k \times k$ matrix obtained by deleting the last $n - k$ rows and columns of the matrix.
We let $H_k(x_1, \dots, x_n)$ denote the $k$th leading principal minor of the Hessian matrix evaluated at the point $(x_1, \dots, x_n)$.
Suppose $f(x_1, \dots, x_n)$ has continuous second-order partial derivatives at each point of a convex set $S$. Then:

$f$ is a convex function on $S$ iff (if and only if), for each $x \in S$, all principal minors of $H$ are non-negative ($\ge 0$).

$f$ is a concave function on $S$ iff, for each $x \in S$ and $k = 1, \dots, n$, all nonzero principal minors of order $k$ have the same sign as $(-1)^k$ (odd-order minors $\le 0$, even-order minors $\ge 0$).
Expl.
The first principal minors ($2 \ge 0$) and the second principal minor ($4 - 4 = 0 \ge 0$) are all non-negative.
So this is a convex function.
Expl.
The first-order principal minors ($-2$, $-4$) are negative, matching the sign of $(-1)^1$, and the second-order principal minor ($7$) is positive, matching the sign of $(-1)^2$.
So the function is a concave function.
Expl.
The first-order principal minors ($2$, $4$) are positive, but the second principal minor ($8 - 9 = -1$) is negative, so the minors match neither sign pattern.
So this function is neither convex nor concave.
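The three examples above can be checked mechanically: enumerate every principal minor of each Hessian and apply the sign tests. A sketch with numpy (the 2×2 Hessians are reconstructed from the minors quoted above; in particular the off-diagonal entries of the first matrix are an assumption consistent with its second minor being $4 - 4 = 0$):

```python
from itertools import combinations
import numpy as np

def principal_minors(H):
    # All k-th principal minors: determinants of submatrices obtained by
    # keeping the same subset of rows and columns.
    n = H.shape[0]
    return [(k, np.linalg.det(H[np.ix_(idx, idx)]))
            for k in range(1, n + 1)
            for idx in combinations(range(n), k)]

def classify(H):
    minors = principal_minors(np.asarray(H, dtype=float))
    # Convex: every principal minor is non-negative.
    if all(m >= -1e-9 for _, m in minors):
        return "convex"
    # Concave: every nonzero k-th principal minor has the sign of (-1)^k.
    if all(abs(m) < 1e-9 or m * (-1) ** k > 0 for k, m in minors):
        return "concave"
    return "neither"

print(classify([[2, 2], [2, 2]]))      # minors 2, 2 and 4-4=0   -> convex
print(classify([[-2, -1], [-1, -4]]))  # minors -2, -4 and 7     -> concave
print(classify([[2, 3], [3, 4]]))      # minors 2, 4 and 8-9=-1  -> neither
```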
Practice1: Check convexity or concavity for
(63420)
Practice2:
(Nota5200)
For a single-variable function, a point is an inflection point if the function's second derivative changes sign there. The analogous point for a multi-dimensional function is called a saddle point.
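For instance, $f(x) = x^3$ has $f''(x) = 6x$, which changes sign at $x = 0$, so $x = 0$ is an inflection point. A quick numeric check:

```python
# f(x) = x^3 has f''(x) = 6x: negative left of 0, positive right of 0,
# so x = 0 is an inflection point (the second derivative changes sign).
def f_second(x):
    h = 1e-5
    f = lambda t: t ** 3
    # Central finite-difference estimate of f''(x).
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

assert f_second(-1.0) < 0 < f_second(1.0)
print("sign change of f'' at x = 0 -> inflection point")
```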
[End of 1207]