[Optimization] Doubt about the Lagrange multiplier's derivative

zero064

The Lagrangian dual function is defined as follows:

\(\displaystyle \psi(\lambda)\, =\, \min_{\mathbf{x}}\, T(\mathbf{x})\, =\, \min_{\mathbf{x}}\, \left\{ f(\mathbf{x})\, +\, \sum_{j\,=\,1}^{\ell}\, \lambda_j\, h_j(\mathbf{x})\, +\, \dfrac{1}{2}\, r\, \sum_{j\,=\,1}^{\ell}\, \left[h_j(\mathbf{x})\right]^2 \right\}\)

This is the function T:

\(\displaystyle T(\mathbf{x})\, =\, f(\mathbf{x})\, +\, \sum_{j\,=\,1}^{\ell}\, \lambda_j\, h_j(\mathbf{x})\, +\, \dfrac{1}{2}\, r\, \sum_{j\,=\,1}^{\ell}\, \left[h_j(\mathbf{x})\right]^2\)

Therefore, its total derivative with respect to \(\lambda_i\) should be:

\(\displaystyle \dfrac{d \psi}{d \lambda_i}\, =\, \dfrac{\partial T}{\partial \lambda_i}\, +\, \nabla_{\mathbf{x}} T^T\, \dfrac{d \mathbf{x}}{d \lambda_i}\)

Because the gradient of T with respect to \(\mathbf{x}\) is zero at the minimizer, the second term vanishes. But I don't know why the final result is the following:

\(\displaystyle \dfrac{d \psi}{d \lambda_i}\, =\, h_i(\mathbf{x}(\lambda)), \qquad i\, =\, 1,\, 2,\, \ldots,\, \ell\)

As I see it, \(\mathbf{x}\) is also a function of \(\lambda\): it depends on \(\lambda\). So the derivative of the \(\lambda_i\, h_i\) term should be

\(\displaystyle h_i(\mathbf{x}(\lambda))\, +\, \lambda_i\, \dfrac{d}{d \lambda_i}\, h_i(\mathbf{x}(\lambda))\)
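
To spell that out (this expansion is my own application of the chain rule to the \(\lambda_i\, h_i\) term, not from the notes):

\(\displaystyle \dfrac{d}{d \lambda_i}\left[\lambda_i\, h_i(\mathbf{x}(\lambda))\right]\, =\, h_i(\mathbf{x}(\lambda))\, +\, \lambda_i\, \nabla h_i(\mathbf{x})^T\, \dfrac{d \mathbf{x}}{d \lambda_i}\)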

Why does the second term vanish? What's wrong with my reasoning? A classmate of mine also asked the professor this question, but he didn't answer.
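
For what it's worth, I tried a small symbolic check (a sketch in Python with SymPy; the toy choices of \(f\), \(h\), and \(r\) below are my own, not from the original problem), and the claimed formula \(d\psi/d\lambda\, =\, h(\mathbf{x}(\lambda))\) does hold:

```python
# A small symbolic check that d(psi)/d(lambda) = h(x(lambda)).
# The toy objective f, constraint h, and penalty weight r are my own
# assumptions for illustration, not from the original problem.
import sympy as sp

x, lam = sp.symbols('x lambda', real=True)
r = 1          # assumed penalty weight
f = x**2       # assumed objective f(x)
h = x - 1      # assumed single equality constraint h(x)

# T(x) = f(x) + lambda*h(x) + (1/2)*r*h(x)^2
T = f + lam * h + sp.Rational(1, 2) * r * h**2

# x(lambda): minimize T over x for fixed lambda by solving dT/dx = 0
# (T is strictly convex in x here, so the stationary point is the minimum).
x_star = sp.solve(sp.diff(T, x), x)[0]

# psi(lambda) = T evaluated at the minimizer x(lambda)
psi = T.subs(x, x_star)

# Compare the total derivative d(psi)/d(lambda) with h(x(lambda)).
dpsi = sp.simplify(sp.diff(psi, lam))
h_at_min = sp.simplify(h.subs(x, x_star))

print(dpsi, h_at_min)                # both print -lambda/3 - 2/3
print(sp.simplify(dpsi - h_at_min))  # prints 0
```

So even in this example the extra \(\lambda\, dh/d\lambda\) contribution disappears, which is exactly what I don't understand.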
 