Can weighted inner products have cross-terms?

Metronome

This lecture claims that a weighted inner product can have the form $\vec x^T W \vec y$, where $W$ is any positive definite matrix, a generalization of the prior case in which $W$ must be a diagonal matrix. The diagonal-$W$ case makes sense to me, but the non-diagonal case seems suspicious. If $W$ can be non-diagonal, then there are cross-terms in the expanded expression, i.e., $w_{11} x_1 y_1 + w_{12} (x_1 y_2 + x_2 y_1) + w_{22} x_2 y_2$. I'd taken it to be part of the nature of inner products that they only interact input-wise. For example, it's not clear how cross-terms would be represented for an integral inner product, nor does it seem consistent with the definition. I know cross-correlation is sometimes referred to as the sliding inner product, and I guess the same goes for convolution with minor modification, but I can't find anything relating these concepts to the above positive definite bilinear form.

Can an inner product actually be weighted to have cross-terms, and if so, is this equivalent to cross-correlation/convolution?
 
I've only listened to a few minutes of the video, so it's possible that this is mentioned in there, but here is where this weighting matrix might come from.
Consider some linear map $A$ such that $x = Au$, $y = Av$. Then $x^T y = u^T A^T A v = u^T W v$ where $W = A^T A$. For example, you can think of $A$ as a change of coordinates.
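For a concrete check (a minimal sketch with NumPy; the matrix and vectors below are arbitrary made-up values), you can verify numerically that the plain dot product of the transformed vectors equals the weighted product of the originals:

```python
import numpy as np

# Arbitrary invertible change-of-coordinates matrix (made-up example values).
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
W = A.T @ A  # induced weight matrix; symmetric positive definite since A is invertible

u = np.array([3.0, -1.0])
v = np.array([0.5, 2.0])

x = A @ u
y = A @ v

# Ordinary dot product of transformed vectors equals the weighted product of the originals.
print(np.dot(x, y))   # x^T y
print(u @ W @ v)      # u^T (A^T A) v -- same number
```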
 
We also require $W$ to be symmetric. Inner products let us do geometry, since they allow us to define lengths, distances, and angles of and between vectors. Symmetry is required because a distance should not depend on which point comes first, and positive definiteness because lengths should be positive. An easy non-diagonal example would be
$$W=\dfrac{1}{3}\begin{pmatrix}5&4\\4&5\end{pmatrix}.$$ This allows you to check all properties and definitions, and you can draw example vectors. It is easier to understand a concept if you have specific numbers.
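If specific numbers help, here is a small NumPy sketch for exactly that matrix (the test vectors are arbitrary): it checks symmetry and positive definiteness via the eigenvalues, then computes a length and an angle under the weighted product.

```python
import numpy as np

W = np.array([[5.0, 4.0],
              [4.0, 5.0]]) / 3.0

print(np.allclose(W, W.T))        # symmetric: True
print(np.linalg.eigvalsh(W))      # eigenvalues 1/3 and 3, both positive -> positive definite

def ip(x, y, W=W):
    """Weighted inner product <x, y> = x^T W y."""
    return x @ W @ y

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

length_x = np.sqrt(ip(x, x))                          # length of x under W
cos_angle = ip(x, y) / np.sqrt(ip(x, x) * ip(y, y))   # cross-term makes e1, e2 non-orthogonal here
print(length_x, cos_angle)
```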
 
Isn't being symmetric redundant, given positive definiteness?

In any case, if I understand correctly, there can be cross-terms in a weighted dot product because a weighted dot product is defined to behave well under the space of all linear transformations, and this necessitates cross-terms for some transformations.

Then for an inner product such as $\int_0^1 f(x)g(x)\,dx$, the argument analogous to blamocur's would be to consider an arbitrary linear transformation $T$, so that the weighted inner product is $\int_0^1 T(f(x))\,T(g(x))\,dx$. I guess you can't do anything further to merge the $T$'s, but the point appears to be that this weighted inner product does interact input-wise in terms of $x$, while there might be cross-terms in terms of $T(f)$ and $T(g)$ (no longer viewed as functions of $x$). Is that a good way to view it, or is it an elementary calculus mistake to even think of $T(f)$ and $T(g)$ within the integrand?
 
Isn't being symmetric redundant, given positive definiteness?
No. Positive definiteness means $x^\dagger W x > 0$ for $x \neq 0$. This makes lengths positive. Symmetry means $x^\dagger W y = y^\dagger W x$. This makes the measurement of angles symmetric, since $\cos \alpha = \cos(-\alpha)$.
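To see why it's not redundant, here is a made-up counterexample, checked numerically with NumPy: the matrix below is not symmetric, yet $x^T W x > 0$ for every nonzero real $x$, because only the symmetric part of $W$ contributes to $x^T W x$.

```python
import numpy as np

# Not symmetric, but x^T W x = x1^2 + x2^2 > 0 for all nonzero real x,
# since the antisymmetric part [[0, 1], [-1, 0]] contributes nothing to x^T W x.
W = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

rng = np.random.default_rng(0)
xs = rng.normal(size=(10000, 2))
print(np.all(np.einsum('ij,jk,ik->i', xs, W, xs) > 0))  # True

# But the "would-be inner product" is not symmetric:
x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(x @ W @ y, y @ W @ x)  # 1.0 and -1.0 -- so symmetry is a separate requirement
```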
In any case, if I understand correctly, there can be cross-terms in a weighted dot product because a weighted dot product is defined to behave well under the space of all linear transformations, and this necessitates cross-terms for some transformations.
Your language is a bit confusing. A symmetric, non-singular, positive definite bilinear form defines an inner product by $x \cdot_w y = x^\dagger W y$. It simply fulfills all requirements of an inner product: you can define lengths and angles. Non-diagonal matrix entries are possible as long as the other requirements hold.
Then for an inner product such as $\int_0^1 f(x)g(x)\,dx$, the argument analogous to blamocur's would be to consider an arbitrary linear transformation $T$, so that the weighted inner product is $\int_0^1 T(f(x))\,T(g(x))\,dx$. I guess you can't do anything further to merge the $T$'s, but the point appears to be that this weighted inner product does interact input-wise in terms of $x$, while there might be cross-terms in terms of $T(f)$ and $T(g)$ (no longer viewed as functions of $x$). Is that a good way to view it, or is it an elementary calculus mistake to even think of $T(f)$ and $T(g)$ within the integrand?
Not sure what you mean here. There is a transformation theorem:
$$\int_{\varphi(U)} f(x)\,dx=\int_U f(\varphi(x))\cdot \left|\det(J\varphi(x))\right|\,dx$$ where $\varphi:U \to \varphi(U)$ is a bijective, continuously differentiable transformation of an open set $U$ and $J\varphi(x)$ is the Jacobi matrix of $\varphi$. A linear transformation $T=\varphi$ is an example.
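Here is a quick one-dimensional numerical check of that theorem (a sketch using SciPy's quad; the map $\varphi(x)=2x$ and the integrand are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

# One-dimensional instance of the transformation theorem with a linear map phi(x) = 2x,
# U = (0, 1), so phi(U) = (0, 2), and a made-up integrand f(x) = x**2.
f = lambda x: x**2
phi = lambda x: 2.0 * x
jac = 2.0  # |det J(phi)| = |phi'(x)| = 2

lhs, _ = quad(f, 0.0, 2.0)                           # integral over phi(U)
rhs, _ = quad(lambda x: f(phi(x)) * jac, 0.0, 1.0)   # pulled back to U
print(lhs, rhs)  # both 8/3
```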

Other examples of bilinear forms, not necessarily inner products, in the realm of function spaces are
$$\begin{array}{lll} f\cdot g &=\displaystyle{\int f(x)g(x)r(x)\, dx} \quad \text{ or }\\[12pt] f\cdot g &=\displaystyle{\int \int k(x,y)f(x)g(y)\, dy\, dx} \end{array}$$ They turn into inner products if the real weight function $r$ and the kernel $k$ behave accordingly.
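Here is a discretized sketch of both forms (NumPy; the grid, weight $r$, and kernel $k$ are arbitrary choices, just to show the shape of the computation). The kernel version is the function-space analogue of a non-diagonal $W$: values of $f$ at one point get paired with values of $g$ at a different point.

```python
import numpy as np

# Discretize [0, 1] so functions become vectors and the forms become matrix expressions.
n = 200
xs = np.linspace(0.0, 1.0, n)
dx = xs[1] - xs[0]

f = np.sin(np.pi * xs)          # example functions
g = xs**2

r = 1.0 + xs                                       # example positive weight -> "diagonal W"
k = np.exp(-np.abs(xs[:, None] - xs[None, :]))     # example symmetric kernel -> "non-diagonal W"

weighted = np.sum(f * g * r) * dx    # \int f(x) g(x) r(x) dx        (no cross-terms)
kernel   = f @ k @ g * dx**2         # \int\int k(x,y) f(x) g(y) dy dx  (cross-terms)
print(weighted, kernel)
```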
 