Can weighted inner products have cross-terms?

Metronome · Junior Member · Joined Jun 12, 2018 · Messages: 154
This lecture claims that a weighted inner product can have the form [imath]\vec x^T W \vec y[/imath], where [imath]W[/imath] is any positive definite matrix, generalizing the earlier case in which [imath]W[/imath] must be a diagonal matrix. The diagonal [imath]W[/imath] case makes sense to me, but the non-diagonal case seems suspicious. If [imath]W[/imath] can be non-diagonal, then there are cross-terms in the expanded expression, e.g., [imath]w_{11} x_1 y_1 + w_{12} (x_1 y_2 + x_2 y_1) + w_{22} x_2 y_2[/imath] for a symmetric [imath]2 \times 2[/imath] matrix. I had taken it to be part of the nature of inner products that the inputs only interact component-wise. For example, it's not clear how cross-terms would be represented for an integral inner product, nor does it seem consistent with the definition. I know cross-correlation is sometimes referred to as the sliding inner product, and I suppose the same holds for convolution with a minor modification, but I can't find anything relating these concepts to the positive definite bilinear form above.

Can an inner product actually be weighted to have cross-terms, and if so, is this equivalent to cross-correlation/convolution?
 
I've only listened to a few minutes of the video, so it's possible this is mentioned there, but here is where such a weighting matrix might come from.
Consider some linear map [imath]A[/imath] such that [imath]x = Au,\ y = Av[/imath]. Then [imath]x^T y = u^T A^T A v = u^T W v[/imath], where [imath]W = A^T A[/imath]. For example, you can think of [imath]A[/imath] as a change of coordinates.
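To make that concrete, here is a small worked example (my own illustrative numbers, not from the video or lecture). Take [imath]A=\begin{pmatrix}1&1\\0&1\end{pmatrix}[/imath], which is invertible, so [imath]W=A^TA=\begin{pmatrix}1&1\\1&2\end{pmatrix}[/imath] is symmetric and positive definite, and
[math] u\cdot_W v = u^T W v = u_1 v_1 + u_1 v_2 + u_2 v_1 + 2u_2 v_2 .[/math]The cross-terms record exactly the fact that the columns of [imath]A[/imath] (the images of the standard basis vectors) are not orthogonal: [imath](Ae_1)^T(Ae_2)=w_{12}=1\neq 0[/imath].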
 
We also require [imath] W [/imath] to be symmetric. Inner products allow us to do geometry, since they let us define lengths, distances, and angles of and between vectors. We require symmetry because a distance should not depend on which point comes first, and positive definiteness because lengths should be positive. An easy non-diagonal example would be
[math] W=\dfrac{1}{3}\begin{pmatrix}5&4\\4&5\end{pmatrix} .[/math]This allows you to check all properties and definitions, and you can draw example vectors. It is easier to understand a concept if you have specific numbers.
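For instance (my own arithmetic, just spelling out the check for this [imath] W [/imath]), the weighted product and its quadratic form are
[math] x\cdot_W y=\frac{1}{3}\left(5x_1y_1+4x_1y_2+4x_2y_1+5x_2y_2\right),\qquad x\cdot_W x=\frac{3}{2}(x_1+x_2)^2+\frac{1}{6}(x_1-x_2)^2, [/math]so the cross-terms are plainly there, while [imath] x\cdot_W x>0 [/imath] for every [imath] x\neq 0 [/imath] (equivalently, the eigenvalues of [imath] W [/imath] are [imath] 3 [/imath] and [imath] 1/3 [/imath], both positive).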
 
Isn't being symmetric redundant, given positive definiteness?

In any case, if I understand correctly, there can be cross-terms in a weighted dot product because a weighted dot product is defined to behave well under the space of all linear transformations, and this necessitates cross-terms for some transformations.

Then for an inner product such as [imath]\int_0^1 f(x)g(x)\, dx[/imath], the argument analogous to blamocur's would be to consider an arbitrary linear transformation [imath]T[/imath], so that the weighted inner product is [imath]\int_0^1 T(f(x))\,T(g(x))\, dx[/imath]. I guess you can't do anything further to merge the [imath]T[/imath]'s, but the point appears to be that this weighted inner product does interact component-wise in terms of [imath]x[/imath], while there might be cross-terms in terms of [imath]T(f)[/imath] and [imath]T(g)[/imath] (no longer viewed as functions of [imath]x[/imath]). Is that a good way to view it, or is it an elementary calculus mistake to even think of [imath]T(f)[/imath] and [imath]T(g)[/imath] within the integrand?
 
Isn't being symmetric redundant, given positive definiteness?
No. Positive definiteness means [imath] x^\dagger Wx>0 [/imath] for [imath] x\neq 0. [/imath] This makes lengths positive. Symmetry means [imath] x^\dagger Wy=y^\dagger Wx .[/imath] This makes the measurement of angles symmetric since [imath] \cos \alpha=\cos(-\alpha). [/imath]
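A concrete real example (my own, using the ordinary transpose rather than [imath] \dagger [/imath]) of why positivity of the quadratic form alone does not force symmetry:
[math] W=\begin{pmatrix}1&1\\-1&1\end{pmatrix},\qquad x^TWx=x_1^2+x_2^2>0 \text{ for } x\neq 0,\qquad \text{yet } x^TWy\neq y^TWx \text{ in general}. [/math]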
In any case, if I understand correctly, there can be cross-terms in a weighted dot product because a weighted dot product is defined to behave well under the space of all linear transformations, and this necessitates cross-terms for some transformations.
Your language is a bit confusing. A symmetric, positive definite matrix [imath] W [/imath] (which is then automatically non-singular) defines an inner product by [imath] x\cdot_W y=x^\dagger W y. [/imath] This bilinear form simply fulfills all requirements of an inner product: you can define lengths and angles. Non-diagonal matrix entries are possible as long as the other requirements hold.
Then for an inner product such as [imath]\int_0^1 f(x)g(x)\, dx[/imath], the argument analogous to blamocur's would be to consider an arbitrary linear transformation [imath]T[/imath], so that the weighted inner product is [imath]\int_0^1 T(f(x))\,T(g(x))\, dx[/imath]. I guess you can't do anything further to merge the [imath]T[/imath]'s, but the point appears to be that this weighted inner product does interact component-wise in terms of [imath]x[/imath], while there might be cross-terms in terms of [imath]T(f)[/imath] and [imath]T(g)[/imath] (no longer viewed as functions of [imath]x[/imath]). Is that a good way to view it, or is it an elementary calculus mistake to even think of [imath]T(f)[/imath] and [imath]T(g)[/imath] within the integrand?
Not sure what you mean here. There is a transformation theorem:
[math] \displaystyle{\int_{\varphi(U)}}f(x)\,dx=\int_U f(\varphi(x))\cdot \left|\det(J \varphi(x))\right|\,dx [/math]where [imath] \varphi:U \to \varphi(U)[/imath] is a bijective, continuously differentiable transformation of an open set [imath] U [/imath] and [imath] J\varphi(x) [/imath] is the Jacobian matrix of [imath] \varphi. [/imath] A linear transformation [imath] T=\varphi [/imath] is an example.
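A small worked instance with a linear map (my own toy example, not from the thread): take [imath] \varphi(x)=2x [/imath] on [imath] U=(0,\tfrac{1}{2}) [/imath], so [imath] \varphi(U)=(0,1) [/imath] and [imath] \left|\det(J\varphi)\right|=2 [/imath], which gives
[math] \int_0^1 f(x)\,dx=\int_0^{1/2} f(2x)\cdot 2\,dx. [/math]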

Other examples of bilinear forms, not necessarily inner products, in the realm of function spaces are
[math]\begin{array}{lll} f\cdot g &=\displaystyle{\int f(x)g(x)r(x)\, dx} \quad \text{ or }\\[12pt] f\cdot g &=\displaystyle{\int \int k(x,y)f(x)g(y)\, dy\, dx} \end{array}[/math]They turn into inner products if the real weight function [imath] r [/imath] and the kernel [imath] k [/imath] behave accordingly (e.g., [imath] r [/imath] positive, and [imath] k [/imath] symmetric and positive definite).
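To connect the kernel version back to the matrix case (a heuristic sketch, not a rigorous statement): approximating the double integral on a grid [imath] x_1,\dots,x_n [/imath] with spacing [imath] \Delta [/imath] gives
[math] \int\!\!\int k(x,y)f(x)g(y)\,dy\,dx \;\approx\; \sum_{i,j} f(x_i)\,k(x_i,x_j)\,g(x_j)\,\Delta^2 \;=\; \mathbf f^T K\,\mathbf g,\qquad K_{ij}=k(x_i,x_j)\,\Delta^2, [/math]so the kernel [imath] k [/imath] plays the role of the weight matrix [imath] W [/imath], with cross-terms whenever [imath] k(x,y)\neq 0 [/imath] for [imath] x\neq y [/imath]. For a kernel of the form [imath] k(x,y)=\kappa(x-y) [/imath], the inner integral [imath] \int \kappa(x-y)g(y)\,dy [/imath] is a convolution of [imath] \kappa [/imath] with [imath] g [/imath], which is one way such a weighted form relates to the convolution/cross-correlation mentioned in the original post.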
 