Conditional probabilities of the parameters

tworitdash

I have the following function

[math]x(k) = \sum_{m=1}^{M} e^{i(U_m k + \beta_m)}[/math]
Where

[math]i = \sqrt{-1}[/math]
And [imath] k [/imath] is an integer.

The [imath] U_m [/imath] values come from a normal distribution and the [imath]\beta_m [/imath] values come from a uniform distribution.

[math]U_m \sim \mathcal{N}(\mu, \sigma^2)[/math]
[math]\beta_m \sim \mathcal{U}(0, 2 \pi)[/math]
I want to know the conditional probability of each of the parameters [imath]\mu [/imath] and [imath]\sigma^2 [/imath], given the other.

Something like [imath]p(\mu|x, \sigma^2) [/imath] and [imath]p(\sigma^2|x, \mu) [/imath].

I have seen how this is done when [imath]x [/imath] is a random variable following a specific distribution. However, I do not know how to deal with this sum. I have attempted to find the distribution of the summand inside the function [imath]x [/imath], with little success; the attempt can be found after the question. I know from the central limit theorem that (when [imath]M [/imath] is large and [imath]\sigma [/imath] is large) the distribution of [imath]x [/imath] is Gaussian with mean [imath]0 [/imath] and standard deviation [imath]\sqrt{M/2} [/imath] for both the real and the imaginary parts. However, how do I find the statistics of [imath]x [/imath] when [imath]M [/imath] is still large but [imath]\sigma [/imath] is not so big? So there are three cases (a quick simulation check of case 1 appears after the list):

1) [imath]M [/imath] large and [imath]\sigma [/imath] large: [imath]x [/imath] becomes Gaussian with [imath]\mu_x = 0 [/imath] and [imath]\sigma_x = \sqrt{M/2} [/imath] for both the real and the imaginary parts.

2) [imath]M [/imath] large and [imath]\sigma \to 0 [/imath]: the distribution of [imath]x [/imath] becomes [imath]\delta(1) [/imath].

3) [imath]M [/imath] large but [imath]\sigma [/imath] moderate: I want the distribution of [imath]x [/imath] as a function of [imath]\sigma [/imath] (and possibly [imath]\mu [/imath]).
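For concreteness, here is a quick Monte Carlo sketch of case 1 (my own check; the values of [imath]M[/imath], [imath]R[/imath], [imath]k[/imath], [imath]\mu[/imath], and [imath]\sigma[/imath] are made up for illustration):

```
% Monte Carlo check of case 1: for large M and large sigma,
% std(real(x)) should be close to sqrt(M/2).
M = 1e4;            % phasors per realization
R = 2000;           % independent realizations
k = 3;              % any fixed integer
mu = pi; sig = 2;   % sigma large enough that the phases wrap around
xr = zeros(1, R);
for r = 1:R
    U = normrnd(mu, sig, [1 M]);
    b = 2*pi*rand([1 M]);
    xr(r) = sum(exp(1j*(U*k + b)));
end
disp([std(real(xr)) sqrt(M/2)])   % the two numbers should be close
```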

Or am I attempting the question in the wrong way? The original problem is to estimate [imath]\mu [/imath] and [imath]\sigma [/imath] when some realizations of [imath]x [/imath] are available. Is it worth having an expression for the distribution of both the real and imaginary parts of [imath]x [/imath] when [imath]M [/imath] is large, in terms of [imath]\mu [/imath] and [imath]\sigma [/imath]? I was thinking of it because, in that case, the conditional probabilities would be easier to find and any iterative technique could be used to estimate [imath]\mu[/imath] and [imath]\sigma [/imath].

What I have tried so far:

If I write the original model like this:

[math]x(k) = \sum_m \big( A_m + i \Gamma_m \big), \quad A_m = \cos(U_m k + \beta_m), \quad \Gamma_m = \sin(U_m k + \beta_m)[/math]
then I have the distributions of [imath] A_m [/imath] and [imath] \Gamma_m [/imath], with densities written in the variables [imath] \alpha [/imath] and [imath] \gamma [/imath].

They look like the following:

[math]f_A(\alpha) = \frac{1}{\sqrt{1 - \alpha^2}} \sum_{n = -\infty}^{+\infty} \Big( f_Y(2(n+1)\pi - \cos^{-1}(\alpha)) + f_Y(2n\pi + \cos^{-1}(\alpha)) \Big)[/math]
[math]f_\Gamma(\gamma) = \frac{1}{\sqrt{1 - \gamma^2}} \sum_{n = -\infty}^{+\infty} \Big( f_Y(2n\pi + \sin^{-1}(\gamma)) + f_Y((2n-1)\pi - \sin^{-1}(\gamma)) \Big)[/math]
Where [imath] f_Y(y) [/imath] is the density function of the random variable [imath] Y = U k + \beta [/imath], which is

[math] f_Y(y) = \frac{1}{4\pi} \Big[ \operatorname{erf}\Big( \frac{k \mu - y + 2 \pi}{\sqrt{2} k \sigma } \Big) - \operatorname{erf}\Big( \frac{k \mu - y}{\sqrt{2} k \sigma } \Big) \Big][/math]
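As a sanity check of this density (my own addition; the values of [imath]k[/imath], [imath]\mu[/imath], [imath]\sigma[/imath] below are made up), one can compare the erf expression against a histogram of samples of [imath]Y[/imath]:

```
% Sanity check of f_Y: compare the erf expression against a
% histogram of samples of Y = U*k + beta.
k = 2; mu = pi; sig = 0.4;
Y = k*normrnd(mu, sig, [1 1e6]) + 2*pi*rand([1 1e6]);
y = linspace(min(Y), max(Y), 400);
fY = (erf((k*mu - y + 2*pi)./(sqrt(2)*k*sig)) ...
    - erf((k*mu - y)./(sqrt(2)*k*sig))) / (4*pi);
figure; histogram(Y, 'Normalization', 'pdf'); hold on; plot(y, fY, 'LineWidth', 2);
```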
 
The original problem is to estimate [imath]\mu[/imath] and [imath]\sigma[/imath] when some realizations of [imath]x[/imath] are available.
I haven't looked through the rest of your post, but if this is your goal then you want to look at the Bayesian estimator and Bühlmann credibility. The theory starts on page 5, followed by examples.

The main idea of the theory is that you make an assumption about your model [imath]f(x|\theta)[/imath] and the initial distribution of your parameter [imath]f(\theta)[/imath]. As you collect more data [imath]x[/imath], you update your model.
[math]f(\theta|\text{data})=\frac{f(\text{data}|\theta)\times f(\theta)}{\int_{-\infty}^{\infty}f(\text{data}|\theta)\times f(\theta)\,d\theta}[/math]
Some other common methods to estimate parameters are Maximum Likelihood Estimators and Kolmogorov–Smirnov (K-S) hypothesis testing.
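To make the update above concrete, here is a minimal grid-based sketch; the Gaussian likelihood, the flat prior, and the data are placeholders, not the model from this thread:

```
% Minimal grid-based Bayes update (generic sketch; the model
% data ~ N(theta, 1) and the observations are assumptions).
theta = linspace(0, 5, 1000);           % grid over the parameter
prior = ones(size(theta));              % flat prior (an assumption)
data  = [1.1 0.8 1.3];                  % made-up observations
lik   = ones(size(theta));
for d = data
    lik = lik .* normpdf(d, theta, 1);  % assumed model: data ~ N(theta, 1)
end
post = prior .* lik;
post = post / trapz(theta, post);       % normalize: the denominator above
```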
 
Thank you for your answer. However, in this case I am unable to find the [imath] f(data|\theta) [/imath]. How can I find this?
 
The concept is too dense for me to type it all out.
See this Wikipedia page, which has an example of the Bernoulli–Uniform likelihood-conjugate prior (Uniform is Beta with [imath]\alpha=\beta=1[/imath]).
All conjugate prior pairs are derived in the same manner. Scroll down to the bottom to see the continuous distributions. Most of them are Normal; however, I don't see one for Normal–Uniform (Beta). You might need to derive it yourself or research it.

Edit: Look at the example, and see if you can spot the [imath]f(data|\theta)[/imath]
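To make the mechanics concrete, here is the Bernoulli–Uniform update as a tiny sketch (the flips are made up); here the [imath]f(\text{data}|\theta)[/imath] is the Bernoulli likelihood [imath]\theta^s(1-\theta)^{n-s}[/imath]:

```
% Bernoulli likelihood with a Beta(a, b) prior (Uniform(0,1) is a = b = 1).
% With s successes in n trials, f(data|theta) = theta^s * (1-theta)^(n-s),
% and the posterior is Beta(a + s, b + n - s).
a = 1; b = 1;               % Uniform(0,1) prior on theta
data = [1 0 1 1 0 1];       % made-up coin flips
s = sum(data); n = numel(data);
a_post = a + s;             % posterior Beta parameters
b_post = b + n - s;
```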
 
Please correct me if I am wrong. I see in the conjugate prior example that the prior of the parameters is chosen as a distribution with hyperparameters, and the posterior predictive can be formulated in closed form. My doubts are the following.

1. Where can I see a computation of this posterior predictive with a non-linear model like mine? I want to see how it is computed in closed form.
2. I see that the list describes a set of posterior-predictive distributions for a set of likelihood functions [imath]p(data|\theta)[/imath]. However, how do I find this likelihood for my parameters? For a given [imath]\theta[/imath], what is my [imath]p(data|\theta)[/imath]? I find it difficult to think about because the model has random samples drawn from a distribution defined by the parameters, not directly from the parameters.

For example: Let's consider a model like the following:


[math]x(k) = e^{i v k}[/math]
[math]z(k) = e^{i v k} + n(k)[/math]
Where [imath]n(k)[/imath] is zero-mean complex Gaussian noise.

[math]\Re(n) \sim \frac{1}{\sqrt{2}} \mathcal{N}(0, S^2)[/math]
[math]\Im(n) \sim \frac{1}{\sqrt{2}} \mathcal{N}(0, S^2)[/math]
In this case the likelihood [imath]p(z|v)[/imath] can be found (up to an additive constant) by doing something like this:

[math]\log(p(z|v)) = - \frac{ \sum_{k} || z_k - x_k(v) ||^2 }{S^2}[/math]
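For example, this toy log-likelihood could be evaluated on a grid of [imath]v[/imath] and maximized; a minimal sketch (the true [imath]v[/imath], noise level, and grid are made up for illustration):

```
% Grid evaluation of the toy log-likelihood for the model z = e^{ivk} + n.
v_true = 0.7; S = 0.1; k = 0:19;
z = exp(1j*v_true*k) + (S/sqrt(2))*(randn(size(k)) + 1j*randn(size(k)));
v_grid = linspace(0, pi, 500);
logL = zeros(size(v_grid));
for ii = 1:numel(v_grid)
    logL(ii) = -sum(abs(z - exp(1j*v_grid(ii)*k)).^2) / S^2;
end
[~, imax] = max(logL);
v_hat = v_grid(imax);   % should land near v_true
```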
However, in my model described above, [imath]x[/imath] is not a direct function of the parameters [imath]\theta = [\mu, \sigma][/imath]. In my model above, I also should have zero-mean complex Gaussian noise [imath]n(k) [/imath] added to [imath]x(k)[/imath]. I will re-write the question. EDIT: I just realized I can't edit the question now.

I am new to statistical inference problems. Could you please elaborate?
 
Your situation is quite unique, especially since it involves a complex and non-linear model. I don't have much experience dealing with complex models. Plus, I do not have the background and a full understanding of the model. I can only point you in a general direction on where to look. I'm afraid I can't answer any specific questions regarding your model.
I do see that you've posted the thread on several forums and haven't gotten a response. One suggestion I can make is to break your BIG question into smaller questions. You might gain more information that way.
 
Thank you for the response. I really appreciate it. I am also thinking about different ways of approaching this in a broad sense. Coming from a background of more deterministic things in physics, it has been quite hard for me to think in terms of estimation and probability theory. On a larger scale, I understand the physical limitations of the problem. The problem here is to find [imath]\sigma[/imath] when there are not enough samples. I have tried many methods, but in the end it is a problem of resolution. So, from physics I know it is not achievable by just having a few samples of [imath] x [/imath]. That is why I chose to model the signal instead and to use some technique to get an estimate of this parameter. However, it seems quite tough already. I will try to research more on this.
 
For my curiosity, what are you modelling? What kind of data do you have? If you can provide a small sample of your data, I might be able to make some recommendations.
 
I really appreciate your help. I will try to explain it in a bit more detail. My signal, or observations, are artificial: I make them with a simulation. The signal contains a lot of frequencies which, in reality, form a Gaussian-shaped distribution; that is the nature of the problem. However, the sensor can detect a limited number of frequencies because of the sampling in time [imath] dt [/imath]. So, I assume first that, although the frequencies form a Gaussian distribution in reality, most of the Gaussian remains within the limits [imath] 0 [/imath] to [imath] \omega_{max} [/imath] (I am writing in terms of angular frequencies to make the problem simpler to explain). All the sinusoids in this signal have an initial phase that is uniformly distributed from [imath] 0 [/imath] to [imath] 2 \pi [/imath]. That is why I write the signal model as follows.

[math]x(k) = \sum_{m=1}^{M} e^{i (\omega_m k + \beta_m)} , \quad k \in \mathbb{Z},\ k \geq 0[/math]
Where [math]\omega \sim \mathcal{N}(\mu, \sigma^2)[/math] and

[math]\beta \sim \mathcal{U}(0, 2 \pi)[/math]
Then, I add a complex Gaussian noise to [imath] x [/imath] to create the observations.

[math]z = x + n[/math]
[math]\Re(n) \sim \frac{1}{\sqrt{2}} \mathcal{N} (0, S^2)[/math]
[math]\Im(n) \sim \frac{1}{\sqrt{2}} \mathcal{N} (0, S^2)[/math]
I am pasting the code that generates that signal (MATLAB).

One more detail: the observations are not available continuously. They are available periodically, with a period much larger than the sampling interval. For example, [imath] 5 [/imath] observations (in the code, [imath] Nt [/imath]) are available, then there is a gap equivalent to [imath] 95 [/imath] observation intervals (in the code, [imath] Ngap [/imath]), then [imath] 5 [/imath] observations are available again, and so on.

This type of observation with gaps makes it difficult for any frequency-domain technique (like the DFT) to blindly estimate the frequency statistics of the signal. That is why I wanted to model the signal and find the parameters of the model. So, the objective is to find an estimate (a posterior, if that is the correct jargon) of [imath] \mu [/imath] and [imath] \sigma [/imath]. Especially [imath] \sigma [/imath], which is more difficult.

```
% Generate the gapped, noisy observations z (MATLAB).
clear;

MU  = pi;           % mean of the frequency distribution
SIG = 0.133 * pi;   % standard deviation of the frequency distribution

M = 1e6;            % number of phasors in the sum

Omega = normrnd(MU, SIG, [1 M]);  % frequencies ~ N(MU, SIG^2)
beta  = 2*pi*rand([1 M]);         % initial phases ~ U(0, 2*pi)
Nt   = 5;           % observations per burst
Ngap = 95;          % gap (in samples) between bursts

% Sample indices: bursts of Nt samples separated by gaps of Ngap samples
K = [0:1:Nt-1 Ngap+Nt+1:1:Ngap+2*Nt 2*(Ngap+Nt)+1:1:2*(Ngap+Nt)+Nt 3*(Ngap+Nt)+1:1:3*(Ngap+Nt)+Nt];

x = zeros(1, length(K));          % preallocate the signal
for k = 1:length(K)
    x(k) = sum( exp(1j .* Omega .* K(k) + 1j .* beta) );
end

S_n = 0.01;         % noise standard deviation

% Add zero-mean complex Gaussian noise
z = x + 1/sqrt(2) .* (normrnd(0, S_n, [1 length(K)]) + 1j .* normrnd(0, S_n, [1 length(K)]));

figure; plot(real(z)); hold on; plot(imag(z));
```
 
Ok, I have a little better understanding of the situation. Let's try something simple, i.e. the Maximum Likelihood Estimator (MLE).
For [imath]\omega \sim N(\mu,\sigma)[/imath], we can estimate the parameters [imath]\hat{\mu}[/imath] and [imath]\hat{\sigma}^2[/imath] as follows.

[math]\hat{\mu}=\frac{\sum_{i=1}^{n}x_i}{n}, \qquad \hat{\sigma}^2=\frac{\sum_{i=1}^n(x_i-\hat{\mu})^2}{n}[/math]
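In code, the two estimators look like this, assuming direct draws [imath]w[/imath] of the variable (hypothetical here, since in this thread only [imath]z[/imath] is observed):

```
% Sketch of the two MLE formulas, assuming direct draws w of the
% variable (hypothetical: only z is observed in this thread).
w = normrnd(2, 0.5, [1 1000]);     % made-up direct observations
mu_hat   = mean(w);
sig2_hat = mean((w - mu_hat).^2);  % MLE divides by n, not n-1
```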
 
But that would just give me the statistics of [imath] x [/imath], not of the variable [imath] \omega [/imath], as far as I understand. For a large [imath] M [/imath], as I explained with the central limit theorem, [imath] \mu_x = 0 [/imath] and [imath] \sigma_x^2 = \frac{M}{2} [/imath]. But these are not the parameters I want to estimate. I want to estimate the parameters of the distribution of [imath] \omega [/imath].
 
I might have reused [imath]x_i[/imath] as an observation of [imath]x(k)[/imath], which confused you.
It's a similar idea for [imath]\omega[/imath]. Use the observations [imath]\omega_i[/imath] instead.
Omega = normrnd(MU, SIG, [1 M]);
 
For me, [imath] \omega [/imath] is not observed directly. If it were, this would be much easier. The observations I would have in reality are [imath] z [/imath].
 
I'm not too familiar with MATLAB, but aren't you generating an array of random samples for omega here?
Omega = normrnd(MU, SIG, [1 M]);
Or you're saying you won't be generating random omega in reality?
 
So, to explain it clearly: the code I present here generates the observations manually. Consider it a black box out of which you get the observations [imath] z [/imath]. The only prior information known to a user is that [imath] \omega [/imath] is Gaussian distributed, that [imath] \beta [/imath] is uniformly distributed, that the model of the signal is [imath] x(k) = \sum_{m=1}^{M} e^{i(\omega_m k + \beta_m)} [/imath], and that the noise variance [imath] \sigma_n^2 [/imath] is known. The question is how to estimate the statistics of [imath] \omega [/imath] from the observations [imath] z [/imath]. The MATLAB code I present is just for generating the measurements, or observations; treat it as a sensor output.
 
Mathematically speaking, if [imath]z=x+n[/imath], and you only know [imath]z[/imath], then you have 1 equation with 2 unknowns. Especially for [imath]x \in \mathbb{C}[/imath], there would be infinitely many solutions. Am I getting it correctly?

Edit: However, you can estimate the variance of Gaussian white noise. See MSE, but I'm not 100% familiar with the method.
 
Let's say, for example, that I know the noise variance [imath] \sigma_n [/imath]. I forgot to mention this in the original question.
 
Your situation is difficult because your observation isn't a direct observation of the variable you want to estimate. Now, if you know [imath]z[/imath] and [imath]n[/imath], then we can solve for [imath]x[/imath]. However, we run into the same problem again where [imath]x[/imath] is in terms of [imath]\beta[/imath] and [imath]U[/imath]. One equation 2 unknowns.
 
One solution I can think of is to let [imath]\beta[/imath] be its expected value instead, i.e. [imath]E[\beta]=\pi[/imath].
 