Conditional probabilities of the parameters

tworitdash
I have the following function

$$x(k) = \sum_{m}^{M} e^{i(U_m k + \beta_m)}$$
Where

$$i = \sqrt{-1}$$
And $k$ is an integer.

The $U_m$ values come from a normal distribution and the $\beta_m$ values come from a uniform distribution.

$$U_m \sim \mathcal{N}(\mu, \sigma^2), \qquad \beta_m \sim \mathcal{U}(0, 2\pi)$$
I want to know the conditional probability of each of the parameters $\mu$ and $\sigma^2$ given the data and the other parameter.

Something like $p(\mu \mid x, \sigma^2)$ and $p(\sigma^2 \mid x, \mu)$.

I have seen how people do this when $x$ is a random number following a specific distribution. However, how do I deal with this sum? I have attempted to find the distribution of the sum inside the function $x$, but I have been largely unsuccessful; my attempt appears after the question. I know from the central limit theorem that (when $M$ is large and $\sigma$ is large) the distribution of $x$ is Gaussian with mean $0$ and standard deviation $\sqrt{M/2}$ for both the real and the imaginary parts. However, how do I find the statistics of $x$ when $M$ is still large but $\sigma$ is not so big? So there are three cases:

1) $M$ large and $\sigma$ large: $x$ becomes Gaussian with $\mu_x = 0$ and $\sigma_x = \sqrt{M/2}$ for both the real and imaginary parts.

2) $M$ large and $\sigma \to 0$: the distribution of $x$ becomes $\delta(1)$.

3) $M$ large but $\sigma$ moderate: I want the distribution of $x$ as a function of $\sigma$ (and possibly $\mu$). A quick simulation check of case 1 is sketched below.
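To make case 1 concrete, here is a minimal simulation sketch checking the claimed $\sqrt{M/2}$ standard deviation. The values of $M$, $\mu$, $\sigma$, and the number of trials are assumptions chosen for illustration, not part of the original derivation.

Python:
# Minimal check of case 1: for large M, the real and imaginary parts
# of x should each have standard deviation close to sqrt(M/2).
# M, TRIALS, MU, SIGMA are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(0)
M, TRIALS = 1000, 5000
MU, SIGMA = np.pi, 2.0   # "large" sigma relative to 2*pi
k = 1

U = rng.normal(MU, SIGMA, size=(TRIALS, M))
beta = rng.uniform(0, 2 * np.pi, size=(TRIALS, M))
x = np.exp(1j * (U * k + beta)).sum(axis=1)

# All three printed numbers should be close to sqrt(500) ~ 22.4
print(np.std(x.real), np.std(x.imag), np.sqrt(M / 2))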

Or am I approaching the question in the wrong way? The original problem is to estimate $\mu$ and $\sigma$ when some realizations of $x$ are available. Is it worth having an expression for the distribution of both the real and imaginary parts of $x$ when $M$ is large, in terms of $\mu$ and $\sigma$? I was thinking of it because, in that case, the conditional probabilities would be easier to find and any iterative technique could be used to estimate $\mu$ and $\sigma$.

What I have tried so far:

If I write the original model like this:

$$x(k) = \sum_m (A_m + i \Gamma_m)$$
I have the densities of $A$ and $\Gamma$.

They look like the following:

$$f_A(\alpha) = \sum_{n = -\infty}^{+\infty} \frac{1}{\sqrt{1 - \alpha^2}} \Big( f_Y\big(2(n+1)\pi - \cos^{-1}(\alpha)\big) - f_Y\big(2n\pi + \cos^{-1}(\alpha)\big) \Big)$$
$$f_\Gamma(\gamma) = \sum_{n = -\infty}^{+\infty} \frac{1}{\sqrt{1 - \gamma^2}} \Big( f_Y\big(2n\pi + \sin^{-1}(\gamma)\big) - f_Y\big((2n-1)\pi - \sin^{-1}(\gamma)\big) \Big)$$
Where $f_Y(y)$ is the density function of the random variable $Y = Uk + \beta$, which is

$$f_Y(y) = \frac{1}{4\pi} \Big[ \operatorname{erf}\Big( \frac{k\mu - y + 2\pi}{\sqrt{2}\, k\sigma} \Big) - \operatorname{erf}\Big( \frac{k\mu - y}{\sqrt{2}\, k\sigma} \Big) \Big]$$
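As a quick numerical sanity check of this $f_Y(y)$ expression, one can histogram samples of $Y = Uk + \beta$ and compare against the closed form. This is only a sketch; the values of $\mu$, $\sigma$, and $k$ are assumptions chosen for illustration.

Python:
# Compare the closed-form f_Y(y) above with a histogram of samples
# of Y = U*k + beta. mu, sigma, k are assumed values for illustration.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)
mu, sigma, k = np.pi, 0.4, 3
U = rng.normal(mu, sigma, 200_000)
beta = rng.uniform(0, 2 * np.pi, 200_000)
Y = U * k + beta

def f_Y(y):
    s = np.sqrt(2) * k * sigma
    return (erf((k * mu - y + 2 * np.pi) / s) - erf((k * mu - y) / s)) / (4 * np.pi)

hist, edges = np.histogram(Y, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# Analytic density and empirical histogram height should agree closely.
for y in np.linspace(Y.min() + 1, Y.max() - 1, 5):
    i = np.argmin(np.abs(centers - y))
    print(f"y={y:7.2f}  analytic={f_Y(y):.4f}  empirical={hist[i]:.4f}")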
 
The original problem is to estimate $\mu$ and $\sigma$ when some realizations of $x$ are available.
I haven't looked through the rest of your post, but if this is your goal then you want to look at the Bayesian estimator and Bühlmann credibility. The theory starts on page 5, followed by examples.

The main idea of the theory is that you make an assumption about your model $f(x|\theta)$ and the initial distribution of your parameter, $f(\theta)$. As you collect more data $x$, you update your model:
$$f(\theta|\text{data}) = \frac{f(\text{data}|\theta) \times f(\theta)}{\int_{-\infty}^{\infty} f(\text{data}|\theta) \times f(\theta)\, d\theta}$$
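To illustrate the update rule numerically, here is a minimal grid-approximation sketch for a toy case: Normal data with known unit variance and a flat prior on the mean. All numbers are assumptions for illustration; this is not the thread's signal model.

Python:
# Grid approximation of f(theta|data) for a toy Normal-mean problem.
# The grid sum plays the role of the integral in the denominator.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, size=20)      # pretend observations
theta = np.linspace(-5, 5, 1001)          # grid over the unknown mean
prior = np.ones_like(theta)               # flat prior f(theta)

# log f(data|theta): product of Normal(theta, 1) densities, in log space
loglik = -0.5 * ((data[:, None] - theta[None, :]) ** 2).sum(axis=0)
post = np.exp(loglik - loglik.max()) * prior
post /= post.sum() * (theta[1] - theta[0])  # normalize (the integral)

print("posterior mean ~", (theta * post).sum() * (theta[1] - theta[0]))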
Some other common methods to estimate parameters are Maximum Likelihood Estimators and K-S hypothesis testing.
 
Thank you for your answer. However, in this case I am unable to find the $f(\text{data}|\theta)$. How can I find this?
 
The concept is too dense for me to type it all out.
See this Wikipedia page, which has an example of the Bernoulli-Uniform likelihood-conjugate prior (the Uniform is a Beta with $\alpha=\beta=1$).
All conjugate prior pairs are derived in the same manner. Scroll down to the bottom to see the continuous distributions. Most of them are Normal; however, I don't see one for Normal-Uniform (Beta). You might need to derive it yourself or research it.

Edit: Look at the example, and see if you can spot the $f(\text{data}|\theta)$.
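For illustration of the conjugate mechanics, take a standard textbook pair (an assumed example, not the Normal-Uniform pair discussed above): a Normal likelihood with known variance $\sigma_0^2$ and a Normal prior on the mean. With $\mu \sim \mathcal{N}(m_0, s_0^2)$ and data $x_1,\dots,x_n \sim \mathcal{N}(\mu, \sigma_0^2)$, the posterior is

$$\mu \mid x_{1:n} \sim \mathcal{N}\left( \frac{m_0/s_0^2 + n\bar{x}/\sigma_0^2}{1/s_0^2 + n/\sigma_0^2},\ \left(\frac{1}{s_0^2} + \frac{n}{\sigma_0^2}\right)^{-1} \right)$$

Here $f(\text{data}|\theta)$ is simply the product of the $n$ Normal densities evaluated at the observations.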
 
Please correct me if I am wrong. I see in the conjugate prior example that the prior of the parameters is chosen as a distribution having hyperparameters, and that the posterior predictive can be formulated in closed form. My doubts are the following.

1. Where can I see a computation of this posterior predictive with a non-linear model like mine? I want to see how it is computed in closed form.
2. I see that the list describes a set of posterior-predictive distributions for a set of likelihood functions $p(\text{data}|\theta)$. However, how do I find this likelihood for my parameters? For a given $\theta$, what is my $p(\text{data}|\theta)$? I find it difficult to think about, because the model has random samples drawn from a distribution defined by the parameters, and not directly from the parameters.

For example, let's consider a model like the following:


$$x(k) = e^{ivk}, \qquad z(k) = e^{ivk} + n(k)$$

Where $n(k)$ is zero-mean complex Gaussian noise:

$$\Re(n) \sim \frac{1}{\sqrt{2}} \mathcal{N}(0, S^2), \qquad \Im(n) \sim \frac{1}{\sqrt{2}} \mathcal{N}(0, S^2)$$
In this case the likelihood $p(z|v)$ can be found by doing something like this,

$$\log p(z|v) = - \frac{\sum_{k} \| z_k - x_k(v) \|^2}{S^2}$$
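As a numerical sketch of this toy likelihood, it can be evaluated on a grid of candidate $v$. The true $v$, the noise level $S$, and the sample times are assumed values for illustration.

Python:
# Evaluate log p(z|v) on a grid for the toy model z(k) = exp(1j*v*k) + n(k).
# v_true, S, and the grid are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(3)
K = np.arange(50)
S, v_true = 0.3, 0.7
z = np.exp(1j * v_true * K) + (S / np.sqrt(2)) * (
    rng.standard_normal(K.size) + 1j * rng.standard_normal(K.size))

v_grid = np.linspace(0, np.pi, 2001)
# log p(z|v) up to a constant, as in the expression above
loglik = np.array([-np.sum(np.abs(z - np.exp(1j * v * K)) ** 2) / S**2
                   for v in v_grid])
print("argmax over the grid:", v_grid[np.argmax(loglik)])  # near v_true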
However, my model described above is not a direct function of the parameters $\theta = [\mu, \sigma]$. In my model above, I should also have zero-mean complex Gaussian noise $n(k)$ added to $x(k)$. I will rewrite the question. EDIT: I just realized I can't edit the question now.

I am new to statistical inference problems. Could you please elaborate?
 
Your situation is quite unique, especially since it involves a complex and non-linear model. I don't have much experience with complex models. Plus, I don't have the background or a full understanding of the model. I can only point you in a general direction on where to look. I'm afraid I can't answer any specific questions regarding your model.
I do see that you've posted the thread on several forums and haven't gotten a response. One suggestion I can make is to break your BIG question into smaller questions. You might gain more information that way.
 
Thank you for the response. I really appreciate it. I am also thinking about different ways of approaching this in a broad sense. Coming from a background of more deterministic things in physics, it has been quite hard for me to think in terms of estimation and probability theory. On a larger scale, I understand the physical limitations of the problem. The problem here is to find $\sigma$ when there are not enough samples. I have tried many methods, but in the end it is a problem of resolution. So, from the physics, I know it is not achievable with just a few samples of $x$. That is why I chose to model the signal instead and to use some technique to get an estimate of this parameter. However, it already seems quite tough. I will try to research more on this.
 
For my curiosity, what are you modelling? What kind of data do you have? If you can provide a small sample of your data, I might be able to make some recommendations.
 
I really appreciate your help. I will try to explain in a bit more detail. My signal, or rather my observations, are artificial and I make them with a simulation. It is a signal containing many frequencies which, in reality, form a Gaussian-shaped distribution; that is the nature of the problem. However, the sensor can detect a limited range of frequencies because of the sampling in time $dt$. So I assume first that, although the frequencies form a Gaussian distribution in reality, most of the Gaussian remains within the limits $0$ to $\omega_{max}$ (I am writing in terms of angular frequencies to make the problem simpler to explain). All the sinusoids in this signal have an initial phase that is uniformly distributed from $0$ to $2\pi$. That is why I write the signal model as follows.

$$x(k) = \sum_m^{M} e^{i (\omega_m k + \beta_m)}, \quad k \in \mathbb{Z},\ k \geq 0$$
Where $\omega \sim \mathcal{N}(\mu, \sigma)$ and

$$\beta \sim \mathcal{U}(0, 2\pi)$$
Then I add complex Gaussian noise to $x$ to create the observations:

$$z = x + n$$
$$\Re(n) \sim \frac{1}{\sqrt{2}} \mathcal{N}(0, S^2), \qquad \Im(n) \sim \frac{1}{\sqrt{2}} \mathcal{N}(0, S^2)$$
I am pasting the code that generates that signal (MATLAB).

One more detail: the observations are not available continuously. They are available periodically, with a period much larger than the sampling interval. For example, $5$ observations are available (Nt in the code), then there is a gap equivalent to $95$ observation intervals (Ngap in the code), then again $5$ observations, and so on.

This kind of gapped observation makes it difficult for any frequency-domain technique (like the DFT) to blindly estimate the frequency statistics of the signal. That is why I wanted to model the signal and find the parameters of the model. So the objective is to find an estimate (a posterior, if that is the correct jargon) for $\mu$ and $\sigma$; especially $\sigma$, which is more difficult.

(The code block did not format correctly here; it is reposted, formatted, in the next response.)
 
As the code didn't format well and I can't edit it again, I am writing another response.

The code is originally MATLAB, but I can't find a MATLAB format in this editor, so I used the Python one.


Python:
clear;

% Parameters of the frequency distribution: omega ~ N(MU, SIG)
MU = pi;
SIG = 0.133 * pi;

% Number of sinusoids in the sum
M = 1e6;

% Random frequencies and uniform initial phases
Omega = normrnd(MU, SIG, [1 M]);
beta = 2*pi*rand([1 M]);

% Nt observations per burst, separated by gaps of Ngap samples
Nt = 5;
Ngap = 95;

% Time indices of the four observation bursts
K = [0:1:Nt-1 Ngap+Nt+1:1:Ngap+2*Nt 2*(Ngap+Nt)+1:1:2*(Ngap+Nt)+Nt 3*(Ngap+Nt)+1:1:3*(Ngap+Nt)+Nt];

% Signal model: x(k) = sum_m exp(1j*(Omega_m*K(k) + beta_m))
for k = 1:length(K)
    x(k) = sum( exp(1j .* Omega .* K(k) + 1j .* beta) );
end

% Add complex Gaussian noise with standard deviation S_n
S_n = 0.01;
z = x + 1/sqrt(2) .* (normrnd(0, S_n, [1 length(K)]) + 1j .* normrnd(0, S_n, [1 length(K)]));

figure; plot(real(z)); hold on; plot(imag(z));
 
Ok, I have a little better understanding of the situation. Let's try something simple, i.e., the Maximum Likelihood Estimator (MLE).
For $\omega \sim \mathcal{N}(\mu,\sigma)$, we can estimate the parameters $\hat{\mu}$ and $\hat{\sigma}^2$ as follows.

$$\hat{\mu} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad \hat{\sigma}^2 = \frac{\sum_{i=1}^n (x_i - \hat{\mu})^2}{n}$$
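A minimal numerical sketch of these estimators, assuming (for illustration only) that direct samples of the Normal variable are available; the parameter values are made up to match the simulation above.

Python:
# MLE of mu and sigma^2 from direct samples of a Normal variable.
import numpy as np

rng = np.random.default_rng(4)
samples = rng.normal(np.pi, 0.133 * np.pi, size=10_000)

mu_hat = samples.mean()
sigma2_hat = ((samples - mu_hat) ** 2).mean()  # MLE uses 1/n, not 1/(n-1)
print(mu_hat, np.sqrt(sigma2_hat))  # should be near pi and 0.133*pi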
 
But that would just give me the statistics of $x$ and not of the variable $\omega$, as far as I understand. For large $M$, as I explained with the central limit theorem, $\mu_x = 0$ and $\sigma_x^2 = \frac{M}{2}$. But these are not the parameters I want to estimate. I want to estimate the parameters of the distribution of $\omega$.
 
I might have reused $x_i$ as an observation of $x(k)$, which confused you.
It's a similar idea for $\omega$. Use the observations $\omega_i$ instead:
Omega = normrnd(MU, SIG, [1 M]);
 
For me, the observation is not made on $\omega$ directly. If it were, it would be much easier. The observations I would have in reality are $z$.
 
I'm not too familiar with MATLAB, but aren't you generating an array of random samples for omega here?
Omega = normrnd(MU, SIG, [1 M]);
Or you're saying you won't be generating random omega in reality?
 
So, to explain it clearly: the code I present here generates the observations manually. Consider it a black box out of which you get the observations $z$. The only things known to a user (prior information) are that $\omega$ is Gaussian distributed, that $\beta$ is uniformly distributed, that the model of the signal is $x(k) = \sum_m^{M} e^{i(\omega_m k + \beta_m)}$, and that the noise variance $\sigma_n^2$ is known. The question is how to estimate the statistics of $\omega$ from the observations $z$. The MATLAB code I present is just for generating the measurements or observations; treat it as a sensor output.
 
Mathematically speaking, if $z = x + n$ and you only know $z$, then you have one equation with two unknowns. Especially for $x \in \mathbb{C}$, there would be infinitely many solutions. Am I getting it correctly?

Edit: However, you can estimate the variance of Gaussian white noise. See MSE, but I'm not 100% familiar with the method.
 
Let's say, for example, that I know the noise variance $\sigma_n$. I forgot to mention this in the original question.
 
Your situation is difficult because your observation isn't a direct observation of the variable you want to estimate. Now, if you know $z$ and $n$, then we can solve for $x$. However, we run into the same problem again, where $x$ is in terms of $\beta$ and $U$. One equation, two unknowns.
 
One solution I can think of is to let $\beta$ be its expected value instead, i.e. $E[\beta] = \pi$.
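For completeness, that expected value follows directly from the uniform density:

$$E[\beta] = \int_0^{2\pi} \frac{\beta}{2\pi}\, d\beta = \pi$$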
 