In statistics, Maximum Likelihood Estimation (MLE) is a standard way to estimate population parameters. This post walks through its mathematical form in detail and shows that it is an unconstrained optimization problem, so optimization algorithms can be used to approximate the MLE.
[Notation]
The unknown population parameter is $\theta = (\theta_1,\theta_2,\dots,\theta_m) \in \Theta \subset \mathbb{R}^{m}$
The random vector is $X = (X_1,X_2,\dots,X_p)$
An observed vector is $x = (x_1,x_2,\dots,x_p)$
The population distribution (joint probability density function) is $f = f(x,\theta)$, satisfying
$$(1)\ \forall \theta \in \Theta, \int_{\mathbb{R}^{p}} f(x,\theta)\,dx = 1 \qquad (2)\ \forall (x,\theta) \in \mathbb{R}^{p}\times \Theta, \quad f(x,\theta) \ge 0$$
(Note: a density only needs to be nonnegative and integrate to $1$; its pointwise values may exceed $1$.)
[Joint Distribution of the Sample]
Consider independent and identically distributed (iid) sampling. The $n$ samples are $\mathcal{D}=\{X^k\}^{n}_{k=1}$, with observed values $d = \{x^k\}^{n}_{k=1}$, where $X^{k} = (X^{k}_1,X^{k}_2,\dots,X^{k}_p)$ $$\{X^k\}^{n}_{k=1} \sim \prod^{n}_{k=1} f(x^{k},\theta) = \underbrace{f(x^{1},\theta) \cdot f(x^{2},\theta) \cdots f(x^{n},\theta)}_{\text{a function of } np + m \text{ variables}} =(*) $$
[Likelihood Function]
Once the sample is observed, $\mathcal{D} = d$, we treat $\theta$ as the variable; $(*)$ is then called the likelihood function $L(\theta|d)$. We can maximize this function to find the maximizing $\theta$, i.e. $$\theta^{*}(d):=\underset{\theta \in \Theta}{\operatorname{argmax}}\,L(\theta|d)$$
When instead we keep $\mathcal{D}$ random (the sample not yet observed), the result is called the Maximum Likelihood Estimator (MLE): $$\theta_{MLE}(\mathcal{D}):=\underset{\theta \in \Theta}{\operatorname{argmax}}\,L(\theta|\mathcal{D}) = \underset{\theta \in \Theta}{\operatorname{argmax}}\prod^{n}_{k=1} f(X^{k},\theta) $$
where $\theta_{MLE}(\mathcal{D}) = \theta_{MLE}(X^1,X^2,\dots,X^{n})$ is itself a random variable.
[Computing the MLE]
When $X^{k} = x^k$ (the sample is observed), computing the statistic amounts to solving the optimization problem
$$\theta_{MLE}(\mathcal{D}=d) = \underset{\theta \in \Theta}{argmax}\prod^{n}_{k=1} f(x^{k},\theta) $$
We can take the logarithm to turn the product into a sum (valid because $\log$ is strictly increasing, so it preserves the argmax):
$$\underset{\theta \in \Theta}{\operatorname{argmax}}\prod^{n}_{k=1} f(x^{k},\theta) = \underset{\theta \in \Theta}{\operatorname{argmax}}\sum^{n}_{k=1} \log f(x^{k},\theta) $$
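Beyond preserving the argmax, the log transform also matters numerically: a product of many density values underflows in floating point, while the sum of logs stays well-scaled. A minimal sketch (the standard-normal data and parameters are illustrative assumptions):

```python
import numpy as np

# Product of n densities vs. sum of n log-densities, for iid N(0,1) data.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=2000)

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

pdf_vals = normal_pdf(x, 0.0, 1.0)
print(np.prod(pdf_vals))         # underflows to 0.0 for large n
print(np.sum(np.log(pdf_vals)))  # the log-likelihood stays finite
```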
Setting the partial derivative with respect to each $\theta_i$ to zero (first-derivative test) yields the system of equations
$$\forall i =1,2,\dots,m \quad \frac{\partial}{\partial \theta_i}\sum^{n}_{k=1} \log f(x^{k},\theta) = \underbrace{\sum^{n}_{k=1}\frac{\frac{\partial}{\partial \theta_i}f(x^{k},\theta)}{f(x^{k},\theta)}}_{g_i(\theta)} = 0 $$
This is a nonlinear system of $m$ equations in $m$ unknowns:
$$\left\{\begin{array}{c}
g_1(\theta_1,\dots,\theta_m) = 0 \\
g_2(\theta_1,\dots,\theta_m) = 0 \\
\vdots \\
g_m(\theta_1,\dots,\theta_m) = 0 \\
\end{array}\right.$$
We can then use Newton's method to compute an approximate MLE!
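A minimal sketch of Newton's method applied to a score equation $g(\theta)=0$. To keep the example one-dimensional and checkable, it assumes an exponential density $f(x,\theta)=\theta e^{-\theta x}$, whose MLE has the known closed form $1/\bar{x}$; the simulated data and starting point are illustrative:

```python
import numpy as np

# Newton's method on the score g(theta) = d/dtheta sum_k log f(x_k, theta).
# For f(x, theta) = theta * exp(-theta * x):  g(theta) = n/theta - sum(x).
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1000)  # true theta = 1/2
n = len(x)

def g(theta):
    return n / theta - x.sum()

def g_prime(theta):
    return -n / theta**2

theta = 0.1  # starting point must lie in the basin of convergence
for _ in range(50):
    step = g(theta) / g_prime(theta)
    theta -= step
    if abs(step) < 1e-12:
        break

print(theta, 1 / x.mean())  # Newton iterate vs. closed-form MLE
```

The iterate converges quadratically to the closed-form answer $1/\bar{x}$; for the multi-parameter case the same idea uses the Jacobian of $(g_1,\dots,g_m)$ in place of $g'$.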
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[Classical Example: p = 1, m = 2]
Using the framework above, we show that for the one-dimensional normal distribution with parameters $(\mu,\sigma^2)$, the MLE is $(\bar{X},\frac{n-1}{n}S^2)$; here $\theta = (\mu, \sigma^2)$ and $\theta_{MLE}(\mathcal{D}) = (\bar{X},\frac{n-1}{n}S^2)$,
where $\bar{X} =\frac{1}{n}\sum^{n}_{k=1}X^k$ (sample mean),
and $S^2 = \frac{1}{n-1}\sum^{n}_{k=1} (X^k- \bar{X})^2$ (unbiased sample variance).
Consider the normal density $$f(x^k,\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[{-\frac{(x^k-\mu)^2}{2\sigma^2}}\right]$$
We compute the partial derivatives of $f(x^{k},\theta)$. (It is convenient to differentiate with respect to $\mu$ and $\sigma$ rather than $\sigma^2$; since $\sigma > 0$, the critical points in $\sigma$ and in $\sigma^2$ coincide.)
$\left\{\begin{array}{l}
\frac{\partial}{\partial \mu}f(x^k,\mu,\sigma^2) = \frac{\partial}{\partial \mu}\left( \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[{-\frac{(x^k-\mu)^2}{2\sigma^2}}\right] \right) \\ = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x^k-\mu)^2}{2\sigma^2}} \cdot \left(\frac{x^k-\mu}{\sigma^2}\right) = \left(\frac{x^k-\mu}{\sigma^2}\right) f(x^k,\mu,\sigma^2) \\
\frac{\partial}{\partial \sigma}f(x^k,\mu,\sigma^2) = \frac{\partial}{\partial \sigma} \left( \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[{-\frac{(x^k-\mu)^2}{2\sigma^2}}\right]\right) \\ = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x^k-\mu)^2}{2\sigma^2}}\left[ \frac{(x^k-\mu)^2}{\sigma^3}-\frac{1}{\sigma} \right] =\left[ \frac{(x^k-\mu)^2}{\sigma^3}-\frac{1}{\sigma} \right] f(x^k,\mu,\sigma^2) \\ \end{array}\right. $
Setting the two score sums to zero gives:
$\begin{array}{l} (1)\ \sum^{n}_{k=1}\left(\frac{x^k-\mu}{\sigma^2}\right) = 0 \overset{\sigma > 0}{\Longrightarrow} \mu_{MLE}(d) = \frac{1}{n}{\sum^{n}_{k=1} x^k} \\ \Longrightarrow \bar{X}(\mathcal{D}) := \frac{1}{n}\sum^{n}_{k=1} X^k\text{ is the MLE of } \mu \\ \\
(2)\ \sum^{n}_{k=1}\left[ \frac{(x^k-\mu)^2}{\sigma^3}-\frac{1}{\sigma} \right] = 0 \Longrightarrow \sigma^2_{MLE}(d) = \frac{1}{n}\sum^{n}_{k=1}(x^k-\mu_{MLE})^2 \\ \overset{\text{by }(1)}{\Longrightarrow} \frac{n-1}{n}S^2(\mathcal{D}) = \frac{1}{n}\sum^{n}_{k=1}(X^k-\bar{X})^2 \text{ is the MLE of } \sigma^2 \\ \end{array}$
Note: the MLE of $\sigma^2$ is $\frac{1}{n}\sum^{n}_{k=1}(X^k-\bar{X})^2$, which is a biased estimator!
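The closed-form result is easy to check numerically: the σ² MLE equals the sample variance with denominator $n$, i.e. $\frac{n-1}{n}S^2$. A minimal sketch with simulated data (the sample size and true parameters are illustrative):

```python
import numpy as np

# Verify: mu_MLE = sample mean, sigma^2_MLE = (n-1)/n * S^2.
rng = np.random.default_rng(2)
x = rng.normal(loc=3.0, scale=1.5, size=500)
n = len(x)

mu_mle = x.mean()
sigma2_mle = np.mean((x - mu_mle)**2)  # denominator n (the biased MLE)
s2 = x.var(ddof=1)                     # unbiased sample variance S^2

print(np.isclose(sigma2_mle, (n - 1) / n * s2))  # True
```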
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[The above is shared purely for academic exchange; if you find errors or have suggestions, feel free to leave a comment~~]
by Plus & Minus 2017.08