A Gaussian process in a nutshell
likelihood P(t | X, θ, φ) = N[m(X, φ), K] : a multivariate Gaussian
t : output vector (e.g. fluxes)
X : input matrix (e.g. time, pointing, detector temperature...)
θ, φ : (hyper-)parameters
m : mean function
φ : mean function parameters, one part of the (hyper-)parameters above
K : covariance matrix -> Knm ≡ cov[tn, tm] = k(xn, xm, θ)
k : kernel function -> k(xn, xm, θ)
θ : kernel parameters, the other part of the (hyper-)parameters above
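The likelihood above can be sketched in code. This is a minimal sketch, assuming a squared-exponential kernel and a zero mean function (the source does not fix a specific k or m); the function names `sq_exp_kernel` and `gp_log_likelihood` are hypothetical, introduced here for illustration.

```python
import numpy as np

def sq_exp_kernel(x1, x2, amp, length):
    """Squared-exponential kernel: k(x, x') = amp^2 exp(-(x - x')^2 / (2 length^2))."""
    return amp**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

def gp_log_likelihood(t, x, amp, length, noise, mean=0.0):
    """Log of the multivariate-Gaussian likelihood N(m, K) evaluated at outputs t."""
    # K_nm = k(x_n, x_m, θ) plus white noise on the diagonal
    K = sq_exp_kernel(x, x, amp, length) + noise**2 * np.eye(len(x))
    r = t - mean                      # residual from the mean function
    L = np.linalg.cholesky(K)         # K = L L^T, stable way to invert K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))  # alpha = K^{-1} r
    # log N = -1/2 r^T K^{-1} r - 1/2 log|K| - n/2 log(2π)
    return (-0.5 * r @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

x = np.linspace(0.0, 1.0, 50)
t = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(0).normal(size=50)
print(gp_log_likelihood(t, x, amp=1.0, length=0.2, noise=0.1))
```

In a GP fit, this value would be maximized (or sampled) over the kernel parameters θ = (amp, length, noise) and any mean parameters φ.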
Covariance of two random variables (E -> expectation):
cov(X, Y) = E[(X - μX)(Y - μY)]
= ΣxΣy (x - μX)(y - μY) P[X=x, Y=y]
where P[X=x, Y=y] is the probability that X=x and Y=y under the joint distribution of X and Y.
Shortcut formula for the covariance:
cov(X, Y) = E[XY] - μXμY
= ΣxΣy xy P[X=x, Y=y] - μXμY
Correlation coefficient of two random variables (SD -> standard deviation):
Corr(X, Y) = cov(X, Y) / (SD(X) SD(Y))
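The two covariance formulas above can be checked numerically on a small discrete joint distribution. The joint pmf below is a made-up example, not from the source; it just verifies that the definition and the shortcut agree.

```python
import numpy as np

# Hypothetical joint pmf P[X=x, Y=y]: rows index x, columns index y; entries sum to 1.
xs = np.array([0.0, 1.0])
ys = np.array([0.0, 1.0, 2.0])
P = np.array([[0.1, 0.2, 0.1],
              [0.2, 0.1, 0.3]])

mu_x = np.sum(xs[:, None] * P)   # μX = Σx x P[X=x]
mu_y = np.sum(ys[None, :] * P)   # μY = Σy y P[Y=y]

# Definition: cov(X, Y) = ΣxΣy (x - μX)(y - μY) P[X=x, Y=y]
cov_def = np.sum((xs[:, None] - mu_x) * (ys[None, :] - mu_y) * P)

# Shortcut: cov(X, Y) = E[XY] - μXμY
cov_short = np.sum(xs[:, None] * ys[None, :] * P) - mu_x * mu_y

# Correlation: Corr(X, Y) = cov(X, Y) / (SD(X) SD(Y))
sd_x = np.sqrt(np.sum((xs[:, None] - mu_x)**2 * P))
sd_y = np.sqrt(np.sum((ys[None, :] - mu_y)**2 * P))
corr = cov_def / (sd_x * sd_y)
print(cov_def, cov_short, corr)
```

Both routes give the same covariance, and the correlation is the covariance rescaled to be dimensionless and bounded in [-1, 1].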
reference : http://www.robots.ox.ac.uk/~mosb/public/pdf/165/algrain_suzanne.pdf