deodorant.covar-functions

Covariance functions for Deodorant.

matern-for-vector-input

(matern-for-vector-input dim K)
Converts a covariance function that takes log-sig-f and log-rho
 as separate inputs into one that accepts a single vector of
 correctly ordered hyperparameters.

Accepts: dim           - Dimension of data
         K             - Relevant kernel function

Returns: K-vec - the kernel function that now accepts x-diff-squared
                 followed by a hyperparameter vector
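
A minimal sketch of what such a wrapper could look like (illustrative only,
not the package's implementation); it assumes the hyperparameter vector is
ordered as [log-sig-f log-rho-1 ... log-rho-dim]:

(defn matern-for-vector-input-sketch
  "Illustrative: wraps kernel K so it can be called with a single
   hyperparameter vector instead of separate log-sig-f and log-rho."
  [dim K]
  (fn [x-diff-squared hyper-vec]
    (let [log-sig-f (first hyper-vec)
          log-rho   (vec (take dim (rest hyper-vec)))]
      (K x-diff-squared log-sig-f log-rho))))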

matern32-grad-K

(matern32-grad-K x-diff-squared log-sig-f log-rho)
Gradient of matern32-K.  Syntax as per matern32-K,
except it returns a DxNxN array of derivatives
in the different directions.  The first entry
along the first dimension is the derivative
with respect to log-sig-f; the remaining entries
are the derivatives with respect to log-rho.

matern32-grad-z

(matern32-grad-z xs-z-diff log-sig-f log-rho)
Jacobian of the side kernel matrix w.r.t. the new data point z for Matern 32.
Needed for calculating the derivative of the Expected Improvement, EI(z), when
using a gradient-based solver for the acquisition function, as outlined
on page 3 of
http://homepages.mcs.vuw.ac.nz/~marcus/manuscripts/FreanBoyle-GPO-2008.pdf.

Accepts:
xs-z-diff   - NxD matrix whose (i, j)th entry is x_ij - z_j
log-sig-f   - scalar; parameter of kernel function
log-rho     - D-dimensional parameter of kernel function

Returns:
[NxD] Jacobian of side kernel matrix w.r.t. new data point where
(i, j)th entry is d(kernel(x_i, z)) / d(z_j).
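
For concreteness, one entry of this Jacobian can be written down in closed
form.  A minimal sketch, assuming the standard ARD Matern 3/2 form
k(x, z) = sig-f^2 (1 + sqrt(3) r) exp(-sqrt(3) r) with
r^2 = sum_j ((x_j - z_j) / rho_j)^2, sig-f = exp(log-sig-f) and
rho = exp(log-rho) (this parameterisation is an assumption, not taken from
the package source):

(defn matern32-jacobian-entry-sketch
  "Illustrative: d(kernel(x_i, z)) / d(z_j) for the assumed ARD Matern 3/2
   form.  x-z-diff-row is the i-th row of xs-z-diff, j the column index."
  [x-z-diff-row j log-sig-f log-rho]
  (let [sig-f (Math/exp log-sig-f)
        rho   (mapv #(Math/exp %) log-rho)
        r     (Math/sqrt (reduce + (map (fn [d l] (Math/pow (/ d l) 2))
                                        x-z-diff-row rho)))]
    ;; 3 * sig-f^2 * (x_ij - z_j) / rho_j^2 * exp(-sqrt(3) r_i)
    (* 3 sig-f sig-f
       (/ (nth x-z-diff-row j) (Math/pow (nth rho j) 2))
       (Math/exp (- (* (Math/sqrt 3) r))))))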

matern32-K

(matern32-K x-diff-squared log-sig-f log-rho)
Covariance function for matern-32.

Accepts: x-diff-squared - an NxDxN matrix of squared distances, or an
                          NxD matrix of squared distances between the
                          old points and a new point
         log-sig-f      - a scalar
         log-rho        - a vector
Returns: A matrix K
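
The underlying kernel has the standard Matern 3/2 form
k(x, x') = sig-f^2 (1 + sqrt(3) r) exp(-sqrt(3) r), where r is the
length-scale-weighted distance.  A minimal point-wise sketch of this form
(illustrative only; the actual function operates on the pre-computed
x-diff-squared array, and the exact treatment of log-rho is an assumption):

(defn matern32-point-sketch
  "Illustrative: Matern 3/2 covariance between two points x and x',
   with sig-f = exp(log-sig-f) and per-dimension length-scales
   rho = exp(log-rho)."
  [x x' log-sig-f log-rho]
  (let [sig-f (Math/exp log-sig-f)
        rho   (mapv #(Math/exp %) log-rho)
        r     (Math/sqrt (reduce + (map (fn [a b l] (Math/pow (/ (- a b) l) 2))
                                        x x' rho)))
        u     (* (Math/sqrt 3) r)]
    (* sig-f sig-f (+ 1 u) (Math/exp (- u)))))

;; e.g. (matern32-point-sketch [0.0 0.0] [1.0 1.0] 0.0 [0.0 0.0])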

matern32-plus-matern52-grad-K

(matern32-plus-matern52-grad-K x-diff-squared log-sig-f-32 log-rho-32 log-sig-f-52 log-rho-52)
Gradient of compound covariance function for matern-32 and matern-52.

Accepts: x-diff-squared   - an NxDxN matrix of squared distances
         log-sig-f-32     - a scalar
         log-rho-32       - a vector
         log-sig-f-52     - a scalar
         log-rho-52       - a vector

Returns: A DxNxN array grad-K giving derivatives
         in the different directions.  The first entry
         along the first dimension is the derivative
         with respect to log-sig-f; the remaining entries
         are the derivatives with respect to log-rho.

matern32-plus-matern52-grad-z

(matern32-plus-matern52-grad-z x-z-diff log-sig-f-32 log-rho-32 log-sig-f-52 log-rho-52)
Jacobian of the side kernel matrix w.r.t. the new data point z for Matern 32 + Matern 52.
Needed for calculating the derivative of the Expected Improvement, EI(z), when
using a gradient-based solver for the acquisition function, as outlined
on page 3 of
http://homepages.mcs.vuw.ac.nz/~marcus/manuscripts/FreanBoyle-GPO-2008.pdf.

Accepts:
xs-z-diff   - NxD matrix whose (i, j)th entry is x_ij - z_j
log-sig-f   - scalar; parameter of kernel function
log-rho     - D-dimensional parameter of kernel function

Returns:
[NxD] Jacobian of side kernel matrix w.r.t. new data point where
(i, j)th entry is d(kernel(x_i, z)) / d(z_j).

matern32-plus-matern52-K

(matern32-plus-matern52-K x-diff-squared log-sig-f-32 log-rho-32 log-sig-f-52 log-rho-52)
Compound covariance function for matern-32 and matern-52.

Accepts: x-diff-squared   - an NxDxN matrix of squared distances
         log-sig-f-32     - a scalar
         log-rho-32       - a vector
         log-sig-f-52     - a scalar
         log-rho-52       - a vector

Returns: A matrix K
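
The compound kernel is the sum of the Matern 3/2 and Matern 5/2 kernels,
each evaluated with its own hyperparameters.  A minimal point-wise sketch,
reusing the illustrative helpers shown under matern32-K above and
matern52-K below (assumed names, not part of the package's API):

(defn matern32-plus-matern52-point-sketch
  "Illustrative: sum of the Matern 3/2 and Matern 5/2 covariances
   between two points, each with its own hyperparameters."
  [x x' log-sig-f-32 log-rho-32 log-sig-f-52 log-rho-52]
  (+ (matern32-point-sketch x x' log-sig-f-32 log-rho-32)
     (matern52-point-sketch x x' log-sig-f-52 log-rho-52)))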

matern32-xs-z

(matern32-xs-z xs z log-sig-f log-rho)
Side covariance vector for matern-32, i.e. the vector k where
k_i = kernel(x_i, z).

Accepts:
xs         - an NxD matrix (vector of vectors) of input points
z          - [Dx1] vector, the new data point
log-sig-f  - a scalar
log-rho    - a vector

Returns: A vector k of length N.
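
A minimal sketch of how this side vector relates to the point-wise kernel,
reusing the illustrative matern32-point-sketch from matern32-K above
(assumed names only):

(defn matern32-xs-z-sketch
  "Illustrative: side covariance vector k with k_i = kernel(x_i, z)."
  [xs z log-sig-f log-rho]
  (mapv #(matern32-point-sketch % z log-sig-f log-rho) xs))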

matern52-grad-K

(matern52-grad-K x-diff-squared log-sig-f log-rho)
Gradient of matern52-K.  Syntax as per matern52-K,
except it returns a DxNxN array of derivatives
in the different directions.  The first entry
along the first dimension is the derivative
with respect to log-sig-f; the remaining entries
are the derivatives with respect to log-rho.

matern52-grad-z

(matern52-grad-z xs-z-diff log-sig-f log-rho)
Jacobian of the side kernel matrix w.r.t. the new data point z for Matern 52.
Needed for calculating the derivative of the Expected Improvement, EI(z), when
using a gradient-based solver for the acquisition function, as outlined
on page 3 of
http://homepages.mcs.vuw.ac.nz/~marcus/manuscripts/FreanBoyle-GPO-2008.pdf.

Accepts:
xs-z-diff   - NxD matrix whose (i, j)th entry is x_ij - z_j
log-sig-f   - scalar; parameter of kernel function
log-rho     - D-dimensional parameter of kernel function

Returns:
[NxD] Jacobian of side kernel matrix w.r.t. new data point where
(i, j)th entry is d(kernel(x_i, z)) / d(z_j).

matern52-K

(matern52-K x-diff-squared log-sig-f log-rho)
Covariance function for matern-52.

Accepts: x-diff-squared - an NxDxN matrix of squared distances, or an
                          NxD matrix of squared distances between the
                          old points and a new point
         log-sig-f      - a scalar
         log-rho        - a vector

Returns: A matrix K
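
The Matern 5/2 kernel has the standard form
k(x, x') = sig-f^2 (1 + sqrt(5) r + 5 r^2 / 3) exp(-sqrt(5) r).  A minimal
point-wise sketch under the same assumed parameterisation as the Matern 3/2
sketch above (illustrative only; the actual function operates on the
pre-computed x-diff-squared array):

(defn matern52-point-sketch
  "Illustrative: Matern 5/2 covariance between two points x and x',
   with sig-f = exp(log-sig-f) and per-dimension length-scales
   rho = exp(log-rho)."
  [x x' log-sig-f log-rho]
  (let [sig-f (Math/exp log-sig-f)
        rho   (mapv #(Math/exp %) log-rho)
        r     (Math/sqrt (reduce + (map (fn [a b l] (Math/pow (/ (- a b) l) 2))
                                        x x' rho)))
        u     (* (Math/sqrt 5) r)]
    ;; 1 + u + u^2/3 equals 1 + sqrt(5) r + 5 r^2 / 3
    (* sig-f sig-f (+ 1 u (/ (* u u) 3)) (Math/exp (- u)))))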

matern52-xs-z

(matern52-xs-z xs z log-sig-f log-rho)
Side covariance vector for matern-52, i.e. the vector k where
k_i = kernel(x_i, z).

Accepts:
xs         - an NxD matrix (vector of vectors) of input points
z          - [Dx1] vector, the new data point
log-sig-f  - a scalar
log-rho    - a vector

Returns: A vector k of length N.