
BOptimal

sgptools.objectives.BOptimal

Bases: Objective

Computes the B-optimal design metric.

Refer to the following paper for more details:
  • Ott et al., 2024. Approximate Sequential Optimization for Informative Path Planning.

B-optimality minimizes the trace of the inverse of the covariance matrix, i.e., maximizes \(-Tr(K(X, X)^{-1})\). Since optimization algorithms typically minimize a function, this objective returns \(Tr(K(X, X)^{-1})\), which is then minimized.
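As a rough numerical sketch of the metric itself, the value is just the trace of the inverse of the (diagonally stabilized) covariance matrix. The hand-rolled RBF kernel below is a plain-NumPy stand-in for the GPflow kernel used by the class:

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) covariance: K[i, j] = variance * exp(-||xi - xj||^2 / (2 * lengthscale^2))
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

rng = np.random.default_rng(0)
X = rng.random((10, 2))                        # 10 candidate sensing locations in 2-D
K = rbf_kernel(X)
K_stable = K + (1e-6 + 0.1) * np.eye(10)       # jitter + noise variance on the diagonal
b_optimal = np.trace(np.linalg.inv(K_stable))  # Tr(K(X, X)^{-1}); smaller is better
```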

Source code in sgptools/objectives.py
class BOptimal(Objective):
    """
    Computes the B-optimal design metric.

    Refer to the following paper for more details:
        - Ott et al., 2024. *Approximate Sequential Optimization for Informative Path Planning.*

    B-optimality minimizes the trace of the inverse of the covariance matrix,
    i.e., maximizes $-Tr(K(X, X)^{-1})$. Since optimization
    algorithms typically minimize a function, this objective returns
    $Tr(K(X, X)^{-1})$, which is then minimized.
    """
    def __call__(self, X: tf.Tensor) -> tf.Tensor:
        """
        Computes the trace of the inverse of the covariance matrix $Tr(K(X, X)^{-1})$.

        Args:
            X (tf.Tensor): The input points (e.g., sensing locations) for which
                           the objective is to be computed. Shape: (M, D).

        Returns:
            tf.Tensor: The computed B-optimal metric value.

        Usage:
            ```python
            import gpflow
            import numpy as np
            import tensorflow as tf

            # X_objective is required by the base class but not used by B-Optimal
            X_objective = np.random.rand(100, 2)
            kernel = gpflow.kernels.SquaredExponential()
            noise_variance = 0.1

            b_optimal_objective = BOptimal(
                X_objective=X_objective,
                kernel=kernel,
                noise_variance=noise_variance
            )
            X_sensing = tf.constant(np.random.rand(10, 2))
            b_optimal_value = b_optimal_objective(X_sensing)
            ```
        """
        # K(X, X)
        K_X_X = self.kernel(X)
        inv_K_X_X = tf.linalg.inv(self.jitter_fn(K_X_X))
        trace_inv_K_X_X = tf.linalg.trace(inv_K_X_X)
        return trace_inv_K_X_X

__call__(X)

Computes the trace of the inverse of the covariance matrix \(Tr(K(X, X)^{-1})\).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `X` | `Tensor` | The input points (e.g., sensing locations) for which the objective is to be computed. Shape: (M, D). | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | The computed B-optimal metric value. |

Usage
import gpflow
import numpy as np
import tensorflow as tf

# X_objective is required by the base class but not used by B-Optimal
X_objective = np.random.rand(100, 2)
kernel = gpflow.kernels.SquaredExponential()
noise_variance = 0.1

b_optimal_objective = BOptimal(
    X_objective=X_objective,
    kernel=kernel,
    noise_variance=noise_variance
)
X_sensing = tf.constant(np.random.rand(10, 2))
b_optimal_value = b_optimal_objective(X_sensing)
Source code in sgptools/objectives.py
def __call__(self, X: tf.Tensor) -> tf.Tensor:
    """
    Computes the trace of the inverse of the covariance matrix $Tr(K(X, X)^{-1})$.

    Args:
        X (tf.Tensor): The input points (e.g., sensing locations) for which
                       the objective is to be computed. Shape: (M, D).

    Returns:
        tf.Tensor: The computed B-optimal metric value.

    Usage:
        ```python
        import gpflow
        import numpy as np
        import tensorflow as tf

        # X_objective is required by the base class but not used by B-Optimal
        X_objective = np.random.rand(100, 2)
        kernel = gpflow.kernels.SquaredExponential()
        noise_variance = 0.1

        b_optimal_objective = BOptimal(
            X_objective=X_objective,
            kernel=kernel,
            noise_variance=noise_variance
        )
        X_sensing = tf.constant(np.random.rand(10, 2))
        b_optimal_value = b_optimal_objective(X_sensing)
        ```
    """
    # K(X, X)
    K_X_X = self.kernel(X)
    inv_K_X_X = tf.linalg.inv(self.jitter_fn(K_X_X))
    trace_inv_K_X_X = tf.linalg.trace(inv_K_X_X)
    return trace_inv_K_X_X

__init__(X_objective, kernel, noise_variance, jitter=1e-06, **kwargs)

Initializes the base objective. This constructor primarily serves to define the expected parameters for all objective subclasses.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `X_objective` | `ndarray` | The fixed set of data points (e.g., candidate locations or training data points) against which MI is computed. Shape: (N, D). | *required* |
| `kernel` | `Kernel` | The GPflow kernel function to compute covariances. | *required* |
| `noise_variance` | `float` | The observed data noise variance, which is added to the jitter. | *required* |
| `jitter` | `float` | A small positive value to add for numerical stability to covariance matrix diagonals. Defaults to 1e-6. | `1e-06` |
| `**kwargs` | `Any` | Arbitrary keyword arguments. | `{}` |
Source code in sgptools/objectives.py
def __init__(self,
             X_objective: np.ndarray,
             kernel: gpflow.kernels.Kernel,
             noise_variance: float,
             jitter: float = 1e-6,
             **kwargs: Any):
    """
    Initializes the base objective. This constructor primarily serves to define
    the expected parameters for all objective subclasses.

    Args:
        X_objective (np.ndarray): The fixed set of data points (e.g., candidate locations
                                  or training data points) against which MI is computed.
                                  Shape: (N, D).
        kernel (gpflow.kernels.Kernel): The GPflow kernel function to compute covariances.
        noise_variance (float): The observed data noise variance, which is added to the jitter.
        jitter (float): A small positive value to add for numerical stability to covariance
                        matrix diagonals. Defaults to 1e-6.
        **kwargs: Arbitrary keyword arguments.
    """
    self.X_objective = tf.constant(X_objective)
    self.kernel = kernel
    self.noise_variance = noise_variance
    # Total jitter includes the noise variance
    self._base_jitter = jitter
    self.jitter_fn = lambda cov: jitter_fn(
        cov, jitter=self._base_jitter + self.noise_variance)

update(kernel, noise_variance)

Updates the kernel and noise variance for the MI objective. This method is crucial for optimizing the GP hyperparameters externally and having the objective function reflect those changes.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `kernel` | `Kernel` | The updated GPflow kernel function. | *required* |
| `noise_variance` | `float` | The updated data noise variance. | *required* |
Source code in sgptools/objectives.py
def update(self, kernel: gpflow.kernels.Kernel,
           noise_variance: float) -> None:
    """
    Updates the kernel and noise variance for the MI objective.
    This method is crucial for optimizing the GP hyperparameters externally
    and having the objective function reflect those changes.

    Args:
        kernel (gpflow.kernels.Kernel): The updated GPflow kernel function.
        noise_variance (float): The updated data noise variance.
    """
    # Update kernel's trainable variables (e.g., lengthscales, variance)
    for self_var, var in zip(self.kernel.trainable_variables,
                             kernel.trainable_variables):
        self_var.assign(var)

    self.noise_variance = noise_variance
    # Update the jitter function to reflect the new noise variance
    self.jitter_fn = lambda cov: jitter_fn(
        cov, jitter=self._base_jitter + self.noise_variance)
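To see why `update` copies trainable values in place rather than replacing the kernel object, here is a pure-Python analogue of the pattern (the `Kernel` and `SimpleObjective` classes below are illustrative stand-ins, not sgptools or GPflow code):

```python
class Kernel:
    # Illustrative stand-in for a GPflow kernel with trainable values.
    def __init__(self, lengthscale, variance):
        self.trainable_variables = [lengthscale, variance]

class SimpleObjective:
    def __init__(self, kernel, noise_variance):
        self.kernel = kernel
        self.noise_variance = noise_variance

    def update(self, kernel, noise_variance):
        # Copy values into the existing kernel so anything holding a
        # reference to self.kernel sees the new hyperparameters.
        for i, value in enumerate(kernel.trainable_variables):
            self.kernel.trainable_variables[i] = value
        self.noise_variance = noise_variance

obj = SimpleObjective(Kernel(1.0, 1.0), noise_variance=0.1)
original_kernel = obj.kernel               # reference held before the update
obj.update(Kernel(0.5, 2.0), noise_variance=0.05)
```

After the call, `original_kernel` still points at the objective's kernel, but its values reflect the newly trained hyperparameters.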