# Extreme deconvolution: inferring complete distribution functions from noisy, heterogeneous and incomplete observations

###### Abstract

We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual $d$-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or “underlying” distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the unique uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with Bayesian priors on all of the model parameters and a “split-and-merge” procedure designed to avoid local maxima of the likelihood. We apply this technique to a few typical astrophysical applications.

^1 Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003

^2 To whom correspondence should be addressed:

^3 Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany

^4 Department of Computer Science, University of Toronto, 6 King’s College Road, Toronto, Ontario, M5S 3G4 Canada

^5 Google Inc., Mountain View, CA

## 1 Introduction

Inferring a distribution function given a finite set of samples from this distribution function is a problem of considerable general interest. The literature contains many density estimation techniques, ranging from simply binning the data in a histogram, through smoother versions of this collectively known as kernel density estimation, to more sophisticated techniques such as non-parametric (penalized) maximum likelihood fitting (see, e.g., Silverman, 1986, for a review of all of the previous methods) and full Bayesian analyses (e.g., Diebolt and Robert, 1994; Richardson and Green, 1997). These techniques perform well in the high signal-to-noise regime, i.e., when one has “good data”. In scientific applications, however, the data generally come with large and heterogeneous uncertainties, and one often only has access to lower-dimensional projections of the full data, i.e., there are often missing data. As a scientist, you are not interested in the observed distribution; what you really want to know is the underlying distribution, i.e., the distribution you would have if the data had vanishingly small uncertainties and no missing data. In this paper we describe a general approach for inferring distribution functions when these complications are present.

A frequently used density estimation technique is to model the distribution function as a sum of Gaussian distributions, optimizing the likelihood of this model given the data (e.g., McLachlan and Basford, 1988). We show that this approach can be generalized to the case of noisy, heterogeneous, and incomplete data. The likelihood of the model for each data point is given by the model convolved with the (unique) uncertainty distribution of that data point; the objective function is obtained by simply multiplying these individual likelihoods together for the various data points. Optimizing this objective function, one obtains a maximum likelihood estimate of the distribution (more specifically, of its parameters).

While optimization of this objective function can, in principle, be performed by a generic optimizer, we develop an Expectation-Maximization (EM) algorithm that optimizes the objective function. This algorithm works in much the same way as the normal EM algorithm for mixture of Gaussians density estimation, except that an additional degree of incompleteness is given by the actual values of the observables, since we only have access to noisy projections of the actual observables; in the expectation step these actual values are estimated based on the noisy and projected measured values and the current estimate of the distribution function. In the limit in which the noise is absent but the data are lower dimensional projections of the quantities of interest, this algorithm reduces to the algorithm described in Ghahramani and Jordan (1994a, b).

We also show how Bayesian priors on all of the parameters of the model can be naturally included in this algorithm, and how a split-and-merge procedure that heuristically searches parameter space for better approximations to the global maximum can be incorporated in this approach. These priors and the split-and-merge procedure can be important when applying the EM algorithm developed here in situations with real data, where the likelihood surface can have a very complicated structure. We also briefly discuss the practical issues having to do with model selection in the mixture model approach.

Applications to real data sets are discussed in detail in Section 6, both in general terms, i.e., why the approach we put forward here is more appropriate when dealing with noisy, heterogeneous data than the more traditional density estimation techniques, as well as in some concrete examples. These concrete examples show that the technique developed in this paper performs extremely well even when the underlying distribution function has a complicated structure.

The technique we describe below has many applications besides returning a maximum likelihood fit to the error-deconvolved distribution function of a data sample. For instance, when an estimate of the uncertainty in the estimated parameters or distribution function is desired, or when a full Bayesian analysis of the mixture model is preferred, the outcome of the maximum likelihood technique developed here can be used as a seed for Markov Chain Monte Carlo (MCMC) methods for finite mixture modeling (e.g., Diebolt and Robert, 1994; Richardson and Green, 1997). Another possible application concerns fitting a linear relationship to a data set when the data have non-negligible errors in both $x$ and $y$, which can be correlated. This problem can be thought of as fitting the underlying, error-deconvolved distribution of the points with a single Gaussian; the linear relationship then corresponds to the direction of the largest eigenvalue of the underlying distribution’s covariance matrix. We describe this application in a specific case in section 6.2.

## 2 Likelihood of a mixture of Gaussian distributions given a set of heterogeneous, noisy samples

Our goal is to fit a model for the distribution of a $d$-dimensional quantity $\mathbf{v}$ using a set of observational data points $\mathbf{w}_i$. Therefore, we need to write down the probability of the data under the model for the distribution. The observations $\mathbf{w}_i$ are assumed to be noisy projections of the true values $\mathbf{v}_i$:

$$\mathbf{w}_i = \mathbf{R}_i\,\mathbf{v}_i + \mathrm{noise}\,, \qquad (1)$$

where the noise is drawn from a Gaussian with zero mean and known covariance tensor $\mathbf{S}_i$. The case in which there is missing data occurs when the projection matrix $\mathbf{R}_i$ is rank-deficient. Alternatively, we can handle the missing data case by describing the missing data as directions of the covariance matrix that have a formally infinite eigenvalue; in practice we use very large eigenvalues in the noise matrix. When the data have only a small degree of incompleteness, i.e., when each data point has only a small number of unmeasured dimensions, this latter approach is often the best choice, since one often has some idea about the unmeasured values. For example, in the example given below of inferring the velocity distribution of stars near the Sun, we know that the stars are moving at velocities that do not exceed the speed of light, which is not very helpful, but also that none of the velocities exceed the local Galactic escape speed, since we can safely assume that all the stars are bound to the Galaxy. However, in situations in which each data point has observations of a dimensionality much smaller than $d$, using the projection matrices will greatly reduce the computational cost, since, as will become clear below, the most computationally expensive operations all take place in the lower-dimensional space of the observations.
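To make this data model concrete, the following is a minimal NumPy sketch (ours, not the authors' code) that generates observations under equation (1); the dimensions, the rank-deficient projection for every other point, and the noise level are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 2          # dimension of the true quantity v
N = 5          # number of data points

# True values drawn from some underlying distribution (here a single Gaussian).
v = rng.multivariate_normal(mean=[1.0, -1.0], cov=np.eye(d), size=N)

# Each observation is a (possibly lower-dimensional) projection plus noise:
#   w_i = R_i v_i + noise_i,   noise_i ~ N(0, S_i)
# Here every other point only measures the first component (rank-deficient R_i).
R = [np.eye(d) if i % 2 == 0 else np.array([[1.0, 0.0]]) for i in range(N)]
S = [0.1 * np.eye(Ri.shape[0]) for Ri in R]   # heterogeneous noise covariances

w = [Ri @ vi + rng.multivariate_normal(np.zeros(Ri.shape[0]), Si)
     for Ri, vi, Si in zip(R, v, S)]
```

Note that the observations are heterogeneous: some live in the full two-dimensional space and some only in the one-dimensional measured subspace.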

We will model the distribution of the true values $\mathbf{v}$ as a mixture of $K$ Gaussians:

$$p(\mathbf{v}) = \sum_{j=1}^{K} \alpha_j\, N(\mathbf{v}\,|\,\mathbf{m}_j,\mathbf{V}_j)\,, \qquad (2)$$

where the amplitudes $\alpha_j$ sum to unity and the function $N(\mathbf{v}\,|\,\mathbf{m},\mathbf{V})$ is the Gaussian distribution with mean $\mathbf{m}$ and variance tensor $\mathbf{V}$:

$$N(\mathbf{v}\,|\,\mathbf{m},\mathbf{V}) = \left[\det(2\pi\mathbf{V})\right]^{-1/2}\exp\!\left(-\tfrac{1}{2}\,(\mathbf{v}-\mathbf{m})^\top\mathbf{V}^{-1}(\mathbf{v}-\mathbf{m})\right)\,. \qquad (3)$$

For a given observation $\mathbf{w}_i$ the likelihood of the model parameters $\theta \equiv \{\alpha_j, \mathbf{m}_j, \mathbf{V}_j\}$ given that observation and the noise covariance $\mathbf{S}_i$, which we will write as $p(\mathbf{w}_i\,|\,\theta)$, can be written as:

$$p(\mathbf{w}_i\,|\,\theta) = \int \mathrm{d}\mathbf{v}\; p(\mathbf{w}_i\,|\,\mathbf{v})\, p(\mathbf{v}\,|\,\theta)\,, \qquad (4)$$

where

$$p(\mathbf{w}_i\,|\,\mathbf{v}) = N(\mathbf{w}_i\,|\,\mathbf{R}_i\,\mathbf{v},\mathbf{S}_i)\,. \qquad (5)$$

This likelihood works out to be itself a mixture of Gaussians; in order not to clutter the algebra we set all of the projection matrices equal to the unit matrix here. Performing the integral over $\mathbf{v}$, the product of the two Gaussians integrates to a single Gaussian in $\mathbf{w}_i$, and the likelihood simplifies to

$$p(\mathbf{w}_i\,|\,\theta) = \sum_{j=1}^{K} \alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{T}_{ij})\,, \qquad (6)$$

where we have defined

$$\mathbf{T}_{ij} = \mathbf{V}_j + \mathbf{S}_i\,. \qquad (7)$$

Restoring the projection matrices $\mathbf{R}_i$, we have

$$p(\mathbf{w}_i\,|\,\theta) = \sum_{j=1}^{K} \alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{R}_i\,\mathbf{m}_j,\mathbf{T}_{ij})\,, \qquad (8)$$

where

$$\mathbf{T}_{ij} = \mathbf{R}_i\,\mathbf{V}_j\,\mathbf{R}_i^\top + \mathbf{S}_i\,. \qquad (9)$$

The free parameters $\theta$ of this model can now be chosen so as to maximize an explicit, justified, scalar objective function $\phi$, given here by the log likelihood of the model given the data, i.e.,

$$\phi = \sum_{i=1}^{N} \ln p(\mathbf{w}_i\,|\,\theta) = \sum_{i=1}^{N} \ln \sum_{j=1}^{K} \alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{R}_i\,\mathbf{m}_j,\mathbf{T}_{ij})\,. \qquad (10)$$

This function can be optimized in several ways, one of which is to calculate the gradients and use a generic optimizer to increase the likelihood until it reaches a maximum. This approach is complicated by parameter constraints (e.g., the amplitudes must all be non-negative and add up to one, and the variance tensors must be positive definite and symmetric). In what follows we will describe a different approach: an EM algorithm that iteratively maximizes the likelihood while naturally respecting the restrictions on the parameters.
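The objective of equation (10) can be evaluated directly; the following sketch (assuming NumPy; the helper names are ours) computes it, working with log-space sums for numerical stability:

```python
import numpy as np

def gaussian_logpdf(w, mean, cov):
    """Log of the Gaussian density N(w | mean, cov) of equation (3)."""
    diff = w - mean
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

def total_log_likelihood(w, R, S, alpha, m, V):
    """Objective of equation (10): sum_i ln sum_j alpha_j N(w_i | R_i m_j, T_ij)."""
    phi = 0.0
    for wi, Ri, Si in zip(w, R, S):
        terms = [np.log(a) + gaussian_logpdf(wi, Ri @ mj, Ri @ Vj @ Ri.T + Si)
                 for a, mj, Vj in zip(alpha, m, V)]
        phi += np.logaddexp.reduce(terms)  # stable log-sum-exp over components
    return phi
```

A generic optimizer could in principle be pointed at this function, subject to the parameter constraints discussed above.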

## 3 Fitting Mixtures with heterogeneous, noisy data using an EM algorithm

The problem of finding a maximum likelihood estimate of the parameters of the mixture-of-Gaussians model by optimizing the total log likelihood given in equation (10) is not inherently a missing-data problem. However, optimization of the total log likelihood is difficult, and an analytical solution is not possible for $K > 1$; an analytical solution does exist for a single Gaussian, $K = 1$, when the error covariances are equal (see below). Therefore, if we knew which Gaussian a specific data point was sampled from, optimization would be simple. The formulation of Gaussian mixture density estimation as a hidden-data problem takes advantage of this fact (Dempster et al., 1977).

When data are actually missing and/or noisy (error covariances not equal to zero), an analytical solution no longer exists. As we will show below, formulating the problem as a missing-data problem can in this case also lead to a simple, iterative algorithm that converges to a maximum likelihood estimate of the model parameters. First we recapitulate how the EM algorithm for Gaussian mixtures works by applying it to the basic problem of fitting a set of data points with a mixture of Gaussians, i.e., we set all the error covariances equal to zero. Then we investigate how the problem can be solved by a similar EM algorithm when we have incomplete and/or noisy data. A short summary of the general properties of the EM methodology is given in appendix C.

### 3.1 The EM algorithm with complete, precise observations

In the case of complete, precise observations (i.e., $\mathbf{S}_i = 0$ and $\mathbf{R}_i = \mathbb{I}$, such that $\mathbf{w}_i = \mathbf{v}_i$) the log likelihood of the model given the data from equation (10) reduces to the following log likelihood:

$$\phi = \sum_{i=1}^{N} \ln \sum_{j=1}^{K} \alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)\,. \qquad (11)$$

Formulating this problem as a missing-data problem introduces the indicator variables $q_{ij}$, which indicate whether data point $i$ was sampled from Gaussian $j$, i.e.,

$$q_{ij} = \begin{cases} 1\,, & \text{if data point } i \text{ was sampled from Gaussian } j\,,\\ 0\,, & \text{otherwise}\,. \end{cases} \qquad (12)$$

This variable can also take on values between these extremes, in which case $q_{ij}$ corresponds to the probability that data point $i$ was generated by Gaussian $j$. In any case, for every data point we have that $\sum_j q_{ij} = 1$.

Using this hidden indicator variable we can write the “full-data” log likelihood as

$$\tilde\phi = \sum_{i=1}^{N}\sum_{j=1}^{K} q_{ij}\, \ln\!\left[\alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)\right]\,. \qquad (13)$$

Using Jensen’s inequality (e.g., MacKay, 2003) it is easy to see that optimizing this full-data log likelihood also optimizes the original log likelihood, equation (11). Jensen’s inequality for a concave function $f$, numbers $x_j$, and weights $w_j \geq 0$ with $\sum_j w_j = 1$ can be stated as

$$f\!\left(\sum_j w_j\, x_j\right) \geq \sum_j w_j\, f(x_j)\,. \qquad (14)$$

The logarithm is a concave function and, if we choose $x_j = \alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)/q_{ij}$ and weights $w_j = q_{ij}$, the indicator variables defined above, we find (since $\sum_j q_{ij} = 1$)

$$\phi = \sum_i \ln \sum_j \alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j) \geq \sum_{ij} q_{ij}\, \ln\!\left[\alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)\right] - \sum_{ij} q_{ij}\, \ln q_{ij} \equiv \mathcal{Q}\,, \qquad (15)$$

with equality when we set

$$q_{ij} = \frac{\alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)}{\sum_k \alpha_k\, N(\mathbf{w}_i\,|\,\mathbf{m}_k,\mathbf{V}_k)}\,, \qquad (16)$$

since then

$$\sum_{ij} q_{ij}\, \ln \frac{\alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)}{q_{ij}} = \sum_{ij} q_{ij}\, \ln \sum_k \alpha_k\, N(\mathbf{w}_i\,|\,\mathbf{m}_k,\mathbf{V}_k) = \phi\,. \qquad (17)$$

This proves that the EM algorithm applied to the full-data log likelihood in equation (13) leads to a maximum likelihood estimate of the model parameters for the original likelihood: the E-step, i.e., taking the expectation of the full-data log likelihood, essentially reduces to calculating the expectation of the indicator variables given the data and the current model estimate, which is exactly setting the $q_{ij}$ equal to the posterior probability of data point $i$ belonging to Gaussian $j$, as given in equation (16). Optimization of the $\mathcal{Q}$-function from equation (15) with respect to the model parameters is equivalent to optimization of the full-data log likelihood, since the second term in the definition of $\mathcal{Q}$ does not depend on the model parameters and the first term is equal to the full-data log likelihood. This view of the EM algorithm for a mixture of Gaussians (Hathaway, 1986) can also be used to argue for different E- and/or M-steps in order to speed up convergence (Neal and Hinton, 1998). For example, one can choose particular data points to target in the E-step or specific model parameters in the M-step.

In the M-step we maximize $\mathcal{Q}$ with respect to the model parameters $\theta$. The constraint on the amplitudes (i.e., that they add up to one) can be implemented by adding a Lagrange multiplier $\lambda$ and an extra term $\lambda\left(\sum_j \alpha_j - 1\right)$ to $\mathcal{Q}$. Taking the derivative of this with respect to $\alpha_j$ then leads to

$$\frac{\partial}{\partial\alpha_j}\left[\mathcal{Q} + \lambda\left(\sum_k \alpha_k - 1\right)\right] = \frac{1}{\alpha_j}\sum_i q_{ij} + \lambda = 0\,. \qquad (18)$$

The requirement that the $\alpha_j$ add up to one leads to

$$\lambda = -\sum_{ij} q_{ij} = -N\,, \qquad (19)$$

where we have used the fact that $\sum_j q_{ij} = 1$ and $N$ is the number of data points. Therefore the optimal value of $\alpha_j$ is

$$\alpha_j = \frac{1}{N}\sum_i q_{ij}\,. \qquad (20)$$

The rest of the optimization reduces to optimizing the reduced log likelihood

$$\tilde\phi_{\mathbf{m},\mathbf{V}} = \sum_{ij} q_{ij}\, \ln N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)\,,$$

which we can simplify by using (1) that $(\mathbf{w}_i-\mathbf{m}_j)^\top\mathbf{V}_j^{-1}(\mathbf{w}_i-\mathbf{m}_j) = \mathrm{Tr}\!\left[\mathbf{V}_j^{-1}(\mathbf{w}_i-\mathbf{m}_j)(\mathbf{w}_i-\mathbf{m}_j)^\top\right]$, (2) that $\ln\det\mathbf{V}_j = \mathrm{Tr}\,\ln\mathbf{V}_j$, and (3) the cyclical property of the trace, $\mathrm{Tr}(\mathbf{A}\mathbf{B}\mathbf{C}) = \mathrm{Tr}(\mathbf{C}\mathbf{A}\mathbf{B})$. Differentiating with respect to $\mathbf{m}_j$ and $\mathbf{V}_j^{-1}$ gives

$$\frac{\partial\tilde\phi_{\mathbf{m},\mathbf{V}}}{\partial\mathbf{m}_j} = \sum_i q_{ij}\,\mathbf{V}_j^{-1}(\mathbf{w}_i-\mathbf{m}_j)\,,\qquad \frac{\partial\tilde\phi_{\mathbf{m},\mathbf{V}}}{\partial\mathbf{V}_j^{-1}} = \frac{1}{2}\sum_i q_{ij}\left[\mathbf{V}_j - (\mathbf{w}_i-\mathbf{m}_j)(\mathbf{w}_i-\mathbf{m}_j)^\top\right]\,. \qquad (21)$$

This is equal to zero when

$$\mathbf{m}_j = \frac{\sum_i q_{ij}\,\mathbf{w}_i}{\sum_i q_{ij}}\,,\qquad \mathbf{V}_j = \frac{\sum_i q_{ij}\,(\mathbf{w}_i-\mathbf{m}_j)(\mathbf{w}_i-\mathbf{m}_j)^\top}{\sum_i q_{ij}}\,. \qquad (22)$$

To summarize, the likelihood can be optimized by alternating the following E- and M-steps:

$$\text{E-step:}\quad q_{ij} \leftarrow \frac{\alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)}{\sum_k \alpha_k\, N(\mathbf{w}_i\,|\,\mathbf{m}_k,\mathbf{V}_k)}\,;$$

$$\text{M-step:}\quad \alpha_j \leftarrow \frac{1}{N}\sum_i q_{ij}\,,\quad \mathbf{m}_j \leftarrow \frac{1}{q_j}\sum_i q_{ij}\,\mathbf{w}_i\,,\quad \mathbf{V}_j \leftarrow \frac{1}{q_j}\sum_i q_{ij}\,(\mathbf{w}_i-\mathbf{m}_j)(\mathbf{w}_i-\mathbf{m}_j)^\top\,, \qquad (23)$$

where $q_j \equiv \sum_i q_{ij}$.
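These alternating steps can be sketched in NumPy as follows (an illustration with our own helper name, not the authors' implementation); one call performs one full E- plus M-step of equation (23):

```python
import numpy as np

def em_step(w, alpha, m, V):
    """One EM iteration (equation 23) for complete, noise-free data.

    w: (N, d) data; alpha: (K,) amplitudes; m: (K, d) means; V: (K, d, d).
    """
    N, d = w.shape
    K = len(alpha)
    # E-step: posterior probabilities q_ij (equation 16), in log space.
    q = np.empty((N, K))
    for j in range(K):
        diff = w - m[j]
        sign, logdet = np.linalg.slogdet(2 * np.pi * V[j])
        maha = np.einsum('ni,ni->n', diff @ np.linalg.inv(V[j]), diff)
        q[:, j] = np.log(alpha[j]) - 0.5 * (logdet + maha)
    q = np.exp(q - q.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)
    # M-step: amplitudes, means, covariances (equations 20 and 22).
    qj = q.sum(axis=0)
    alpha = qj / N
    m = (q.T @ w) / qj[:, None]
    V = np.stack([
        ((w - m[j]).T * q[:, j]) @ (w - m[j]) / qj[j] for j in range(K)
    ])
    return alpha, m, V
```

Iterating `em_step` until the log likelihood change per iteration is negligible gives the maximum likelihood mixture.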

A fatal flaw of the maximum likelihood technique described here (a flaw, we must emphasize, of the objective function and not of the EM algorithm) is that the likelihood is unbounded. Indeed, when one of the means is set equal to one of the data points, shrinking the covariance matrix of that Gaussian leads to an unbounded increase in the probability of that data point: the Gaussian becomes a delta distribution centered on that data point, and the model therefore has an infinite likelihood. This problem can be dealt with in a couple of different ways, some of which will be described below. We will describe a solution that assumes a prior for the model covariances, which regularizes the covariances at every step such that they cannot become zero. We will also use this technique to deal with the related problem of the model covariances reaching a maximum likelihood at the edge of their domain, i.e., when a covariance becomes zero without making the likelihood infinite. In the next section we describe a solution to the unbounded-likelihood problem that is especially well motivated when dealing with experimental or observational data: taking into account the measurement uncertainties.

### 3.2 The EM algorithm with heterogeneous, noisy data

The problem we would like to solve has an additional component of incompleteness. The data we are using are noisy, as described by their covariance matrices $\mathbf{S}_i$. This noise can vary from small fluctuations to a complete lack of information for certain components of the underlying quantity (as indicated by a formally infinite contribution to the covariance matrix). We can use the EM algorithm to deal with this second kind of incomplete data as well. (This algorithm was developed independently before: Diebolt & Celeux, 1989, unpublished; Diebolt & Celeux, 1990, unpublished.)

In the case of full-rank projection matrices, we could try to proceed exactly as we did in the case of complete data with noise covariance matrices equal to the zero matrix. The E-step remains the same, and the part of the M-step that updates the amplitudes will also be the same as before; however, in order to optimize the means and covariances, we would have to solve an equation similar to equation (21), i.e.,

$$\sum_i q_{ij}\,\mathbf{T}_{ij}^{-1}(\mathbf{w}_i-\mathbf{m}_j) = 0\,,\qquad \sum_i q_{ij}\,\mathbf{T}_{ij}^{-1}\left[\mathbf{T}_{ij} - (\mathbf{w}_i-\mathbf{m}_j)(\mathbf{w}_i-\mathbf{m}_j)^\top\right]\mathbf{T}_{ij}^{-1} = 0\,, \qquad (24)$$

in which we simply use $\mathbf{T}_{ij} = \mathbf{V}_j + \mathbf{S}_i$ instead of $\mathbf{V}_j$. This equation cannot be solved analytically for the means and covariances of the Gaussians when the noise covariances differ between observations. A reasonable way to deal with this would be to use a generic optimizer to perform the M-step optimization, which would still give a better result than using a generic optimizer for the whole problem since the dimensionality of the problem is greatly reduced; however, we will describe a different procedure, which deals with this problem in a similar way to how the EM algorithm simplified the complete-data case.

Essentially, what we will do is treat the situation of noisy measurements, which may or may not be projections of higher-dimensional quantities, as a missing-data problem in itself. That is, we will consider the true values $\mathbf{v}_i$ as extra missing data (in addition to the indicator variables $q_{ij}$). This allows us to write down the “full-data” log likelihood as follows:

$$\tilde\phi = \sum_{ij} q_{ij}\, \ln\!\left[\alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{R}_i\,\mathbf{v}_i,\mathbf{S}_i)\, N(\mathbf{v}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)\right]\,. \qquad (25)$$

We will now show how we can use the EM methodology to find straightforward update steps that maximize the full data likelihood of the model. Then we will prove that these updates also maximize the likelihood of the model given the noisy observations.

The E-step consists, as usual, of taking the expectation of the full-data log likelihood with respect to the current model parameters $\theta$. Writing out the full-data log likelihood from equation (25) we find

$$\tilde\phi = \sum_{ij} q_{ij}\left[\ln\alpha_j - \frac{1}{2}\ln\det\mathbf{V}_j - \frac{1}{2}(\mathbf{v}_i-\mathbf{m}_j)^\top\mathbf{V}_j^{-1}(\mathbf{v}_i-\mathbf{m}_j)\right] + (\text{terms independent of } \theta)\,, \qquad (26)$$

which shows that in addition to the expectation of the indicator variables $q_{ij}$ for each component we also need the expectation of the terms $\mathbf{v}_i$ and the expectation of the terms $\mathbf{v}_i\mathbf{v}_i^\top$ given the data, the current model estimate, and the component $j$. The expectation of the $q_{ij}$ is again equal to the posterior probability that a data point was drawn from the component $j$ (see equation 16). The expectation of the $\mathbf{v}_i$ and the $\mathbf{v}_i\mathbf{v}_i^\top$ can be found as follows: Consider the probability distribution of the stacked vector $(\mathbf{w}_i, \mathbf{v}_i)$ given the model estimate and the component $j$. From the description of the problem we can see that this vector is distributed normally with mean

$$\begin{pmatrix} \mathbf{R}_i\,\mathbf{m}_j \\ \mathbf{m}_j \end{pmatrix} \qquad (27)$$

and covariance matrix

$$\begin{pmatrix} \mathbf{R}_i\,\mathbf{V}_j\,\mathbf{R}_i^\top + \mathbf{S}_i & \mathbf{R}_i\,\mathbf{V}_j \\ \mathbf{V}_j\,\mathbf{R}_i^\top & \mathbf{V}_j \end{pmatrix}\,. \qquad (28)$$

The conditional distribution of the $\mathbf{v}_i$ given the data $\mathbf{w}_i$ is normal with mean (see appendix B)

$$\mathbf{b}_{ij} \equiv \mathbf{m}_j + \mathbf{V}_j\,\mathbf{R}_i^\top\,\mathbf{T}_{ij}^{-1}\,(\mathbf{w}_i - \mathbf{R}_i\,\mathbf{m}_j) \qquad (29)$$

and covariance matrix

$$\mathbf{B}_{ij} \equiv \mathbf{V}_j - \mathbf{V}_j\,\mathbf{R}_i^\top\,\mathbf{T}_{ij}^{-1}\,\mathbf{R}_i\,\mathbf{V}_j\,. \qquad (30)$$

Thus we see that the expectation of $\mathbf{v}_i$ given the data $\mathbf{w}_i$, the model estimate, and the component $j$ is given by $\mathbf{b}_{ij}$, whereas the expectation of $\mathbf{v}_i\mathbf{v}_i^\top$ given the same is given by $\mathbf{B}_{ij} + \mathbf{b}_{ij}\,\mathbf{b}_{ij}^\top$.
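Equations (29) and (30) are a standard Gaussian conditioning step; a NumPy sketch (the helper name is ours) makes the computation explicit:

```python
import numpy as np

def conditional_moments(w_i, R_i, S_i, m_j, V_j):
    """Conditional mean b_ij (eq. 29) and covariance B_ij (eq. 30) of v_i
    given the observation w_i, component j, and the current model estimate."""
    T_ij = R_i @ V_j @ R_i.T + S_i            # equation (9)
    gain = V_j @ R_i.T @ np.linalg.inv(T_ij)  # 'Kalman gain'-like factor
    b_ij = m_j + gain @ (w_i - R_i @ m_j)
    B_ij = V_j - gain @ R_i @ V_j
    return b_ij, B_ij
```

With a rank-deficient projection, the unmeasured directions of $\mathbf{v}_i$ keep the component's prior mean and variance, as one would expect.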

Given this, the expectation of the full-data log likelihood is given by

$$\langle\tilde\phi\rangle = \sum_{ij} q_{ij}\left[\ln\alpha_j - \frac{1}{2}\ln\det\mathbf{V}_j - \frac{1}{2}\mathrm{Tr}\!\left(\mathbf{V}_j^{-1}\left[(\mathbf{b}_{ij}-\mathbf{m}_j)(\mathbf{b}_{ij}-\mathbf{m}_j)^\top + \mathbf{B}_{ij}\right]\right)\right] + \text{const.} \qquad (31)$$

The update step for the amplitudes is given as before by equation (20). Dropping the $\ln\alpha_j$ term, the derivatives of the reduced expectation of the full-data log likelihood are given by

$$\frac{\partial\langle\tilde\phi\rangle}{\partial\mathbf{m}_j} = \sum_i q_{ij}\,\mathbf{V}_j^{-1}(\mathbf{b}_{ij}-\mathbf{m}_j)\,,\qquad \frac{\partial\langle\tilde\phi\rangle}{\partial\mathbf{V}_j^{-1}} = \frac{1}{2}\sum_i q_{ij}\left[\mathbf{V}_j - (\mathbf{b}_{ij}-\mathbf{m}_j)(\mathbf{b}_{ij}-\mathbf{m}_j)^\top - \mathbf{B}_{ij}\right]\,. \qquad (32)$$

The complete EM algorithm given incomplete, noisy observations is then given by

$$\text{E-step:}\quad q_{ij} \leftarrow \frac{\alpha_j\, N(\mathbf{w}_i\,|\,\mathbf{R}_i\,\mathbf{m}_j,\mathbf{T}_{ij})}{\sum_k \alpha_k\, N(\mathbf{w}_i\,|\,\mathbf{R}_i\,\mathbf{m}_k,\mathbf{T}_{ik})}\,,\quad \mathbf{b}_{ij} \leftarrow \mathbf{m}_j + \mathbf{V}_j\,\mathbf{R}_i^\top\,\mathbf{T}_{ij}^{-1}(\mathbf{w}_i - \mathbf{R}_i\,\mathbf{m}_j)\,,\quad \mathbf{B}_{ij} \leftarrow \mathbf{V}_j - \mathbf{V}_j\,\mathbf{R}_i^\top\,\mathbf{T}_{ij}^{-1}\,\mathbf{R}_i\,\mathbf{V}_j\,;$$

$$\text{M-step:}\quad \alpha_j \leftarrow \frac{1}{N}\sum_i q_{ij}\,,\quad \mathbf{m}_j \leftarrow \frac{1}{q_j}\sum_i q_{ij}\,\mathbf{b}_{ij}\,,\quad \mathbf{V}_j \leftarrow \frac{1}{q_j}\sum_i q_{ij}\left[(\mathbf{m}_j-\mathbf{b}_{ij})(\mathbf{m}_j-\mathbf{b}_{ij})^\top + \mathbf{B}_{ij}\right]\,, \qquad (33)$$

where, as before, $q_j \equiv \sum_i q_{ij}$.
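Putting the pieces together, one iteration of the algorithm in equation (33) can be sketched as follows (an illustrative NumPy implementation with our own name, not the authors' published code; per-point loops are used for clarity rather than speed):

```python
import numpy as np

def xd_em_step(w, R, S, alpha, m, V):
    """One iteration of the extreme-deconvolution EM algorithm (equation 33).

    w, R, S: lists of per-point observations, projections, noise covariances;
    alpha: (K,) amplitudes; m: (K, d) means; V: (K, d, d) covariances.
    """
    N, K, d = len(w), len(alpha), m.shape[1]
    q = np.empty((N, K))
    b = np.empty((N, K, d))
    B = np.empty((N, K, d, d))
    # E-step: responsibilities q_ij and conditional moments b_ij, B_ij.
    for i, (wi, Ri, Si) in enumerate(zip(w, R, S)):
        for j in range(K):
            T = Ri @ V[j] @ Ri.T + Si
            diff = wi - Ri @ m[j]
            sign, logdet = np.linalg.slogdet(2 * np.pi * T)
            q[i, j] = np.log(alpha[j]) - 0.5 * (logdet + diff @ np.linalg.solve(T, diff))
            gain = V[j] @ Ri.T @ np.linalg.inv(T)
            b[i, j] = m[j] + gain @ diff
            B[i, j] = V[j] - gain @ Ri @ V[j]
    q = np.exp(q - q.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)
    # M-step: closed-form updates of equation (33).
    qj = q.sum(axis=0)
    alpha = qj / N
    m = np.einsum('ij,ijd->jd', q, b) / qj[:, None]
    V = np.empty((K, d, d))
    for j in range(K):
        dm = b[:, j] - m[j]
        V[j] = (np.einsum('i,id,ie->de', q[:, j], dm, dm)
                + np.einsum('i,ide->de', q[:, j], B[:, j])) / qj[j]
    return alpha, m, V
```

With identity projections and zero noise covariances, this reduces exactly to the complete-data updates of equation (23).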

We can prove that this procedure for maximizing the full-data likelihood also maximizes the log likelihood of the data given the model. We use Jensen’s inequality in the continuous case (e.g., MacKay, 2003), i.e.,

$$f\!\left(\int \mathrm{d}x\; q(x)\, g(x)\right) \geq \int \mathrm{d}x\; q(x)\, f\big(g(x)\big)\,, \qquad (34)$$

for a concave function $f$, a function $g$, and a non-negative integrable function $q$, where we have assumed that $q$ is normalized, i.e., $q$ is a probability distribution. For each observation $\mathbf{w}_i$ we can then introduce a distribution $q_i(\mathbf{v}_i)$ such that

$$\ln p(\mathbf{w}_i\,|\,\theta) = \ln \int \mathrm{d}\mathbf{v}_i\; p(\mathbf{w}_i,\mathbf{v}_i\,|\,\theta) \geq \int \mathrm{d}\mathbf{v}_i\; q_i(\mathbf{v}_i)\, \ln p(\mathbf{w}_i,\mathbf{v}_i\,|\,\theta) + H[q_i]\,, \qquad (35)$$

where $\theta$, as before, represents the set of model parameters, and $H[q_i] = -\int\mathrm{d}\mathbf{v}_i\; q_i(\mathbf{v}_i)\,\ln q_i(\mathbf{v}_i)$ is the entropy of the distribution $q_i$. This inequality becomes an equality when we take

$$q_i(\mathbf{v}_i) = p(\mathbf{v}_i\,|\,\mathbf{w}_i,\theta)\,, \qquad (36)$$

since then

$$\int \mathrm{d}\mathbf{v}_i\; p(\mathbf{v}_i\,|\,\mathbf{w}_i,\theta)\, \ln \frac{p(\mathbf{w}_i,\mathbf{v}_i\,|\,\theta)}{p(\mathbf{v}_i\,|\,\mathbf{w}_i,\theta)} = \int \mathrm{d}\mathbf{v}_i\; p(\mathbf{v}_i\,|\,\mathbf{w}_i,\theta)\, \ln p(\mathbf{w}_i\,|\,\theta) = \ln p(\mathbf{w}_i\,|\,\theta)\,. \qquad (37)$$

The above holds for each data point, and summing over the data points we can write

$$\phi = \sum_i \ln p(\mathbf{w}_i\,|\,\theta) \geq \sum_i \int \mathrm{d}\mathbf{v}_i\; q_i(\mathbf{v}_i)\, \ln p(\mathbf{w}_i,\mathbf{v}_i\,|\,\theta) + \sum_i H[q_i]\,. \qquad (38)$$

The last factor reduces to calculating the posterior probabilities $q_{ij}$ and $p(\mathbf{v}_i\,|\,\mathbf{w}_i,j,\theta)$, and we can write the $\mathcal{Q}$-function as (we drop the entropy term here, since it plays no role in the optimization, as it does not depend on the model parameters)

$$\mathcal{Q} = \sum_{ij} q_{ij}\left[\ln\alpha_j + \int \mathrm{d}\mathbf{v}_i\; p(\mathbf{v}_i\,|\,\mathbf{w}_i,j,\theta)\, \ln N(\mathbf{v}_i\,|\,\mathbf{m}_j,\mathbf{V}_j)\right]\,, \qquad (39)$$

where we dropped the $\ln N(\mathbf{w}_i\,|\,\mathbf{R}_i\,\mathbf{v}_i,\mathbf{S}_i)$ terms since they do not depend on the model parameters directly. This shows that this reduces exactly to the procedure described above, i.e., to taking the expectation of the $\mathbf{v}_i$ and $\mathbf{v}_i\mathbf{v}_i^\top$ terms with respect to the distribution of the $\mathbf{v}_i$ given the data $\mathbf{w}_i$, the current parameter estimate, and the component $j$. We conclude that the E-step as described above ensures that the expectation of the full-data log likelihood becomes equal to the log likelihood of the model given the observed data. Optimizing this log likelihood in the M-step then also increases the log likelihood of the model given the observations. Therefore the EM algorithm we described will increase the likelihood of the model in every iteration, and the algorithm will approach local maxima of the likelihood. Convergence is identified, as usual, as extremely small incremental improvement in the log likelihood per iteration.

Some care must be taken to implement these equations in a numerically stable way. In particular, care should be taken to avoid underflow when computing the ratio of a small probability to the sum of other small probabilities (see appendix A). Notice that we do not have to explicitly enforce constraints on the parameters, e.g., keeping covariances symmetric and positive definite, since this is taken care of by the updates themselves. For example, the update equation for $\mathbf{V}_j$ is guaranteed by its form to produce a symmetric positive-semidefinite matrix.
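The standard remedy for the underflow issue is the log-sum-exp trick: compute all terms in log space and subtract the largest log term before exponentiating, so that at least one exponential is exactly one. A sketch (our helper name):

```python
import numpy as np

def posterior_probs(log_terms):
    """Compute q_ij = exp(t_ij) / sum_k exp(t_ik) from an (N, K) array of
    log-space terms t_ij = ln alpha_j + ln N(w_i | ., .), without underflow."""
    shifted = log_terms - log_terms.max(axis=1, keepdims=True)
    q = np.exp(shifted)
    return q / q.sum(axis=1, keepdims=True)
```

A naive implementation exponentiating terms like $-1000$ directly would return 0/0; the shifted version returns the correct ratios.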

In some cases we might want to keep some of the model parameters fixed during the EM steps; for the means and the covariances this can simply be achieved by not updating them during the M-step. However, if we want to keep certain of the amplitudes fixed we have to be more careful, as we have to satisfy the constraint that the amplitudes add up to one at all times. Therefore, if $j \notin \mathcal{F}$ indexes the free amplitudes and $\mathcal{F}$ is the set of amplitudes we want to keep fixed, the Lagrange-multiplier term we have to add to the $\mathcal{Q}$ functional is $\lambda\left(\sum_{j\notin\mathcal{F}}\alpha_j - \left[1 - \sum_{k\in\mathcal{F}}\alpha_k\right]\right)$, which leads to the following update equation for the free amplitudes:

$$\alpha_j \leftarrow \frac{\sum_i q_{ij}}{\sum_{k\notin\mathcal{F}}\sum_i q_{ik}}\left(1 - \sum_{k\in\mathcal{F}}\alpha_k\right)\,. \qquad (40)$$

## 4 Extensions to the basic algorithm

Singularities and local maxima are two problems that can severely limit the generalization capabilities of the computed density estimates, i.e., their ability to assign sensible densities to unseen data points. Both are commonly encountered when using the EM algorithm to iteratively compute the maximum likelihood estimates of Gaussian mixtures. Singularities arise when one of the covariances in the updates of equation (23) or equation (33) becomes singular, i.e., when a Gaussian in the fit becomes very peaked or elongated. A naive way of dealing with this problem is to artificially enforce a lower bound on the variances of the Gaussians in the mixture, but this usually results in poor inference capabilities of the resulting mixture estimate. Below we present a regularization scheme based on defining a Bayesian prior distribution on the parameter space.

The problem of local maxima is a natural consequence of the EM algorithm as presented above: as the EM procedure ensures a monotonic increase in log likelihood of the model, the algorithm will exit when reaching a local maximum, since it cannot reach the true maximum without passing through lower likelihood regions. A solution to this problem therefore has to discontinuously change the model parameters, thereby quite possibly ending up in a region of parameter space with a lower likelihood than the first estimate. As a result the log likelihood is no longer guaranteed to increase monotonically, however, decreases in log likelihood are rare and concentrated at these discontinuous jumps. A well-motivated algorithm based on moving Gaussians from overpopulated regions to underpopulated regions of parameter-space is described below. However, instead of this deterministic approach, a stochastic EM procedure could also be used to explore the likelihood surface (Broniatowski et al., 1983; Celeux and Diebolt, 1985, 1986).

We also briefly discuss methods to set the remaining free parameters, the number of Gaussians and the hyperparameters introduced in the Bayesian regularization described below.

### 4.1 Bayesian regularization

A general Bayesian regularization scheme consists of putting prior distributions on the Gaussian-mixture parameter space (Ormoneit and Tresp, 1996). A conjugate prior distribution for a single multivariate normal distribution is the product of a normal density and a Wishart density (Gelman et al., 2000),

$$p(\mathbf{m}_j,\mathbf{V}_j) = c\; N\!\left(\mathbf{m}_j\,\middle|\,\hat{\mathbf{m}}_j,\frac{\mathbf{V}_j}{\eta_j}\right) \det\!\left(\mathbf{V}_j^{-1}\right)^{\gamma_j}\exp\!\left[-\mathrm{Tr}\!\left(\boldsymbol{\Omega}_j\,\mathbf{V}_j^{-1}\right)\right]\,, \qquad (41)$$

with $c$ a normalization constant and $\hat{\mathbf{m}}_j$, $\eta_j$, $\gamma_j$, and $\boldsymbol{\Omega}_j$ hyperparameters. The proper prior distribution on the amplitudes is a Dirichlet density $D(\alpha_1,\ldots,\alpha_K\,|\,\delta_1,\ldots,\delta_K)$, given by

$$D(\alpha_1,\ldots,\alpha_K\,|\,\delta_1,\ldots,\delta_K) = c_\delta \prod_{j=1}^{K} \alpha_j^{\delta_j-1}\,, \qquad (42)$$

where $c_\delta$ is a normalizing factor, $\alpha_j \geq 0$, and $\sum_j \alpha_j = 1$ (as is the case for the amplitudes in the density mixture). The log likelihood we want to maximize then becomes

$$\phi_{\mathrm{post}} = \sum_i \ln p(\mathbf{w}_i\,|\,\theta) + \sum_j \ln p(\mathbf{m}_j,\mathbf{V}_j) + \ln D(\alpha_1,\ldots,\alpha_K\,|\,\delta_1,\ldots,\delta_K)\,. \qquad (43)$$

We can again use the EM algorithm to find a local maximum of this function. The E-step is the same as before (eq. [33]). In the M-step the functional we have to maximize is the same functional as in the unregularized case (given in eq. [31]) with added prior terms:

$$\langle\tilde\phi\rangle_{\mathrm{reg}} = \langle\tilde\phi\rangle + \sum_j \ln p(\mathbf{m}_j,\mathbf{V}_j) + \ln D(\alpha_1,\ldots,\alpha_K\,|\,\delta_1,\ldots,\delta_K)\,, \qquad (44)$$

of which we can again take derivatives with respect to the model parameters, which leads to the following update equations:

$$\alpha_j \leftarrow \frac{\sum_i q_{ij} + \delta_j - 1}{N + \sum_k (\delta_k - 1)}\,,\qquad \mathbf{m}_j \leftarrow \frac{\sum_i q_{ij}\,\mathbf{b}_{ij} + \eta_j\,\hat{\mathbf{m}}_j}{q_j + \eta_j}\,,$$

$$\mathbf{V}_j \leftarrow \frac{\sum_i q_{ij}\left[(\mathbf{m}_j-\mathbf{b}_{ij})(\mathbf{m}_j-\mathbf{b}_{ij})^\top + \mathbf{B}_{ij}\right] + \eta_j\,(\mathbf{m}_j-\hat{\mathbf{m}}_j)(\mathbf{m}_j-\hat{\mathbf{m}}_j)^\top + 2\boldsymbol{\Omega}_j}{q_j + 1 + 2\gamma_j}\,. \qquad (45)$$

These update equations have many free hyperparameters without well-motivated values. All of these could be used in principle to regularize the model parameters. For example, the $\delta_j$ could be used to keep the amplitudes from becoming very small. When one does not want to set these hyperparameters by hand, their optimal values can all be obtained by the model selection techniques described below. However, in what follows we will focus on the regularization of the covariances. It is easy to see from the update equation for the variances that the updated variances are bounded from below, since the numerator is greater than or equal to $2\boldsymbol{\Omega}_j$ and the denominator has an upper limit. Therefore we can focus on the matrix $\boldsymbol{\Omega}_j$ for the purpose of the regularization by setting the other parameters to the uninformative values:

$$\eta_j = 0\,,\qquad \delta_j = 1\,,\qquad \gamma_j = 0\,. \qquad (46)$$

We can further reduce the complexity to one free parameter by setting $\boldsymbol{\Omega}_j = \frac{w}{2}\,\mathbb{I}$, where $\mathbb{I}$ is the $d$-dimensional unit matrix. The update equations in the M-step then reduce to

$$\alpha_j \leftarrow \frac{1}{N}\sum_i q_{ij}\,,\qquad \mathbf{m}_j \leftarrow \frac{1}{q_j}\sum_i q_{ij}\,\mathbf{b}_{ij}\,,\qquad \mathbf{V}_j \leftarrow \frac{\sum_i q_{ij}\left[(\mathbf{m}_j-\mathbf{b}_{ij})(\mathbf{m}_j-\mathbf{b}_{ij})^\top + \mathbf{B}_{ij}\right] + w\,\mathbb{I}}{q_j + 1}\,. \qquad (47)$$

The best value to use for the parameter $w$ is not known a priori, but it can be determined, e.g., by jackknife or leave-one-out cross-validation.
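The regularized covariance update then amounts to adding $w$ times the identity to the numerator and one to the denominator of the unregularized update; a sketch (our helper name; `w_reg` stands for the scalar parameter $w$):

```python
import numpy as np

def regularized_V_update(q_j, dm, B_j, w_reg):
    """Regularized covariance update in the style of equation (47), assuming
    Omega_j = (w/2) I and the remaining hyperparameters uninformative.

    q_j:   (N,) responsibilities for component j;
    dm:    (N, d) differences b_ij - m_j;
    B_j:   (N, d, d) conditional covariances B_ij;
    w_reg: scalar regularization parameter w >= 0.
    """
    d = dm.shape[1]
    numer = (np.einsum('i,id,ie->de', q_j, dm, dm)     # scatter of the b_ij
             + np.einsum('i,ide->de', q_j, B_j)        # conditional covariances
             + w_reg * np.eye(d))                      # prior regularization
    return numer / (q_j.sum() + 1.0)
```

Because the numerator is at least `w_reg * I`, the updated covariance can never collapse to zero, which is exactly the point of the regularization.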

### 4.2 Split and merge algorithm for avoiding local maxima

The split-and-merge algorithm starts from the basic EM algorithm, with or without the Bayesian regularization of the variances, and jumps into action after the EM algorithm has reached a maximum, which more often than not is only a local maximum. At this point three of the Gaussians in the mixture are singled out, based on criteria detailed below; two of these Gaussians are merged while the third Gaussian is split into two Gaussians (Ueda et al., 1998). Let us denote the indices of the three selected Gaussians as $j_1$, $j_2$, and $j_3$, where the former two are to be merged while $j_3$ will be split.

The Gaussians corresponding to the indices $j_1$ and $j_2$ will be merged as follows: the model parameters of the merged Gaussian $j'$ are

$$\alpha_{j'} = \alpha_{j_1} + \alpha_{j_2}\,,\qquad \mathbf{m}_{j'} = \frac{\alpha_{j_1}\mathbf{m}_{j_1} + \alpha_{j_2}\mathbf{m}_{j_2}}{\alpha_{j_1}+\alpha_{j_2}}\,,\qquad \mathbf{V}_{j'} = \frac{\alpha_{j_1}\mathbf{V}_{j_1} + \alpha_{j_2}\mathbf{V}_{j_2}}{\alpha_{j_1}+\alpha_{j_2}}\,. \qquad (48)$$

Thus, the mean and the variance of the new Gaussian are amplitude-weighted averages of the means and variances of the two merging Gaussians.
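The merge step of equation (48) is straightforward to write down; a sketch (our helper name):

```python
import numpy as np

def merge_gaussians(alpha1, m1, V1, alpha2, m2, V2):
    """Merge two components (equation 48): the amplitudes add, and the mean
    and covariance are amplitude-weighted averages."""
    alpha = alpha1 + alpha2
    m = (alpha1 * m1 + alpha2 * m2) / alpha
    V = (alpha1 * V1 + alpha2 * V2) / alpha
    return alpha, m, V
```

Note that this weighted average of covariances is the initialization prescribed by the text; the subsequent partial EM re-optimization refines it.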

The Gaussian corresponding to $j_3$ is split as follows:

$$\alpha_{j_1'} = \alpha_{j_2'} = \frac{1}{2}\,\alpha_{j_3}\,,\qquad \mathbf{V}_{j_1'} = \mathbf{V}_{j_2'} = \det(\mathbf{V}_{j_3})^{1/d}\,\mathbb{I}\,. \qquad (49)$$

Thus, the Gaussian is split into two equally contributing Gaussians, each new Gaussian having an isotropic covariance matrix with the same volume as $\mathbf{V}_{j_3}$. The means $\mathbf{m}_{j_1'}$ and $\mathbf{m}_{j_2'}$ can be initialized by adding a random perturbation vector to $\mathbf{m}_{j_3}$, e.g.,

$$\mathbf{m}_{j_1'} = \mathbf{m}_{j_3} + \boldsymbol{\epsilon}_1\,,\qquad \mathbf{m}_{j_2'} = \mathbf{m}_{j_3} + \boldsymbol{\epsilon}_2\,, \qquad (50)$$

where $\boldsymbol{\epsilon}_1$ and $\boldsymbol{\epsilon}_2$ are small random vectors, with norms small compared to the size of the Gaussian being split.
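The split step of equations (49) and (50) can be sketched as follows (our helper name; the `scale` parameter controlling the perturbation size is an assumption, since the text only requires the perturbations to be small):

```python
import numpy as np

def split_gaussian(alpha, m, V, rng=None, scale=0.1):
    """Split one component into two (equations 49 and 50): equal amplitudes,
    isotropic covariances preserving det(V), and randomly perturbed means."""
    rng = np.random.default_rng() if rng is None else rng
    d = len(m)
    alpha_new = 0.5 * alpha
    V_new = np.linalg.det(V) ** (1.0 / d) * np.eye(d)  # same volume as V
    eps1 = scale * rng.standard_normal(d)
    eps2 = scale * rng.standard_normal(d)
    return (alpha_new, m + eps1, V_new), (alpha_new, m + eps2, V_new)
```

The determinant-preserving choice keeps the split components covering roughly the same region of space as the original Gaussian while erasing its orientation, which the partial EM step then re-learns.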

After this split-and-merge initialization, the parameters of the three affected Gaussians need to be re-optimized in a model in which the parameters of the unaffected Gaussians are held fixed. This can be done by using the M-step updates of equation (45) for the parameters of the three affected Gaussians while keeping the parameters of the other Gaussians fixed, including the amplitudes, i.e., we need to use the update equation (40) for the amplitudes. This ensures that the sum of the amplitudes of the three affected Gaussians remains fixed. This procedure is called the partial EM procedure. After convergence, this is followed by the full EM algorithm on the resulting model parameters. Finally, the resulting parameters are accepted if the total log likelihood of this model is greater than the log likelihood before the split-and-merge step. If the likelihood does not increase, the same split-and-merge procedure is performed on the next triplet of split-and-merge candidates.

The question that remains to be answered is how to choose the two Gaussians that should be merged and the Gaussian that should be split. In general there are $K(K-1)(K-2)/2$ possible triplets like this, which quickly becomes a large number as the number of Gaussians grows. In order to rank these triplets one can define a *merge criterion* and a *split criterion*.

The merge criterion is constructed based on the observation that if many data points have equal posterior probabilities for two Gaussians, these Gaussians are good candidates to be merged. Therefore one can define the merge criterion:

$$J_{\mathrm{merge}}(j,k) = \mathbf{q}_j^\top\,\mathbf{q}_k\,, \qquad (51)$$

where $\mathbf{q}_j$ is the $N$-dimensional vector of posterior probabilities $q_{ij}$ for the $j$th Gaussian. Pairs of Gaussians with larger $J_{\mathrm{merge}}$ are good candidates for a merger.
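The merge criterion of equation (51) is just an inner product of responsibility vectors; ranking all pairs is then a short exercise (our helper names):

```python
import numpy as np

def merge_criterion(q, j, k):
    """J_merge(j, k) = q_j . q_k (equation 51), where q[:, j] is the vector of
    posterior probabilities of all N data points for Gaussian j."""
    return q[:, j] @ q[:, k]

def best_merge_pair(q):
    """Return the pair of components with the largest merge criterion."""
    K = q.shape[1]
    pairs = [(j, k) for j in range(K) for k in range(j + 1, K)]
    return max(pairs, key=lambda jk: merge_criterion(q, *jk))
```

Components whose responsibilities overlap heavily score highest, exactly the pairs the text identifies as merge candidates.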

We can define a split criterion based on the Kullback-Leibler divergence between the local data density around the $j$th Gaussian and the density of that Gaussian, which can be written in the case of complete data as