1 Introduction
Nonparametric Bayesian modeling techniques, especially Dirichlet process mixture (DPM) models, have become very popular in statistics over the last few years, as they allow for performing nonparametric density estimation [1, 2, 3]. This theory is based on the observation that an ordinary finite mixture model (clustering model) with an infinite number of component distributions tends in the limit to a Dirichlet process (DP) prior [2, 4]. In effect, the nonparametric Bayesian inference scheme induced by a DPM model yields a posterior distribution on the proper number of model component densities (inferred clusters) [5], rather than selecting a fixed number of mixture components. Hence, the obtained nonparametric Bayesian formulation eliminates the need to perform inference (or to make arbitrary choices) on the number of mixture components (clusters) necessary to represent the modeled data.
An interesting alternative to the Dirichlet process prior for nonparametric Bayesian modeling is the Pitman-Yor process (PYP) prior [6]. Pitman-Yor processes produce power-law distributions that allow for better modeling of populations comprising a high number of clusters with low popularity and a low number of clusters with high popularity [7]. Indeed, the Pitman-Yor process prior can be viewed as a generalization of the Dirichlet process prior, and reduces to it for a specific selection of its parameter values. In [8], a Gaussian process-based coupled PYP method for joint segmentation of multiple images is proposed.
A different perspective on the problem of nonparametric data modeling was introduced in [9], where the authors proposed the kernel stick-breaking process (KSBP). The KSBP imposes the assumption that clustering is more probable if two feature vectors are close in a prescribed (general) space, which may be associated explicitly with the spatial or temporal position of the modeled data. This way, the KSBP is capable of exploiting available prior information regarding the spatial or temporal relations and dependencies between the modeled data.
Inspired by these advances, and motivated by the interesting properties of the PYP, in this paper we propose a different approach toward predictor-dependent random probability measures for nonparametric Bayesian clustering. We first introduce an infinite sequence of random spatial or temporal locations. Then, based on the stick-breaking construction of the Pitman-Yor process, we define a predictor-dependent random probability measure by considering that the discount hyperparameters of the Beta-distributed random weights (stick variables) of the process are not uniform among the weights, but are instead controlled by a kernel function expressing the proximity between the location assigned to each weight and the given predictors. We dub the obtained random probability measure the kernel Pitman-Yor process (KPYP), and employ it as a prior for nonparametric clustering of data with general spatial or temporal interdependencies. We empirically study the performance of the KPYP prior in unsupervised image segmentation and text-dependent speaker identification, and compare it to the kernel stick-breaking process and the Dirichlet process prior.
The remainder of this paper is organized as follows: In Section 2, we provide a brief presentation of the Pitman-Yor process, as well as of the kernel stick-breaking process and its desirable properties in clustering data with spatial or temporal dependencies. In Section 3, the proposed nonparametric prior for clustering data with temporal or spatial dependencies is introduced, its relation to existing methods is discussed, and an efficient variational Bayesian algorithm for model inference is derived.
2 Theoretical Background
2.1 The PitmanYor Process
Dirichlet process (DP) models were first introduced by Ferguson [11]. A DP is characterized by a base distribution $G_0$ and a positive scalar $\alpha$, usually referred to as the innovation parameter, and is denoted as $\mathrm{DP}(\alpha, G_0)$. Essentially, a DP is a distribution placed over a distribution. Let us suppose we randomly draw a sample distribution $G$ from a DP, and, subsequently, we independently draw $N$ random variables $\{\theta_n\}_{n=1}^{N}$ from $G$:

$$G \,|\, \alpha, G_0 \sim \mathrm{DP}(\alpha, G_0) \quad (1)$$

$$\theta_n \,|\, G \sim G, \qquad n = 1, \dots, N \quad (2)$$
Integrating out $G$, the joint distribution of the variables $\{\theta_n\}_{n=1}^{N}$ can be shown to exhibit a clustering effect. Specifically, given the first $n-1$ samples of $G$, $\{\theta_j\}_{j=1}^{n-1}$, it can be shown that a new sample $\theta_n$ is either (a) drawn from the base distribution $G_0$ with probability $\frac{\alpha}{\alpha + n - 1}$, or (b) is selected from the existing draws, according to a multinomial allocation, with probabilities proportional to the number of the previous draws with the same allocation [12]. Let $\{\theta_k^*\}_{k=1}^{K}$ be the set of distinct values taken by the variables $\{\theta_j\}_{j=1}^{n-1}$. Denoting as $n_k$ the number of values in $\{\theta_j\}_{j=1}^{n-1}$ that equal $\theta_k^*$, the distribution of $\theta_n$ given $\{\theta_j\}_{j=1}^{n-1}$ can be shown to be of the form [12]

$$p(\theta_n \,|\, \theta_1, \dots, \theta_{n-1}) = \frac{\alpha}{\alpha + n - 1}\, G_0 + \sum_{k=1}^{K} \frac{n_k}{\alpha + n - 1}\, \delta_{\theta_k^*} \quad (3)$$

where $\delta_{\theta_k^*}$ denotes the distribution concentrated at the single point $\theta_k^*$.
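The clustering effect of Eq. (3) is easy to verify by simulation. The following minimal Python sketch (function and variable names are ours, for illustration only) draws cluster assignments from the Pólya-urn scheme:

```python
import random

def crp_sample(n, alpha, seed=0):
    """Simulate the Polya-urn (Chinese restaurant) scheme of Eq. (3):
    the i-th draw starts a new cluster with probability alpha/(alpha + i),
    or joins existing cluster k with probability n_k/(alpha + i)."""
    rng = random.Random(seed)
    counts = []                       # n_k: number of draws per distinct value
    labels = []
    for i in range(n):                # i = number of previous draws
        r = rng.uniform(0, alpha + i)
        if r < alpha:
            counts.append(1)          # new draw from the base measure G0
            labels.append(len(counts) - 1)
        else:
            r -= alpha
            k = len(counts) - 1       # fallback guard against float round-off
            for j, c in enumerate(counts):
                if r < c:
                    k = j
                    break
                r -= c
            counts[k] += 1
            labels.append(k)
    return labels, counts

labels, counts = crp_sample(1000, alpha=2.0)
print("distinct clusters:", len(counts))
```

With $\alpha = 2$ and 1000 draws, only a handful of clusters typically emerge, and a few of them absorb most of the data points.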
The Pitman-Yor process [6] functions similarly to the Dirichlet process. Let us suppose we randomly draw a sample distribution $G$ from a PYP, and, subsequently, we independently draw $N$ random variables $\{\theta_n\}_{n=1}^{N}$ from $G$:

$$G \,|\, \delta, \alpha, G_0 \sim \mathrm{PY}(\delta, \alpha, G_0) \quad (4)$$

with

$$\theta_n \,|\, G \sim G, \qquad n = 1, \dots, N \quad (5)$$

where $\delta \in [0, 1)$ is the discount parameter of the Pitman-Yor process, $\alpha > -\delta$ is its innovation parameter, and $G_0$ the base distribution. Integrating out $G$, similar to Eq. (3), we now yield

$$p(\theta_n \,|\, \theta_1, \dots, \theta_{n-1}) = \frac{\alpha + \delta K}{\alpha + n - 1}\, G_0 + \sum_{k=1}^{K} \frac{n_k - \delta}{\alpha + n - 1}\, \delta_{\theta_k^*} \quad (6)$$
As we observe, the PYP yields an expression for $p(\theta_n \,|\, \theta_1, \dots, \theta_{n-1})$ quite similar to that of the DP, also possessing the rich-gets-richer clustering property: the more samples that have been assigned to a draw from $G_0$, the more likely subsequent samples will be assigned to the same draw. Further, the more we draw from $G_0$, the more likely a new sample will again be assigned to a new draw from $G_0$. These two effects together produce a power-law distribution where many unique values are observed, most of them rarely [6]. In particular, for $\delta > 0$, the number of unique values scales as $\mathcal{O}(\alpha n^{\delta})$, where $n$ is the total number of draws. Note also that, for $\delta = 0$, the Pitman-Yor process reduces to the Dirichlet process, in which case the number of unique values grows more slowly, at $\mathcal{O}(\alpha \log n)$ [13].
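The contrast between the logarithmic and the power-law growth regimes can be illustrated with a short simulation of the generalized (two-parameter) urn scheme of Eq. (6); the code below is a sketch under our own naming conventions:

```python
import random

def pyp_crp(n, alpha, d, seed=0):
    """Two-parameter (Pitman-Yor) generalization of the urn scheme, Eq. (6):
    a new cluster opens with probability (alpha + d*K)/(alpha + i), and an
    existing cluster k is joined with probability (n_k - d)/(alpha + i)."""
    rng = random.Random(seed)
    counts = []                          # n_k for each of the K open clusters
    for i in range(n):
        K = len(counts)
        r = rng.uniform(0, alpha + i)
        if r < alpha + d * K:
            counts.append(1)             # new draw from the base measure
        else:
            r -= alpha + d * K
            k = K - 1                    # fallback guard against round-off
            for j in range(K):
                if r < counts[j] - d:
                    k = j
                    break
                r -= counts[j] - d
            counts[k] += 1
    return counts

n = 5000
print("DP  (d = 0.0):", len(pyp_crp(n, alpha=1.0, d=0.0)), "clusters")
print("PYP (d = 0.5):", len(pyp_crp(n, alpha=1.0, d=0.5)), "clusters")
```

For the same innovation parameter, the run with a positive discount opens far more clusters, most of them small, in line with the power-law behavior discussed above.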
A characterization of the (unconditional) distribution of the random variable $G$ drawn from a PYP, $\mathrm{PY}(\delta, \alpha, G_0)$, is provided by the stick-breaking construction of Sethuraman [14]. Consider two infinite collections of independent random variables $\{v_k\}_{k=1}^{\infty}$, $\{\theta_k^*\}_{k=1}^{\infty}$, where the $v_k$ are drawn from a Beta distribution, and the $\theta_k^*$ are independently drawn from the base distribution $G_0$. The stick-breaking representation of $G$ is then given by [13]

$$G = \sum_{k=1}^{\infty} \pi_k \, \delta_{\theta_k^*} \quad (7)$$

where

$$\pi_k = v_k \prod_{j=1}^{k-1} \left(1 - v_j\right) \quad (8)$$

$$v_k \sim \mathrm{Beta}(1 - \delta, \alpha + k\delta) \quad (9)$$

$$\theta_k^* \sim G_0 \quad (10)$$

and

$$\sum_{k=1}^{\infty} \pi_k = 1 \quad \text{(almost surely)} \quad (11)$$
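As an illustration, a truncated version of this stick-breaking construction can be sampled in a few lines of Python (a sketch; the truncation level and all names are our own choices):

```python
import random

def pyp_stick_weights(alpha, d, truncation, seed=0):
    """Truncated stick-breaking weights of a PY(d, alpha, G0) draw:
    v_k ~ Beta(1 - d, alpha + k*d),  pi_k = v_k * prod_{j<k} (1 - v_j)."""
    rng = random.Random(seed)
    remaining, weights = 1.0, []
    for k in range(1, truncation + 1):
        v = rng.betavariate(1.0 - d, alpha + k * d)
        weights.append(remaining * v)     # pi_k: the piece broken off the stick
        remaining *= 1.0 - v              # stick length left for later clusters
    return weights

w = pyp_stick_weights(alpha=1.0, d=0.25, truncation=200)
print("mass captured by 200 sticks: %.6f" % sum(w))
```

The residual mass $\prod_k (1 - v_k)$ decays to zero as the truncation level grows, so a moderate truncation captures almost all of the probability mass.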
2.2 The Kernel Stick-Breaking Process
An alternative to the above approaches, which allows taking into account additional prior information regarding spatial or temporal dependencies in the modeled datasets, is the kernel stick-breaking process introduced in [9]. The basic notion in the formulation of the KSBP consists in the introduction of a predictor-dependent prior, which promotes clustering of adjacent data points in a prescribed (general) space.
Let us consider that the observed data points $\{y_n\}_{n=1}^{N}$ are associated with the positions $\{x_n\}_{n=1}^{N}$ where the measurements were taken, arranged on a lattice in a prescribed space $\mathcal{X}$. For example, in cases of sequential data modeling, the observed data points are naturally associated with a one-dimensional lattice that depicts their temporal succession, i.e., the time points these measurements were taken. In cases of computer vision applications, we might be dealing with observations measured at different locations in a two-dimensional or three-dimensional space $\mathcal{X}$. To take this prior information into account, the KSBP postulates that the random process in (1) comprises a function of the predictors $x_n$ related to the observable data points $y_n$, expressing their location in the prescribed space $\mathcal{X}$. Specifically, it is assumed that
$$G_x = \sum_{k=1}^{\infty} \pi_k(x) \, \delta_{\theta_k^*} \quad (12)$$

where

$$\pi_k(x) = U_k(x) \prod_{j=1}^{k-1} \left(1 - U_j(x)\right) \quad (13)$$

$$U_k(x) = V_k \, k(x, \Gamma_k; \psi) \quad (14)$$

$$V_k \sim \mathrm{Beta}(a_k, b_k) \quad (15)$$

$$\theta_k^* \sim G_0 \quad (16)$$

and $k(x, \Gamma_k; \psi)$ is a kernel function centered at $\Gamma_k$ with hyperparameter $\psi$.
By selecting an appropriate form of the kernel function $k(x, \Gamma_k; \psi)$, the KSBP allows for obtaining prior probabilities $\pi_k(x)$ for the derived clusters that depend on the values of the predictors (spatial or temporal locations) $x$. Indeed, the closer the location $x$ of an observation is to the location $\Gamma_k$ assigned to the $k$th cluster, the higher the prior probability $\pi_k(x)$ becomes. Thus, the KSBP prior promotes by construction clustering of (spatially or temporally) adjacent data points. For example, a typical selection for the kernel is the radial basis function (RBF) kernel

$$k(x, \Gamma_k; \psi) = \exp\left(-\psi \, \|x - \Gamma_k\|^2\right) \quad (17)$$
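To illustrate how the KSBP weights depend on the predictors, the sketch below evaluates the location-dependent weights of Eq. (13) with an RBF kernel, under the assumed parameterization $U_k(x) = V_k\, k(x, \Gamma_k)$ with $V_k \sim \mathrm{Beta}(1, 1)$; all names and numerical choices are illustrative:

```python
import math
import random

def rbf(x, gamma, zeta=1.0):
    """RBF kernel: k(x, gamma) = exp(-||x - gamma||^2 / zeta)."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, gamma)) / zeta)

def ksbp_weights(x, centers, sticks):
    """Location-dependent KSBP weights: the k-th stick U_k(x) = V_k * k(x, Gamma_k)
    is appreciable only when x lies near the location Gamma_k assigned to stick k."""
    weights, remaining = [], 1.0
    for v, gamma in zip(sticks, centers):
        u = v * rbf(x, gamma)
        weights.append(remaining * u)
        remaining *= 1.0 - u
    return weights

rng = random.Random(0)
centers = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
sticks = [rng.betavariate(1.0, 1.0) for _ in centers]   # V_k ~ Beta(1, 1)
w_near = ksbp_weights((0.1, 0.0), centers, sticks)      # close to the 1st center
w_far = ksbp_weights((9.9, 0.0), centers, sticks)       # close to the 3rd center
print(w_near, w_far)
```

As expected, an observation near the first stick location receives most of its prior mass from the first cluster, while an observation near the third location favors the third cluster.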
3 Proposed Approach
3.1 Model Formulation
We aim to obtain a clustering algorithm which takes into account the prior information regarding the (temporal or spatial) adjacencies of the observed data in the location space $\mathcal{X}$, promoting clustering of data adjacent in the space $\mathcal{X}$, and discouraging clustering of data points relatively near in the feature space but far apart in the location space $\mathcal{X}$. For this purpose, we seek to provide a location-dependent nonparametric prior for clustering the observed data.
Motivated by the definition and the properties of the Pitman-Yor process discussed in the previous section, to effect these goals, in this work we introduce a random probability measure under which, given the first $n-1$ drawn samples, a new sample $\theta_n$ associated with a measurement location $x_n$ is distributed according to

$$p(\theta_n \,|\, \theta_1, \dots, \theta_{n-1}, x_n) = \frac{\alpha + \sum_{k=1}^{K} \left[1 - k(x_n, \gamma_k)\right]}{\alpha + n - 1}\, G_0 + \sum_{k=1}^{K} \frac{n_k - \left[1 - k(x_n, \gamma_k)\right]}{\alpha + n - 1}\, \delta_{\theta_k^*} \quad (18)$$

where $n_k$ is the number of values in $\{\theta_j\}_{j=1}^{n-1}$ that equal $\theta_k^*$, $\{\theta_k^*\}_{k=1}^{K}$ is the set of distinct values taken by the variables $\{\theta_j\}_{j=1}^{n-1}$, $G_0$ is the employed base measure, $\gamma_k$ is the location assigned to the $k$th cluster, and $k(x, \gamma)$ is a bounded kernel function taking values in the interval $(0, 1]$, such that

$$k(x, \gamma) = 1 \quad \text{if} \quad d(x, \gamma) = 0 \quad (19)$$

$$k(x, \gamma) \rightarrow 0 \quad \text{as} \quad d(x, \gamma) \rightarrow \infty \quad (20)$$

$\alpha$ is the innovation parameter of the process, conditioned to satisfy $\alpha > 0$, and $d(\cdot, \cdot)$ is the distance metric used by the employed kernel function. We dub this random probability measure the kernel Pitman-Yor process, and we denote
$$G \,|\, \{\gamma_k\}_{k=1}^{\infty}, \alpha, G_0 \sim \mathrm{KPYP}\left(\{\gamma_k\}_{k=1}^{\infty}; \alpha, G_0\right) \quad (21)$$

with

$$\theta_n \,|\, G, x_n \sim G(x_n), \qquad n = 1, \dots, N \quad (22)$$
The stick-breaking construction of the KPYP follows directly from the above definition (18) and the relevant discussion of Section 2. Considering a KPYP with cluster locations set $\{\gamma_k\}_{k=1}^{\infty}$, kernel function $k(x, \gamma)$ satisfying the constraints (19) and (20), and innovation parameter $\alpha$, we have

$$G(x) = \sum_{k=1}^{\infty} \pi_k(x) \, \delta_{\theta_k^*} \quad (23)$$

where

$$v_k(x) \sim \mathrm{Beta}\left(k(x, \gamma_k), \; \alpha + k \left[1 - k(x, \gamma_k)\right]\right) \quad (24)$$

and

$$\pi_k(x) = v_k(x) \prod_{j=1}^{k-1} \left(1 - v_j(x)\right) \quad (25)$$
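The location-dependent discounting of the KPYP can be sketched numerically as follows. We assume here that each discount enters the Beta prior of its stick variable in direct analogy to Eq. (9), i.e., $v_k(x) \sim \mathrm{Beta}(1 - \delta_k(x), \alpha + k\,\delta_k(x))$ with $\delta_k(x) = 1 - k(x, \gamma_k)$; the code is an illustrative sketch with invented names, not the paper's implementation:

```python
import math
import random

def rbf(x, gamma, zeta=1.0):
    """RBF kernel: k(x, gamma) = exp(-||x - gamma||^2 / zeta)."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, gamma)) / zeta)

def kpyp_weights(x, centers, alpha=1.0, seed=0):
    """Location-dependent KPYP sticks: cluster k receives a per-location
    discount delta_k(x) = 1 - k(x, gamma_k), so clusters whose centers lie
    far from x are heavily discounted (assumed Beta parameterization)."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for k, gamma in enumerate(centers, start=1):
        delta = min(1.0 - rbf(x, gamma), 1.0 - 1e-6)  # guard: Beta params > 0
        v = rng.betavariate(1.0 - delta, alpha + k * delta)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

centers = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
w = kpyp_weights((0.05, 0.0), centers)
print(w)
```

For an observation essentially co-located with the first cluster center, the first discount is close to zero (DP-like behavior), while the distant clusters have discounts close to one and are assigned negligible prior mass.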
Proposition 1. The stochastic process $G(x)$ defined in (23)-(25) is a valid random probability measure.
Proof. We need to show that

$$\sum_{k=1}^{\infty} \pi_k(x) = 1 \quad \text{(almost surely)} \quad (26)$$

For this purpose, we follow an approach similar to [9]. From (25), we have

$$1 - \sum_{k=1}^{K} \pi_k(x) = \prod_{k=1}^{K} \left(1 - v_k(x)\right) \quad (27)$$

Then, in the limit as $K \rightarrow \infty$, and taking logs on both sides of (27), we have

$$\log\left(1 - \sum_{k=1}^{\infty} \pi_k(x)\right) = \sum_{k=1}^{\infty} \log\left(1 - v_k(x)\right) \quad (28)$$

Based on Kolmogorov's three-series theorem, the summation on the right is over independent random variables and is equal to $-\infty$ if and only if $\sum_{k=1}^{\infty} \mathbb{E}\left[\log\left(1 - v_k(x)\right)\right] = -\infty$. However, $v_k(x)$ follows a Beta distribution, which means $v_k(x) \in (0, 1)$, thus $\log\left(1 - v_k(x)\right) < 0$, and hence its expectation is negative; thus, the condition is satisfied, and (26) holds true.
3.2 Relation to the KSBP
Indeed, the proposed KPYP shares some common ideas with the KSBP of [9]. The KSBP considers that

$$G_x = \sum_{k=1}^{\infty} \pi_k(x) \, \delta_{\theta_k^*} \quad (29)$$

where

$$\pi_k(x) = U_k(x) \prod_{j=1}^{k-1} \left(1 - U_j(x)\right) \quad (30)$$

$$U_k(x) = V_k \, k(x, \Gamma_k; \psi) \quad (31)$$

$$V_k \sim \mathrm{Beta}(a_k, b_k) \quad (32)$$
From this definition, we observe that there is a key difference between the KPYP and the KSBP: the KSBP multiplies stick variables sharing the same Beta prior with a bounded kernel function centered at a location unique to each stick, to obtain a predictor (location)-dependent random probability measure. Instead, the KPYP considers stick variables with different Beta priors, with the prior of each stick variable employing a different "discount hyperparameter," defined through a bounded kernel centered at a location unique to each stick. This way, the KPYP controls the assignment of observations to clusters by discounting clusters whose centers are too far from the clustered data points in the location space $\mathcal{X}$.
It is interesting to compute the mean and variance of the stick variables for these two stochastic processes, for a given observation location $x$ and cluster center $\gamma_k$. In the case of the KPYP, we have

$$\mathbb{E}\left[v_k(x)\right] = \frac{k(x, \gamma_k)}{s_k(x)} \quad (33)$$

$$\mathrm{Var}\left[v_k(x)\right] = \frac{k(x, \gamma_k)\left(\alpha + k\left[1 - k(x, \gamma_k)\right]\right)}{s_k(x)^2 \left(s_k(x) + 1\right)} \quad (34)$$

where

$$s_k(x) = k(x, \gamma_k) + \alpha + k\left[1 - k(x, \gamma_k)\right] \quad (35)$$

On the contrary, for the KSBP we have

$$\mathbb{E}\left[U_k(x)\right] = k(x, \Gamma_k; \psi)\, \frac{a_k}{a_k + b_k} \quad (36)$$

$$\mathrm{Var}\left[U_k(x)\right] = k(x, \Gamma_k; \psi)^2\, \frac{a_k b_k}{(a_k + b_k)^2 (a_k + b_k + 1)} \quad (37)$$
From (33) and (36), we observe that, for a given observation location $x$ and cluster center $\gamma_k$, the same increase in the value of the kernel function induces a much greater increase in the expected value of the stick variable employed by the KPYP than in the expectation of the stick variable employed by the KSBP. Hence, the predictor (location)-dependent prior probabilities of cluster assignment of the KPYP appear to vary more steeply with the employed kernel function values than those of the KSBP.
3.3 Variational Bayesian Inference
Inference for nonparametric models can be conducted under a Bayesian setting, typically by means of variational Bayes (e.g., [15]) or Monte Carlo techniques (e.g., [16]). Here, we prefer a variational Bayesian approach, due to its lower computational cost. For this purpose, we additionally impose a Gamma prior over the innovation parameter $\alpha$, with
(38) 
Let us consider a set of $N$ observations $\{y_n\}_{n=1}^{N}$ with corresponding locations $\{x_n\}_{n=1}^{N}$. We postulate for our observed data a likelihood function of the form
(39) 
where the hidden (cluster assignment) variables $\{z_n\}_{n=1}^{N}$ are defined such that $z_n = k$ if the $n$th data point is considered to be derived from the $k$th cluster. We impose a multinomial prior over the hidden variables $z_n$, with
(40) 
where the $\pi_k(x)$ are given by (25), with the prior over the $v_k(x)$ given by (24). We also impose a suitable conjugate exponential prior over the likelihood parameters.
Our variational Bayesian inference formalism consists in the derivation of a family of variational posterior distributions which approximate the true posterior distribution over the hidden variables $z_n$, the stick variables $v_k(x)$, the likelihood parameters, and the innovation parameter $\alpha$. Obviously, under this infinite-dimensional setting, Bayesian inference is not tractable. For this reason, we fix a truncation level $K$, and we let the variational posterior over the stick variables have the property $q\left(v_K(x) = 1\right) = 1$, i.e., we set the $\pi_k(x)$ equal to zero for $k > K$.
Let us consider the set of parameters of our truncated model over which a prior distribution has been imposed, together with the set of model hyperparameters, comprising the cluster locations $\gamma_k$ and the hyperparameters of the priors imposed over the innovation parameter $\alpha$ and the likelihood parameters of the model. Variational Bayesian inference consists in the derivation of an approximate posterior over the former set by maximization (in an iterative fashion) of the variational free energy
(41) 
Having considered a conjugate exponential prior configuration, the variational posterior is expected to take the same functional form as the prior [17]. The variational free energy of our model reads
(42) 
3.4 Variational Posteriors
Let us denote by $\langle \cdot \rangle$ the posterior expectation of a quantity. We have
(43) 
where
(44) 
(45)  
and
(46) 
where
(47) 
(48) 
denotes the Digamma function, and
(49) 
Further, the cluster assignment variables yield
(50) 
where
(51) 
(52) 
and
(53) 
(54) 
Regarding the likelihood parameters, we obtain
(55) 
Finally, regarding the model hyperparameters, we obtain the hyperparameters of the employed kernel functions by maximization of the variational free energy lower bound, and we heuristically select the values of the rest.
3.5 Learning the cluster locations
Regarding the determination of the locations $\gamma_k$ assigned to the obtained clusters, these can be obtained either by random selection or by maximization of the variational free energy over them. The latter procedure can be conducted by means of any appropriate iterative maximization algorithm; here, we employ the popular L-BFGS algorithm [18] for this purpose. Both random selection and estimation by means of variational free energy optimization, using the L-BFGS algorithm, are evaluated in the experimental section of our paper.
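As a toy illustration of this optimization step, the sketch below uses the L-BFGS-B implementation of SciPy to optimize two one-dimensional cluster locations under a simple quadratic surrogate of the free-energy term that involves them; the surrogate objective, the responsibilities, and all names are our own assumptions, not the paper's actual objective:

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate of the free-energy term that depends on the cluster
# locations gamma_k: soft responsibilities r_nk pull each gamma_k toward
# the observation locations x_n softly assigned to cluster k.
x = np.array([[0.0], [0.2], [4.8], [5.0]])                       # locations x_n
r = np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]])  # resp. r_nk

def surrogate_cost(gamma_flat):
    gamma = gamma_flat.reshape(2, 1)                          # 2 cluster locations
    d2 = ((x[:, None, :] - gamma[None, :, :]) ** 2).sum(-1)   # ||x_n - gamma_k||^2
    return (r * d2).sum()                                     # weighted cost

res = minimize(surrogate_cost, x0=np.zeros(2), method="L-BFGS-B")
print(res.x)   # each location converges to the weighted mean of its cluster
```

Under this quadratic surrogate, the optimum is available in closed form (each $\gamma_k$ is the responsibility-weighted mean of the $x_n$), which makes it a convenient sanity check for the optimizer setup.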
Acknowledgment
The authors would like to thank Dr. David B. Dunson for the enlightening discussion regarding the correct way to implement the MCMC sampler for the KSBP.
References
[1] S. Walker, P. Damien, P. Laud, and A. Smith, "Bayesian nonparametric inference for random distributions and related functions," J. Roy. Statist. Soc. B, vol. 61, no. 3, pp. 485–527, 1999.
[2] R. Neal, "Markov chain sampling methods for Dirichlet process mixture models," J. Comput. Graph. Statist., vol. 9, pp. 249–265, 2000.
[3] P. Muller and F. Quintana, "Nonparametric Bayesian data analysis," Statist. Sci., vol. 19, no. 1, pp. 95–110, 2004.
[4] C. Antoniak, "Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems," The Annals of Statistics, vol. 2, no. 6, pp. 1152–1174, 1974.
[5] D. Blei and M. Jordan, "Variational methods for the Dirichlet process," in 21st Int. Conf. Machine Learning, New York, NY, USA, July 2004, pp. 12–19.
[6] J. Pitman and M. Yor, "The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator," Annals of Probability, vol. 25, 1997, pp. 855–900.
[7] S. Goldwater, T. Griffiths, and M. Johnson, "Interpolating between types and tokens by estimating power-law generators," in Advances in Neural Information Processing Systems, vol. 18, 2006.
[8] E. B. Sudderth and M. I. Jordan, "Shared segmentation of natural scenes using dependent Pitman-Yor processes," in Advances in Neural Information Processing Systems, 2008, pp. 1585–1592.
[9] D. B. Dunson and J.-H. Park, "Kernel stick-breaking processes," Biometrika, vol. 95, pp. 307–323, 2007.
[10] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei, "Sharing clusters among related groups: Hierarchical Dirichlet processes," in Advances in Neural Information Processing Systems (NIPS), 2005, pp. 1385–1392.
[11] T. Ferguson, "A Bayesian analysis of some nonparametric problems," The Annals of Statistics, vol. 1, pp. 209–230, 1973.
[12] D. Blackwell and J. MacQueen, "Ferguson distributions via Pólya urn schemes," The Annals of Statistics, vol. 1, no. 2, pp. 353–355, 1973.
[13] Y. W. Teh, "A hierarchical Bayesian language model based on Pitman-Yor processes," in Proc. Association for Computational Linguistics, 2006, pp. 985–992.
[14] J. Sethuraman, "A constructive definition of the Dirichlet prior," Statistica Sinica, vol. 2, pp. 639–650, 1994.
[15] D. M. Blei and M. I. Jordan, "Variational inference for Dirichlet process mixtures," Bayesian Analysis, vol. 1, no. 1, pp. 121–144, 2006.
[16] Y. Qi, J. W. Paisley, and L. Carin, "Music analysis using hidden Markov mixture models," IEEE Transactions on Signal Processing, vol. 55, no. 11, pp. 5209–5224, 2007.
[17] C. M. Bishop, Pattern Recognition and Machine Learning. New York: Springer, 2006.
[18] D. Liu and J. Nocedal, "On the limited memory method for large scale optimization," Mathematical Programming B, vol. 45, no. 3, pp. 503–528, 1989.
[19] R. Caruana, "Multitask learning," Machine Learning, vol. 28, pp. 41–75, 1997.
[20] J. Baxter, "Learning internal representations," in COLT: Proceedings of the Workshop on Computational Learning Theory, 1995.
[21] T. Evgeniou, C. Micchelli, and M. Pontil, "Learning multiple tasks with kernel methods," Journal of Machine Learning Research, vol. 6, pp. 615–637, 2005.
[22] N. Lawrence and J. Platt, "Learning to learn with the informative vector machine," in Proceedings of the 21st International Conference on Machine Learning, 2004.
[23] K. Yu, A. Schwaighofer, V. Tresp, W.-Y. Ma, and H. Zhang, "Collaborative ensemble learning: Combining collaborative and content-based information filtering via hierarchical Bayes," in Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence, 2003.
[24] K. Yu, A. Schwaighofer, and V. Tresp, "Learning Gaussian processes from multiple tasks," in Proceedings of the 22nd International Conference on Machine Learning, 2005.
[25] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proc. 8th Int'l Conf. Computer Vision, Vancouver, Canada, July 2001, pp. 416–423.
[26] Q. An, C. Wang, I. Shterev, E. Wang, L. Carin, and D. B. Dunson, "Hierarchical kernel stick-breaking process for multi-task image analysis," in Proceedings of the 25th International Conference on Machine Learning (ICML '08), 2008, pp. 17–24.
[27] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Diego, CA, USA, June 2005, pp. 34–41.
[28] ——, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
[29] G. Mori, "Guiding model search using segmentation," in Proc. 10th IEEE Int. Conf. on Computer Vision (ICCV), 2005.
[30] M. Varma and A. Zisserman, "Classifying images of materials: Achieving viewpoint and illumination independence," in Proc. 7th IEEE European Conf. on Computer Vision (ECCV), 2002.
[31] M. Kudo, J. Toyama, and M. Shimbo, "Multidimensional curve classification using passing-through regions," Pattern Recognition Letters, vol. 20, no. 11-13, pp. 1103–1111, 1999.
[32] A. Asuncion and D. Newman, "UCI machine learning repository," 2007. [Online]. Available: http://www.ics.uci.edu/~mlearn/MLRepository.html