Stochastic Systems Group
Augmenting the HDP-HMM to Approximate Semi-Markov Processes
Emily B. Fox
Many real-world processes, as diverse as speech signals, the human genome, and financial time series, can be described via hidden Markov models (HMMs) or variants thereof. The HMM assumes that there is a set of observations of an underlying (hidden) discrete-valued Markov process representing the state evolution of the system. For many applications, the Markov structure on the state sequence only approximates the temporal state persistence of a truly semi-Markov process: Markov self-transitions imply geometrically distributed state durations, whereas a semi-Markov process allows arbitrary duration distributions.
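The geometric-duration property of Markov state dynamics can be seen in a small simulation. The sketch below (an illustration, not part of the talk's model; the function name `hmm_dwell` and the self-transition probability 0.9 are chosen for the example) samples how long an HMM stays in a state when it self-transitions with fixed probability each step:

```python
import random

random.seed(0)

def hmm_dwell(p_self):
    """Sample one state dwell time under Markov dynamics:
    stay in the current state with probability p_self each step,
    so the resulting durations are geometrically distributed."""
    d = 1
    while random.random() < p_self:
        d += 1
    return d

# Under a Markov model the mean dwell time is 1 / (1 - p_self);
# a semi-Markov model could instead draw durations from any distribution.
durations = [hmm_dwell(0.9) for _ in range(10000)]
mean_dwell = sum(durations) / len(durations)
print(round(mean_dwell, 1))  # close to 1 / (1 - 0.9) = 10
```

A semi-Markov generalization would replace the implicit geometric draw with an explicit duration distribution per state.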
Recently, the hierarchical Dirichlet process (HDP) has been applied to the problem of learning HMMs with unknown state space cardinality; the resulting model is referred to as an HDP-HMM. One of the main limitations of the original HDP-HMM formulation is that it cannot be biased towards learning models with high probabilities of self-transition. This results in a large sensitivity to noise, since the HDP-HMM can explain the data through fast state-switching among redundant states, which obscures the underlying state dynamics.
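One way to picture a self-transition bias is in a finite analogue: add extra concentration mass to the diagonal entry of each Dirichlet-distributed transition row. The sketch below is only illustrative (the parameter names `alpha` and `kappa` and the values K = 5, kappa = 20 are assumptions for the example; the actual HDP construction is nonparametric and is developed in the talk):

```python
import random

random.seed(1)

def sample_transition_row(K, alpha, kappa, self_idx):
    """Draw one row of a K-state transition matrix from a symmetric
    Dirichlet(alpha) prior, with extra mass kappa on the self-transition
    entry. A Dirichlet sample is obtained by normalizing Gamma draws."""
    params = [alpha + (kappa if j == self_idx else 0.0) for j in range(K)]
    draws = [random.gammavariate(a, 1.0) for a in params]
    total = sum(draws)
    return [g / total for g in draws]

K = 5
# Average self-transition probability without and with the diagonal bias.
plain = [sample_transition_row(K, 1.0, 0.0, i)[i] for i in range(K)]
sticky = [sample_transition_row(K, 1.0, 20.0, i)[i] for i in range(K)]
print(sum(plain) / K, sum(sticky) / K)
```

With kappa = 0 the expected self-transition probability is 1/K; with kappa added, it rises to (alpha + kappa) / (K * alpha + kappa), so sampled state sequences persist far longer in each state.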
In this talk we revisit the HDP-HMM and address how to augment the formulation to efficiently and effectively learn HDP-HMMs approximating semi-Markov processes. In addition, this augmented formulation allows us to further extend the model to capture arbitrarily complex observation likelihood densities.