We present conditional random fields, a framework for building probabilistic models to segment and label
sequence data. Conditional random fields offer several advantages over hidden Markov models (HMMs) and stochastic
grammars for such tasks, including the ability to relax strong independence assumptions made in those
models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models
(MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased
towards states with few successor states. We present iterative parameter estimation algorithms for conditional
random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and
natural-language data.
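As a brief sketch of the model the abstract describes (assuming the standard linear-chain form; the notation here is illustrative, not taken from this abstract), a conditional random field defines the conditional probability of a label sequence $y$ given an observation sequence $x$ as
\[
p(y \mid x) \;=\; \frac{1}{Z(x)} \exp\!\left( \sum_{t} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, x, t) \right),
\]
where the $f_k$ are feature functions, the $\lambda_k$ are weights fit by iterative parameter estimation, and $Z(x)$ is a normalization constant depending only on the observations. Because normalization is global over whole label sequences rather than per state, as in MEMMs, the bias towards states with few successors does not arise.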