Stanford.NLP.Segmenter 3.2.0

Tokenization of raw text is a standard pre-processing step for many NLP tasks. For English, tokenization usually involves punctuation splitting and separation of some affixes like possessives. Other languages require more extensive token pre-processing, which is usually called segmentation.
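For languages such as Chinese and Arabic, this package exposes Stanford's CRF-based segmenter to .NET through IKVM. Below is a minimal C# sketch of loading a Chinese (CTB) model and segmenting one sentence, patterned after Stanford's own SegDemo example. The data directory, the model file names (ctb.gz, dict-chris6.ser.gz) and the sample sentence are assumptions based on the standard Stanford Segmenter data download, so adjust the paths to wherever you unpack the models; the sketch also assumes the IKVM build exposes the Java classes (java.util.Properties, edu.stanford.nlp.ie.crf.CRFClassifier) one-to-one.

    using java.util;
    using edu.stanford.nlp.ie.crf;

    class SegmenterDemo
    {
        static void Main()
        {
            // Assumed location of the unpacked Stanford Segmenter models and dictionaries.
            const string dataDir = @"data";

            var props = new Properties();
            props.setProperty("sighanCorporaDict", dataDir);
            props.setProperty("serDictionary", dataDir + @"\dict-chris6.ser.gz");
            props.setProperty("inputEncoding", "UTF-8");
            props.setProperty("sighanPostProcessing", "true");

            // CRFClassifier is the IKVM-translated Java class that performs the segmentation.
            var segmenter = new CRFClassifier(props);
            segmenter.loadClassifierNoExceptions(dataDir + @"\ctb.gz", props);

            // Segment a raw Chinese sentence; the result is a java.util.List of tokens.
            var tokens = segmenter.segmentString("这是一个测试。");
            System.Console.WriteLine(tokens);
        }
    }

Running this against the CTB model should print the list of segmented tokens; the same classes can load the Arabic model instead.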

There is a newer version of this package available.
See the version list below for details.
Package Manager: Install-Package Stanford.NLP.Segmenter -Version 3.2.0
.NET CLI: dotnet add package Stanford.NLP.Segmenter --version 3.2.0
Paket CLI: paket add Stanford.NLP.Segmenter --version 3.2.0

Dependencies

    • IKVM (>= 7.3.4830)

Version History

Version Downloads Last updated
3.9.1 913 3/3/2018
3.8.0 702 9/9/2017
3.7.0.1 444 4/9/2017
3.7.0 421 1/13/2017
3.6.0 1,023 12/28/2015
3.5.2.1 432 10/5/2015
3.5.2 693 5/22/2015
3.5.1 633 2/14/2015
3.5.0 579 12/3/2014
3.4.0 762 6/21/2014
3.3.1.1 378 5/2/2014
3.3.0 555 11/27/2013
3.2.0 468 9/9/2013