1. Overview
Apache OpenNLP is an open source Natural Language Processing Java library.
It features an API for use cases like Named Entity Recognition, Sentence Detection, POS tagging and Tokenization.
In this tutorial, we’ll have a look at how to use this API for different use cases.
2. Maven Setup
First, we need to add the main dependency to our pom.xml:
<dependency>
    <groupId>org.apache.opennlp</groupId>
    <artifactId>opennlp-tools</artifactId>
    <version>1.8.4</version>
</dependency>
The latest stable version can be found over on Maven Central.
Some use cases need trained models. You can download pre-trained models here and find detailed information about these models here.
3. Sentence Detection
Let’s start with understanding what a sentence is.
Sentence detection is about identifying the start and the end of a sentence, which usually depends on the language at hand. This is also called “Sentence Boundary Disambiguation” (SBD).
In some cases, sentence detection is quite challenging because of the ambiguous nature of the period character. A period usually denotes the end of a sentence but can also appear in an email address, an abbreviation, a decimal, and a lot of other places.
As with most NLP tasks, we need a trained model as input for sentence detection, and we expect it to reside in the /resources folder.
To implement sentence detection, we load the model and pass it into an instance of SentenceDetectorME. Then, we simply pass a text into the sentDetect() method to split it at the sentence boundaries:
@Test
public void givenEnglishModel_whenDetect_thenSentencesAreDetected()
  throws Exception {
    String paragraph = "This is a statement. This is another statement."
      + "Now is an abstract word for time, "
      + "that is always flying. And my email address is [email protected].";

    // load the pre-trained sentence detection model from /resources
    InputStream is = getClass().getResourceAsStream("/models/en-sent.bin");
    SentenceModel model = new SentenceModel(is);

    SentenceDetectorME sdetector = new SentenceDetectorME(model);
    String[] sentences = sdetector.sentDetect(paragraph);
    assertThat(sentences).contains(
      "This is a statement.",
      "This is another statement.",
      "Now is an abstract word for time, that is always flying.",
      "And my email address is [email protected].");
}
Note: the suffix “ME” is used in many class names in Apache OpenNLP and represents an algorithm that is based on “Maximum Entropy”.
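Besides sentDetect(), SentenceDetectorME also offers sentPosDetect(), which returns the sentence boundaries as character offsets instead of strings. A short sketch, reusing the sdetector and paragraph from the test above:

// each Span holds the start and end character offsets of one sentence
Span[] spans = sdetector.sentPosDetect(paragraph);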
4. Tokenizing
Now that we can divide a corpus of text into sentences, we can start analyzing a sentence in more detail.
The goal of tokenization is to divide a sentence into smaller parts called tokens. Usually, these tokens are words, numbers or punctuation marks.
There are three types of tokenizers available in OpenNLP: TokenizerME, WhitespaceTokenizer, and SimpleTokenizer.
4.1. Using TokenizerME
In this case, we first need to load the model. We can download the model file from here, put it in the /resources folder and load it from there.
Next, we’ll create an instance of TokenizerME using the loaded model, and use the tokenize() method to perform tokenization on any String:
@Test
public void givenEnglishModel_whenTokenize_thenTokensAreDetected()
  throws Exception {
    InputStream inputStream = getClass()
      .getResourceAsStream("/models/en-token.bin");
    TokenizerModel model = new TokenizerModel(inputStream);

    TokenizerME tokenizer = new TokenizerME(model);
    String[] tokens = tokenizer.tokenize("Baeldung is a Spring Resource.");
    assertThat(tokens).contains(
      "Baeldung", "is", "a", "Spring", "Resource", ".");
}
As we can see, the tokenizer has identified all words and the period character as separate tokens. This tokenizer can be used with a custom trained model as well.
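For illustration, here's a minimal sketch of how such a custom model could be trained, assuming a hypothetical training file en-custom.train in OpenNLP's token-annotated format (token boundaries that aren't separated by whitespace are marked with <SPLIT>):

// hypothetical training file, one annotated sentence per line
InputStreamFactory in = new MarkableFileInputStreamFactory(
  new File("en-custom.train"));
ObjectStream<TokenSample> samples = new TokenSampleStream(
  new PlainTextByLineStream(in, StandardCharsets.UTF_8));

// train an English maxent tokenizer model with default parameters
TokenizerModel customModel = TokenizerME.train(
  samples, new TokenizerFactory("en", null, false, null),
  TrainingParameters.defaultParams());

The resulting model can then be passed to new TokenizerME(customModel) exactly like the pre-trained one.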
4.2. WhitespaceTokenizer
As the name suggests, this tokenizer simply splits the sentence into tokens using whitespace characters as delimiters:
@Test
public void givenWhitespaceTokenizer_whenTokenize_thenTokensAreDetected()
  throws Exception {
    WhitespaceTokenizer tokenizer = WhitespaceTokenizer.INSTANCE;
    String[] tokens = tokenizer.tokenize("Baeldung is a Spring Resource.");
    assertThat(tokens)
      .contains("Baeldung", "is", "a", "Spring", "Resource.");
}
We can see that the sentence has been split by whitespace only, which is why we get “Resource.” (with the period character at the end) as a single token, instead of two different tokens for the word “Resource” and the period character.
4.3. SimpleTokenizer
This tokenizer is a little more sophisticated than WhitespaceTokenizer and splits the sentence into words, numbers, and punctuation marks. It works out of the box and doesn't require any model:
@Test
public void givenSimpleTokenizer_whenTokenize_thenTokensAreDetected()
  throws Exception {
    SimpleTokenizer tokenizer = SimpleTokenizer.INSTANCE;
    String[] tokens = tokenizer
      .tokenize("Baeldung is a Spring Resource.");
    assertThat(tokens)
      .contains("Baeldung", "is", "a", "Spring", "Resource", ".");
}
5. Named Entity Recognition
Now that we have understood tokenization, let’s take a look at a first use case that is based on successful tokenization: named entity recognition (NER).
The goal of NER is to find named entities like people, locations, organizations and other named things in a given text.
OpenNLP uses pre-defined models for person names, date and time, locations, and organizations. We need to load the model using TokenNameFinderModel and pass it into an instance of NameFinderME. Then we can use the find() method to find named entities in a given text:
@Test
public void givenEnglishPersonModel_whenNER_thenPersonsAreDetected()
  throws Exception {
    SimpleTokenizer tokenizer = SimpleTokenizer.INSTANCE;
    String[] tokens = tokenizer
      .tokenize("John is 26 years old. His best friend's "
        + "name is Leonard. He has a sister named Penny.");

    // load the pre-trained person-name model from /resources
    InputStream inputStreamNameFinder = getClass()
      .getResourceAsStream("/models/en-ner-person.bin");
    TokenNameFinderModel model = new TokenNameFinderModel(
      inputStreamNameFinder);
    NameFinderME nameFinderME = new NameFinderME(model);
    List<Span> spans = Arrays.asList(nameFinderME.find(tokens));
    assertThat(spans.toString())
      .isEqualTo("[[0..1) person, [13..14) person, [20..21) person]");
}
As we can see in the assertion, the result is a list of Span objects containing the start and end indices of the tokens which compose named entities in the text.
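If we need the entity strings themselves rather than the indices, we can map the spans back onto the token array with the utility method Span.spansToStrings(). A short sketch continuing the test above:

// convert each span's token range back into the covered text
Span[] nameSpans = nameFinderME.find(tokens);
String[] names = Span.spansToStrings(nameSpans, tokens);
// names now contains "John", "Leonard" and "Penny"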
6. Part-of-Speech Tagging
Another use case that needs a list of tokens as input is part-of-speech tagging.
A part-of-speech (POS) identifies the type of a word. OpenNLP uses the following tags for the different parts-of-speech:
- NN – noun, singular or mass
- DT – determiner
- VB – verb, base form
- VBD – verb, past tense
- VBN – verb, past participle
- VBZ – verb, third person singular present
- IN – preposition or subordinating conjunction
- NNP – proper noun, singular
- TO – the word “to”
- JJ – adjective
These are the same tags as defined in the Penn Treebank. For a complete list, please refer to this list.
Similar to the NER example, we load the appropriate model and then use POSTaggerME and its method tag() on a set of tokens to tag the sentence:
@Test
public void givenPOSModel_whenPOSTagging_thenPOSAreDetected()
  throws Exception {
    SimpleTokenizer tokenizer = SimpleTokenizer.INSTANCE;
    String[] tokens = tokenizer.tokenize("John has a sister named Penny.");

    InputStream inputStreamPOSTagger = getClass()
      .getResourceAsStream("/models/en-pos-maxent.bin");
    POSModel posModel = new POSModel(inputStreamPOSTagger);
    POSTaggerME posTagger = new POSTaggerME(posModel);
    String[] tags = posTagger.tag(tokens);
    assertThat(tags).contains("NNP", "VBZ", "DT", "NN", "VBN", "NNP", ".");
}
The tag() method maps the tokens to an array of POS tags. The result in the example is:
- “John” – NNP (proper noun)
- “has” – VBZ (verb, third person singular present)
- “a” – DT (determiner)
- “sister” – NN (noun)
- “named” – VBN (verb, past participle)
- “Penny” – NNP (proper noun)
- “.” – period
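If we also want to know how confident the tagger is in these decisions, POSTaggerME can report the probabilities of the most recent tagging. A one-line sketch continuing the test above:

// probs() returns one probability per token for the last call to tag()
double[] probs = posTagger.probs();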
7. Lemmatization
Now that we have the part-of-speech information of the tokens in a sentence, we can analyze the text even further.
Lemmatization is the process of mapping a word form that can have a tense, gender, mood or other information to the base form of the word – also called its “lemma”.
A lemmatizer takes a token and its part-of-speech tag as input and returns the word's lemma. Hence, before lemmatization, the sentence should be passed through a tokenizer and a POS tagger.
Apache OpenNLP provides two types of lemmatization:
- Statistical – needs a lemmatizer model built using training data for finding the lemma of a given word
- Dictionary-based – requires a dictionary which contains all valid combinations of a word, POS tags, and the corresponding lemma
For statistical lemmatization, we need to train a model, whereas for the dictionary lemmatization we just need a dictionary file like this one.
Let’s look at a code example using a dictionary file:
@Test
public void givenEnglishDictionary_whenLemmatize_thenLemmasAreDetected()
  throws Exception {
    SimpleTokenizer tokenizer = SimpleTokenizer.INSTANCE;
    String[] tokens = tokenizer.tokenize("John has a sister named Penny.");

    // lemmatization needs POS tags, so we tag the tokens first
    InputStream inputStreamPOSTagger = getClass()
      .getResourceAsStream("/models/en-pos-maxent.bin");
    POSModel posModel = new POSModel(inputStreamPOSTagger);
    POSTaggerME posTagger = new POSTaggerME(posModel);
    String[] tags = posTagger.tag(tokens);

    InputStream dictLemmatizer = getClass()
      .getResourceAsStream("/models/en-lemmatizer.dict");
    DictionaryLemmatizer lemmatizer = new DictionaryLemmatizer(
      dictLemmatizer);
    String[] lemmas = lemmatizer.lemmatize(tokens, tags);
    assertThat(lemmas)
      .contains("O", "have", "a", "sister", "name", "O", "O");
}
As we can see, we get the lemma for every token. “O” indicates that the lemma could not be determined as the word is a proper noun. So, we don’t have a lemma for “John” and “Penny”.
But we have identified the lemmas for the other words of the sentence:
- has – have
- a – a
- sister – sister
- named – name
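For comparison, statistical lemmatization follows the same pattern but loads a model instead of a dictionary. A minimal sketch, assuming a lemmatizer model file en-lemmatizer.bin that we trained ourselves (a pre-trained one isn't part of the standard model downloads):

// hypothetical statistical lemmatizer model in the /resources folder
InputStream is = getClass().getResourceAsStream("/models/en-lemmatizer.bin");
LemmatizerModel lemmaModel = new LemmatizerModel(is);

// LemmatizerME predicts lemmas from the tokens and their POS tags
LemmatizerME statLemmatizer = new LemmatizerME(lemmaModel);
String[] lemmas = statLemmatizer.lemmatize(tokens, tags);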
8. Chunking
Part-of-speech information is also essential in chunking – dividing sentences into grammatically meaningful word groups like noun groups or verb groups.
Similar to before, we tokenize a sentence and use part-of-speech tagging on the tokens before calling the chunk() method:
@Test
public void givenChunkerModel_whenChunk_thenChunksAreDetected()
  throws Exception {
    SimpleTokenizer tokenizer = SimpleTokenizer.INSTANCE;
    String[] tokens = tokenizer.tokenize(
      "He reckons the current account deficit will narrow to only 8 billion.");

    InputStream inputStreamPOSTagger = getClass()
      .getResourceAsStream("/models/en-pos-maxent.bin");
    POSModel posModel = new POSModel(inputStreamPOSTagger);
    POSTaggerME posTagger = new POSTaggerME(posModel);
    String[] tags = posTagger.tag(tokens);

    InputStream inputStreamChunker = getClass()
      .getResourceAsStream("/models/en-chunker.bin");
    ChunkerModel chunkerModel = new ChunkerModel(inputStreamChunker);
    ChunkerME chunker = new ChunkerME(chunkerModel);
    String[] chunks = chunker.chunk(tokens, tags);
    assertThat(chunks).contains(
      "B-NP", "B-VP", "B-NP", "I-NP",
      "I-NP", "I-NP", "B-VP", "I-VP",
      "B-PP", "B-NP", "I-NP", "I-NP", "O");
}
As we can see, we get an output for each token from the chunker. “B” represents the start of a chunk, “I” represents the continuation of the chunk and “O” represents no chunk.
Parsing the output from our example, we get 6 chunks:
- “He” – noun phrase
- “reckons” – verb phrase
- “the current account deficit” – noun phrase
- “will narrow” – verb phrase
- “to” – prepositional phrase
- “only 8 billion” – noun phrase
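Instead of decoding the B/I/O labels ourselves, we can also ask the chunker for whole chunks via chunkAsSpans(). A short sketch continuing the test above:

// chunkAsSpans() groups consecutive B-/I- labels into typed spans
Span[] chunkSpans = chunker.chunkAsSpans(tokens, tags);
// each Span carries the chunk type, e.g. "NP" or "VP", via getType()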
9. Language Detection
In addition to the use cases already discussed, OpenNLP also provides a language detection API that allows us to identify the language of a given text.
For language detection, we need a training data file. Such a file contains lines with sentences in a certain language. Each line is tagged with the correct language to provide input to the machine learning algorithms.
A sample training data file for language detection can be downloaded here.
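Each line of such a file starts with the language code, followed by a tab and the sentence itself. A couple of illustrative lines (made up for this example, not taken from the actual file):

pob	este é apenas um exemplo de frase
fra	ceci est juste une phrase d'exemple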
We can load the training data file into a LanguageDetectorSampleStream, define some training data parameters, create a model and then use the model to detect the language of a text:
@Test
public void givenLanguageDictionary_whenLanguageDetect_thenLanguageIsDetected()
  throws FileNotFoundException, IOException {
    InputStreamFactory dataIn = new MarkableFileInputStreamFactory(
      new File("src/main/resources/models/DoccatSample.txt"));
    ObjectStream<String> lineStream = new PlainTextByLineStream(dataIn, "UTF-8");
    LanguageDetectorSampleStream sampleStream
      = new LanguageDetectorSampleStream(lineStream);

    // configure and run the training
    TrainingParameters params = new TrainingParameters();
    params.put(TrainingParameters.ITERATIONS_PARAM, 100);
    params.put(TrainingParameters.CUTOFF_PARAM, 5);
    params.put("DataIndexer", "TwoPass");
    params.put(TrainingParameters.ALGORITHM_PARAM, "NAIVEBAYES");
    LanguageDetectorModel model = LanguageDetectorME
      .train(sampleStream, params, new LanguageDetectorFactory());

    LanguageDetector ld = new LanguageDetectorME(model);
    Language[] languages = ld
      .predictLanguages("estava em uma marcenaria na Rua Bruno");
    assertThat(Arrays.asList(languages))
      .extracting("lang", "confidence")
      .contains(
        tuple("pob", 0.9999999950605625),
        tuple("ita", 4.939427661577956E-9),
        tuple("spa", 9.665954064665144E-15),
        tuple("fra", 8.250349924885834E-25));
}
The result is a list of the most probable languages along with a confidence score.
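If we're only interested in the best match, the API also offers predictLanguage(), which returns just the single most probable language. A one-line sketch continuing the test above:

// returns only the language with the highest confidence
Language best = ld.predictLanguage("estava em uma marcenaria na Rua Bruno");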
And, with rich models, we can achieve very high accuracy with this type of detection.
10. Conclusion
In this article, we explored the wide range of capabilities OpenNLP offers. We focused on features for common NLP tasks like lemmatization, POS tagging, tokenization, sentence detection, language detection, and more.
As always, the complete implementation of all above can be found over on GitHub.