When basic spaCy needs a lift

Photo by Brett Jordan on Unsplash

Using spaCy to identify named entities works well, but not in all situations, especially when it comes to people's names. Thanks to Hugging Face, you can use Google BERT models as an ML engineer (not a data scientist) to easily increase NER accuracy for people's names.

DISCLAIMER: spaCy may include techniques similar to what I'm about to describe, so don't count spaCy out; this article simply presents an alternative way to apply the technique.

A few words on BERT and Hugging Face

Hugging Face describes itself as a community where we can “build, train, and deploy cutting-edge models based on open source in natural language processing” [1].

It’s a place to build models or use models built by others – this last bit is especially important.

Google BERT (Bidirectional Encoder Representations from Transformers) is a technique created by Google for NLP. I won't get into how it works here; check out this piece by the BERT architects [2], which digs a bit into how BERT works. The short version is that models built with Google BERT work really well. Yes, I know, that's a ridiculous oversimplification, but this article is about using BERT, not building BERT-based models.

An example with spaCy results

Here is a list of names to throw at the (basic) spaCy pipeline:

names_string = "Hi there Jon Jones Jon Jones Jr. Jon Paul Jones John D. Jones"

Now here is some Python code that runs the base spaCy model against that list.
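A minimal sketch of that code, assuming the small English model en_core_web_sm is the one installed (use whichever base English model you have):

import spacy

# Load a base English pipeline (assumption: installed via `python -m spacy download en_core_web_sm`)
nlp = spacy.load("en_core_web_sm")

names_string = "Hi there Jon Jones Jon Jones Jr. Jon Paul Jones John D. Jones"
doc = nlp(names_string)

# Print every entity spaCy finds along with its label
for ent in doc.ents:
    print(ent.text, ent.label_)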

The result is not terrible, but if you need it to be great, it will fall short.

Another option is to use the transformers Python package, which (thanks to Hugging Face) gives you access to models like Google BERT.

transformers Python package

The transformers package is key to using Hugging Face models. Installation can be a little awkward depending on whether you already have PyTorch and/or TensorFlow installed. On one of my computers, I already had TensorFlow installed, and all I needed to do (your situation may differ) was:

pip install transformers[torch]

I confirmed the installation by opening an interactive Python session and typing:
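A sketch of that confirmation; the input sentence here is my own placeholder (the original isn't shown), so the exact score will differ from the one quoted below:

from transformers import pipeline

# Line 1: downloads a default sentiment-analysis model from Hugging Face on first use
nlp = pipeline("sentiment-analysis")

# Line 2: run the model on a sample sentence (placeholder text)
print(nlp("I really enjoy working with Hugging Face models"))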

A few important things happened:

  1. The sentiment analysis model was automatically downloaded from the Hugging Face site (this is important).
  2. I got the following results from the second line of code:

[{'label': 'POSITIVE', 'score': 0.9998704791069031}]

This is how results come back from the model (at least for all the NLP models I tried): as a list of dictionaries. OK, great. Time to get to people's names.

The Google BERT NER model

Hugging Face hosts several models for identifying named entities (NER). I'm focused on just one, called dslim/bert-large-NER. All you will need (beyond the installation) is the following kind of code.
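A sketch of that code, using the standard transformers API (the exact structure of the original snippet is an assumption):

from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# These calls download the tokenizer and model files from Hugging Face on first use
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-large-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-large-NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

# Ask for the named entities in the list of names
ner_results = nlp("Hi there Jon Jones Jon Jones Jr. Jon Paul Jones John D. Jones")
print(ner_results)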

The imports bring in some helper objects for loading models, along with pipeline. pipeline lets me specify which model to pull down from Hugging Face.

When the first three lines of code run, multiple files are automatically downloaded from Hugging Face (they can be downloaded manually if you have firewall restrictions – leave a comment if you need help with that). IMPORTANT: the BERT model is 1.3GB and takes a little while to load. Even though you only download it once, it still takes 4-6 seconds for the model to load each time (you may want to look into how to keep the model resident in memory).

pipeline loads the model and returns an object that becomes your "nlp" engine. Note that I pass the object the string for which I want the named entities. Just like the simple sentiment analysis example, the nlp object returns a list of dictionaries, except this one is bigger.

[{'entity': 'B-PER', 'score': 0.99802744, 'index': 3, 'word': 'Jon', 'start': 9, 'end': 12}, {'entity': 'I-PER', 'score': 0.9969795, 'index': 4, 'word': 'Jones', 'start': 13, 'end': 18}, {'entity': 'B-PER', 'score': 0.99703735, 'index': 5, 'word': 'Jon', 'start': 19, 'end': 22}, {'entity': 'I-PER', 'score': 0.99666214, 'index': 6, 'word': 'Jones', 'start': 23, 'end': 28}, {'entity': 'I-PER', 'score': 0.8733999, 'index': 7, 'word': 'Jr', 'start': 29, 'end': 31}, {'entity': 'B-PER', 'score': 0.9935467, 'index': 9, 'word': 'Jon', 'start': 33, 'end': 36}, {'entity': 'I-PER', 'score': 0.9749524, 'index': 10, 'word': 'Paul', 'start': 37, 'end': 41}, {'entity': 'I-PER', 'score': 0.9960336, 'index': 11, 'word': 'Jones', 'start': 42, 'end': 47}, {'entity': 'B-PER', 'score': 0.9971614, 'index': 12, 'word': 'John', 'start': 48, 'end': 52}, {'entity': 'I-PER', 'score': 0.9858689, 'index': 13, 'word': 'D', 'start': 53, 'end': 54}, {'entity': 'I-PER', 'score': 0.4625939, 'index': 14, 'word': '.', 'start': 54, 'end': 55}, {'entity': 'I-PER', 'score': 0.9968941, 'index': 15, 'word': 'Jones', 'start': 56, 'end': 61}]

Here's the big deal

For me, what makes this BERT model special is the B-PER entity. According to the documentation [3], the BERT model can identify several different entity types:

Entity types from the model card (https://huggingface.co/dslim/bert-base-NER): persons (B-PER, I-PER), organizations (B-ORG, I-ORG), locations (B-LOC, I-LOC), miscellaneous entities (B-MISC, I-MISC), and O for tokens outside any named entity.

The description of B-PER is only slightly misleading: B-PER is not just the beginning of a person's name right after another person's name, it is also the first token of the first person's name. The really big thing is that B-PER acts as a separator between each name in a list of names. (The same concept applies to the other entity types, which is just as valuable.)

Allow me to line up the list of names against the NER entities from BERT.

Jon Jones → B-PER, I-PER
Jon Jones Jr → B-PER, I-PER, I-PER
Jon Paul Jones → B-PER, I-PER, I-PER
John D. Jones → B-PER, I-PER, I-PER, I-PER

It's great; each actual first name is a B-PER entity. All I have to do is put the names together by iterating through the dictionaries.
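A sketch of that iteration, continuing from the pipeline sketch above (the author's own code produced a slightly more nested structure, shown below):

# Group the token dictionaries into one token list per person:
# a B-PER token starts a new name, and I-PER tokens continue the current one.
names = []
for token in ner_results:
    if token["entity"] == "B-PER":
        names.append([token["word"]])
    elif token["entity"] == "I-PER" and names:
        names[-1].append(token["word"])

print(names)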

The output isn't exactly what I'm looking for (this will make more sense shortly), but it's close:

[[['Jon', 'Jones']], [['Jon', 'Jones', 'Jr']], [['Jon', 'Paul', 'Jones']], [['John', 'D', '.', 'Jones']]]

I could just join the tokens and be done, but it's not quite that simple (though almost).

Odd results for unusual names

One of the "weird" things I've seen come back from BERT (i.e., I don't know why it does this, but there is probably a completely logical reason) is a list of dictionary items that break unusual names into pieces. Take, for example, my last name: Neugebauer.

[{'entity': 'I-PER', 'score': 0.9993844, 'index': 2, 'word': 'N', 'start': 6, 'end': 7}, {'entity': 'I-PER', 'score': 0.99678946, 'index': 3, 'word': '##eu', 'start': 7, 'end': 9}, {'entity': 'I-PER', 'score': 0.9744214, 'index': 4, 'word': '##ge', 'start': 9, 'end': 11}, {'entity': 'I-PER', 'score': 0.92124516, 'index': 5, 'word': '##ba', 'start': 11, 'end': 13}, {'entity': 'I-PER', 'score': 0.82978016, 'index': 6, 'word': '##uer', 'start': 13, 'end': 16}]

BERT breaks my name into pieces, and I'm guessing (without hard evidence) that this is because of how tokenization works in BERT: the ## prefix marks subword tokens. Whatever the reason, this situation needs to be handled, which is why I haven't shown the last part of my code yet.

Here is the last bit of code; it builds on the previous snippet without modifying it.
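A sketch of that last step, continuing from the grouping sketch above (the exact structure of the original snippet is an assumption):

# Merge each person's tokens back into a single string:
# "##" marks a subword continuation, and periods attach to the preceding initial.
merged_names = []
for name_tokens in names:
    full_name = ""
    for word in name_tokens:
        if word.startswith("##"):
            full_name += word[2:]      # glue subword pieces (e.g. "##eu") onto the previous token
        elif word == ".":
            full_name += word          # "John D" + "." -> "John D."
        else:
            full_name += (" " if full_name else "") + word
    merged_names.append([full_name])   # wrapped in a one-element list to mirror the article's output

print(merged_names)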

This not only gets rid of the hashes in unusual names, it also merges the tokens back together. (Periods are handled much like the hashes, so a name like John D. Jones comes out correctly, for example.)

Running this code will give me what I want:

[['Jon Jones'], ['Jon Jones Jr'], ['Jon Paul Jones'], ['John D. Jones']]

I have to admit out loud that the first time I saw this run, I was shocked that it got all the names right. Well played, Google, Hugging Face, and whoever built this NER model with BERT. Well played.

Conclusions and improvements

I have found the BERT NER engine to be very accurate, but note that it is not always correct; it is just more accurate. Put more directly, it can mean the difference between good and great.

You could improve on my code by using the scores and by keeping the start and end string positions returned from BERT (i.e., transformers). You would probably want to make a couple of changes, sketched after the list below:

  • The score for a name should be the average of the scores of each token in the name sequence bounded by B-PER tokens
  • The start should come from the B-PER token and the end from the last I-PER token in that person's sequence
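A sketch of that improvement (the dictionary layout here is my own assumption, not the author's):

# Collect one record per person with an averaged confidence score
# and the start/end character offsets of the full name.
people = []
for token in ner_results:
    if token["entity"] == "B-PER":
        people.append({"scores": [token["score"]],
                       "start": token["start"],
                       "end": token["end"]})
    elif token["entity"] == "I-PER" and people:
        people[-1]["scores"].append(token["score"])
        people[-1]["end"] = token["end"]

for person in people:
    person["score"] = sum(person["scores"]) / len(person["scores"])
    del person["scores"]
    # With start/end preserved, the original text could be sliced directly:
    # names_string[person["start"]:person["end"]]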

I hope you enjoyed this and get a chance to work with transformers. It's really something. I would also like to pay my respects to the good people at spaCy and say once again that what I just showed is simply another way of doing things (and one that could probably be incorporated directly into spaCy).

References

[1] HuggingFace.co. (n.d.) Hugging Face – The AI community building the future. https://huggingface.co/

[2] Devlin, Jacob and Chang, Ming-Wei. (November 2, 2018) Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing. https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html

[3] HuggingFace.co. (n.d.) dslim/bert-base-NER. https://huggingface.co/dslim/bert-base-NER
