
BERT

The BERT paper by Jacob Devlin and colleagues at Google was released not long after the publication of the first GPT model. It achieved significant improvements on many important NLP benchmarks, such as GLUE, and its ideas have since influenced many state-of-the-art models in language understanding.

Bidirectional Encoder Representations from Transformers (BERT) is a natural language processing (NLP) technique proposed in 2018. (NLP is the field of artificial intelligence that aims to enable computers to read, analyze, interpret, and derive meaning from text and spoken words; it combines linguistics, statistics, and machine learning to help computers 'understand' human language.) BERT is based on the idea of pretraining a transformer model on a large corpus of text and then fine-tuning it for specific NLP tasks. The transformer is a deep learning architecture designed to handle sequential data, such as text. BERT stacks the encoder blocks of the original transformer on top of each other, and its bidirectional design lets the model use context from both the left and the right of each token, so it can better capture the meaning of the text.
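
The pretraining objective behind BERT is masked-language modeling: some tokens in the input are hidden and the model predicts them from the surrounding context. As a minimal sketch of what that looks like in practice, the snippet below uses the Hugging Face transformers library and the publicly released bert-base-uncased checkpoint; the example sentence is an illustrative choice, not taken from this page.

```python
# Minimal sketch: masked-token prediction with a pretrained BERT checkpoint,
# assuming the Hugging Face `transformers` library (and a backend such as
# PyTorch) is installed. The sentence below is only an illustrative example.
from transformers import pipeline

# The "fill-mask" pipeline loads BERT together with its masked-language-model
# head; the [MASK] token marks the position the model should predict.
unmask = pipeline("fill-mask", model="bert-base-uncased")

predictions = unmask("BERT is a [MASK] model for natural language processing.")

# Each prediction contains the proposed token and the model's confidence score.
for prediction in predictions:
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Fine-tuning follows the same pattern: the pretrained encoder is kept, a small task-specific head (for example, a classification layer) is added on top, and the whole model is trained further on labeled data for the target task.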

General
Release date: 2018
Author: Google
Repository: https://github.com/google-research/bert
Type: masked language model

Libraries