BERT
The BERT paper by Jacob Devlin and colleagues at Google was released not long after the publication of the first GPT model. It achieved significant improvements on many important NLP benchmarks, such as GLUE, and its ideas have since influenced many state-of-the-art models in language understanding.

Bidirectional Encoder Representations from Transformers (BERT) is a natural language processing (NLP) technique proposed in 2018. (NLP is the field of artificial intelligence that aims to enable computers to read, analyze, interpret, and derive meaning from text and spoken words. It combines linguistics, statistics, and machine learning to help computers ‘understand’ human language.) BERT is based on the idea of pretraining a transformer model on a large corpus of text and then fine-tuning it for specific NLP tasks. The transformer is a deep learning model designed to handle sequential data, such as text. BERT's bidirectional architecture stacks the encoders from the original transformer on top of each other, which allows the model to capture the context of each word from both its left and its right.
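To make the pretraining objective concrete, here is a minimal sketch of BERT's masked-language-model behavior. It assumes the Hugging Face `transformers` library is installed and uses the public `bert-base-uncased` checkpoint; it is an illustration, not part of the official BERT repository.

```python
# Minimal sketch: BERT's masked-language-model objective via Hugging Face transformers.
from transformers import pipeline

# Load the publicly released bert-base-uncased checkpoint as a fill-mask model.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the token hidden behind [MASK] using context on BOTH sides,
# which is what "bidirectional" refers to in its name.
for prediction in fill_mask("The goal of NLP is for computers to [MASK] human language."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```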
| General | |
| --- | --- |
| Release date | 2018 |
| Author | Jacob Devlin et al. (Google) |
| Repository | https://github.com/google-research/bert |
| Type | Masked language model |
Libraries
- BERT Model: get the basic pre-trained BERT model from TensorFlow Hub and fine-tune it to your needs
- Text Classification with BERT: how to leverage a pre-trained BERT model from Hugging Face to classify the text of news articles
- Question Answering with a fine-tuned BERT: using Hugging Face Transformers and PyTorch on Stanford's CoQA dataset (see the sketch after this list)
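As a rough illustration of the fine-tuning workflow referenced above (not the exact notebooks linked there), the sketch below runs extractive question answering with a BERT checkpoint already fine-tuned for that task through the Hugging Face Transformers pipeline API. The model name is the public SQuAD-tuned BERT checkpoint, used here only as an example.

```python
# Illustrative sketch: extractive question answering with a SQuAD-fine-tuned BERT,
# via the Hugging Face transformers pipeline API (PyTorch backend).
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",  # public SQuAD-tuned BERT
)

context = (
    "BERT (Bidirectional Encoder Representations from Transformers) was proposed in 2018. "
    "It is pretrained on a large text corpus and then fine-tuned for specific NLP tasks."
)

# The model selects the answer span inside the provided context.
result = qa(question="When was BERT proposed?", context=context)
print(result["answer"], f"(confidence {result['score']:.3f})")
```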