BERT Top Builders
Explore the top contributors with the highest number of BERT app submissions in our community.
Bidirectional Encoder Representations from Transformers (BERT) is a natural language processing (NLP) technique proposed in 2018. (NLP is the field of artificial intelligence that aims to enable computers to read, analyze, interpret, and derive meaning from text and spoken words; it combines linguistics, statistics, and machine learning to help computers 'understand' human language.) The BERT paper by Jacob Devlin and colleagues was released not long after the publication of the first GPT model. It achieved significant improvements on many important NLP benchmarks, such as GLUE, and its ideas have since influenced many state-of-the-art models in language understanding.

BERT is based on the idea of pretraining a transformer model on a large corpus of text and then fine-tuning it for specific NLP tasks. The transformer is a deep learning model designed to handle sequential data, such as text. BERT stacks the encoder blocks of the original transformer on top of each other; because each encoder attends to both the left and right context of every token, the model captures the context of the text better than a unidirectional model.
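To make the bidirectional idea concrete, here is a minimal sketch using the Hugging Face transformers library (our choice for illustration; the paragraph above does not prescribe a library). BERT's masked-language-model head predicts a hidden token from the words on both sides of it:

```python
# A minimal sketch, assuming the `transformers` library is installed.
from transformers import pipeline

# BERT predicts the masked token from context on BOTH sides,
# which is what "bidirectional" means in practice.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']}: {prediction['score']:.3f}")
```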
- BERT Model: get the basic BERT pre-trained model from TensorFlow Hub and fine-tune it to your needs (see the first sketch after this list)
- Text Classification with BERT: how to leverage a pre-trained BERT model from Hugging Face to classify the text of news articles (see the second sketch)
- Question Answering with a fine-tuned BERT: using Hugging Face Transformers and PyTorch on Stanford's CoQA dataset (see the third sketch)
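For the first item, a minimal sketch of pulling BERT from TensorFlow Hub and attaching a small task head for fine-tuning; the module URLs are the standard public ones on TF Hub, and the single-unit binary head is an illustrative choice rather than the tutorial's exact code:

```python
# A minimal sketch, assuming tensorflow, tensorflow_hub, and tensorflow_text
# are installed; the task head below is illustrative.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401, registers ops used by the preprocessor

# The preprocessing model turns raw strings into BERT's input format.
preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4",
    trainable=True)  # trainable=True lets fine-tuning update BERT's weights

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
outputs = encoder(preprocess(text_input))
# "pooled_output" is the [CLS] representation of the whole input.
logits = tf.keras.layers.Dense(1)(outputs["pooled_output"])
model = tf.keras.Model(text_input, logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
```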
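For the second item, a minimal fine-tuning sketch with the Hugging Face transformers and datasets libraries; the AG News dataset, subset sizes, and hyperparameters are illustrative assumptions, not the tutorial's exact setup:

```python
# A minimal sketch; dataset choice and hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)  # AG News has 4 topic classes

dataset = load_dataset("ag_news")

def tokenize(batch):
    # Truncate so every example fits within BERT's input limit.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-news",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=encoded["test"].select(range(500)),
)
trainer.train()
```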
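For the third item, a minimal sketch of extractive question answering with transformers; the deepset/bert-base-cased-squad2 checkpoint is an illustrative stand-in (it was fine-tuned on SQuAD 2.0, not CoQA):

```python
# A minimal sketch; the checkpoint name is an illustrative stand-in.
from transformers import pipeline

qa = pipeline("question-answering",
              model="deepset/bert-base-cased-squad2")

context = ("BERT was proposed in 2018 and achieved significant "
           "improvements on benchmarks such as GLUE.")
result = qa(question="When was BERT proposed?", context=context)
print(result["answer"], f"(score: {result['score']:.3f})")
```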
BERT Hackathon Projects
Discover innovative solutions crafted with BERT, developed by our community members during our engaging hackathons.
The recommendations cold-start problem is not actually a problem if you leverage content and item metadata to build your recommendations. To showcase this idea, we built a movie recommender so you can see the difference between collaborative filtering and content-based recommendations for yourself. We made two PRs to Metarank, an existing open-source project:
- support semantic recommendations with cohere-ai and sentence-transformers embeddings
- use qdrant as a vector search engine to quickly perform vector similarity search
With these two PRs merged, building such a recommender takes just a few lines of YAML. And the semantic-similarity approach is not limited to movies: it applies more generally in traditional settings like e-commerce. In fashion, for example, where inventory churn is high, being able to recommend brand-new clothes that have zero feedback is really valuable.
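To see the content-based half of this idea end to end, here is a minimal sketch using sentence-transformers and the qdrant-client Python library; the model name, collection name, and toy movie data are illustrative assumptions and are not taken from the Metarank PRs:

```python
# A minimal sketch, assuming sentence-transformers and qdrant-client are
# installed; collection/field names and the toy data are illustrative.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
movies = [
    {"id": 1, "title": "Alien",
     "overview": "A crew encounters a deadly creature in deep space."},
    {"id": 2, "title": "The Martian",
     "overview": "An astronaut stranded on Mars fights to survive."},
    {"id": 3, "title": "Notting Hill",
     "overview": "A bookshop owner falls for a famous actress."},
]

# Embed item metadata only: no interaction history is needed, which is
# exactly what sidesteps the cold-start problem.
vectors = model.encode([m["overview"] for m in movies])

client = QdrantClient(":memory:")  # in-process instance for the demo
client.create_collection(
    collection_name="movies",
    vectors_config=VectorParams(size=vectors.shape[1],
                                distance=Distance.COSINE),
)
client.upsert(
    collection_name="movies",
    points=[PointStruct(id=m["id"], vector=v.tolist(), payload=m)
            for m, v in zip(movies, vectors)],
)

# Recommend for a brand-new item with zero user feedback.
query = model.encode("Astronauts battle an alien organism aboard their ship.")
for hit in client.search(collection_name="movies",
                         query_vector=query.tolist(), limit=2):
    print(hit.payload["title"], f"(similarity: {hit.score:.3f})")
```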