lablab.ai - Community innovating and building with artificial intelligence
© 2026 NativelyAI Inc. All rights reserved.


Csaba Toth (@tocsa)

Events attended: 5

Submissions made: 2

United States

8+ years of experience

About me

Full-stack engineer for the SportsBoard (Director of Product Engineering) and ThruThink (CTO) startups. Hobbies: generative AI, AI/ML, AR/VR, Flutter, wearables. GDG Fresno lead, WTM Fresno ambassador, tech meetup junkie.


šŸ¤ Top Collaborators

Tinashe Chingwaru (@Tinashehh)

Kenny Chirombo (@K3nny97)

🤓 Latest Submissions

    RAG Fusion with Cohere and Weaviate

    RAG Fusion generates variations of the user's question under the hood, retrieves matching documents for each variation, and fuses the results with re-ranking. A variation may match a small database better than the original question does. First I used a synthetic data-enrichment technique: since I had already generated QnA pairs for the Cohere Command fine-tuning, I processed that data further with some extra scripts for ingestion into the Weaviate platform. This step involved LangChain and its context-aware Markdown chunker, and I contributed crucial features back to a related open-source project (https://github.com/CsabaConsulting/question_extractor). I then worked out the details of the RAG Fusion pipeline, using LangChain in some stages:

    1. Use the fine-tuned Cohere Command model to generate variations of the user's question.
    2. Retrieve matching documents for each variant and fuse them together with reciprocal rank re-ranking.
    3. Use the top K of the fused ranked list to augment two final queries: one a document-mode co.chat request, the other a web-connector-mode co.chat request. In both cases I take advantage of co.chat's excellent conversation-management feature.
    4. Present the results in a highly customized and advanced (as far as Streamlit goes) UI, with citations interpolated and referenced.

    In the future I could improve the UX and integrate the app into ThruThink. I also plan to evaluate the results and possibly introduce PaLM 2 harmful-content protection.
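The reciprocal re-ranking at the heart of the fusion step can be sketched in a few lines. This is a minimal illustration of reciprocal rank fusion, not the submission's actual code; the function name and the k = 60 smoothing constant are my illustrative choices:

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked lists of document IDs into one ranking.

    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so items ranked consistently high across the
    question variations bubble up to the top.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Rankings retrieved for three variations of the same question:
fused = reciprocal_rank_fusion([
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_b", "doc_c", "doc_a"],
])
# "doc_b" ends up first: it sits at or near the top of every list.
```

Taking the top K of `fused` then yields the documents used to augment the two final co.chat queries.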

    Hackathon link

    18 Nov 2023

    QnA Boosted RAG with Vectara

    A company's knowledge base often doesn't answer the wide variety of questions a user could come up with. A customer-support system ideally answers specific (but wide-ranging) questions about the company's systems and knowledge (example: "How can I enter Cash Flow in ThruThink?"). But sometimes the user asks a generic question, such as "What is Cash Flow?", which could be sourced from the mind of a giant LLM and/or the internet. My idea is to boost performance by leveraging question-and-answer generation techniques, normally used for fine-tuning, but here for knowledge-base augmentation and index enrichment. The generated questions can match specific user queries better than a generic, unfocused knowledge-base index would.
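The enrichment idea can be sketched as a toy index builder. All names here are hypothetical, and `generate_questions` merely stands in for the LLM-backed question generation the submission used:

```python
def generate_questions(chunk: str) -> list[str]:
    # Stand-in for an LLM-backed question generator; a real system
    # would prompt a model to produce questions the chunk answers.
    return ["What is Cash Flow?"]

def build_index(chunks: list[str]) -> list[tuple[str, str]]:
    """Return (searchable_text, source_chunk) pairs: each chunk is
    indexed both as itself and under every generated question, so a
    generic user query can match a question even when it would miss
    the chunk's own wording."""
    index = []
    for chunk in chunks:
        index.append((chunk, chunk))
        for question in generate_questions(chunk):
            index.append((question, chunk))
    return index

chunk = "Cash Flow measures money moving in and out of a business."
index = build_index([chunk])
# The generic query "What is Cash Flow?" now resolves to the chunk.
```

In a real deployment the (text, chunk) pairs would be embedded and stored in the vector platform (Vectara here), with each generated question pointing back to its source passage.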

    Hackathon link

    9 Nov 2023

👌 Attended Hackathons

    Cohere Coral Hackathon

    šŸ” Exclusive Access: Only 1,000 slots available for Coral model use. šŸ’” Spotlight Your Idea: Showcase in this limited-entry event. šŸ’° Win Big: Cash prizes and Cohere credits up for grabs. 🌐 Collaborate Globally: Team up with AI enthusiasts worldwide.

    RAG: LLMs with your data

    Discover RAG: Enhance LLMs with fresh, trusted data using Retrieval-Augmented Generation (RAG). Vectara's Power: Explore Vectara, the all-in-one platform for innovative AI integration. Get Started: Dive into the week-long Hackathon Challenge and craft RAG-based applications. Choose Your Challenge: Opt for Customer Support, Legal Space, Financial Services, or Healthcare to showcase your skills. Join us in taking AI innovation to new heights.

    TruLens Hackathon

    TruLens provides a robust suite of tools for developing and monitoring neural networks, including LLMs. You can use TruLens-Eval for evaluation and TruLens-Explain for deep-learning explainability, each independently of the other. Explore these valuable tools while taking on the challenge!

šŸ“ Certificates

    RAG: LLMs with your data | Certificate

    Cohere Coral Hackathon | Certificate