Pfizer researchers have reported being unable to replicate roughly 80% of published scientific findings, and more than $28 billion is wasted each year on irreproducible science. Experimental procedures need a new medium of communication beyond text-based publications. We bring experiment instructions into Augmented Reality and blend them directly into the laboratory environment. This is powered by AI and LLM pipelines that break down text-based instructions into specific, actionable steps.

During this hack, we made the pipeline more robust with Llama models capable of image and video understanding, because text instructions often lack specificity for technical or geometry-dependent steps. We demonstrated this by having Llama vision, aided by RAG over OpenTrons documentation, recognize a pseudo-incision made on a plastic bottle and guide a student to make the same incision. We also tested Llama vision on a substantial amount of video demonstration data from JoVE and documented that it is capable of technical video understanding.
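Below is a minimal sketch of the two pieces described above: decomposing protocol text into actionable steps, and grounding a geometry-dependent step in a reference image with retrieved documentation as context. It assumes a locally served Llama model behind the `ollama` Python client; the model names (`llama3.1`, `llama3.2-vision`), prompt wording, and helper functions are illustrative assumptions, not the exact hackathon configuration.

```python
# Sketch of the instruction pipeline, assuming the `ollama` Python client and
# locally pulled Llama models. Model names and prompts are placeholders.
import json
import ollama

STEP_PROMPT = (
    "Break the following experimental protocol into a JSON array of short, "
    "actionable steps. Each element must have 'step' (an imperative sentence) "
    "and 'needs_visual' (true if the step depends on geometry or technique "
    "that is hard to convey in text). Return only JSON.\n\nProtocol:\n{protocol}"
)


def extract_steps(protocol_text: str, model: str = "llama3.1") -> list[dict]:
    """Ask the LLM to decompose free-form protocol text into actionable steps."""
    response = ollama.chat(
        model=model,
        messages=[{"role": "user",
                   "content": STEP_PROMPT.format(protocol=protocol_text)}],
    )
    return json.loads(response["message"]["content"])


def explain_step_with_image(step: str, image_path: str, context_docs: list[str],
                            model: str = "llama3.2-vision") -> str:
    """For geometry-dependent steps, ground the instruction in a reference image,
    prepending retrieved documentation snippets (e.g. OpenTrons docs from RAG)."""
    context = "\n".join(context_docs)
    response = ollama.chat(
        model=model,
        messages=[{
            "role": "user",
            "content": (
                f"Reference documentation:\n{context}\n\n"
                f"Using the attached image, explain precisely how to perform: {step}"
            ),
            "images": [image_path],  # the ollama client accepts image paths here
        }],
    )
    return response["message"]["content"]


if __name__ == "__main__":
    # Hypothetical usage: decompose a protocol, then flag steps that need a visual.
    steps = extract_steps("Make a 2 cm incision on the side of the bottle, then ...")
    for s in steps:
        print(s["step"], "(needs visual)" if s.get("needs_visual") else "")
```

In the AR experience, steps flagged as `needs_visual` are the ones where the vision model's image- and video-grounded explanation is surfaced to the user instead of plain text.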