3
1
Poland
2 years of experience
I am a final-year student with a strong passion for creating innovative solutions that drive change. With experience across a diverse technology stack, including Python, Go, Vue.js, Docker, Flutter, and Kubernetes, I am always eager to expand my skill set and learn new technologies. I believe in the power of collaboration and design thinking to solve complex problems. My approach emphasizes creative problem-solving and user-centered design, which is why I advocate for regular planning meetings so that the best solutions emerge through collective input and iterative thinking. As I approach the final stage of my academic journey, I am excited to keep developing my skills and to contribute to projects that push the boundaries of technology and innovation.
We propose running the LLaMA 3.1:1B model inside a local proxy server that manages a cache of JSON responses. The model can help us in several ways:

- Query analysis and optimization
- Smart data management in the cache
- Optimizing communication with the API
- Creating intelligent cache management policies
- Enriching responses and adding a layer of security and privacy to the application
- Understanding user behavior and tailoring cached data to their needs

If the machine does not have enough RAM to run LLaMA 3.1:1B locally, we can fall back to a cloud-hosted model instead: the proxy would periodically send its queries to the server, which would decide the cache hierarchy, i.e. which items are the most important and which should already be assigned a deletion time.
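The cache-management idea above can be sketched in Python. Note that `score_entry` is a hypothetical placeholder: in the real system it would prompt the local LLaMA 3.1:1B model (or the cloud fallback) to rank an entry and assign it a deletion time; here it uses a trivial heuristic so the sketch stays self-contained and runnable.

```python
import time


def score_entry(key: str, payload: dict) -> tuple[float, float]:
    """Placeholder for the model call. A real deployment would ask the
    local LLaMA 3.1:1B instance for a (priority, ttl_seconds) pair based
    on the query and its JSON payload; this stub uses a simple heuristic
    instead: richer responses get higher priority, user-related keys
    get a longer time-to-live."""
    priority = float(len(payload))
    ttl = 300.0 if key.startswith("user:") else 60.0
    return priority, ttl


class ModelAdvisedCache:
    """JSON-response cache whose eviction order and expiry times come
    from the (stubbed) model advisor above."""

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self.entries: dict[str, dict] = {}

    def put(self, key: str, payload: dict) -> None:
        priority, ttl = score_entry(key, payload)
        if len(self.entries) >= self.capacity and key not in self.entries:
            # Evict the entry the advisor ranked lowest.
            victim = min(self.entries, key=lambda k: self.entries[k]["priority"])
            del self.entries[victim]
        self.entries[key] = {
            "payload": payload,
            "priority": priority,
            "expires_at": time.time() + ttl,
        }

    def get(self, key: str):
        entry = self.entries.get(key)
        if entry is None or time.time() > entry["expires_at"]:
            # Expired or missing: drop it so the slot can be reused.
            self.entries.pop(key, None)
            return None
        return entry["payload"]
```

The proxy would sit between the application and the API, answering from `ModelAdvisedCache` on a hit and forwarding to the API (then calling `put`) on a miss; in the cloud variant, `score_entry` would batch its questions and send them to the remote model from time to time instead of on every insert.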