Will AI stop us from destroying the world?

Artificial Intelligence is one of those rare topics that arouses both excitement and fear. Although some try to use it for disreputable purposes, Kacper takes the side of those who believe AI can help us solve pressing social problems, and argues that we actually have nothing to fear.

Best practices for building LLM-based applications

Large Language Models (LLMs) are revolutionizing various industries, but their integration comes with challenges. Kacper Łukawski from Qdrant will guide attendees through the intricacies of building, deploying, and scaling LLM-based applications, including Retrieval Augmented Generation (RAG). This talk will provide a concise roadmap for a smooth transition from development to production, emphasizing model selection, computational demands, data privacy, and fine-tuning.

ChatGPT is lying, how can we fix it?

ChatGPT was a revolution nobody was ready for. Social channels have been flooded with prompts and answers that look fine at first glance but turn out to be fabricated. Factuality is the biggest concern about Large Language Models, and not only OpenAI's product. If you build an app with LLMs, you need to be aware of this. Retrieval Augmented Language Models seem to be the way to overcome that issue: they combine LLMs' language capabilities with a knowledge base's accuracy. The talk will review possible ways to implement them with humans in the loop.
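The retrieval-then-generate pattern mentioned above can be sketched in a few lines. This is a toy illustration only: the knowledge base, the bag-of-words scoring, and the prompt template are all made up for the example, and a real system would use learned embeddings and an actual LLM call.

```python
# Minimal sketch of Retrieval Augmented Generation: retrieve supporting
# facts first, then ground the prompt in them instead of relying on the
# model's memory alone. All data and names here are illustrative.
import math
from collections import Counter

KNOWLEDGE_BASE = [
    "Qdrant is an open-source vector database written in Rust.",
    "Retrieval Augmented Generation grounds LLM answers in retrieved documents.",
    "Large Language Models can hallucinate facts in open-ended conversations.",
]

def _bow(text: str) -> Counter:
    """Lower-cased bag-of-words representation of a text."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base entries most similar to the question."""
    q = _bow(question)
    return sorted(KNOWLEDGE_BASE, key=lambda d: _cosine(q, _bow(d)), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Constrain the LLM to answer from retrieved context, reducing fabrication."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

A human-in-the-loop setup would additionally show the retrieved context to a reviewer, so fabricated answers can be caught against their sources.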

What Is This Vector Search Buzz All About?

Everybody in the search world talks about vectors. They’re everywhere, but will they surpass the old keyword-based methods? The talk will be an introduction to vector search for those unfamiliar with it, along with some lessons on how to start implementing these methods. We’ll cover overcoming some common pitfalls and present a toolkit that may support you while implementing neural vector-based search mechanisms.
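At its core, vector search ranks documents by similarity between embedding vectors. A minimal sketch of the exact, brute-force baseline (which dedicated vector search engines are designed to beat at scale) might look like this; the toy three-dimensional embeddings are invented for illustration.

```python
# Exact nearest-neighbour search over dense embeddings: O(n * d) per query.
# Real embeddings have hundreds of dimensions and come from a neural encoder.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query: list[float], corpus: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return ids of the k corpus vectors most similar to the query."""
    ranked = sorted(corpus, key=lambda doc_id: cosine_similarity(query, corpus[doc_id]), reverse=True)
    return ranked[:k]

corpus = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_dogs": [0.8, 0.2, 0.1],
    "doc_taxes": [0.0, 0.1, 0.9],
}

print(search([1.0, 0.0, 0.0], corpus))  # → ['doc_cats', 'doc_dogs']
```

Scanning every vector per query is fine for small corpora; approximate indexes (e.g. HNSW graphs) exist precisely because this linear scan stops being viable at scale.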

The Current State of Multilingual Semantic Search

A lot has been going on regarding multilingual semantic search, and many available tools claim to solve that problem. We will review the available models, including SaaS and Open Source solutions, with their pros and cons. On top of that, we will discuss how to approach semantic search in less-popular languages.

The Challenges of Making Vector Search Billion-scale

Semantic search based on vector similarity is crucial for various modern applications, including those built on Large Language Models such as GPT. Things become challenging when we go from thousands to millions or even billions of embeddings while still wanting to keep the same performance. We learned some harsh lessons scaling a vector database at Qdrant and would like to share what we picked up on that journey. This talk will be a detailed description of our design choices and the infrastructure that backs them up.
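A back-of-the-envelope calculation shows why the jump to billions of embeddings is qualitatively different. The dimensionality and counts below are example values, not Qdrant internals.

```python
# Raw storage for float32 embeddings, ignoring all index overhead.
# At a billion vectors, even the vectors alone exceed a single machine's RAM,
# which is what forces sharding, quantization, or disk-backed indexes.
def raw_vectors_size_gb(num_vectors: int, dim: int, bytes_per_value: int = 4) -> float:
    """Size in GiB of num_vectors embeddings of the given dimensionality."""
    return num_vectors * dim * bytes_per_value / 1024**3

one_million = raw_vectors_size_gb(1_000_000, 768)      # ≈ 2.9 GiB: fits in RAM
one_billion = raw_vectors_size_gb(1_000_000_000, 768)  # ≈ 2861 GiB: does not
```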

Trends in designing scalable data workflows