Tag Archives: Ollama

Building RAG Apps With Apache Cassandra, Python, and Ollama

Retrieval-augmented generation (RAG) is the most popular approach for retrieving real-time or recently updated data from a data source based on a user's text input, empowering search applications with state-of-the-art neural search. In RAG search systems, each user request is converted into a vector representation by an embedding model, and the vector comparison is performed using various algorithms …
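
A minimal Python sketch of that embed-and-compare step, assuming a local Ollama server with the nomic-embed-text model and a couple of in-memory sample documents (the post itself stores its vectors in Apache Cassandra; the model name and documents here are illustrative assumptions):

# Embed a user query with Ollama's local embeddings endpoint and rank
# a handful of in-memory documents by cosine similarity.
import requests
import numpy as np

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default local Ollama endpoint

def embed(text: str) -> np.ndarray:
    # Ask Ollama for a vector representation of the text.
    resp = requests.post(OLLAMA_URL, json={"model": "nomic-embed-text", "prompt": text})
    resp.raise_for_status()
    return np.array(resp.json()["embedding"])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

documents = [
    "Cassandra is a distributed NoSQL database.",
    "Ollama runs large language models locally.",
]
doc_vectors = [embed(d) for d in documents]

query_vector = embed("How do I run an LLM on my own machine?")
ranked = sorted(zip(documents, doc_vectors),
                key=lambda dv: cosine(query_vector, dv[1]), reverse=True)
print(ranked[0][0])  # most similar document, to be passed to the LLM as context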

Implement RAG With PGVector, LangChain4j, and Ollama

In this blog, you will learn how to implement retrieval-augmented generation (RAG) using PGVector, LangChain4j, and Ollama. This implementation allows you to ask questions about your documents in natural language. Enjoy! In a previous blog, RAG was implemented using Weaviate, LangChain4j, and LocalAI. Now, one year later, it is interesting to find out how this has evolved, e.g. …
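
The post builds this with LangChain4j in Java; as a rough Python illustration of the same retrieve-then-generate flow, here is a sketch using psycopg, a pgvector table, and Ollama's HTTP API. The table name documents, its columns, the connection string, and the model names are assumptions, not taken from the article:

# Retrieve the nearest chunks from a pgvector table, then ask an Ollama
# model to answer using them as context.
import psycopg
import requests

def embed(text: str) -> list[float]:
    r = requests.post("http://localhost:11434/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def answer(question: str) -> str:
    vector = "[" + ",".join(str(x) for x in embed(question)) + "]"
    with psycopg.connect("dbname=rag user=postgres") as conn:  # assumed connection string
        rows = conn.execute(
            "SELECT content FROM documents ORDER BY embedding <=> %s::vector LIMIT 3",
            (vector,),
        ).fetchall()
    context = "\n".join(row[0] for row in rows)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "llama3.1", "prompt": prompt, "stream": False})
    r.raise_for_status()
    return r.json()["response"]

print(answer("What does the document say about vector search?"))

The <=> operator is pgvector's cosine-distance operator, so ordering by it ascending returns the most similar chunks first.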

Enhance Your Workflow With Ollama, LangChain, and RAG

As developers, we always look for ways to make our development workflows smoother and more efficient. With the new year unfolding, the landscape of AI-powered code assistants is evolving at a rapid pace. It is projected that, by 2028, 75% of enterprise software engineers will use AI code assistants, a monumental leap from less than 10% in early 2023. Tools …

2 Must See Ollama WordPress Plugins for LLMs

Ollama makes it easy to get up and running with large language models. It comes with everything you need to run Llama 3.1, Phi, Mistral, Gemma, and other models, and you can always create your own. Here are 2 Ollama WordPress plugins you shouldn’t miss. Alpaca Bot: this plugin lets you create new content on …
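
For context, a minimal sketch of calling a locally running Ollama server from Python; the model name and prompt are just examples, and it assumes llama3.1 has already been pulled:

# Send a single prompt to the local Ollama generate endpoint and print the reply.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.1", "prompt": "Summarize what a WordPress plugin is.", "stream": False},
)
resp.raise_for_status()
print(resp.json()["response"])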
