Image by author

Large Language Models (LLMs) have been the talk of the town thanks to capabilities that, on some tasks, come close to matching human performance. While models from OpenAI and Google have made a significant impact, there is another contender for the throne: Anthropic, an AI company building impressive models that compete with those from the major providers. Anthropic recently released Claude 3, a multimodal LLM that is getting attention in the AI market. We'll take a closer look at how the Claude 3 models work and at their multimodal capabilities.

What is a multimodal model?

We all agree that LLMs have limitations when it comes to providing accurate information, limitations that can be mitigated through different techniques. But what if we had models that take not just natural language as input, but also images and videos, and use them to give users more accurate answers? Isn't this a fantastic idea? Models that accept more than one form of input and understand different modalities (text, image, audio, video, etc.) are known as multimodal models. Google, OpenAI and Anthropic each already have capable multimodal models on the market.

Image credits: ResearchGate

OpenAI’s GPT-4V(ision), Google’s Gemini and Anthropic’s Claude-3 series are some notable examples of multimodal models that are revolutionizing the AI industry.

Claude 3

Anthropic's Claude 3 is a family of three AI models: Claude 3 Opus, Sonnet and Haiku. These models have multimodal capabilities that compete closely with GPT-4 and Gemini Ultra, and they aim to strike the right balance of cost, safety and performance.

Image credits: Anthropic [comparison of the Claude 3 models with other models]

The image displays a comparison chart of various LLMs’ performance across different tasks, including math problem-solving, coding and knowledge-based question answering. The models include Claude variants, GPT-4, GPT-3.5 and Gemini, with their respective scores in percentages. As you can see, the Claude 3 models perform robustly across a range of tasks, often outperforming other AI models presented in the comparison.

Claude 3 Haiku is the most affordable, yet still effective, model in the family. It is well suited for real-time response generation in customer service, content moderation and other latency-sensitive use cases.

You can easily access the Claude models from Anthropic's official website to see how they handle your queries. You can transcribe handwritten notes, ask how objects in an image are used, or work through complex equations.

I tried an example by sharing an image I found online to see if the Claude model can respond accurately.

Amazing! That is an impressive response, capturing all the minute details of the shared image.

Claude 3 multimodal tutorial

We will be using the SingleStore database as our vector store, and SingleStore Notebooks (just like Jupyter notebooks) to run all our code.

We will also be using LlamaIndex, an AI framework for building LLM-powered applications. It gives developers a complete toolkit: APIs for working with LLMs, plus integrations for data sources, various file formats, vector databases and downstream applications.

We will import the required libraries, run the multimodal model from Anthropic [claude-3-haiku-20240307], store the data in SingleStore and retrieve it to see the power of multimodality across text and images.

Let’s explore the capabilities of the model on text and vision tasks.

Get started by signing up for SingleStore. We are going to use SingleStore’s Notebook feature to run our commands, and SingleStore as the database to store our data.

Once you sign up for the first time, you need to create a workspace and a database to store your data. Later in this tutorial, you will also see how you can store data as embeddings.

Creating a database is easy. Just go to your workspace and click Create Database as shown here.

Go back to the main SingleStore dashboard. Click on Develop to create a Notebook.

Next, click on New Notebook > New Notebook.

Create a new Notebook and name it whatever you’d like.

Now, get started with the notebook code shared below.

Make sure to select the workspace and database you created.

Start adding all the code shown below step-by-step in the notebook you just created.

Install LlamaIndex and other libraries if required

!pip install llama-index --quiet
!pip install llama-index-llms-anthropic --quiet
!pip install llama-index-multi-modal-llms-anthropic --quiet
!pip install llama-index-vector-stores-singlestoredb --quiet
!pip install matplotlib --quiet

from llama_index.llms.anthropic import Anthropic
from llama_index.multi_modal_llms.anthropic import AnthropicMultiModal

Set API keys: add both your Anthropic and OpenAI API keys. The Anthropic key is used for the Claude models, and the OpenAI key for the embedding model used later in the tutorial.

import os

os.environ["ANTHROPIC_API_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""

Let's use Claude 3 Haiku for chat completion.

llm = Anthropic(model="claude-3-haiku-20240307")
response = llm.complete("SingleStore is ")
print(response)
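Haiku is fast enough for the real-time use cases mentioned earlier. If you want token-by-token output instead of waiting for the full completion, the LlamaIndex Anthropic wrapper also supports streaming. Here is a minimal sketch (the prompt is just an example):

# Stream the completion token by token (sketch; the prompt is an example)
for chunk in llm.stream_complete("Explain vector databases in two sentences."):
    print(chunk.delta, end="", flush=True)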

Multimodal scenario using an image

Let’s download the image first

!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/images/prometheus_paper_card.png' -O 'prometheus_paper_card.png'

from PIL import Image
import matplotlib.pyplot as plt

img = Image.open("prometheus_paper_card.png")
plt.imshow(img)

Load the image

from llama_index.core import SimpleDirectoryReader

# Put your local directory here
image_documents = SimpleDirectoryReader(
    input_files=["prometheus_paper_card.png"]
).load_data()

# Initialize the Anthropic multimodal class
anthropic_mm_llm = AnthropicMultiModal(
    model="claude-3-haiku-20240307", max_tokens=300
)

Test the query on the image

response = anthropic_mm_llm.complete(
    prompt="Describe the images as an alternative text",
    image_documents=image_documents,
)

print(response)

You can test this with many image examples, including images loaded from a URL.

from PIL import Image
import requests
from io import BytesIO
import matplotlib.pyplot as plt
from llama_index.core.multi_modal_llms.generic_utils import load_image_urls

image_urls = [
    "https://images.template.net/99535/free-nature-transparent-background-7g9hh.jpg",
]

img_response = requests.get(image_urls[0])
img = Image.open(BytesIO(img_response.content))
plt.imshow(img)

image_url_documents = load_image_urls(image_urls)

Ask the model to describe the mentioned image

response = anthropic_mm_llm.complete(
    prompt="Describe the images as an alternative text",
    image_documents=image_url_documents,
)

print(response)

RAG Pipeline: Index into a vector store

In this section, we’ll show you how to use Claude 3 to build a RAG pipeline over image data. We first use Claude to extract text from a set of images, then index the text with an embedding model. Finally, we build a query pipeline over the data.

from llama_index.multi_modal_llms.anthropic import AnthropicMultiModal
anthropic_mm_llm = AnthropicMultiModal(max_tokens=300)

!wget "https://www.dropbox.com/scl/fi/c1ec6osn0r2ggnitijqhl/mixed_wiki_images_small.zip?rlkey=swwxc7h4qtwlnhmby5fsnderd&dl=1" -O mixed_wiki_images_small.zip
!unzip mixed_wiki_images_small.zip
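The download above gives us a folder of images, but the vector index we build below expects a list of text nodes. As described earlier, we first use Claude to extract a text description for each image and wrap it in a TextNode. Here is a minimal sketch of that step (the folder name mixed_wiki_images_small and the *.png pattern are assumptions based on the downloaded archive):

from pathlib import Path

from llama_index.core import SimpleDirectoryReader
from llama_index.core.schema import TextNode

# Sketch: ask Claude to describe each image and collect the descriptions
# as TextNodes; the folder name and *.png pattern are assumptions.
nodes = []
for img_file in Path("mixed_wiki_images_small").glob("*.png"):
    image_documents = SimpleDirectoryReader(input_files=[str(img_file)]).load_data()
    response = anthropic_mm_llm.complete(
        prompt="Describe the images as an alternative text",
        image_documents=image_documents,
    )
    # Keep a reference to the source image in the node metadata
    nodes.append(TextNode(text=str(response), metadata={"img_file": str(img_file)}))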

from llama_index.core import VectorStoreIndex, StorageContext, Settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.anthropic import Anthropic
from llama_index.vector_stores.singlestoredb import SingleStoreVectorStore
import singlestoredb

# Create a SingleStoreDB vector store.
# The connection_* variables are available inside SingleStore notebooks
# once you have selected a workspace and database.
os.environ["SINGLESTOREDB_URL"] = f"{connection_user}:{connection_password}@{connection_host}:{connection_port}/{connection_default_database}"

vector_store = SingleStoreVectorStore()

# Use OpenAI's embedding model (this is why the OpenAI API key is needed)
embed_model = OpenAIEmbedding()
Settings.embed_model = embed_model

anthropic_mm_llm = AnthropicMultiModal(max_tokens=300)

storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex(
    nodes=nodes,
    storage_context=storage_context,
)

Testing the query

from llama_index.llms.anthropic import Anthropic

query_engine = index.as_query_engine(llm=Anthropic())
response = query_engine.query("Tell me more about the porsche")

print(str(response))
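If you want to see which image descriptions were retrieved for a query, you can also use the index directly as a retriever. A minimal sketch (the similarity_top_k value is just an example):

# Retrieve the top-matching image description nodes for a query
retriever = index.as_retriever(similarity_top_k=2)
retrieved_nodes = retriever.retrieve("Tell me more about the porsche")

for node_with_score in retrieved_nodes:
    print(node_with_score.node.metadata.get("img_file"), node_with_score.score)
    print(node_with_score.node.get_content()[:200])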

The complete notebook code is here for your reference.

Multimodal LLMs: Industry-specific use cases

There are many use cases where multimodality plays an important role and enhances LLM applications; here are some of the most prominent.

Healthcare. Since multimodal models can process various input types, they can be highly effective in the healthcare industry. The models can take prescriptions, procedures, X-rays and other imagery as input, combining text and images to come up with high-quality responses and remedies.

Customer support chatbots. Multimodal models can be highly effective here, since text alone is sometimes not enough to resolve an issue and companies need to ask for screenshots or images. A multimodal-powered application can answer these requests far better.

Social media sentiment analysis. Multimodal-powered applications can handle this well, since they can interpret both images and text.

eCommerce. Product search and recommendations can be simplified using multimodal models, since they gather more contextual information from the user across many input types.

Multimodal models are revolutionizing the AI industry, and we will only see more powerful models in the future. While unimodal models are limited to processing one type of input (usually text), multimodal models gain an advantage by processing various input types (text, image, audio, video, etc.). They are paving the way for LLM-powered applications with richer context and higher performance.


