Build a Retrieval Augmented Generation (RAG) AI
This guide will walk you through setting up and deploying your first application with Cloudflare AI. You will build a fully-featured AI-powered application using tools like Workers AI, Vectorize, D1, and Cloudflare Workers.
At the end of this tutorial, you will have built an AI tool that allows you to store information and query it using a Large Language Model. This pattern, known as Retrieval Augmented Generation, or RAG, is a useful project you can build by combining multiple aspects of Cloudflare’s AI toolkit. You do not need to have experience working with AI tools to build this application.
- Sign up for a Cloudflare account.
- Install `npm`.
- Install `Node.js`.

Node.js version manager
Use a Node version manager like Volta or nvm to avoid permission issues and to change Node.js versions. Wrangler, discussed later in this guide, requires a Node version of `16.17.0` or later.
You will also need access to Vectorize.
C3 (`create-cloudflare-cli`) is a command-line tool designed to help you set up and deploy Workers to Cloudflare as fast as possible.
Open a terminal window and run C3 to create your Worker project:
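For example (`rag-ai-tutorial` is a placeholder project name; use any name you like):

```sh
npm create cloudflare@latest -- rag-ai-tutorial
```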
For setup, select the following options:
- For What would you like to start with?, choose `Hello World example`.
- For Which template would you like to use?, choose `Hello World Worker`.
- For Which language do you want to use?, choose `JavaScript`.
- For Do you want to use git for version control?, choose `Yes`.
- For Do you want to deploy your application?, choose `No` (we will be making some changes before deploying).
In your project directory, C3 has generated several files.
What files did C3 create?
- `wrangler.toml`: Your Wrangler configuration file.
- `worker.js` (in `/src`): A minimal `'Hello World!'` Worker written in ES module syntax.
- `package.json`: A minimal Node dependencies configuration file.
- `package-lock.json`: Refer to the `npm` documentation on `package-lock.json`.
- `node_modules`: Refer to the `npm` documentation on `node_modules`.
Now, move into your newly created directory:
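Assuming the placeholder name used above:

```sh
cd rag-ai-tutorial
```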
The Workers command-line interface, Wrangler, allows you to create, test, and deploy your Workers projects. C3 will install Wrangler in projects by default.
After you have created your first Worker, run the `wrangler dev` command in the project directory to start a local server for developing your Worker. This will allow you to test your Worker locally during development.
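From your project directory:

```sh
npx wrangler dev
```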
You will now be able to go to http://localhost:8787 to see your Worker running. Any changes you make to your code will trigger a rebuild, and reloading the page will show you the up-to-date output of your Worker.
To begin using Cloudflare’s AI products, you can add the `ai` block to `wrangler.toml`. This will set up a binding to Cloudflare’s AI models in your code that you can use to interact with the available AI models on the platform.
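A minimal example, assuming you name the binding `AI`:

```toml
[ai]
binding = "AI"
```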
This example features the `@cf/meta/llama-3-8b-instruct` model, which generates text.
Now, find the `src/index.js` file. Inside the `fetch` handler, you can query the LLM binding:
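The sketch below assumes the binding is named `AI`, matching the `wrangler.toml` block above, and uses an example prompt:

```js
export default {
  async fetch(request, env, ctx) {
    // Query the text-generation model through the AI binding.
    const answer = await env.AI.run("@cf/meta/llama-3-8b-instruct", {
      messages: [{ role: "user", content: "What is the square root of 9?" }],
    });

    // Return the model's raw response as JSON.
    return new Response(JSON.stringify(answer));
  },
};
```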
By querying the LLM binding, we can interact with a Cloudflare AI large language model directly in our code.
You can deploy your Worker using `wrangler`:
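```sh
npx wrangler deploy
```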
Making a request to your Worker will now return a response from the LLM binding.
Embeddings allow you to add additional capabilities to the language models you can use in your Cloudflare AI projects. This is done via Vectorize, Cloudflare’s vector database.
To begin using Vectorize, create a new embeddings index using `wrangler`. This index will store vectors with 768 dimensions, and will use cosine similarity to determine which vectors are most similar to each other:
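```sh
# creates an index named "vector-index" with 768 dimensions, using cosine similarity
npx wrangler vectorize create vector-index --dimensions=768 --metric=cosine
```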
Then, add the configuration details for your new Vectorize index to `wrangler.toml`:
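For example, assuming the binding name `VECTOR_INDEX`:

```toml
[[vectorize]]
binding = "VECTOR_INDEX"
index_name = "vector-index"
```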
A vector index allows you to store a collection of vectors: arrays of floating point numbers, one per dimension, used to represent your data. When you want to query the vector database, you convert your query into the same vector format. Vectorize is designed to efficiently determine which stored vectors are most similar to your query.
To implement the search feature, you must set up a D1 database from Cloudflare. D1 stores your app’s data, which you then convert into a vector format. When a search matches a vector, you can return the corresponding data to the user.
Create a new D1 database using `wrangler`:
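```sh
# "database" is a placeholder; choose your own database name
npx wrangler d1 create database
```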
Then, paste the configuration details output from the previous command into `wrangler.toml`:
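The output will look something like this (your `database_id` will differ):

```toml
[[d1_databases]]
binding = "DB"
database_name = "database"
database_id = "<YOUR_DATABASE_ID>"
```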
In this application, we’ll create a `notes` table in D1, which will allow us to store notes and later retrieve them in Vectorize. To create this table, run a SQL command using `wrangler d1 execute`:
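A minimal schema is enough here: an auto-incrementing `id` and the note `text` (the database name `database` is the placeholder from above):

```sh
npx wrangler d1 execute database --remote --command "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY AUTOINCREMENT, text TEXT NOT NULL)"
```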
Now, we can add a new note to our database using `wrangler d1 execute`:
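For example (any text will do):

```sh
npx wrangler d1 execute database --remote --command "INSERT INTO notes (text) VALUES ('The best pizza topping is pepperoni')"
```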
To expand your Workers function to handle multiple routes, we will add `hono`, a routing library for Workers. This will allow us to create a new route for adding notes to our database. Install `hono` using `npm`:
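```sh
npm install hono
```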
Then, import `hono` into your `src/index.js` file. You should also update the `fetch` handler to use `hono`:
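A sketch of the change, keeping the same model and example prompt as before:

```js
import { Hono } from "hono";

// Hono replaces the bare fetch handler and gives us per-path routing.
const app = new Hono();

app.get("/", async (c) => {
  const answer = await c.env.AI.run("@cf/meta/llama-3-8b-instruct", {
    messages: [{ role: "user", content: "What is the square root of 9?" }],
  });
  return c.json(answer);
});

export default app;
```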
This will establish a route at the root path `/` that is functionally equivalent to the previous version of your application. Now, we can add a new route for adding notes to our database.
This example features the `@cf/baai/bge-base-en-v1.5` model, which can be used to create an embedding. Embeddings are stored in and retrieved from our vector database, Vectorize. The user’s query is also turned into an embedding so that it can be used for searching within Vectorize.
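A sketch of the new route, assuming the `DB` and `VECTOR_INDEX` binding names from the configuration above:

```js
app.post("/notes", async (c) => {
  const { text } = await c.req.json();
  if (!text) return c.text("Missing text", 400);

  // Insert the note into D1 and get the id of the new row.
  const { results } = await c.env.DB.prepare(
    "INSERT INTO notes (text) VALUES (?) RETURNING *",
  )
    .bind(text)
    .run();
  const record = results.length ? results[0] : null;
  if (!record) return c.text("Failed to create note", 500);

  // Convert the note text into a 768-dimension embedding.
  const { data } = await c.env.AI.run("@cf/baai/bge-base-en-v1.5", {
    text: [text],
  });
  const values = data[0];
  if (!values) return c.text("Failed to generate vector embedding", 500);

  // Upsert the embedding into Vectorize, keyed by the note's D1 id.
  const { id } = record;
  const inserted = await c.env.VECTOR_INDEX.upsert([
    { id: id.toString(), values },
  ]);

  return c.json({ id, text, inserted });
});
```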
This function does the following things:
- Parse the JSON body of the request to get the `text` field.
- Insert a new row into the `notes` table in D1, and retrieve the `id` of the new row.
- Convert the `text` into a vector using the `embeddings` model of the LLM binding.
- Upsert the `id` and `vectors` into the `vector-index` index in Vectorize.
- Return the `id` and `text` of the new note as JSON.
By doing this, you will create a new vector representation of the note, which can be used to retrieve the note later.
To complete your code, you can update the root path (`/`) to query Vectorize. You will convert the query into a vector, and then use the `vector-index` index to find the most similar vectors.
Since we are using cosine similarity, the vectors with the highest cosine similarity will be the most similar to the query. We can introduce a `SIMILARITY_CUTOFF` to only return vectors that are above a certain similarity threshold. In this case, we will use a cutoff of `0.75`, but you can adjust this value to suit your needs.
We will also specify the `topK` parameter as part of the optional parameters to the `query` function. The `topK` parameter limits the number of vectors returned by the function. For instance, providing a `topK` of 1 will only return the most similar vector based on the query. You may customize this for your own needs.
With the list of similar vectors, you can retrieve the notes that match the record IDs stored alongside those vectors. You can insert the text of those notes as context into the prompt for the LLM binding. We’ll update the prompt to include the context, and to ask the LLM to use the context when responding.
Finally, you can query the LLM binding to get a response.
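Putting it all together, here is a sketch of the updated root route, assuming the bindings configured earlier. (Depending on your Vectorize API version, each match exposes the stored key as `id` or `vectorId`; adjust accordingly.)

```js
const SIMILARITY_CUTOFF = 0.75;

app.get("/", async (c) => {
  const question = c.req.query("text") || "What is the square root of 9?";

  // Convert the question into a vector embedding.
  const embeddings = await c.env.AI.run("@cf/baai/bge-base-en-v1.5", {
    text: question,
  });
  const vectors = embeddings.data[0];

  // Find the most similar stored vector, keeping only strong matches.
  const vectorQuery = await c.env.VECTOR_INDEX.query(vectors, { topK: 1 });
  const vecIds = vectorQuery.matches
    .filter((vec) => vec.score > SIMILARITY_CUTOFF)
    .map((vec) => vec.id); // or vec.vectorId on older Vectorize versions

  // Retrieve the matching notes from D1.
  let notes = [];
  if (vecIds.length) {
    const query = `SELECT * FROM notes WHERE id IN (${vecIds.join(", ")})`;
    const { results } = await c.env.DB.prepare(query).all();
    if (results) notes = results.map((row) => row.text);
  }

  // Insert any retrieved notes as context ahead of the user's question.
  const contextMessage = notes.length
    ? `Context:\n${notes.map((note) => `- ${note}`).join("\n")}`
    : "";
  const systemPrompt =
    "When answering the question or responding, use the context provided, if it is provided and relevant.";

  const { response: answer } = await c.env.AI.run("@cf/meta/llama-3-8b-instruct", {
    messages: [
      ...(notes.length ? [{ role: "system", content: contextMessage }] : []),
      { role: "system", content: systemPrompt },
      { role: "user", content: question },
    ],
  });

  return c.text(answer);
});
```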
If you did not deploy your Worker during step 1, deploy your Worker via Wrangler to a `*.workers.dev` subdomain, or a Custom Domain, if you have one configured. If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up.
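Deploy with the same command as before:

```sh
npx wrangler deploy
```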
Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.
To do more:
- Explore the reference diagram for a Retrieval Augmented Generation (RAG) Architecture.
- Review Cloudflare’s AI documentation.
- Review Tutorials to build projects on Workers.
- Explore Examples to experiment with copy and paste Worker code.
- Understand how Workers works in Reference.
- Learn about Workers features and functionality in Platform.
- Set up Wrangler to programmatically create, test, and deploy your Worker projects.