Text classification with kluster.ai API and Bespoke Curator¶
This notebook walks through the same example as our previous Text classification notebook, but this time we'll use Bespoke Curator instead of the OpenAI Python library.
To recap, the notebook uses the kluster.ai batch API to classify a dataset based on a predefined set of categories.
The example uses an extract from the IMDB top 1000 movies dataset and categorizes them into "Action," "Adventure," "Comedy," "Crime," "Documentary," "Drama," "Fantasy," "Horror," "Romance," or "Sci-Fi."
You can adapt this example by using your data and categories relevant to your use case. With this approach, you can effortlessly process datasets of any scale, big or small, and obtain categorized results powered by a state-of-the-art language model.
Prerequisites¶
Before getting started, ensure you have the following:
- A kluster.ai account - sign up on the kluster.ai platform if you don't have one
- A kluster.ai API key - after signing in, go to the API Keys section and create a new key. For detailed instructions, check out the Get an API key guide
Setup¶
In this notebook, we'll use Python's getpass module to input the key safely. After execution, please provide your unique kluster.ai API key (ensure no spaces).
from getpass import getpass
api_key = getpass("Enter your kluster.ai API key: ")
Next, ensure you've installed the Bespoke Curator Python library:
pip install -q bespokelabs-curator
Now that we have the library, we can initialize the LLM object for batch inference. Note that Curator supports kluster.ai natively, so you only need to provide the model to use, your API key, and the completion window.
This example uses klusterai/Meta-Llama-3.1-8B-Instruct-Turbo, but feel free to comment it out and uncomment any other model you want to try.
Please refer to the Supported models section for a list of the models we support.
from bespokelabs import curator
# Models
#model="deepseek-ai/DeepSeek-R1"
#model="deepseek-ai/DeepSeek-V3-0324"
model="klusterai/Meta-Llama-3.1-8B-Instruct-Turbo"
#model="klusterai/Meta-Llama-3.3-70B-Instruct-Turbo"
#model="Qwen/Qwen2.5-VL-7B-Instruct"
llm = curator.LLM(
    model_name=model,
    batch=True,
    backend="klusterai",
    backend_params={"api_key": api_key, "completion_window": "24h"},
)
Get the data¶
With the Curator LLM object ready, let's define the data and prompt.
This notebook includes a preloaded sample dataset derived from the Top 1000 IMDb Movies dataset. It contains movie descriptions ready for classification. No additional setup is needed. Proceed to the next steps to begin working with this data.
For this particular scenario, the prompt consists of the request to the model and the data (movie) to be classified. Because this is a batch job, each separate request must contain both.
movies = ["Breakfast at Tiffany's: A young New York socialite becomes interested in a young man who has moved into her apartment building, but her past threatens to get in the way.",
"Giant: Sprawling epic covering the life of a Texas cattle rancher and his family and associates.",
"From Here to Eternity: In Hawaii in 1941, a private is cruelly punished for not boxing on his unit's team, while his captain's wife and second-in-command are falling in love.",
"Lifeboat: Several survivors of a torpedoed merchant ship in World War II find themselves in the same lifeboat with one of the crew members of the U-boat that sank their ship.",
"The 39 Steps: A man in London tries to help a counter-espionage Agent. But when the Agent is killed, and the man stands accused, he must go on the run to save himself and stop a spy ring which is trying to steal top secret information."]
prompts = [f"Classify the main genre of the given movie description based on the following genres (Respond with only the genre): “Action”, “Adventure”, “Comedy”, “Crime”, “Documentary”, “Drama”, “Fantasy”, “Horror”, “Romance”, “Sci-Fi”.\n{movie}" for movie in movies]
# Log the prompt
for prompt in prompts:
print(prompt)
Classify the main genre of the given movie description based on the following genres (Respond with only the genre): “Action”, “Adventure”, “Comedy”, “Crime”, “Documentary”, “Drama”, “Fantasy”, “Horror”, “Romance”, “Sci-Fi”.
Breakfast at Tiffany's: A young New York socialite becomes interested in a young man who has moved into her apartment building, but her past threatens to get in the way.
Classify the main genre of the given movie description based on the following genres (Respond with only the genre): “Action”, “Adventure”, “Comedy”, “Crime”, “Documentary”, “Drama”, “Fantasy”, “Horror”, “Romance”, “Sci-Fi”.
Giant: Sprawling epic covering the life of a Texas cattle rancher and his family and associates.
Classify the main genre of the given movie description based on the following genres (Respond with only the genre): “Action”, “Adventure”, “Comedy”, “Crime”, “Documentary”, “Drama”, “Fantasy”, “Horror”, “Romance”, “Sci-Fi”.
From Here to Eternity: In Hawaii in 1941, a private is cruelly punished for not boxing on his unit's team, while his captain's wife and second-in-command are falling in love.
Classify the main genre of the given movie description based on the following genres (Respond with only the genre): “Action”, “Adventure”, “Comedy”, “Crime”, “Documentary”, “Drama”, “Fantasy”, “Horror”, “Romance”, “Sci-Fi”.
Lifeboat: Several survivors of a torpedoed merchant ship in World War II find themselves in the same lifeboat with one of the crew members of the U-boat that sank their ship.
Classify the main genre of the given movie description based on the following genres (Respond with only the genre): “Action”, “Adventure”, “Comedy”, “Crime”, “Documentary”, “Drama”, “Fantasy”, “Horror”, “Romance”, “Sci-Fi”.
The 39 Steps: A man in London tries to help a counter-espionage Agent. But when the Agent is killed, and the man stands accused, he must go on the run to save himself and stop a spy ring which is trying to steal top secret information.
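For context, batch APIs of this kind typically consume a JSONL input file in which each line is a self-contained chat-completion request. The following is a minimal sketch of what such a file could look like; the field layout follows the OpenAI-style batch format, and the exact file Curator generates for kluster.ai may differ.

```python
import json

# Two abbreviated example prompts; in the notebook these come from the
# prompts list built above.
prompts = [
    "Classify the main genre of the given movie description (Respond with only the genre).\nGiant: Sprawling epic covering the life of a Texas cattle rancher.",
    "Classify the main genre of the given movie description (Respond with only the genre).\nLifeboat: Survivors of a torpedoed ship share a lifeboat with a U-boat crewman.",
]

# One JSON object per line; each line is an independent request.
requests = [
    {
        "custom_id": f"request-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "klusterai/Meta-Llama-3.1-8B-Instruct-Turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(prompts)
]

batch_jsonl = "\n".join(json.dumps(r) for r in requests)
```

Curator builds, uploads, and tracks this file for you, which is exactly the boilerplate the next step avoids.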
Perform batch inference with Curator¶
Now that everything is set, we can execute the inference job. With Curator, it's extremely simple: we just pass the prompts to the LLM object and log the response.
responses = llm(prompts)
[04/17/25 10:50:47] INFO Running OpenAIBatchRequestProcessor completions with model: klusterai/Meta-Llama-3.1-8B-Instruct-Turbo base_request_processor.py:131
                    INFO Using cached requests. If you want to regenerate the dataset, disable or delete the cache. See https://docs.bespokelabs.ai/bespoke-curator/tutorials/automatic-recovery-and-caching#disable-caching for more information. base_request_processor.py:212
                    INFO Loaded existing tracker from /Users/kevin/.cache/curator/a879796c1be047b5/batch_objects.jsonl base_batch_request_processor.py:301
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% • Time Elapsed 0:00:04 • Time Remaining 0:00:00
Curator Viewer: Disabled (set HOSTED_CURATOR_VIEWER=1 to view your data live at https://curator.bespokelabs.ai)
Batches: Total: 1 • Submitted: 0⋯ • Downloaded: 1✓
Requests: Total: 5 • Submitted: 0⋯ • Succeeded: 5✓ • Failed: 0✗
Tokens: Avg Input: 133 • Avg Output: 3
Cost: Current: $0.000 • Projected: $0.000 • Rate: $0.000/request
Model: klusterai/Meta-Llama-3.1-8B-Instruct-Turbo
Model Pricing: Per 1M tokens: Input: $0.050 • Output: $0.050
Final Curator Statistics

Model
    Model: klusterai/Meta-Llama-3.1-8B-Instruct-Turbo
Batches
    Total Batches: 1
    Submitted: 0
    Downloaded: 1
Requests
    Total Requests: 5
    Successful: 5
    Failed: 0
Tokens
    Total Tokens Used: 0
    Total Input Tokens: 664
    Total Output Tokens: 14
    Average Tokens per Request: 0
    Average Input Tokens: 132
    Average Output Tokens: 2
Costs
    Total Cost: $0.000
    Projected Remaining Cost: $0.000
    Projected Total Cost: $0.000
    Average Cost per Request: $0.000
    Input Cost per 1M Tokens: $0.050
    Output Cost per 1M Tokens: $0.050
Performance
    Total Time: 53.08s
    Average Time per Request: 10.62s
    Requests per Minute: 5.7
    Input Tokens per Minute: 750.6
    Output Tokens per Minute: 15.8
[04/17/25 10:50:52] INFO Read 5 responses. base_request_processor.py:442
                    INFO Finalizing writer base_request_processor.py:451
                    INFO Creating a file with all failed requests base_request_processor.py:460
                    INFO Created file with failed requests at /Users/kevin/.cache/curator/a879796c1be047b5/failed_requests.jsonl base_request_processor.py:488
Lastly, let's print the response.
responses['response']
['Drama', 'Drama', 'Drama', 'Drama', 'Action']
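To make the results easier to read, you can pair each predicted genre with its movie title. The following is a minimal sketch; it assumes responses['response'] preserves the order of the input prompts, and it hard-codes the sample outputs from above so the snippet stands alone.

```python
# Predicted genres from the batch job (responses['response'] in the
# notebook), in the same order as the input prompts.
genres = ["Drama", "Drama", "Drama", "Drama", "Action"]

# Movie titles in prompt order, extracted from the descriptions above.
titles = [
    "Breakfast at Tiffany's",
    "Giant",
    "From Here to Eternity",
    "Lifeboat",
    "The 39 Steps",
]

results = dict(zip(titles, genres))
for title, genre in results.items():
    print(f"{title}: {genre}")
```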
Summary¶
This tutorial used the chat completion endpoint and Bespoke Curator to perform a simple text classification task with batch inference. This particular example classified a series of movies based on their descriptions.
Using Curator, submitting a batch job is extremely simple. It handles all the steps of creating the file, uploading it, submitting the batch job, monitoring the job, and retrieving results. Moreover, kluster.ai is natively supported, making things even easier!
Kluster.ai's batch API empowers you to scale your workflows seamlessly, making it an invaluable tool for processing extensive datasets. As next steps, feel free to create your own dataset, or expand on top of this existing example. Good luck!
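As a concrete starting point for your own dataset, here's one way you might build prompts from a CSV file. This is a minimal sketch: the column names (Series_Title, Overview) are assumptions borrowed from the IMDb dataset, so adjust them to match your data; the in-memory CSV stands in for a real file you'd open from disk.

```python
import csv
import io

# In practice you'd read a real file, e.g. open("your_movies.csv", newline="").
# An in-memory CSV keeps this sketch self-contained.
sample_csv = io.StringIO(
    "Series_Title,Overview\n"
    "Alien,The crew of a commercial spacecraft encounters a deadly lifeform.\n"
    "Airplane!,A man afraid to fly must land a plane after the crew falls ill.\n"
)

genres = (
    '"Action", "Adventure", "Comedy", "Crime", "Documentary", '
    '"Drama", "Fantasy", "Horror", "Romance", "Sci-Fi"'
)

# Build one prompt per row, mirroring the format used earlier in the notebook.
prompts = [
    f"Classify the main genre of the given movie description based on the "
    f"following genres (Respond with only the genre): {genres}.\n"
    f"{row['Series_Title']}: {row['Overview']}"
    for row in csv.DictReader(sample_csv)
]

print(len(prompts))  # 2
```

These prompts can then be submitted with the same Curator LLM object as before: responses = llm(prompts).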