Text classification with kluster.ai API¶
Welcome to the text classification notebook with the kluster.ai Batch API!
This notebook showcases how to use the kluster.ai Batch API to classify a dataset against a predefined set of categories. In our example, we take an extract from the IMDb Top 1000 movies dataset and classify each description into one of “Action”, “Adventure”, “Comedy”, “Crime”, “Documentary”, “Drama”, “Fantasy”, “Horror”, “Romance”, or “Sci-Fi”. We use a movies dataset here, but you can adapt this example with your own data and categories relevant to your use case. With this approach, you can process datasets of any scale, from small collections to extensive corpora, and obtain categorized results powered by a state-of-the-art language model.
Simply provide your API key and run the preloaded cells to perform the classification. If you don’t have an API key, you can sign up for free on our platform.
Let’s get started!
Setup¶
Enter your personal kluster.ai API key (make sure it has no blank spaces). Remember to sign up if you don't have one yet.
from getpass import getpass
api_key = getpass("Enter your kluster.ai API key: ")
Enter your kluster.ai API key: ········
%pip install -q openai
Note: you may need to restart the kernel to use updated packages.
from openai import OpenAI
import pandas as pd
import time
import json
from IPython.display import clear_output, display
# Set up the client
client = OpenAI(
base_url="https://api.kluster.ai/v1",
api_key=api_key,
)
Get the data¶
This notebook includes a preloaded sample dataset derived from the Top 1000 IMDb Movies dataset. It contains movie descriptions ready for classification. No additional setup is needed—simply proceed to the next steps to begin working with this data.
df = pd.DataFrame({
"text": [
"Breakfast at Tiffany's: A young New York socialite becomes interested in a young man who has moved into her apartment building, but her past threatens to get in the way.",
"Giant: Sprawling epic covering the life of a Texas cattle rancher and his family and associates.",
"From Here to Eternity: In Hawaii in 1941, a private is cruelly punished for not boxing on his unit's team, while his captain's wife and second-in-command are falling in love.",
"Lifeboat: Several survivors of a torpedoed merchant ship in World War II find themselves in the same lifeboat with one of the crew members of the U-boat that sank their ship.",
"The 39 Steps: A man in London tries to help a counter-espionage Agent. But when the Agent is killed, and the man stands accused, he must go on the run to save himself and stop a spy ring which is trying to steal top secret information."
]
})
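If you'd like to double-check the sample before building requests, a quick self-contained peek works well (the two-row DataFrame below mirrors the structure above; nothing here is kluster.ai-specific):

```python
import pandas as pd

# Same structure as the sample above: one "text" column, one row per movie.
df = pd.DataFrame({
    "text": [
        "Breakfast at Tiffany's: A young New York socialite becomes interested in a young man.",
        "Giant: Sprawling epic covering the life of a Texas cattle rancher and his family.",
    ]
})

print(df.shape)  # (2, 1)
# Each entry starts with the title, so splitting on the first colon recovers it.
print(df["text"].str.split(":").str[0].tolist())  # ["Breakfast at Tiffany's", 'Giant']
```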
Batch inference¶
To execute the inference job, we’ll follow three straightforward steps:
- Create the inference file - we’ll generate a file with the desired requests to be processed by the model.
- Upload the inference file - once the file is ready, we’ll upload it to the kluster.ai platform using the API, where it will be queued for processing.
- Start the job - after the file is uploaded, we’ll initiate the job to process the uploaded data.
Everything is set up for you – just run the cells below to watch it all come together!
Create the Batch file¶
This example selects the klusterai/Meta-Llama-3.3-70B-Instruct-Turbo model. If you'd like to use a different model, feel free to change the model's name in the following cell. Please refer to our documentation for the list of supported models.
def create_inference_file(df):
inference_list = []
for index, row in df.iterrows():
content = row['text']
request = {
"custom_id": f"movie_classification-{index}",
"method": "POST",
"url": "/v1/chat/completions",
"body": {
"model": "klusterai/Meta-Llama-3.3-70B-Instruct-Turbo",
"temperature": 0.5,
"response_format": {"type": "json_object"},
"messages": [
{"role": "system", "content": 'Classify the main genre of the given movie description based on the following genres(Respond with only the genre): “Action”, “Adventure”, “Comedy”, “Crime”, “Documentary”, “Drama”, “Fantasy”, “Horror”, “Romance”, “Sci-Fi”.'},
{"role": "user", "content": content}
],
}
}
inference_list.append(request)
return inference_list
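Each line of the Batch file must be a single JSON object with the four top-level fields the Batch endpoint expects: custom_id, method, url, and body. A minimal sanity check (the values below are placeholders, not the exact prompts used in this notebook):

```python
import json

# A minimal request entry mirroring what create_inference_file() builds.
request = {
    "custom_id": "movie_classification-0",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "klusterai/Meta-Llama-3.3-70B-Instruct-Turbo",
        "temperature": 0.5,
        "response_format": {"type": "json_object"},
        "messages": [
            {"role": "system", "content": "Classify the main genre of the given movie description."},
            {"role": "user", "content": "A movie description."},
        ],
    },
}

# Every JSONL line must serialize to a single object with exactly these keys.
assert set(request) == {"custom_id", "method", "url", "body"}
line = json.dumps(request)
```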
def save_inference_file(inference_list):
filename = "movie_classification_inference_request.jsonl"
with open(filename, 'w') as file:
for request in inference_list:
file.write(json.dumps(request) + '\n')
return filename
inference_list = create_inference_file(df)
filename = save_inference_file(inference_list)
Let’s preview what that request file looks like:
!head -n 1 movie_classification_inference_request.jsonl
{"custom_id": "movie_classification-0", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "klusterai/Meta-Llama-3.3-70B-Instruct-Turbo", "temperature": 0.5, "response_format": {"type": "json_object"}, "messages": [{"role": "system", "content": "Classify the main genre of the given movie description based on the following genres(Respond with only the genre): \u201cAction\u201d, \u201cAdventure\u201d, \u201cComedy\u201d, \u201cCrime\u201d, \u201cDocumentary\u201d, \u201cDrama\u201d, \u201cFantasy\u201d, \u201cHorror\u201d, \u201cRomance\u201d, \u201cSci-Fi\u201d."}, {"role": "user", "content": "Breakfast at Tiffany's: A young New York socialite becomes interested in a young man who has moved into her apartment building, but her past threatens to get in the way."}]}}
Upload inference file to kluster.ai¶
Now that we’ve prepared our input file, it’s time to upload it to the kluster.ai platform.
inference_input_file = client.files.create(
file=open(filename, "rb"),
purpose="batch"
)
Start the job¶
Once the file has been successfully uploaded, we’re ready to start the inference job.
inference_job = client.batches.create(
input_file_id=inference_input_file.id,
endpoint="/v1/chat/completions",
completion_window="24h"
)
Check job progress¶
Now that the job has been created, your request is being processed. In the following section, we'll monitor the job's status to see how it's progressing.
def parse_json_objects(data_string):
if isinstance(data_string, bytes):
data_string = data_string.decode('utf-8')
json_strings = data_string.strip().split('\n')
json_objects = []
for json_str in json_strings:
try:
json_obj = json.loads(json_str)
json_objects.append(json_obj)
except json.JSONDecodeError as e:
print(f"Error parsing JSON: {e}")
return json_objects
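Note that parse_json_objects accepts both raw bytes (as returned by files.content later in this notebook) and plain strings, and it skips malformed lines rather than failing. Here's a self-contained check with synthetic data; the function is repeated so the snippet runs on its own:

```python
import json

def parse_json_objects(data_string):
    # Accept both bytes (as returned by files.content) and str.
    if isinstance(data_string, bytes):
        data_string = data_string.decode("utf-8")
    json_objects = []
    for json_str in data_string.strip().split("\n"):
        try:
            json_objects.append(json.loads(json_str))
        except json.JSONDecodeError as e:
            print(f"Error parsing JSON: {e}")
    return json_objects

# Two valid lines and one malformed line: the parser keeps only the good ones.
sample = b'{"custom_id": "movie_classification-0"}\n{"custom_id": "movie_classification-1"}\nnot-json'
parsed = parse_json_objects(sample)
print(len(parsed))  # 2
```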
all_completed = False
while not all_completed:
all_completed = True
output_lines = []
updated_job = client.batches.retrieve(inference_job.id)
if updated_job.status != "completed":
all_completed = False
completed = updated_job.request_counts.completed
total = updated_job.request_counts.total
output_lines.append(f"Job status: {updated_job.status} - Progress: {completed}/{total}")
else:
output_lines.append("Job completed!")
# Clear the output and display updated status
clear_output(wait=True)
for line in output_lines:
display(line)
if not all_completed:
time.sleep(10)
'Job completed!'
Get the results¶
With the job completed, we’ll now retrieve the results and review the responses generated for each request.
job = client.batches.retrieve(inference_job.id)
result_file_id = job.output_file_id
result = client.files.content(result_file_id).content
parse_json_objects(result)
[{'id': '67534b14a7b9464cf5d177cb', 'custom_id': 'movie_classification-0', 'response': {'status_code': 200, 'request_id': 'e492e8ff-124a-45a3-bae7-a701b1aa2b18', 'body': {'id': 'chat-7be62aaffc2f4d9bb0a507ddc18ffbd6', 'object': 'chat.completion', 'created': 1733511956, 'model': 'klusterai/Meta-Llama-3.3-70B-Instruct-Turbo', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 'Romance', 'tool_calls': []}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': None}], 'usage': {'prompt_tokens': 130, 'total_tokens': 133, 'completion_tokens': 3}, 'prompt_logprobs': None}}}, {'id': '67534b14a7b9464cf5d177cd', 'custom_id': 'movie_classification-1', 'response': {'status_code': 200, 'request_id': 'f7460587-ca35-458a-bfe9-58a181276bce', 'body': {'id': 'chat-57d71a10548c4f5b8300f7172338be13', 'object': 'chat.completion', 'created': 1733511956, 'model': 'klusterai/Meta-Llama-3.3-70B-Instruct-Turbo', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 'Drama', 'tool_calls': []}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': None}], 'usage': {'prompt_tokens': 116, 'total_tokens': 119, 'completion_tokens': 3}, 'prompt_logprobs': None}}}, {'id': '67534b14a7b9464cf5d177cf', 'custom_id': 'movie_classification-2', 'response': {'status_code': 200, 'request_id': 'f862ff04-6ee8-4844-bac1-2f40f450651e', 'body': {'id': 'chat-1e0f635fcaf94fc1b1b4ffdfb59dd3b6', 'object': 'chat.completion', 'created': 1733511956, 'model': 'klusterai/Meta-Llama-3.3-70B-Instruct-Turbo', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 'Drama', 'tool_calls': []}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': None}], 'usage': {'prompt_tokens': 137, 'total_tokens': 140, 'completion_tokens': 3}, 'prompt_logprobs': None}}}, {'id': '67534b14a7b9464cf5d177d1', 'custom_id': 'movie_classification-3', 'response': {'status_code': 200, 'request_id': '7151e8e3-640d-4ddb-be37-c06157e778e9', 'body': {'id': 
'chat-258da8796eaf4b88883ca6fe78fb4c8d', 'object': 'chat.completion', 'created': 1733511956, 'model': 'klusterai/Meta-Llama-3.3-70B-Instruct-Turbo', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 'Drama', 'tool_calls': []}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': None}], 'usage': {'prompt_tokens': 132, 'total_tokens': 135, 'completion_tokens': 3}, 'prompt_logprobs': None}}}, {'id': '67534b14a7b9464cf5d177d3', 'custom_id': 'movie_classification-4', 'response': {'status_code': 200, 'request_id': 'b48af784-5afa-48eb-8d1f-c5572498d7f9', 'body': {'id': 'chat-d595392201034df1a44980dfb85cf650', 'object': 'chat.completion', 'created': 1733511956, 'model': 'klusterai/Meta-Llama-3.3-70B-Instruct-Turbo', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 'Action', 'tool_calls': []}, 'logprobs': None, 'finish_reason': 'stop', 'stop_reason': None}], 'usage': {'prompt_tokens': 149, 'total_tokens': 151, 'completion_tokens': 2}, 'prompt_logprobs': None}}}]
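To turn these raw responses back into a tidy table, you can join each completion to its original row via the index encoded in custom_id. The snippet below uses a hand-built results list trimmed to the fields we need, shaped like the output above, so it runs without an API call:

```python
import pandas as pd

# Two entries shaped like the Batch output above (trimmed to the fields we need).
results = [
    {"custom_id": "movie_classification-0",
     "response": {"body": {"choices": [{"message": {"content": "Romance"}}]}}},
    {"custom_id": "movie_classification-1",
     "response": {"body": {"choices": [{"message": {"content": "Drama"}}]}}},
]

rows = []
for item in results:
    # The index we encoded in custom_id recovers the original DataFrame row.
    idx = int(item["custom_id"].rsplit("-", 1)[1])
    genre = item["response"]["body"]["choices"][0]["message"]["content"]
    rows.append({"row": idx, "genre": genre})

labels = pd.DataFrame(rows).sort_values("row")
print(labels["genre"].tolist())  # ['Romance', 'Drama']
```

With the real results list from parse_json_objects(result), the same loop lets you attach a genre column back onto the original df.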
Conclusion¶
You’ve successfully completed the classification request using the kluster.ai Batch API! This process showcases how you can efficiently handle and classify large amounts of data with ease. The Batch API empowers you to scale your workflows seamlessly, making it an invaluable tool for processing extensive datasets.