Guide: Getting started with Klu Python SDK

by Stephen M. Walker II, Co-Founder / CEO

Step 1: Install the Klu SDK

First, install the Klu SDK by running the following command in your terminal:

pip install klu

Step 2: Import Klu

Next, import the Klu client in your Python script:

from klu import Klu

Step 3: Initialize Klu

Now, initialize the client with your API key, which you can find in your Klu workspace settings.

klu = Klu('YOUR_API_KEY')
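
Hardcoding the key is fine for a quick test, but in practice you'll usually read it from the environment. A minimal sketch, assuming the key is stored in a KLU_API_KEY environment variable:

import os

from klu import Klu

# Read the API key from the environment rather than hardcoding it.
# KLU_API_KEY is an assumed variable name -- use whatever your setup defines.
klu = Klu(os.environ['KLU_API_KEY'])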

Step 4: Create an Action

To create an action, you define a prompt template, choose a model, and attach the action to an app. The example below references an app GUID and a model GUID from your workspace, so fetch those first.
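
Here's a minimal sketch of fetching those identifiers. The apps.get and models.list method names are assumptions about the SDK's client interface, so confirm the exact calls in the developer docs.

# Look up the app and the available models in your workspace.
# Note: apps.get and models.list are assumed method names -- verify
# them against the Klu developer docs for your SDK version.
app = klu.apps.get('YOUR_APP_GUID')
models = klu.models.list()

With the app and a model in hand, create the action: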

action = klu.actions.create(
    name='Content Analyzer',
    description='Analyzes any given piece of text content',
    prompt_template='Analyze the following content: {{content}}',
    action_type='prompt',
    app_guid=app.guid,
    model_guid=models[0].guid
)

Step 5: Run the Action

Finally, you can run the action with your input:

result = klu.actions.prompt(action=action.guid, input={'content': 'Your text here'})
print(result)

This will print the generated text to the console.
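
Because the action GUID stays the same, you can reuse it for as many inputs as you like. Here's a small sketch that runs the same action over several pieces of content, using only the klu.actions.prompt call shown above:

documents = [
    'First piece of content to analyze',
    'Second piece of content to analyze',
]

for content in documents:
    # Reuse the same action for each input.
    result = klu.actions.prompt(action=action.guid, input={'content': content})
    print(result)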

And that's it! You've created and run your first Klu action using the Python SDK. Remember to replace 'YOUR_API_KEY' and 'Your text here' with your actual API key and the text you want to analyze, respectively.

If you want to learn more about the Klu Python SDK, you can check out the developer docs.
