Quickstart

Get started with Composio

This guide shows how to build a workflow that stars a GitHub repository using AI. Composio discovers and provides the relevant tools to the LLM and handles their execution.

  1. 🔑 Get your Composio API key
  2. 🔐 Configure GitHub integration
  3. 🛠 Discover and fetch relevant tools
  4. 🧠 Pass tools to an LLM
  5. ⭐ Execute tools to star a repository
Before proceeding, ensure you have installed Composio!

Getting your API key

Before you begin, you’ll need a Composio account. Sign up here if you haven’t yet.

Once done, you can generate the API key through the dashboard or command-line tool.

1. Login

Ensure you have uv installed!

```shell
$ uvx --from composio-core composio login
```

To view the API key:

```shell
$ uvx --from composio-core composio whoami
```
2. Store the API Key

When building, store the API key in a .env file:

```shell
$ echo "COMPOSIO_API_KEY=YOUR_API_KEY" >> .env
```

Or export it as an environment variable:

```shell
$ export COMPOSIO_API_KEY=YOUR_API_KEY
```

Make sure not to leak your Composio API key. Anyone with access to it can access your authenticated applications.
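As a quick sanity check, you can fail fast when the key is missing instead of hitting a confusing authentication error later. A minimal sketch using python-dotenv (the variable name matches the export above):

```python
import os

from dotenv import load_dotenv

# Load variables from .env into the process environment
load_dotenv()

# Fail fast with a clear message if the key is missing
if not os.getenv("COMPOSIO_API_KEY"):
    raise RuntimeError("COMPOSIO_API_KEY is not set; add it to .env or export it.")
```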

Setting up the GitHub integration

Before writing any code, you’ll need to connect your GitHub account. Choose your preferred method:

Add the GitHub integration through the CLI:

```shell
$ uvx --from composio-core composio add github
```

Follow the instructions in the CLI to authenticate and connect your GitHub account.
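If you'd rather stay in Python, the SDK can also start the connection flow. This is a sketch based on Composio's initiate_connection API; attribute names such as redirectUrl may differ between SDK versions:

```python
from composio_openai import App, ComposioToolSet

toolset = ComposioToolSet()

# Start an OAuth connection request for GitHub
connection_request = toolset.initiate_connection(app=App.GITHUB)

# Open this URL in a browser to authorize your GitHub account
print(connection_request.redirectUrl)
```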

Building the application

After connecting GitHub, create the LLM workflow:

1. Initialize Clients

```python
from openai import OpenAI
from composio_openai import ComposioToolSet
from dotenv import load_dotenv

load_dotenv()

client = OpenAI()
toolset = ComposioToolSet()
```
2. Discover and Fetch Actions

```python
# Find relevant actions for our task
actions = toolset.find_actions_by_use_case(
    use_case="star a repo, print octocat",
    advanced=True,
)

# Get the tools for these actions
tools = toolset.get_tools(actions=actions)
```
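Before handing the tools to the model, you can print the discovered actions to verify what the use-case search matched:

```python
# Inspect which actions Composio matched for the use case
for action in actions:
    print(action)
```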
3. Implement Tool Calling

Breaking the tool-calling process into smaller steps makes it easier to follow:

  1. First, define the task and set up the conversation with the LLM.
  2. Then, enter a loop to handle the interaction between the LLM and tools.
  3. Finally, process and store the results of each tool call, and exit the loop when the task is complete.
1. Define the Task

```python
# Define the task for the LLM
task = "star composiohq/composio and print me an octocat."

# Set up the initial conversation
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": task},
]
```
2. Send Request to LLM

```python
response = client.chat.completions.create(
    model="gpt-4o",
    tools=tools,  # The tools we prepared earlier
    messages=messages,
)
```

The LLM examines the task and available tools, then decides which tools to call and in what order.
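You can inspect the model's choices directly; these are standard OpenAI SDK fields on the response:

```python
# Print the name and arguments of each tool call the LLM requested
for tool_call in response.choices[0].message.tool_calls or []:
    print(tool_call.function.name, tool_call.function.arguments)
```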

3. Handle Tools

Note that this snippet runs inside the main loop built in the next step, which is why it can break out of the loop.

```python
# Check if the LLM wants to use any tools
if response.choices[0].finish_reason != "tool_calls":
    # If no tools are needed, just print the response
    print(response.choices[0].message.content)
    break

# Execute the tool calls
result = toolset.handle_tool_calls(response)

# Store the conversation history:
# 1. Store the LLM's tool call request
messages.append({
    "role": "assistant",
    "content": "",  # Empty content since we're using tools
    "tool_calls": response.choices[0].message.tool_calls,
})

# 2. Store the tool's response
for tool_call in response.choices[0].message.tool_calls:
    messages.append({
        "role": "tool",
        "content": str(result),
        "tool_call_id": tool_call.id,
    })
```

This process involves three key steps:

  1. Check if the LLM wants to use tools.
  2. Execute the requested tool calls.
  3. Store both the request and result in the conversation history.
4. Create a Loop

Here's how all these pieces work together in a continuous loop:

```python
# Main interaction loop
while True:
    try:
        # 1. Send the request to the LLM
        response = client.chat.completions.create(
            model="gpt-4o",
            tools=tools,
            messages=messages,
        )

        # 2. Check if the LLM wants to use tools
        if response.choices[0].finish_reason != "tool_calls":
            print(response.choices[0].message.content)
            break

        # 3. Execute the tool calls
        result = toolset.handle_tool_calls(response)

        # 4. Store the conversation history
        messages.append({
            "role": "assistant",
            "content": "",
            "tool_calls": response.choices[0].message.tool_calls,
        })
        for tool_call in response.choices[0].message.tool_calls:
            messages.append({
                "role": "tool",
                "content": str(result),
                "tool_call_id": tool_call.id,
            })
    except Exception as error:
        print(f"Error: {error}")
        if hasattr(error, "response"):
            print(f"Response data: {error.response}")
        break
```

The loop continues until either:

  • The LLM completes the task and makes no more tool calls
  • Error handling catches an exception
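A while True loop can spin indefinitely if the model keeps requesting tools. One common safeguard, not part of the example above, is to cap the number of turns:

```python
MAX_TURNS = 10  # assumed limit; tune for your workload

for _ in range(MAX_TURNS):
    response = client.chat.completions.create(
        model="gpt-4o",
        tools=tools,
        messages=messages,
    )
    if response.choices[0].finish_reason != "tool_calls":
        print(response.choices[0].message.content)
        break
    result = toolset.handle_tool_calls(response)
    messages.append({
        "role": "assistant",
        "content": "",
        "tool_calls": response.choices[0].message.tool_calls,
    })
    for tool_call in response.choices[0].message.tool_calls:
        messages.append({
            "role": "tool",
            "content": str(result),
            "tool_call_id": tool_call.id,
        })
else:
    print("Stopped after reaching the turn limit.")
```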

Full code

Here’s the full code for the workflow.

```python
from dotenv import load_dotenv
from openai import OpenAI
from composio_openai import ComposioToolSet

load_dotenv()

client = OpenAI()
toolset = ComposioToolSet()

actions = toolset.find_actions_by_use_case(
    use_case="star a repo, print octocat",
    advanced=True,
)

tools = toolset.get_tools(actions=actions)
task = "star composiohq/composio and print me an octocat."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": task},
]

while True:
    response = client.chat.completions.create(
        model="gpt-4o",
        tools=tools,
        messages=messages,
    )

    if response.choices[0].finish_reason != "tool_calls":
        print(response.choices[0].message.content)
        break

    result = toolset.handle_tool_calls(response)

    messages.append(
        {
            "role": "assistant",
            "content": "",
            "tool_calls": response.choices[0].message.tool_calls,
        }
    )

    for tool_call in response.choices[0].message.tool_calls:
        messages.append(
            {
                "role": "tool",
                "content": str(result),
                "tool_call_id": tool_call.id,
            }
        )
```
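To run the workflow, assuming the script is saved as quickstart.py and the dependencies are installed (package names may differ slightly depending on your setup):

```shell
$ pip install composio_openai openai python-dotenv
$ python quickstart.py
```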

Need help? Join the Discord community!
