
Using ChatGPT Tool/Function calls with JavaScript and Node.js

At Outerform, we're hard at work revolutionizing how online forms and surveys are built by harnessing the power of modern LLMs like ChatGPT. One of the biggest challenges developers face when working with these models is reducing their randomness and triggering repeatable actions that bring external data into the conversation. That's where tool calls come in.

For example, let's say you want to check the price of a stock in a ChatGPT conversation. You could use a tool call to trigger a function that fetches the stock price from an API and returns it to the conversation.

While tool calls are easy to understand conceptually, there are a few key but poorly-documented requirements you need to be aware of to use them correctly in a conversation with the ChatGPT API.

ChatGPT Tool Calls with JavaScript and Node.js

To start, we define the prompt that specifies the conditions under which the tool call should be triggered. In this case, we want to call the get_stock_price function when the user asks to check the price of a stock. We then define the tool call schema with the function name, description, and parameters. Finally, we call the ChatGPT API with the prompt and tool call configuration.

const prompt = `
You are a stock trading bot that can help users trade public stocks.

If the user wants the price of a stock, call \`get_stock_price\` to fetch the price.
`

// Assume we have a ChatGPT Node.js wrapper that calls 
// https://api.openai.com/v1/chat/completions
const completion = await chatGpt.complete(
  [
    {
      role: 'system',
      content: prompt,
    },
    {
      role: 'user',
      content: 'What is the price of AAPL?',
    }
  ],
  {
    tool_choice: 'auto',
    tools: [
      {
        type: 'function',
        function: {
          name: 'get_stock_price',
          description: 'Fetch the current price of the given stock',
          parameters: {
            type: 'object',
            properties: {
              symbol: { type: 'string' },
            }
          }
        }
      }
    ]
  },
);

Note: the tool_choice parameter is set to auto to let the model decide when to trigger a tool call, and it may decide not to if the user-provided message doesn't match any of the tools we've defined. Other possible values are none to never call a tool, required to always call a tool, or a specific tool reference like {"type": "function", "function": {"name": "my_function"}} to force that tool to be called.
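For instance, to guarantee the price lookup always runs regardless of what the user says, you can pass a specific tool reference as tool_choice. Here's a sketch (the `chatGpt` wrapper remains our hypothetical client from above):

```javascript
// Forcing a specific tool instead of letting the model choose.
// The object below replaces the 'auto' value for tool_choice; it must
// name a tool that exists in the request's `tools` array.
const forcedToolChoice = {
  type: 'function',
  function: { name: 'get_stock_price' },
};

// Passed in place of 'auto' when building the request options, e.g.:
// await chatGpt.complete(messages, { tool_choice: forcedToolChoice, tools });
```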

When this call resolves, we receive a completion from the API with this general shape (some fields removed for brevity):

{
  "id": "chatcmpl-ID",
  "object": "chat.completion",
  "model": "gpt-4o-2024-05-13",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_ID",
            "type": "function",
            "function": {
              "name": "get_stock_price",
              "arguments": "{ \"symbol\": \"AAPL\" }"
            }
          }
        ]
      },
      "logprobs": null,
      "finish_reason": "tool_calls"
    }
  ]
}

Tool calls aren't actually that magical: the model simply picks the best-matching tool and constructs its arguments based on the schema we sent when calling the completions API.

Now that we have the response, all we need to do is check which tool call was triggered and run some corresponding code on our end.

const choice = completion.choices[0];

// Don't forget to store the returned tool call message to keep the context
// of the conversation.
// Note: there seems to be a bug in the ChatGPT API where the content field is
// empty for tool call responses, but you cannot send a message with content
// set to null in subsequent requests. To work around this, we remove the
// content field from the message before storing it.
const { content, ...messageWithoutContent } = choice.message;
messages.push({
  ...messageWithoutContent,
  id: nanoid(),
});

if (choice.message.tool_calls) {
  for (const tool of choice.message.tool_calls) {
    if (tool.type === 'function' && tool.function.name === 'get_stock_price') {
      const { symbol } = JSON.parse(tool.function.arguments);

      // Fetch the price from an API
      const res = await fetch('https://api.example.com/stock-price', {
        method: 'POST',
        body: JSON.stringify({ symbol }),
        headers: { 'Content-Type': 'application/json' },
      });

      const { price } = await res.json();
      console.log(`The price of ${symbol} is $${price}`);
    }
  }
}
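One practical caveat: `tool.function.arguments` is a string the model generates, so it can occasionally be malformed JSON or omit fields you expect. A small defensive helper (our own, not part of any API) keeps the handler from crashing:

```javascript
// Hypothetical helper: parse tool-call arguments defensively, returning
// null instead of throwing when the model emits malformed JSON or leaves
// out the field we need.
function parseToolArguments(rawArguments) {
  try {
    const args = JSON.parse(rawArguments);
    // Our schema doesn't mark `symbol` as required, so verify it exists.
    return typeof args.symbol === 'string' ? args : null;
  } catch {
    return null;
  }
}
```

With this in place, `parseToolArguments(tool.function.arguments)` replaces the bare `JSON.parse` call, and a `null` return can be handled by asking the model to retry.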

Using Tool Calls Correctly in Conversations

By far, the biggest challenge with tool calls is how to track calls and responses in the current conversation context window in order to avoid API errors or incorrect responses from the model.

To make sure you're using tool calls correctly, you need to push a new role: 'tool' message to the conversation context window with your tool call response. This way, the model can understand that the tool call has been executed and can continue the conversation from there.

With this, your context will contain two consecutive messages for the tool call: first the tool call message from the model, then your tool call response. It will look like this:

[
  {
    role: 'user',
    content: 'What is the price of AAPL?'
  },
  {
    role: 'assistant',
    content: null, /* See note above about the ChatGPT API bug requiring you to remove this field for future API calls */
    tool_calls: [...]
  },
  {
    role: 'tool',
    tool_call_id: 'call_ID',
    content: 'Your tool call response here'
  }
]

So, with that goal in mind, here's the code to add the tool call response to the conversation context:

// Assuming we have the response from the tool call as shown above
const toolCall = choice.message.tool_calls[0];
const toolCallId = toolCall.id;
const toolName = toolCall.function.name;

messages.push({
  id: nanoid(),
  role: 'tool',
  tool_call_id: toolCallId,

  // Track your tool call response data here, in whatever format you
  // need to process it later. In this example we record the entry type,
  // the tool that was called, the tool call id, and `result`, which is
  // our implementation-specific result. This is just an example and you
  // can structure this data however you like.
  content: JSON.stringify([
    {
      type: 'tool-result',
      toolName,
      toolCallId,
      result,
    },
    },
  ]),
});
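Finally, with both messages in the context, call the completions API once more so the model can turn the tool result into a natural-language answer for the user. A minimal sketch, assuming the same hypothetical `chatGpt` wrapper from earlier (passed in as a parameter here to keep the flow self-contained):

```javascript
// Sketch: after pushing the tool response, ask the model again so it can
// produce a natural-language answer based on the tool result.
async function finishToolConversation(chatGpt, messages, tools) {
  const completion = await chatGpt.complete(messages, {
    tool_choice: 'auto',
    tools, // same tool definitions as the first request
  });
  return completion.choices[0].message.content;
}
```

The model sees the `role: 'tool'` message, reads the price out of its content, and replies with something like "AAPL is currently trading at $189.84."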

Debugging Incorrect Tool Calls

Because LLMs are probabilistic models, they won't always trigger the tool call as expected. To make tool calls more reliable, fall back on old-fashioned prompt engineering to coax the model into calling the tool when you want it to. This can mean adding more context to the prompt, or providing more examples of when the tool should be called.

Unfortunately, there's no silver bullet for improving tool calls. More context, more examples, more specific instructions...these can all help the model make the right decision.
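For instance, an expanded version of our system prompt might spell out trigger phrases and a counter-example. The wording below is illustrative only, not a tested recipe:

```javascript
// Illustrative only: a more explicit system prompt with positive and
// negative examples of when the tool should fire.
const prompt = `
You are a stock trading bot that can help users trade public stocks.

Call \`get_stock_price\` whenever the user asks for the price, quote, or
current value of a stock, e.g. "What's AAPL at?" or "price of MSFT please".

Do NOT call \`get_stock_price\` for general questions about a company,
e.g. "What does Apple do?".
`;
```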

Conclusion

Tool calls are a fantastic way to expand the capabilities of your AI apps. We use them heavily at Outerform to turn users' natural language instructions into structured modifications to their forms and surveys, to let users ask questions about their form responses, and to generate intelligent digests of those responses.

If you'd like to see how we're using LLMs to make online form and survey building easier, we hope you'll give Outerform a try!
