
Send Prompt for JSON response
Sends a prompt to GPT and generates a completion, returning the response message in JSON format.
OpenAI ChatGPT Action
This integration allows you to send a prompt to GPT and receive a response in JSON format, facilitating the automation of tasks that require natural language processing.
By using this function, you will obtain structured data such as the JSON response, the finish reason, and token counts, thereby optimizing your workflows.
Customization Options
Configurable fields you can adjust in your automation; a sketch of how these fields map to an API call follows the list.
- Rules (Enable/Disable)
- Filters (Apply only if enabled above)
- System message
- New user message
- Model
- Temperature
- Max Tokens to use
- Remove leading and trailing quotation marks from the response
- Customize output tags
- Rename output variables
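As a rough illustration of what this action does behind the scenes, the sketch below shows how the configurable fields above could map to a direct call to the OpenAI Chat Completions API using the official Python SDK. The model name, messages, and parameter values are placeholder assumptions, and the integration may handle the request differently internally.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder values standing in for the action's configurable fields
response = client.chat.completions.create(
    model="gpt-4o-mini",                      # "Model"
    temperature=0.2,                          # "Temperature"
    max_tokens=200,                           # "Max Tokens to use"
    response_format={"type": "json_object"},  # request a JSON-formatted completion
    messages=[
        # "System message"
        {"role": "system", "content": "Answer in JSON with a single 'winner' key."},
        # "New user message"
        {"role": "user", "content": "Who won the World Series in 2020?"},
    ],
)
```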
Information provided
When executed, this operation delivers the following data, which can be used later in the same automated task; a sketch of how a raw API response maps to these tags follows the list.
Tags
- JSON Response {{json_response}}: Predicted completion in JSON format. e.g. {"winner":"Los Angeles Dodgers"}
- Finish reason {{finish_reason}}: Reason why the resulting text terminated. e.g. length
- Prompt tokens {{prompt_tokens}}: Number of tokens in the prompt. 1 token ~= 4 chars in English
- Completion tokens {{completion_tokens}}: Number of tokens in the completion. 1 token ~= 4 chars in English
- Total tokens {{total_tokens}}: Prompt tokens + Completion tokens
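Continuing the sketch above, the fields of the API response would map onto these output tags roughly as follows. This is an illustrative assumption about how the data lines up, not the integration's actual internals.

```python
import json

choice = response.choices[0]

json_response = json.loads(choice.message.content)    # {{json_response}}, e.g. {"winner": "Los Angeles Dodgers"}
finish_reason = choice.finish_reason                   # {{finish_reason}}, e.g. "stop" or "length"
prompt_tokens = response.usage.prompt_tokens           # {{prompt_tokens}}
completion_tokens = response.usage.completion_tokens   # {{completion_tokens}}
total_tokens = response.usage.total_tokens             # {{total_tokens}} = prompt + completion tokens
```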