
Send Prompt for JSON Response
Sends a prompt to the LLM and generates a completion, returning the response message in JSON format.
Deepseek Action
Automating tasks with the Send Prompt for JSON Response integration from Deepseek lets you obtain structured, JSON-formatted responses from advanced language models. This functionality is ideal for developers and businesses looking to enhance their workflows with automatically generated, well-organized data.
With customization options such as model selection, temperature adjustment, and maximum token configuration, you can tailor responses to your specific needs (see the sketch after the options list below). The integration also reports details about the process, including the finish reason and token counts, giving you precise control over the generated responses.
Customization Options
Configurable fields you can adjust in your automation
- Rules
- Filter Rules
- System message
- New user message
- Model
- Temperature
- Max Tokens
- Remove potential surrounding quotation marks from LLM response
- Customize tags
- Rename output variables configuration
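Since Deepseek's API follows the OpenAI chat-completions format, the main configurable fields map onto a direct API call roughly as in the minimal Python sketch below. The endpoint, model name, and JSON output mode follow Deepseek's public API documentation; the credential and field values are placeholders you would replace with your own.

```python
# Minimal sketch of the equivalent direct API call, assuming Deepseek's
# OpenAI-compatible chat-completions endpoint and the official openai SDK.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder credential
    base_url="https://api.deepseek.com",  # Deepseek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",       # "Model" field
    temperature=0.2,             # "Temperature" field: lower = more deterministic
    max_tokens=256,              # "Max Tokens" field: cap on completion length
    response_format={"type": "json_object"},  # request a JSON-formatted reply
    messages=[
        # "System message" field: mentioning JSON here is required for JSON mode
        {"role": "system", "content": "Reply only with a JSON object."},
        # "New user message" field: the prompt itself
        {"role": "user", "content": "Who won the 2020 World Series?"},
    ],
)
```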
Information provided
When executed, this operation returns the following data, which can be used in later steps of the same automated task.
Tags
- JSON Response ({{json_response}}): Predicted completion in JSON format. E.g. {"winner":"Los Angeles Dodgers"}
- Finish reason ({{finish_reason}}): Reason why the generated text terminated. E.g. length
- Prompt tokens ({{prompt_tokens}}): Number of tokens in the prompt. 1 token ~= 4 chars in English.
- Completion tokens ({{completion_tokens}}): Number of tokens in the completion. 1 token ~= 4 chars in English.
- Total tokens ({{total_tokens}}): Prompt tokens + Completion tokens.
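Continuing the sketch above, these output tags correspond roughly to fields of the OpenAI-compatible response object, as in the following hedged example:

```python
import json

choice = response.choices[0]

# {{json_response}}: the predicted completion, e.g. {"winner": "Los Angeles Dodgers"}
data = json.loads(choice.message.content)

print(choice.finish_reason)                # {{finish_reason}}, e.g. "stop" or "length"
print(response.usage.prompt_tokens)        # {{prompt_tokens}}
print(response.usage.completion_tokens)    # {{completion_tokens}}
print(response.usage.total_tokens)         # {{total_tokens}} = prompt + completion tokens
```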