DeepSeek

Send Prompt for JSON response

Sends a prompt to the LLM and generates a completion, returning the response message in JSON format.
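
Under the hood, this action amounts to a single chat-completions request. Below is a minimal sketch of the equivalent direct call in Python, assuming DeepSeek's OpenAI-compatible endpoint; the API key and prompt contents are placeholders, not values taken from this page.

    # Minimal sketch, assuming DeepSeek's OpenAI-compatible chat completions API.
    # DEEPSEEK_API_KEY and the prompt contents are placeholders.
    import os
    import requests

    resp = requests.post(
        "https://api.deepseek.com/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
        json={
            "model": "deepseek-chat",
            "messages": [
                # JSON mode expects the word "json" to appear in the prompt.
                {"role": "system", "content": "Reply in JSON with a 'winner' key."},
                {"role": "user", "content": "Who won the 2020 World Series?"},
            ],
            "response_format": {"type": "json_object"},  # ask for a JSON object
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
    # e.g. {"winner":"Los Angeles Dodgers"}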

DeepSeek Action
Customization Options

Configurable fields you can adjust in your automation (a sketch after this list shows how they plausibly map onto the underlying request).

  • Rules
  • Filter Rules
  • System message
  • New user message
  • Model
  • Temperature
  • Max Tokens
  • Remove potential surrounding quotation marks from LLM response
  • Customize tags
  • Rename output variables configuration
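
As a rough illustration, the fields above line up with the request parameters like this; the exact field-to-parameter mapping is an assumption, not documented on this page.

    # Hypothetical mapping of the customization fields onto request parameters.
    system_message = "Reply in JSON with a 'winner' key."   # System message
    new_user_message = "Who won the 2020 World Series?"     # New user message

    payload = {
        "model": "deepseek-chat",   # Model
        "temperature": 0.2,         # Temperature: 0 = deterministic, higher = more varied
        "max_tokens": 512,          # Max Tokens: upper bound on the completion length
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": new_user_message},
        ],
        "response_format": {"type": "json_object"},
    }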



Information provided

When executed, this operation returns the following data, which can be used in later steps of the same automated task (a sketch after this list shows how each tag plausibly maps onto the raw API response).

  • Tags

  • JSON Response {{json_response}}

    Predicted completion in JSON format.
    e.g. {"winner":"Los Angeles Dodgers"}

  • Finish reason {{finish_reason}}

Reason why the model stopped generating text.
e.g. length

  • Prompt tokens {{prompt_tokens}}

    Number of tokens in the Prompt. 1 token ~= 4 chars in English

  • Completion tokens {{completion_tokens}}

    Number of tokens in the Completion. 1 token ~= 4 chars in English

  • Total tokens {{total_tokens}}

    Prompt tokens + Completion tokens
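
To make the tags concrete, here is a sketch of how they plausibly correspond to the fields of an OpenAI-compatible chat-completions response body; the values are illustrative and the field-to-tag mapping is an assumption.

    # Illustrative response body in OpenAI-compatible shape (values made up).
    data = {
        "choices": [{
            "message": {"content": '{"winner":"Los Angeles Dodgers"}'},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 25, "completion_tokens": 9, "total_tokens": 34},
    }

    json_response     = data["choices"][0]["message"]["content"]  # {{json_response}}
    finish_reason     = data["choices"][0]["finish_reason"]       # {{finish_reason}}: "stop", "length", ...
    prompt_tokens     = data["usage"]["prompt_tokens"]            # {{prompt_tokens}}
    completion_tokens = data["usage"]["completion_tokens"]        # {{completion_tokens}}
    total_tokens      = data["usage"]["total_tokens"]             # {{total_tokens}} = prompt + completion

    # The "Remove potential surrounding quotation marks" option is roughly:
    json_response = json_response.strip().strip('"')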



