| model | Model | STRING <details> <summary> Depends On </summary> provider </details> | The model name to use. | true |
| model | Model | STRING | ID of the model to use. | true |
| model | URL | STRING | URL of the inference endpoint. | true |
| text | Text | STRING | The text content to extract data from. | true |
| responseSchema | Response Schema | STRING | Define the desired structure for the structured data response. | true |
| additionalContext | Additional Context | STRING | Extra information to guide the extraction process. | false |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | false |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
#### Example JSON Structure

```json
{
  "label" : "Extract Data",
  "name" : "extractData",
  "parameters" : {
    "provider" : "",
    "model" : "",
    "text" : "",
    "responseSchema" : "",
    "additionalContext" : "",
    "maxTokens" : 1,
    "temperature" : 0.0
  },
  "type" : "aiText/v1/extractData"
}
```
#### Output

The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.
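Purely as an illustration of how these parameters might be filled in, here is a hypothetical configuration (the provider, model, text, and schema values are invented examples, not defaults of the component):

```json
{
  "label" : "Extract Data",
  "name" : "extractData",
  "parameters" : {
    "provider" : "OPENAI",
    "model" : "gpt-4o-mini",
    "text" : "Invoice #1042 was issued to Acme Corp on 2024-05-01 for $1,250.00.",
    "responseSchema" : "{\"type\":\"object\",\"properties\":{\"invoiceNumber\":{\"type\":\"string\"},\"customer\":{\"type\":\"string\"},\"date\":{\"type\":\"string\"},\"amount\":{\"type\":\"number\"}}}",
    "additionalContext" : "Amounts are in USD.",
    "maxTokens" : 256,
    "temperature" : 0.0
  },
  "type" : "aiText/v1/extractData"
}
```

With a schema like this, the action would be expected to return an object along the lines of `{"invoiceNumber": "1042", "customer": "Acme Corp", ...}`, though the exact output shape depends on the provider and input.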
### Mask

Name: mask

`Uses AI to detect and redact sensitive content from text.`

| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | false |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
#### Example JSON Structure

```json
{
  "label" : "Mask",
  "name" : "mask",
  "parameters" : {
    "provider" : "",
    "model" : "",
    "text" : "",
    "sensitiveKeywords" : [ "" ],
    "piiDetection" : [ "" ],
    "customRegexPatterns" : [ "" ],
    "maxTokens" : 1,
    "temperature" : 0.0
  },
  "type" : "aiText/v1/mask"
}
```
#### Output

___Sample Output:___

```
{text=Hello, my name is [REDACTED_1] and my email is [EMAIL_1]., maskMap={[EMAIL_1]=john@example.com, [REDACTED_1]=John Doe}}
```
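A hypothetical filled-in configuration that would produce output like the sample above (the keyword, PII category, and regex values are invented for illustration):

```json
{
  "label" : "Mask",
  "name" : "mask",
  "parameters" : {
    "provider" : "",
    "model" : "",
    "text" : "Hello, my name is John Doe and my email is john@example.com.",
    "sensitiveKeywords" : [ "John Doe" ],
    "piiDetection" : [ "EMAIL" ],
    "customRegexPatterns" : [ "\\b\\d{3}-\\d{2}-\\d{4}\\b" ],
    "maxTokens" : 256,
    "temperature" : 0.0
  },
  "type" : "aiText/v1/mask"
}
```

The `maskMap` in the output records each placeholder and its original value, so the text can later be restored with the Unmask action.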
### Unmask

Name: unmask

| model | Model | STRING <details> <summary> Depends On </summary> provider </details> | The model name to use. | true |
| model | Model | STRING | ID of the model to use. | true |
| model | URL | STRING | URL of the inference endpoint. | true |
| text | Text | STRING | The text to process. | true |
| maskMap | Masked map | OBJECT <details> <summary> Properties </summary> {} </details> | Map of masked entities to replace with values. | false |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | false |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
#### Example JSON Structure

```json
{
  "label" : "Unmask",
  "name" : "unmask",
  "parameters" : {
    "provider" : "",
    "model" : "",
    "text" : "",
    "maskMap" : { },
    "maxTokens" : 1,
    "temperature" : 0.0
  },
  "type" : "aiText/v1/unmask"
}
```
#### Output

___Sample Output:___

```
Hello, my name is [REDACTED] and my email is [EMAIL].
```
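Feeding the Mask sample output back in, a hypothetical Unmask configuration might look like this (all values are illustrative):

```json
{
  "label" : "Unmask",
  "name" : "unmask",
  "parameters" : {
    "provider" : "",
    "model" : "",
    "text" : "Hello, my name is [REDACTED_1] and my email is [EMAIL_1].",
    "maskMap" : {
      "[REDACTED_1]" : "John Doe",
      "[EMAIL_1]" : "john@example.com"
    },
    "maxTokens" : 256,
    "temperature" : 0.0
  },
  "type" : "aiText/v1/unmask"
}
```

With this `maskMap`, the restored text would read `Hello, my name is John Doe and my email is john@example.com.`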
---

File: `docs/content/docs/reference/components/anthropic_v1.mdx` (4 additions, 4 deletions)
Name: ask

| messages | Messages | ARRAY <details> <summary> Items </summary> [{STRING\(role), STRING\(content), [FILE_ENTRY]\(attachments)}] </details> | A list of messages comprising the conversation so far. | true |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | OBJECT <details> <summary> Properties </summary> {STRING\(responseFormat), STRING\(responseSchema)} </details> | The response from the API. | true |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, lower values make it more focused and deterministic. Set either Temperature or Top P, not both. | false |
| topP | Top P | NUMBER | Nucleus sampling: the model considers tokens whose cumulative probability mass adds up to top_p. Set either Temperature or Top P, not both. | false |
| topK | Top K | INTEGER | Specify the number of token choices the generative model uses to generate the next token. | false |
| stop | Stop | ARRAY <details> <summary> Items </summary> [STRING] </details> | Up to 4 sequences where the API will stop generating further tokens. | false |
Name: streamAsk

| messages | Messages | ARRAY <details> <summary> Items </summary> [{STRING\(role), STRING\(content), [FILE_ENTRY]\(attachments)}] </details> | A list of messages comprising the conversation so far. | true |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | OBJECT <details> <summary> Properties </summary> {STRING\(responseFormat), STRING\(responseSchema)} </details> | The response from the API. | true |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, lower values make it more focused and deterministic. Set either Temperature or Top P, not both. | false |
| topP | Top P | NUMBER | Nucleus sampling: the model considers tokens whose cumulative probability mass adds up to top_p. Set either Temperature or Top P, not both. | false |
| topK | Top K | INTEGER | Specify the number of token choices the generative model uses to generate the next token. | false |
| stop | Stop | ARRAY <details> <summary> Items </summary> [STRING] </details> | Up to 4 sequences where the API will stop generating further tokens. | false |
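As a hypothetical illustration of these parameters (the `type` string and all values here are assumptions, not taken from the reference above):

```json
{
  "label" : "Ask",
  "name" : "ask",
  "parameters" : {
    "messages" : [ {
      "role" : "user",
      "content" : "Summarize this release note in one sentence."
    } ],
    "maxTokens" : 1024,
    "temperature" : 0.2
  },
  "type" : "anthropic/v1/ask"
}
```

Note that only `temperature` is set; per the descriptions above, Temperature and Top P should not be set together.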
### Request Approval

Name: requestApproval

| formTitle | Form Title | STRING | The title for the approval form. Displayed as the main heading. | false |
| formDescription | Form Description | STRING | A description shown under the form title. Use \n or <br> for line breaks. | false |
| inputs | Form Inputs | ARRAY <details> <summary> Items </summary> [{INTEGER\(fieldType), STRING\(fieldLabel), STRING\(fieldName), STRING\(fieldDescription), STRING\(placeholder), STRING\(defaultValue), [{STRING\(label), STRING\(value)}]\(fieldOptions), BOOLEAN\(multipleChoice), INTEGER\(minSelection), INTEGER\(maxSelection), BOOLEAN\(required)}] </details> | Define the form input fields for the approval request. | false |
#### Example JSON Structure

```json
{
  "label" : "Request Approval",
  "name" : "requestApproval",
  "parameters" : {
    "formTitle" : "",
    "formDescription" : "",
    "inputs" : [ {
      "fieldType" : 1,
      "fieldLabel" : "",
      "fieldName" : "",
      "fieldDescription" : "",
      "placeholder" : "",
      "defaultValue" : "",
      "fieldOptions" : [ {
        "label" : "",
        "value" : ""
      } ],
      "multipleChoice" : false,
      "minSelection" : 1,
      "maxSelection" : 1,
      "required" : false
    } ]
  },
  "type" : "approval/v1/requestApproval"
}
```
#### Output

The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.
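For illustration, a filled-in approval form might look like the following (all values are hypothetical; the `fieldType` codes are not documented in this excerpt, so `1` is simply carried over from the skeleton above):

```json
{
  "label" : "Request Approval",
  "name" : "requestApproval",
  "parameters" : {
    "formTitle" : "Expense Approval",
    "formDescription" : "Please review the attached expense report.\nApprove or reject below.",
    "inputs" : [ {
      "fieldType" : 1,
      "fieldLabel" : "Comment",
      "fieldName" : "comment",
      "fieldDescription" : "Optional note for the requester.",
      "placeholder" : "Add a comment...",
      "defaultValue" : "",
      "fieldOptions" : [ ],
      "multipleChoice" : false,
      "minSelection" : 0,
      "maxSelection" : 1,
      "required" : false
    } ]
  },
  "type" : "approval/v1/requestApproval"
}
```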