| Input | Description | Default |
|-------|-------------|---------|
| `token` | Token to use for inference. Typically the `GITHUB_TOKEN` secret | `github.token` |
| `prompt` | The prompt to send to the model | N/A |
| `prompt-file` | Path to a file containing the prompt (supports `.txt` and `.prompt.yml` formats). If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence | `""` |
| `input` | Template variables in YAML format for `.prompt.yml` files (e.g., `var1: value1` on separate lines) | `""` |
| `file_input` | Template variables in YAML format where values are file paths; the file contents are read and used for templating | `""` |
| `system-prompt` | The system prompt to send to the model | `"You are a helpful assistant"` |
| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence | `""` |
| `model` | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog | `openai/gpt-4o` |
| `endpoint` | The endpoint to use for inference. If you're running this as part of an organization, you should probably use the org-specific Models endpoint | `https://models.github.ai/inference` |
| `max-tokens` | The maximum number of tokens to generate (deprecated; use `max-completion-tokens` instead) | 200 |
| `max-completion-tokens` | The maximum number of tokens to generate | `""` |
| `temperature` | The sampling temperature to use (0-1) | `""` |
| `top-p` | The nucleus sampling parameter to use (0-1) | `""` |
| `enable-github-mcp` | Enable Model Context Protocol (MCP) integration with GitHub tools | `false` |
| `github-mcp-token` | Token to use for the GitHub MCP server (defaults to the main token if not specified) | `""` |
| `custom-headers` | Custom HTTP headers to include in API requests. Supports both YAML format (`header1: value1`) and JSON format (`{"header1": "value1"}`). Useful for API management platforms, rate limiting, and request tracking | `""` |
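A workflow step wiring several of these inputs together might look like the sketch below. The action reference (`actions/ai-inference@v1`), the prompt file path, and the `response` output name are illustrative assumptions, not confirmed by this table — check the action's own documentation for the exact reference and outputs.

```yaml
name: AI inference example
on: workflow_dispatch

permissions:
  contents: read
  models: read # needed to call the GitHub Models inference endpoint

jobs:
  inference:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Action reference and prompt file path below are placeholders.
      - name: Run model inference
        id: inference
        uses: actions/ai-inference@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          # prompt-file takes precedence over prompt when both are set
          prompt-file: ./prompts/summarize.prompt.yml
          # template variables for the .prompt.yml file, one per line
          input: |
            topic: release notes
          model: openai/gpt-4o
          max-completion-tokens: 200

      # Assumes the action exposes the model reply as a `response` output.
      - name: Print response
        run: echo "${{ steps.inference.outputs.response }}"
```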