From 94a0bf059b79f6976bd7f147cffe59778c2b3f6a Mon Sep 17 00:00:00 2001
From: Dominik Polakovics
Date: Tue, 31 Dec 2024 15:04:23 +0100
Subject: [PATCH] feat: add rust prompt block

---
 README.md                   | 23 ++++++++++++++---------
 lua/chatgpt_nvim/config.lua |  9 ++++++++-
 2 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 7e5f3d6..6fdb680 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
-# ChatGPT NeoVim Plugin (Extensively Updated with Chunking)
+# ChatGPT NeoVim Plugin (Extensively Updated with Step-by-Step Prompting)
 
 This plugin integrates a ChatGPT O1 model workflow into Neovim. It allows you to:
 
@@ -14,10 +14,15 @@ This plugin integrates a ChatGPT O1 model workflow into Neovim. It allows you to
 
 ## New Key Features
 
-- **Chunking** (`enable_chunking: true`): If the combined prompt or file request is too large (exceeds `token_limit`), the plugin automatically splits it into multiple chunks. Each chunk is opened in its own buffer, and the first chunk is copied to your clipboard. You can paste them sequentially into ChatGPT to work around size limitations.
+- **Step-by-Step Prompting** (`enable_step_by_step: true`):
+  If the request grows too large (exceeds `token_limit`), the plugin automatically generates a special prompt asking the model to split the task into smaller steps, working through them one by one. This approach helps you stay within the model’s maximum token limit without having to manually break things apart.
+
 - **Partial Acceptance**: If `partial_acceptance: true`, you can open a buffer that lists the final changes. Remove or comment out lines you don’t want, then only those changes are applied.
+
 - **Preview Changes**: If `preview_changes: true`, you get a buffer showing proposed changes before you apply them.
+
 - **Interactive File Selection**: If `interactive_file_selection: true`, you choose which directories from `.chatgpt_config.yaml` get included in the prompt, reducing token usage.
+
 - **Improved Debug**: If `improved_debug: true`, debug logs go into a dedicated `ChatGPT_Debug_Log` buffer for easier reading.
 
 ## Example `.chatgpt_config.yaml`
@@ -32,7 +37,7 @@ directories:
 initial_files:
   - "README.md"
 debug: false
-enable_chunking: true
+enable_step_by_step: true
 preview_changes: true
 interactive_file_selection: true
 partial_acceptance: true
@@ -44,14 +49,14 @@ token_limit: 3000
 
 1. **`:ChatGPT`**
    - If `interactive_file_selection` is on, you’ll pick directories to include in a buffer named `ChatGPT_File_Selection`.
-   - Save & close with `:wq`, `:x`, or `:bd` (you don’t have to use `:q`).
-   - If `enable_chunking` is on and the prompt exceeds `token_limit`, it’s split into multiple buffers for you to copy/paste.
+   - Save & close with `:wq`, `:x`, or `:bd` (you don’t have to use `:q`).
+   - If `enable_step_by_step` is on and the prompt might exceed `token_limit`, the plugin will generate instructions prompting the model to address each step separately.
 
 2. **Paste Prompt to ChatGPT**
-   - If multiple chunks exist, copy/paste them one by one in ChatGPT.
+   - If the task is split into steps, simply copy/paste them one by one into ChatGPT.
 
 3. **`:ChatGPTPaste`**
-   - The plugin reads the YAML from your clipboard. If it requests more files, it might chunk that request, too.
+   - The plugin reads the YAML from your clipboard. If it requests more files, it might again suggest a step-by-step approach.
    - If final changes are provided:
      - Optionally preview them (`preview_changes`).
      - Optionally partially accept them (`partial_acceptance`).
@@ -60,8 +65,8 @@ token_limit: 3000
 ## Troubleshooting & Tips
 
 - Adjust `token_limit` in `.chatgpt_config.yaml` as needed.
 - If partial acceptance is confusing, remember to remove or prepend `#` to lines you don’t want before saving and closing the buffer.
-- If chunking occurs, ensure you copy/paste **all chunks** into ChatGPT in the correct order.
+- If step-by-step prompting occurs, ensure you follow each prompt the model provides in the correct order.
 - Check `ChatGPT_Debug_Log` if `improved_debug` is on, or the Neovim messages if `debug` is on, for detailed info.
 - You can close the selection or prompt buffers at any time with commands like `:bd`, `:x`, or `:wq`. No need to rely on `:q`.
-Enjoy your improved, more flexible ChatGPT Neovim plugin with chunking support!
\ No newline at end of file
+Enjoy your improved, more flexible ChatGPT Neovim plugin with step-by-step support!
diff --git a/lua/chatgpt_nvim/config.lua b/lua/chatgpt_nvim/config.lua
index ce94a78..ab250af 100644
--- a/lua/chatgpt_nvim/config.lua
+++ b/lua/chatgpt_nvim/config.lua
@@ -17,6 +17,13 @@ local prompt_blocks = {
   Your answers should focus on TYPO3 coding guidelines, extension development
   best practices, and TSconfig or TypoScript recommendations.
   ]],
+  ["rust-development"] = [[
+  You are a coding assistant specialized in Rust development.
+  You will receive a project’s context and user instructions related to Rust code,
+  and you must return the requested modifications or guidance.
+  When returning modifications, follow the specified YAML structure.
+  Keep your suggestions aligned with Rust best practices and idiomatic Rust.
+  ]],
   ["basic-prompt"] = [[
   You are a coding assistant who receives a project's context and user instructions.
   The user will provide a prompt, and you will guide them through a workflow:
@@ -189,4 +196,4 @@ function M.load()
   return config
 end
 
-return M
\ No newline at end of file
+return M
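
As a reviewer's aid, the `enable_step_by_step` behavior described in the README changes above can be sketched roughly as follows. This is illustrative only, written in Python rather than the plugin's Lua; the helper name `build_prompt` and the 4-characters-per-token size estimate are assumptions, not code from this patch.

```python
# Illustrative sketch of the enable_step_by_step idea -- NOT code from this
# patch. When a prompt's estimated token count exceeds token_limit, prepend
# an instruction asking the model to work through the task in smaller steps.

def build_prompt(prompt, token_limit, enable_step_by_step=True):
    # Rough size heuristic (~4 characters per token); a real tokenizer
    # would be more accurate.
    estimated_tokens = len(prompt) // 4
    if enable_step_by_step and estimated_tokens > token_limit:
        header = ("This task is large. Please split it into smaller steps "
                  "and work through them one at a time.\n\n")
        return header + prompt
    return prompt
```

A small prompt passes through unchanged, while an oversized one gains the step-by-step instruction, which matches the README's claim that the user never has to break the request apart manually.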