AI Coding Tips
Requests
Detailed requests work better than very short ones and reduce the number of iterations needed to get the result you want. For example, "Create TicTacToe" will work, but this is better: "Create a TicTacToe game in Python. Split the code into Main and Config. In Config, allow setting the size of the game, such as 3x3 or 4x4. Draw a red line through the winning cells. Make the grid lines black and 5 pixels wide. Make cells 100 x 100 pixels. Use a white background. Include a Restart button centered below the cells."
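To make the example concrete, here is a sketch of the kind of Config module that detailed prompt might yield. This is purely illustrative; the names and layout are assumptions based on the prompt wording, not the tool's actual output.

```python
# Config.py - illustrative sketch only; generated code may differ.
# All names and values are assumptions drawn from the example prompt.

GRID_SIZE = 3            # 3 for a 3x3 board, 4 for 4x4, etc.
CELL_SIZE = 100          # each cell is 100 x 100 pixels
LINE_WIDTH = 5           # grid lines are 5 pixels wide
LINE_COLOR = "black"
WIN_LINE_COLOR = "red"   # line drawn through the winning cells
BACKGROUND_COLOR = "white"

def board_pixel_size() -> int:
    """Total board width/height in pixels: cells plus interior grid lines."""
    return GRID_SIZE * CELL_SIZE + (GRID_SIZE - 1) * LINE_WIDTH
```

Notice how each sentence of the detailed prompt maps to a concrete setting; vaguer prompts leave all of these choices to the LLM.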
Guidelines
LLMs don't always get it right. If you don't agree with a change, or a change did not solve a problem, you can:
Manage: Think of yourself as a manager and give guidance or suggest an approach to the problem.
Retry: LLMs vary their approach (when the temperature setting is above 0), so a fresh attempt may succeed.
Revise: your prompt for better clarity or detail.
Kicker: Click the Reference button for a list of "kickers" to add to your prompt.
Comments: Add comments in the code to help guide the LLM.
File selections: Adjust the file selections in the file tree.
Suggest command: Leverage the Suggest command to help provide the LLM *all* the files needed to solve a task - this is essential.
Clear names: Use clear file, function and variable names.
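As an illustration of the "Comments" and "Clear names" tips above, an inline comment can steer the LLM toward exactly the change you want. The function below is a hypothetical example, not part of any real project:

```python
def apply_discount(price: float, discount_percent: float) -> float:
    # LLM: add validation here - raise ValueError for negative prices
    # and for discounts outside the 0-100 range.
    return price * (1 - discount_percent / 100)
```

A comment placed at the exact spot where work is needed, combined with descriptive names like discount_percent, gives the LLM far more context than a bare request to "fix the discount function."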
Iterate
Don't be surprised if you have to make multiple attempts to solve a problem or get a feature right. Goals are often met in 1 or 2 tries, but sometimes it can take 10 or more. Help the LLM by improving your prompts, offering your own tips and ideas, and so on. LLMs are getting better and better, but they still have a way to go.
Delegate but don't abdicate
Delegate to AI but don't completely relinquish oversight or control. Check the final results.
Tokens
The token count shown on the status bar or in a popup is an estimate of the tokens (the basic unit of text) that will be used for a call to the LLM, but actual token usage can differ substantially. Actual token counts are shown at menu Log | Stats. Tokens for the Suggest command are usually lower and are not reflected in the status bar value. There are several ways to reduce token usage:
Click the Prep button and remove irrelevant code before clicking Send.
Select only the needed files.
Make code more modular with smaller files.
Use detailed clear prompts.
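If you want a rough sense of prompt size yourself, a commonly cited rule of thumb for English text is about 4 characters per token. This is only a ballpark heuristic, not how Click-Coder or any specific model actually tokenizes:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for typical English text.
    # Real tokenizers (BPE, etc.) vary by model; treat this as a ballpark.
    return max(1, len(text) // 4)
```

For exact counts, trust the figures under Log | Stats rather than any heuristic.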
Chatless Philosophy
While most chatbots feed your conversation history back into the LLM with each submission, we found that this creates several problems when used for coding: it reduces transparency into what the LLM sees, uses more tokens, and can confuse the LLM. So we don't do that. We found it better to modify the prompt or the code to record anything you want the LLM to consider with each submission.
Asking LLM general questions
To ask a general question, deselect all files and enter your question in the Request box.
Project Planning
In addition to coding, Click-Coder works well for project planning (of both coding and non-coding projects). For example, suppose you are designing an electronic circuit but are not sure which microprocessor to use, which type of circuit, which other components, which software tools, and so on. So you start a conversation with a chatbot. As the chat develops it grows long and disorganized. You waste time scrolling back to prior sections and end up with a document that is difficult to use. Instead, create files that give the plan structure, such as Circuits, Components and Software, and use Click-Coder. When you ask a question about components, for example, select only the Components file as the context and phrase the question so that it modifies the document. Along the way, delete text you don't need or edit as needed. This way you build an organized plan that is easy to reference and put into action, without scrolling through a long, cumbersome chat.