A console application that reduces bugs, improves performance, and improves the readability of your code.
*(Demo video: refactorcode.mp4)*
## Features

- Checks for bugs and corrects them (out-of-bounds errors, performance issues, logical bugs).
- Removes commented-out and unreachable code.
- Adds comments to explain existing code.
- Splits very large functions into smaller functions for better modularity.
## Usage

```bash
refactorcode ./yourfile
```

The refactored code is displayed in the console. To specify an output file, use `-o`. See the Options section for more details.
## Setup

Ensure you have npm and Node.js installed on your computer: https://nodejs.org

Install the package from npm, either for the project or globally:

```bash
npm install refactorcode
```

OR

```bash
npm install -g refactorcode
```

Get an API key from Google AI Studio: https://ai.google.dev/aistudio
To configure your application, there are two options: creating a `.env` file or a `.toml` file.

**Option 1:** Create a `.env` file in your project root directory, and add the API key like this:

```
API_KEY=YOURAPIKEYHERE
```

**Option 2:** Create a `.toml` file named `.refactorcode.toml` in your home directory, and add your API key and/or preferences:
- **Create the TOML file:** Open your terminal and run the following command to create a new TOML file in your home directory:

  ```bash
  touch ~/.refactorcode.toml
  ```

- **Copy the sample configuration:** Next, copy the sample configuration from `.refactorcode.toml.example` into your newly created `.refactorcode.toml` file:

  ```bash
  cp .refactorcode.toml.example ~/.refactorcode.toml
  ```

- **Edit the configuration:** Open the `.refactorcode.toml` file in your preferred text editor, and add your API key value and any other preferences (e.g. `MODEL`) you need.
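As a rough illustration, a filled-in `~/.refactorcode.toml` could look like the sketch below. The key names (`API_KEY`, `MODEL`) follow the settings mentioned above, but the exact format is an assumption; `.refactorcode.toml.example` in the repository is the authoritative reference.

```toml
# Illustrative ~/.refactorcode.toml (key names and value formats are assumptions;
# check .refactorcode.toml.example for the real schema)
API_KEY = "YOURAPIKEYHERE"  # API key from Google AI Studio
MODEL = "1.5f"              # optional model preference (see Options)
```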
## Contributing

If you want to contribute to the project or make a custom version of the library, here are the instructions:
```bash
git clone https://github.com/brokoli777/RefactorCode.git
cd RefactorCode
```

Install the dependencies:

```bash
pnpm install
```

OR

```bash
npm install
```

Link the package so the `refactorcode` command is available on your system, then try it on the sample file:

```bash
npm link
refactorcode examples/test.py
```
## Options

`-m` or `--model`: Specify the model to use.

Choices:

- `1.5f` (gemini-1.5-flash) (default)
- `1.5p` (gemini-1.5-pro)

```bash
refactorcode examples/test.py -m 1.5p
```
`-o` or `--output`: Set the output file.

```bash
refactorcode examples/test.py -o hello.py
```
`-t` or `--token-usage`: Show information on the tokens used.
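For example, combining the flag with the sample file used above to refactor it and also report token usage:

```bash
refactorcode examples/test.py -t
```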
`-s` or `--stream`: Stream the response as it is received.
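For example, to stream the refactored output to the console as it is generated:

```bash
refactorcode examples/test.py -s
```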