The Rokovo CLI brings a simple AI agent to your command line to help you document your source code for both developers and end users.
You can install the CLI using pip:

```bash
pip install rokovo
```

Each time you call the Rokovo CLI, you need to provide some basic information (the context file path and model configuration) using the flags described in `rokovo --help`. Alternatively, you can define a `rokovo.toml` for configuration, a `rokovo_context.md` for agent context, and a `.rokovoignore` for ignored files (the default fallback is `.gitignore`). You can use the `init` command to get the basic template:
```bash
rokovo init --root-dir .
```

The context file is a markdown file that contains a high-level description of your project. It helps the AI agent understand the project better and generate more accurate documentation. A basic outline for a context file is as follows:
- Project name and a brief description.
- Project structure and important files.
- Important keywords and terminologies.
- Usage-specific context, for example if you want to auto-extract FAQs: a set of example questions to help the agent find questions and answers.
An example context file can be found in examples_context.md.
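As a rough illustration only (the project name, file paths, and section headings below are hypothetical and not part of Rokovo), a context file might look like this, written to the `rokovo_context.md` created by `rokovo init`:

```bash
# Hypothetical example content; adjust the sections to your own project.
cat > rokovo_context.md <<'EOF'
# Acme Billing Service
A small Python service that issues and tracks customer invoices.

## Structure
- src/api.py: HTTP endpoints
- src/invoices.py: invoice creation and totals
- src/db.py: database access layer

## Terminology
- "draft invoice": an invoice that has not yet been sent to the customer

## FAQ extraction hints
- How do I create an invoice?
- How are taxes calculated?
EOF
```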
To extract FAQs from your codebase, run the following command:
```bash
rokovo --context-dir ./context.md --root-dir ./path/to/project --api-key <your-openrouter-api-key> --re-index true # use --re-index if you have changed your codebase
```
Note: If you already have a `rokovo.toml` in `--root-dir`, you don't need to provide those flags. For the API key, you can use the `ROKOVO_API_KEY` environment variable.
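For example, a minimal sketch assuming a `rokovo.toml` already exists in the project root (for instance, one generated by `rokovo init`):

```bash
# The API key comes from the environment instead of the --api-key flag.
export ROKOVO_API_KEY=<your-openrouter-api-key>
rokovo --root-dir ./path/to/project
```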
You can use:
```bash
rokovo interactive <flags (if config file does not exist)>
```

to run a REPL that lets you chat with the agent and get answers about your codebase in plain language.
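For instance, assuming no config file exists and reusing the flags from the earlier example:

```bash
rokovo interactive --context-dir ./context.md --root-dir ./path/to/project --api-key <your-openrouter-api-key>
```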
We use the OpenRouter API by default, and you can use the `--api-key` flag to pass your API key. If you are using another provider, feel free to pass the `--base-url` and `--model` flags to configure it.
Any provider compatible with the OpenAI API is accepted, and you can use local models too.
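As a sketch, pointing the CLI at a local OpenAI-compatible server might look like this (the endpoint URL and model name below are placeholders for whatever your server exposes):

```bash
rokovo --context-dir ./context.md --root-dir ./path/to/project \
  --base-url http://localhost:8080/v1 \
  --model my-local-model \
  --api-key unused-local-key
```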
This codebase never sends your code or any metadata to Rokovo servers, which is verifiable by looking at the code. Only the LLM provider can access your codebase, as explained below:
If you are using a local LLM, your code is not accessed by any cloud AI provider. If you are using a cloud LLM provider such as OpenAI, the provider has access to the files that the agent reads. To find out exactly what happens to your data in that case, read the provider's privacy policy and check their settings, since each provider has different rules and policies.
If you are worried about secret files or specific directories, you can use `.rokovoignore` to exclude them; excluded files and directories are removed from the agent's access and cannot be read.
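For example (a sketch; the gitignore-style patterns are an assumption based on the `.gitignore` fallback mentioned above):

```bash
# Exclude secrets and private material from the agent's view.
cat > .rokovoignore <<'EOF'
.env
secrets/
*.pem
EOF
```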
MIT License - see LICENSE.
