# ShellGenius

AI-Powered Local Shell Script Assistant using Ollama.

## Overview

ShellGenius is a CLI tool that uses local LLMs (via Ollama) to generate, explain, and refactor shell scripts interactively. Describe what you want in natural language, and the tool produces safe, commented shell commands with explanations.
## Features

- **Natural Language to Shell Generation**: Convert natural language descriptions into shell commands
- **Interactive TUI Interface**: Rich terminal UI with navigation, command history, and suggestions
- **Script Explanation Mode**: Parse and explain existing shell scripts line by line
- **Safe Refactoring Suggestions**: Analyze scripts and suggest safer alternatives
- **Command History Learning**: Learn from your command history for personalized suggestions
- **Multi-Shell Support**: Support for bash, zsh, and sh scripts
## Installation

```bash
# Install from source
pip install .

# Install with dev dependencies
pip install -e ".[dev]"
```
## Requirements

- Python 3.10+
- [Ollama](https://ollama.ai) running locally
- Recommended models: `codellama`, `llama2`, `mistral`
## Configuration

ShellGenius reads its settings from a `config.yaml` file; see `.env.example` for the supported environment variables.

```yaml
ollama:
  host: "localhost:11434"
  model: "codellama"
  timeout: 120

safety:
  level: "moderate"
  warn_patterns:
    - "rm -rf"
    - "chmod 777"
    - "sudo su"
```
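Since `.env.example` is mentioned alongside `config.yaml`, environment variables presumably override file-based settings. A minimal sketch of that precedence for the Ollama section — the function name and exact merge rules here are assumptions, not ShellGenius's actual code:

```python
import os

def resolve_ollama_settings(cfg):
    """Resolve Ollama settings: environment variables beat config.yaml,
    which beats built-in defaults (sketch of an assumed precedence order)."""
    ollama = dict(cfg.get("ollama", {}))
    ollama["host"] = os.environ.get("OLLAMA_HOST", ollama.get("host", "localhost:11434"))
    ollama["model"] = os.environ.get("OLLAMA_MODEL", ollama.get("model", "codellama"))
    return ollama
```

With no config file and no environment variables set, this falls back to the defaults shown in the YAML above.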
## Usage

### Interactive Mode

```bash
shellgenius
```

### Generate Shell Commands

```bash
shellgenius generate "find all Python files modified in the last 24 hours"
```

### Explain a Script

```bash
shellgenius explain script.sh
```

### Refactor with Safety Checks

```bash
shellgenius refactor script.sh --suggestions
```
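Under the hood, `generate` ultimately calls the local Ollama REST API (`POST /api/generate`). The sketch below shows what such a call might look like; the prompt template and function names are illustrative assumptions, not ShellGenius's actual internals:

```python
import json
import urllib.request

def build_prompt(description):
    """Hypothetical prompt template — the real one ships with ShellGenius."""
    return (
        "Write a safe, commented shell command for the task below.\n"
        f"Task: {description}\n"
        "Respond with the command only."
    )

def generate_command(description, host="localhost:11434", model="codellama", timeout=120):
    # Ollama's generate endpoint takes a model name and a prompt;
    # stream=False returns the whole completion in one JSON object.
    body = json.dumps(
        {"model": model, "prompt": build_prompt(description), "stream": False}
    ).encode()
    req = urllib.request.Request(
        f"http://{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"].strip()
```

The `host`, `model`, and `timeout` parameters map directly to the `ollama:` section of `config.yaml` above.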
## Commands

| Command | Description |
|---|---|
| `shellgenius` | Start interactive TUI |
| `shellgenius generate <description>` | Generate shell commands |
| `shellgenius explain <script>` | Explain a shell script |
| `shellgenius refactor <script>` | Analyze and refactor a script |
| `shellgenius history` | Show command history |
| `shellgenius models` | List available Ollama models |
## Safety

ShellGenius includes several safety features:

- Warnings for destructive commands
- Dry-run mode for testing
- Permission checks
- Configurable safety level

Use the `--force` flag to bypass warnings if you are confident.
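A warn-pattern check like the one configured above can be as simple as a regex scan over the generated command. This sketch (names are hypothetical) mirrors the `warn_patterns` list from the configuration example:

```python
import re

# Mirrors the warn_patterns from the config.yaml example above.
WARN_PATTERNS = ["rm -rf", "chmod 777", "sudo su"]

def safety_warnings(command, patterns=WARN_PATTERNS):
    """Return the configured warn patterns that appear in a command.
    Patterns are treated as literal substrings (re.escape)."""
    return [p for p in patterns if re.search(re.escape(p), command)]
```

An empty result means no configured pattern matched; a non-empty result is what would trigger a warning (bypassable with `--force`).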
## Troubleshooting

**Ollama connection failed**

- Run `ollama serve` to start Ollama
- Check the `OLLAMA_HOST` environment variable

**Model not found**

- Pull the required model: `ollama pull <model_name>`
- Change the `OLLAMA_MODEL` setting

**Timeout during generation**

- Increase `timeout` in `config.yaml`
- Simplify the request
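To verify connectivity yourself, Ollama's `/api/tags` endpoint lists the models it has pulled. A small probe along these lines (a hypothetical helper, not part of ShellGenius) covers both the connection and the model checks above:

```python
import json
import urllib.request

def ollama_reachable(host="localhost:11434"):
    """Probe Ollama's /api/tags endpoint. Returns (reachable, model_names);
    an unreachable server yields (False, [])."""
    try:
        with urllib.request.urlopen(f"http://{host}/api/tags", timeout=5) as resp:
            data = json.loads(resp.read())
            return True, [m["name"] for m in data.get("models", [])]
    except OSError:
        return False, []
```

If it returns `(True, [...])` but your model is missing from the list, `ollama pull <model_name>` is the fix; `(False, [])` points at `ollama serve` or `OLLAMA_HOST`.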
## License

MIT