Initial commit: ShellGen CLI tool with LLM backends

2026-01-29 12:41:00 +00:00
parent 6d7e7dccd5
commit 4439161783

app/README.md

# ShellGen - Shell Command Generator CLI
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
A CLI tool that converts natural language descriptions into shell commands using local LLMs (Ollama, Llama.cpp). Describe what you want in plain English, and ShellGen generates the appropriate shell command with explanations.
## Features
- **Natural Language to Command**: Convert plain English descriptions to shell commands
- **Multiple LLM Backends**: Support for both Ollama and Llama.cpp
- **Safety First**: Dangerous commands are flagged before execution
- **Command History**: Track generated commands and learn from corrections
- **Shell Compatibility**: Generate commands for Bash and Zsh
- **Local Processing**: All processing happens locally - no external API calls
## Installation
### From Source
```bash
git clone https://github.com/yourusername/shell-command-generator-cli.git
cd shell-command-generator-cli
pip install -e .
```
### Prerequisites
- Python 3.10+
- Ollama (for Ollama backend) or Llama.cpp
- Required Python packages (installed automatically)
## Quick Start
### Basic Usage
```bash
# Generate a command
shellgen "find all python files in current directory"
# Generate for specific shell
shellgen --shell zsh "list files with detailed info"
# Execute immediately (safe commands only)
shellgen --auto-execute "show current directory"
```
### Using Different Backends
```bash
# Use Ollama backend (default)
shellgen --backend ollama "create a new git branch"
# Use Llama.cpp backend
shellgen --backend llama_cpp "find files modified today"
```
### Command History
```bash
# View command history
shellgen history
# View last 50 entries
shellgen history --limit 50
# Provide feedback on a generated command
shellgen feedback 5 --corrected "find . -name '*.py' -type f"
```
## Configuration
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `OLLAMA_HOST` | localhost:11434 | Ollama API endpoint |
| `OLLAMA_MODEL` | codellama | Default model name |
| `SHELLGEN_HISTORY_PATH` | ~/.shellgen/history.db | History database path |
| `SHELLGEN_TEMPLATES_PATH` | ~/.shellgen/templates | Custom templates path |
### Config File
Copy `.env.example` to `.env` and modify:
```bash
cp .env.example .env
```
Create or edit `config.yaml` for additional settings:
```yaml
default_backend: ollama
shell:
default: bash
supported:
- bash
- zsh
safety:
auto_execute_safe: false
require_confirmation: true
```
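A common way to combine the two configuration sources is to let environment variables override file defaults. The sketch below assumes that behavior; the `load_config` helper and the key names are illustrative, not ShellGen's actual API:

```python
import os

# Hypothetical defaults mirroring the config.yaml example above.
DEFAULTS = {
    "ollama_host": "localhost:11434",
    "ollama_model": "codellama",
    "default_backend": "ollama",
}

def load_config(environ=os.environ):
    """Overlay environment variables on top of the default settings."""
    config = dict(DEFAULTS)
    env_map = {
        "OLLAMA_HOST": "ollama_host",
        "OLLAMA_MODEL": "ollama_model",
    }
    for env_var, key in env_map.items():
        if env_var in environ:
            config[key] = environ[env_var]
    return config
```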
## Safety Features
ShellGen includes a safety system that:
1. **Detects Dangerous Patterns**: Blocks commands like `rm -rf`, `dd`, `mkfs`, etc.
2. **Safe Command Whitelist**: Automatically approves common safe operations
3. **Confirmation Workflow**: Prompts before executing potentially risky commands
4. **Force Flag**: Use `--force` to bypass safety checks (not recommended)
### Dangerous Commands Blocked
- `rm -rf` on root or system directories
- `dd` operations on system devices
- Disk formatting commands
- Fork bombs and other destructive patterns
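The pattern detection above can be sketched with a few regular expressions. The patterns and the `is_dangerous` helper below are illustrative, not ShellGen's actual rules:

```python
import re

# Illustrative dangerous-command patterns; the real checker may differ.
DANGEROUS_PATTERNS = [
    r"\brm\s+-[a-zA-Z]*r[a-zA-Z]*f\b.*\s+/(\s|$)",  # rm -rf on the root directory
    r"\bdd\b.*\bof=/dev/(sd|nvme|hd)",              # dd writing to a raw disk device
    r"\bmkfs(\.\w+)?\b",                            # disk formatting
    r":\(\)\s*\{\s*:\|:&\s*\};\s*:",                # classic fork bomb
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any known dangerous pattern."""
    return any(re.search(pattern, command) for pattern in DANGEROUS_PATTERNS)
```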
## Command History
ShellGen stores your command history in an SQLite database, allowing you to:
- Review previously generated commands
- Track which commands were executed
- Provide feedback on incorrect generations
- Learn from your corrections
### Database Location
Default: `~/.shellgen/history.db`
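A minimal sketch of such an SQLite-backed history store, including the feedback update, using only the standard library. The schema and helper names are illustrative, not the actual `shellgen/history.py`:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS history (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    prompt TEXT NOT NULL,
    command TEXT NOT NULL,
    executed INTEGER DEFAULT 0,
    corrected TEXT,
    feedback TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
)
"""

def open_history(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute(SCHEMA)
    return conn

def record(conn, prompt, command):
    """Store a generated command and return its history id."""
    cur = conn.execute(
        "INSERT INTO history (prompt, command) VALUES (?, ?)",
        (prompt, command),
    )
    conn.commit()
    return cur.lastrowid

def add_feedback(conn, entry_id, corrected=None, feedback=None):
    """Attach a correction and/or a note to an existing entry."""
    conn.execute(
        "UPDATE history SET corrected = ?, feedback = ? WHERE id = ?",
        (corrected, feedback, entry_id),
    )
    conn.commit()
```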
### Feedback System
When a generated command isn't quite right, use the feedback system:
```bash
# Submit correction
shellgen feedback 3 --corrected "find . -name '*.txt' -delete"
# Add notes
shellgen feedback 3 --feedback "The original command had wrong flags"
```
## Backend Selection
### Ollama
Ollama provides an easy-to-use API for local models:
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama server
ollama serve
# Pull a model
ollama pull codellama
```
Recommended models:
- `codellama` - Best for code-related commands
- `llama2` - General purpose
- `mistral` - Fast and capable
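Under the hood, an Ollama backend talks to the server's `/api/generate` HTTP endpoint. A minimal sketch of building that request with the standard library; the helper name is hypothetical, and only the payload shape follows Ollama's documented API:

```python
import json
import urllib.request

def build_ollama_request(prompt, model="codellama", host="localhost:11434"):
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON object instead of a stream
    }
    return urllib.request.Request(
        f"http://{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it (requires a running Ollama server):
# with urllib.request.urlopen(build_ollama_request("list files")) as resp:
#     print(json.loads(resp.read())["response"])
```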
### Llama.cpp
For direct model execution without an API server:
```bash
# Install llama-cpp-python
pip install llama-cpp-python
# Download a model (e.g., from Hugging Face)
# Place in ~/.cache/llama-cpp/models/
```
## Examples
### File Operations
```bash
shellgen "find all Python files larger than 1MB"
# Output: find . -name '*.py' -size +1M
shellgen "list directories sorted by size"
# Output: du -h --max-depth=1 | sort -h
```
### Git Operations
```bash
shellgen "show uncommitted changes"
# Output: git diff
shellgen "list all branches merged into main"
# Output: git branch --merged main
```
### System Monitoring
```bash
shellgen "show processes using most memory"
# Output: ps aux --sort=-%mem | head -10
shellgen "check disk space usage"
# Output: df -h
```
## Development
### Running Tests
```bash
pytest tests/ -v
pytest tests/ --cov=shellgen --cov-report=term-missing
```
### Project Structure
```
shellgen/
├── __init__.py          # Main package
├── main.py              # CLI entry point
├── config.py            # Configuration management
├── history.py           # History database
├── core/
│   ├── __init__.py
│   ├── generator.py     # Command generation logic
│   └── prompts.py       # Prompt templates
├── backends/
│   ├── __init__.py
│   ├── base.py          # Abstract backend interface
│   ├── ollama.py        # Ollama backend
│   ├── llama_cpp.py     # Llama.cpp backend
│   └── factory.py       # Backend factory
├── safety/
│   ├── __init__.py
│   └── checker.py       # Safety checking
└── ui/
    ├── __init__.py
    ├── console.py       # Rich console UI
    └── argparse.py      # Argument parsing
```
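The `backends/base.py` and `backends/factory.py` split can be sketched as an abstract interface plus a registry. Class and method names here are illustrative, and the `EchoBackend` stand-in exists only so the example runs without an LLM:

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return a shell command for a natural-language prompt."""

class EchoBackend(Backend):
    """Stand-in backend so the factory is runnable without an LLM."""
    def generate(self, prompt: str) -> str:
        return f"echo {prompt!r}"

_REGISTRY = {"echo": EchoBackend}

def create_backend(name: str) -> Backend:
    """Instantiate a registered backend by name."""
    try:
        return _REGISTRY[name]()
    except KeyError:
        raise ValueError(f"unknown backend: {name}") from None
```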
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run tests: `pytest tests/ -v`
5. Submit a pull request
## License
MIT License - see LICENSE file for details.
## Disclaimer
Always review generated commands before execution. Safety checks are in place, but you should still verify that commands do what you expect, especially when using the `--force` or `--auto-execute` flags.