# Local LLM Prompt Manager

A CLI tool for developers to manage, organize, version control, and share local LLM prompts and configurations.

## Features

- **Create & Manage Prompts**: Store prompts in YAML format with metadata
- **Tagging System**: Organize prompts with tags for fast filtering
- **Powerful Search**: Find prompts by name, content, or tags
- **Template System**: Jinja2-style templating with variable substitution
- **Git Integration**: Generate commit messages from staged changes
- **LLM Integration**: Works with Ollama and LM Studio
- **Import/Export**: Share prompts in JSON/YAML or LLM-specific formats

## Installation

```bash
# From source
pip install -e .

# Or install dependencies manually
pip install click rich pyyaml requests jinja2
```

## Quick Start

### Create a Prompt

```bash
llm-prompt prompt create my-prompt \
  --template "Explain {{ concept }} in simple terms" \
  --description "Simple explanation generator" \
  --tag "explanation" \
  --variable "concept:What to explain:true"
```

### List Prompts

```bash
llm-prompt prompt list
llm-prompt prompt list --tag "explanation"
```

### Run a Prompt with Variables

```bash
llm-prompt run my-prompt --var concept="quantum physics"
```

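Each `--var` flag carries a `key=value` pair. A minimal sketch of how such pairs can be parsed into a variable map (`parse_vars` is an illustrative helper, not the tool's actual code):

```python
def parse_vars(pairs):
    """Parse --var style key=value strings into a dict."""
    variables = {}
    for pair in pairs:
        # partition splits on the FIRST '=', so values may contain '='.
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"expected key=value, got {pair!r}")
        variables[key.strip()] = value
    return variables

print(parse_vars(["concept=quantum physics"]))
# → {'concept': 'quantum physics'}
```
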
### Generate a Commit Message

```bash
# Stage some changes first
git add .

# Generate commit message
llm-prompt commit
```

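`llm-prompt commit` presumably reads the staged diff and asks the configured LLM for a message. A rough sketch of that flow, under stated assumptions (the function names are illustrative, not the tool's API):

```python
import subprocess

def staged_diff():
    """Return the staged diff, as `git diff --cached` prints it."""
    return subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout

def build_commit_prompt(diff_text):
    """Wrap the staged diff in an instruction for the LLM."""
    return ("Write a concise git commit message for these staged changes:\n\n"
            + diff_text)
```
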
## Commands

### Prompt Management

| Command | Description |
|---------|-------------|
| `llm-prompt prompt create <name>` | Create a new prompt |
| `llm-prompt prompt list` | List all prompts |
| `llm-prompt prompt show <name>` | Show prompt details |
| `llm-prompt prompt delete <name>` | Delete a prompt |

### Tag Management

| Command | Description |
|---------|-------------|
| `llm-prompt tag list` | List all tags |
| `llm-prompt tag add <prompt> <tag>` | Add tag to prompt |
| `llm-prompt tag remove <prompt> <tag>` | Remove tag from prompt |

### Search & Run

| Command | Description |
|---------|-------------|
| `llm-prompt search [query]` | Search prompts |
| `llm-prompt run <name>` | Run a prompt |

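Per the Features list, search matches prompts by name, content, or tags. A minimal sketch of that filter (the prompt dicts here are illustrative, not the tool's storage format):

```python
def search_prompts(prompts, query):
    """Return prompts whose name, template, or tags contain the query."""
    q = query.lower()
    return [
        p for p in prompts
        if q in p["name"].lower()
        or q in p["template"].lower()
        or any(q in t.lower() for t in p.get("tags", []))
    ]

prompts = [
    {"name": "explain-code", "template": "Explain this code", "tags": ["docs"]},
    {"name": "summarize", "template": "Summarize {{ text }}", "tags": []},
]
print([p["name"] for p in search_prompts(prompts, "docs")])
# → ['explain-code']
```
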
### Git Integration

| Command | Description |
|---------|-------------|
| `llm-prompt commit` | Generate commit message |

### Import/Export

| Command | Description |
|---------|-------------|
| `llm-prompt export <output>` | Export prompts |
| `llm-prompt import <input>` | Import prompts |

### Configuration

| Command | Description |
|---------|-------------|
| `llm-prompt config show` | Show current config |
| `llm-prompt config set <key> <value>` | Set config value |
| `llm-prompt config test` | Test LLM connection |

## Prompt Format

Prompts are stored as YAML files:

````yaml
name: explain-code
description: Explain code in simple terms
tags: [documentation, beginner]
variables:
  - name: code
    description: The code to explain
    required: true
  - name: level
    description: Explanation level (beginner/intermediate/advanced)
    required: false
    default: beginner
template: |
  Explain the following code in {{ level }} terms:

  ```python
  {{ code }}
  ```
provider: ollama
model: llama3.2
````

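Given a `variables` block like the one above, required variables must be supplied at run time while optional ones fall back to their `default`. A sketch of that resolution rule (an illustration of the declared semantics, not the tool's actual implementation):

```python
def resolve_variables(declared, provided):
    """Apply defaults and enforce required variables."""
    resolved = {}
    for var in declared:
        name = var["name"]
        if name in provided:
            resolved[name] = provided[name]
        elif not var.get("required", False):
            # Optional variable: use its default (None if none declared).
            resolved[name] = var.get("default")
        else:
            raise ValueError(f"missing required variable: {name}")
    return resolved

declared = [
    {"name": "code", "required": True},
    {"name": "level", "required": False, "default": "beginner"},
]
print(resolve_variables(declared, {"code": "print(1)"}))
# → {'code': 'print(1)', 'level': 'beginner'}
```
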
## Template System

Use Jinja2 syntax for variable substitution:

```jinja2
Hello {{ name }}!
Your task: {{ task }}

{% for item in items %}
- {{ item }}
{% endfor %}
```

Provide variables with `--var key=value`:

```bash
llm-prompt run my-prompt --var name="Alice" --var task="review code"
```

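Rendering is delegated to Jinja2; for simple `{{ var }}` placeholders the effect is roughly the following (a stdlib `re`-based sketch of the substitution semantics only — the real Jinja2 engine also handles loops, conditionals, and filters):

```python
import re

def render(template, variables):
    """Replace each {{ name }} placeholder with its value."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

print(render("Hello {{ name }}! Your task: {{ task }}",
             {"name": "Alice", "task": "review code"}))
# → Hello Alice! Your task: review code
```
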
## Configuration

Default configuration file: `~/.config/llm-prompt-manager/config.yaml`

```yaml
prompt_dir: ~/.config/llm-prompt-manager/prompts
ollama_url: http://localhost:11434
lmstudio_url: http://localhost:1234
default_model: llama3.2
default_provider: ollama
```

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `LLM_PROMPT_DIR` | `~/.config/llm-prompt-manager/prompts` | Prompt storage directory |
| `OLLAMA_URL` | `http://localhost:11434` | Ollama API URL |
| `LMSTUDIO_URL` | `http://localhost:1234` | LM Studio API URL |
| `DEFAULT_MODEL` | `llama3.2` | Default LLM model |

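These variables presumably override the config-file defaults; in Python that lookup is typically `os.environ.get` with a fallback (a sketch of the pattern, not the tool's code):

```python
import os
from pathlib import Path

# Fall back to the documented defaults when the variables are unset.
prompt_dir = os.environ.get(
    "LLM_PROMPT_DIR",
    str(Path.home() / ".config" / "llm-prompt-manager" / "prompts"),
)
ollama_url = os.environ.get("OLLAMA_URL", "http://localhost:11434")
default_model = os.environ.get("DEFAULT_MODEL", "llama3.2")

print(ollama_url)
```
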
## Examples

### Create a Code Explainer Prompt

```bash
llm-prompt prompt create explain-code \
  --template "Explain this code: {{ code }}" \
  --description "Explain code in simple terms" \
  --tag "docs" \
  --variable "code:The code to explain:true"
```

### Export to Ollama Format

```bash
llm-prompt export my-prompts.yaml --format ollama
```

### Import from Another Library

```bash
llm-prompt import /path/to/prompts.json --force
```

## Troubleshooting

| Error | Solution |
|-------|----------|
| Prompt not found | Use `llm-prompt prompt list` to see available prompts |
| LLM connection failed | Ensure Ollama/LM Studio is running |
| Invalid YAML format | Check that the prompt YAML has the required fields |
| Template variable missing | Provide all required `--var` values |
| Git repository not found | Run from a directory with a `.git` folder |

## License

MIT