Kubectl-MCP: a Python-based Model Context Protocol (MCP) server that lets AI assistants (Claude, Cursor, Windsurf, and others) perform kubectl and Helm actions on a Kubernetes cluster via natural-language requests.
https://github.com/rohitg00/kubectl-mcp-server

Stop switching between your AI assistant and terminal every time you need to check pod status, deploy applications, or troubleshoot cluster issues. kubectl-mcp-server bridges the gap by letting Claude, Cursor, Windsurf, and other AI assistants directly interact with your Kubernetes clusters through natural language.
You're debugging a production issue in Claude. You need to check pod logs, examine deployment status, and maybe scale a service. Currently, that means switching to a terminal, running kubectl commands by hand, and pasting the output back into the chat.
kubectl-mcp-server eliminates this workflow friction entirely.
This MCP server implements 26 specialized tools that let AI assistants execute Kubernetes operations directly:
Instead of explaining what you want to do, you just ask: "Scale the nginx deployment to 5 replicas and check if the pods are healthy." The AI handles the kubectl commands and gives you structured results.
Production Debugging:
"The payment service is throwing 500 errors. Check the pod status,
get recent logs, and see if there are any related events."
Deployment Validation:
"I just deployed the new user-service. Verify the deployment is healthy,
check resource usage, and confirm the service endpoints are working."
Security Audit:
"Audit the RBAC permissions for the api-gateway service account
and check if any pods are running with excessive privileges."
Cluster Maintenance:
"Show me which nodes are under resource pressure and identify
any pods that might need to be rescheduled."
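Under the hood, each of these requests decomposes into a sequence of discrete tool calls. A rough sketch of how the production-debugging prompt might map onto invocations (the tool names and arguments here are illustrative, not the server's actual API):

```python
# Hypothetical decomposition of the production-debugging prompt into tool
# calls. Tool names and arguments are illustrative, not the server's real API.
plan = [
    {"tool": "get_pods", "args": {"namespace": "payments", "selector": "app=payment-service"}},
    {"tool": "get_pod_logs", "args": {"namespace": "payments", "selector": "app=payment-service", "tail": 100}},
    {"tool": "get_events", "args": {"namespace": "payments", "selector": "app=payment-service"}},
]

def summarize(steps):
    """Render the plan as the assistant might report it back to the user."""
    return [
        f"{step['tool']}({', '.join(f'{k}={v!r}' for k, v in step['args'].items())})"
        for step in steps
    ]

for line in summarize(plan):
    print(line)
```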
Install from PyPI:
pip install kubectl-mcp-tool
Configure your AI assistant by adding this to their MCP settings:
{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config"
      }
    }
  }
}
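If you already have other MCP servers configured, you'll want to merge this entry rather than overwrite the file. A minimal sketch of that merge (the settings path and kubeconfig path are placeholders; adjust for your assistant's actual settings location):

```python
import json
from pathlib import Path

def add_kubernetes_server(settings_path: str, kubeconfig: str) -> dict:
    """Merge the kubectl-mcp-server entry into an MCP settings file,
    preserving any other servers already configured there."""
    path = Path(settings_path)
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings.setdefault("mcpServers", {})["kubernetes"] = {
        "command": "python",
        "args": ["-m", "kubectl_mcp_tool.mcp_server"],
        "env": {"KUBECONFIG": kubeconfig},
    }
    path.write_text(json.dumps(settings, indent=2))
    return settings

# Placeholder paths for illustration only.
merged = add_kubernetes_server("/tmp/mcp_settings.json", "/path/to/your/.kube/config")
```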
That's it. Your AI can now execute kubectl operations directly in conversation.
Faster Incident Response: Debug production issues without breaking conversation flow with your AI assistant. Get cluster insights, check logs, and identify problems in a single chat session.
Natural Language Operations: Instead of remembering kubectl syntax, describe what you want. "Check if the database pods are ready" becomes a simple request rather than constructing kubectl get pods -l app=database -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'.
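For a sense of what answering that readiness question involves in code, here is a sketch that parses the JSON output of `kubectl get pods -o json` and extracts each pod's Ready condition. The sample data below stands in for a live cluster; a real run would invoke kubectl via subprocess:

```python
import json

# Sample output standing in for `kubectl get pods -l app=database -o json`.
kubectl_output = json.dumps({
    "items": [
        {"metadata": {"name": "database-0"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "database-1"},
         "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    ]
})

def ready_pods(raw: str) -> dict:
    """Map pod name -> readiness, mirroring the jsonpath query above."""
    pods = {}
    for item in json.loads(raw)["items"]:
        conditions = item["status"].get("conditions", [])
        pods[item["metadata"]["name"]] = any(
            c["type"] == "Ready" and c["status"] == "True" for c in conditions
        )
    return pods

status = ready_pods(kubectl_output)
```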
Context-Aware Troubleshooting: The server maintains namespace context and can perform complex multi-step operations. Ask about a pod's health and automatically get related events, resource usage, and dependency status.
Security-First Design: Built-in RBAC validation and security auditing tools help you maintain cluster security while operating at AI speed.
Mock Mode: Test and demo cluster operations without needing a real Kubernetes cluster. Perfect for training or development scenarios.
Multi-Transport Support: Works with stdio, SSE, and HTTP transport methods, ensuring compatibility across different AI assistant architectures.
Intelligent Error Handling: When operations fail, you get actionable recovery suggestions, not just error messages.
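One way to sketch that behavior is a lookup from common kubectl failure signatures to recovery hints. The markers and suggestions below are illustrative examples, not the server's actual catalog:

```python
# Illustrative mapping of common kubectl failure text to recovery hints;
# not the server's actual error catalog.
RECOVERY_HINTS = {
    "not found": "Check the resource name and namespace; list resources with a broader selector.",
    "forbidden": "Verify RBAC: does the service account have the required verb on this resource?",
    "timed out": "The API server may be under load; retry, or check node and control-plane health.",
}

def explain_failure(error_message: str) -> str:
    """Attach an actionable suggestion to a raw kubectl error message."""
    lowered = error_message.lower()
    for marker, hint in RECOVERY_HINTS.items():
        if marker in lowered:
            return f"{error_message.strip()} | Suggestion: {hint}"
    return f"{error_message.strip()} | Suggestion: re-run with increased verbosity to inspect API traces."

msg = explain_failure('Error from server (Forbidden): pods is forbidden')
```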
Cross-Namespace Operations: Seamlessly work across different namespaces without manually switching context.
The server exposes a comprehensive toolkit that covers real-world DevOps scenarios, from basic resource management to complex security audits and performance monitoring.
pip install kubectl-mcp-tool
The included installation script can automatically configure Claude Desktop, Cursor, and Windsurf for you.
With 634+ GitHub stars and active development, kubectl-mcp-server is becoming the standard way to integrate AI assistants with Kubernetes workflows. Stop context-switching and start operating your clusters at the speed of thought.