LLMs do not take into account the entirety of a project
LLMs struggle with real-world deployments, consistently forgetting ~20% of critical components. This became clear while setting up @Istio on Kubernetes with Prometheus and Kiali for observability.
Following my thoughts on DevOps interviews and whiteboard coding tests (see link below), I’m now seeing the flip side of the AI equation.
I watched as the LLM missed these crucial elements:
- RBAC permissions for the Prometheus service account – without these, no metrics collection was possible, rendering Kiali useless
- Image repository reliability issues – LLMs consistently reference outdated image locations or versions that no longer exist. Docker Hub’s frequent policy changes and image deprecations mean yesterday’s working manifest is today’s “ImagePullBackOff” error. This required multiple iterations to find images that would actually pull.
- Proper verification steps – the LLM performed no systematic debugging or testing between configuration changes
- Integration between monitoring components – everything deployed, but the components couldn’t communicate with each other
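For context, the kind of between-step verification the LLM skipped looks roughly like this. A sketch only – the namespace and label values are assumptions based on a default Istio addon install; adjust them to your cluster:

```shell
# Confirm every pod in the mesh namespace is actually running
kubectl get pods -n istio-system

# Inspect events for a pod stuck in ImagePullBackOff or CrashLoopBackOff
kubectl describe pods -n istio-system -l app=prometheus

# Read the component's own logs before making the next config change
kubectl logs -n istio-system -l app=prometheus --tail=50

# Port-forward and confirm Prometheus actually sees its scrape targets
kubectl port-forward -n istio-system svc/prometheus 9090:9090
# then open http://localhost:9090/targets in a browser
```

Running a loop like this after each change is what surfaces problems one step at a time instead of three layers deep.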
The RBAC issue was particularly telling. Even with a correct Prometheus configuration, we saw empty metrics. Only after checking the logs (kubectl logs -n istio-system -l app=prometheus) did we discover that the service account lacked permissions to list pods and services.
This highlights a crucial point: Kubernetes RBAC has become as formidable as AWS IAM or Azure AD permissions – even on a local Kind cluster running on Docker Desktop. The principle of least privilege means nothing works until explicitly permitted.
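As a sketch, the missing permissions can be granted with a ClusterRole and binding along these lines. The service account name and namespace here are assumptions – match them to your actual Prometheus deployment:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  # Prometheus needs to discover scrape targets across the cluster
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus          # assumed SA name
    namespace: istio-system   # assumed namespace
```

Least privilege cuts both ways: this grants only read/watch verbs, which is all service discovery needs.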
For DevOps professionals, this reinforces that:
- Understanding security models is non-negotiable, whether it’s basic K8s RBAC, AWS IAM, or Azure AD
- Troubleshooting always requires systematic investigation of logs
- Even “simple” deployments have complex dependencies
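One quick check worth building into that habit: ask the API server directly whether a service account actually holds the permission you think it does, instead of reproducing the failure. The service account name below is an assumption:

```shell
# Each command prints "yes" or "no"
kubectl auth can-i list pods \
  --as=system:serviceaccount:istio-system:prometheus
kubectl auth can-i list services \
  --as=system:serviceaccount:istio-system:prometheus
```

Had we run this first, the empty-metrics mystery would have been a thirty-second diagnosis.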
Despite these limitations, LLMs remain incredible time-savers. What once would have taken days of Google searches and StackOverflow deep-dives can now be accomplished in hours, even with the necessary corrections. The human-AI partnership is powerful when you understand where AI needs supervision.
What’s your experience with AI-assisted deployments? Have you found similar gaps?