Starter micro-app: build the dining recommender with a deployable GitOps repo
If you're tired of fragile pipelines, tool sprawl, and one-offs that never make it to production, this starter micro-app gives you a repeatable, deployable template: downloadable source, LLM prompt patterns, a container image, and a GitOps repo you can point ArgoCD at, all designed for teams in 2026.
Quick summary (most important first)
This guide recreates the “dining app” story as a production-ready micro-app using modern best practices. You get:
- Downloadable starter repo (clone, fork, or use as a template)
- LLM prompt templates for recommendations, personalization, and multi-user consensus
- Container image and a repeatable build strategy (multi-arch friendly)
- GitOps pipeline — ArgoCD Application manifest, image automation tips, and CI that updates Git
- Security & compliance patterns (Sigstore signing, secret management) and cost/scale guidance
Why this matters in 2026
By late 2025 and into 2026, two trends converged: vibe-coding and AI-assisted app creation made micro-apps common, and organizations pushed these micro-apps toward repeatable, governed deployments. Tools like Claude Code, Anthropic's Cowork preview, and advanced Copilot experiences accelerated prototype speed, but without predictable infrastructure patterns, prototypes become unmanaged technical debt.
GitOps remains the de facto way to manage continuous delivery for cloud-native workloads. Teams want a lightweight, opinionated template to turn an LLM-powered personal app (like a dining recommender) into a production micro-app you can deploy, secure, and operate.
What you’ll get in this guide
- Starter repo layout and commands to get running locally
- Example Dockerfile and container build/push steps
- Kubernetes manifests (Deployment, Service, ConfigMap, Secret) and an ArgoCD Application manifest
- Sample CI (GitHub Actions) that builds an image, pushes it, and creates a Git commit in the GitOps repo
- LLM prompt templates and code examples to integrate with OpenAI-style APIs or Claude
- Security, cost, and scaling recommendations
Architecture overview
The starter micro-app follows a minimal but production-ready architecture:
- Frontend — lightweight React or Svelte app for choosing preferences (optional)
- API service — small Python/Node/Go service that orchestrates LLM calls and caching
- Data source — external APIs (Yelp/Google Places) or a small dataset stored in a managed DB
- GitOps repo — Kubernetes manifests and ArgoCD Application that deploys the service
- CI — builds container image, signs it, pushes it, and updates the GitOps repo
Starter repo layout (recommended)
starter-dining-microapp/
├─ app/                    # API service (Python/Node/Go)
│  ├─ src/
│  ├─ Dockerfile
│  └─ prompts/
├─ infra/                  # GitOps manifests (k8s/argocd)
│  ├─ base/
│  │  ├─ deployment.yaml
│  │  ├─ service.yaml
│  │  └─ kustomization.yaml
│  └─ overlays/prod/
│     └─ kustomization.yaml
├─ .github/workflows/      # CI workflow to build, sign, push, and update infra repo
└─ README.md
Quickstart — three commands
- Clone the template:
  git clone https://github.com/deployed-cloud/starter-dining-microapp.git
- Build and push the container image (example using Docker Buildx):
  docker buildx build --platform linux/amd64,linux/arm64 -t ghcr.io/ORG/dining-microapp:0.1.0 --push ./app
- Install ArgoCD (if not installed) and point it at infra/overlays/prod (example uses the argocd CLI):
  kubectl create namespace argocd
  kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
  argocd app create dining-microapp --repo https://github.com/ORG/dining-gitops.git --path overlays/prod --dest-server https://kubernetes.default.svc --dest-namespace default
  argocd app sync dining-microapp
Example Dockerfile (small, secure)
FROM python:3.11-slim as build
WORKDIR /app
COPY app/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ .
FROM python:3.11-slim
WORKDIR /app
COPY --from=build /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=build /app /app
USER 1000:1000
ENV PORT=8080
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
Kubernetes manifests (minimal, production-aware)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dining-microapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dining-microapp
  template:
    metadata:
      labels:
        app: dining-microapp
    spec:
      containers:
        - name: dining-microapp
          image: ghcr.io/ORG/dining-microapp:0.1.0
          ports:
            - containerPort: 8080
          env:
            - name: LLM_API_KEY
              valueFrom:
                secretKeyRef:
                  name: llm-api-secret
                  key: api_key
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: dining-microapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: dining-microapp
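The repo layout references kustomization.yaml files that aren't shown above. A minimal sketch of the prod overlay, matching the image name used elsewhere in this guide (the base kustomization simply lists deployment.yaml and service.yaml as resources):

```yaml
# infra/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  # CI's `kustomize edit set image` rewrites newTag on each release
  - name: ghcr.io/ORG/dining-microapp
    newTag: 0.1.0
```

Keeping the image tag in the overlay (rather than hard-coded in the Deployment) is what lets CI bump versions with a one-line Git commit.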
ArgoCD Application manifest (point ArgoCD at your GitOps repo)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dining-microapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/ORG/dining-gitops.git'
    targetRevision: HEAD
    path: overlays/prod
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
CI to build, sign, push, and update GitOps
Use a CI workflow that builds the image, signs it with Sigstore (cosign), pushes it to the registry, and commits an updated image tag or Kustomize image patch into the GitOps repo so ArgoCD can deploy. Here's a simplified GitHub Actions snippet; it assumes repository secrets for a cosign key pair (COSIGN_PRIVATE_KEY, COSIGN_PASSWORD) and a token with write access to the GitOps repo (GITOPS_TOKEN):
name: ci
on:
  push:
    paths:
      - 'app/**'
permissions:
  contents: read
  packages: write
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GHCR
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build image
        run: |
          docker buildx create --use
          docker buildx build --platform linux/amd64 -t ghcr.io/${{ github.repository_owner }}/dining-microapp:${{ github.sha }} --push ./app
      - uses: sigstore/cosign-installer@v3
      - name: Sign image (cosign)
        env:
          COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}
          COSIGN_PASSWORD: ${{ secrets.COSIGN_PASSWORD }}
        run: |
          cosign sign --yes --key env://COSIGN_PRIVATE_KEY ghcr.io/${{ github.repository_owner }}/dining-microapp:${{ github.sha }}
      - name: Update GitOps repo
        run: |
          git clone https://x-access-token:${{ secrets.GITOPS_TOKEN }}@github.com/ORG/dining-gitops.git
          cd dining-gitops/overlays/prod
          kustomize edit set image ghcr.io/ORG/dining-microapp=ghcr.io/ORG/dining-microapp:${{ github.sha }}
          git config user.name "ci-bot" && git config user.email "ci-bot@users.noreply.github.com"
          git add . && git commit -m "image: dining-microapp:${{ github.sha }}" && git push
LLM prompt templates — reproducible and auditable
Rather than hard-coding prompts into your app binary, store prompt templates in a ConfigMap or a managed prompt store (vector DB + metadata). Keep a few standard templates:
System prompt (context):
You are DiningRecommender v1. You receive a user profile and group preferences. Return a JSON list of recommended restaurants with fields: name, cuisine, distance_meters, estimated_price_level, score (0-1), reason.
Constraints: only include restaurants within {max_distance_meters} and price <= {max_price_level}. Do not hallucinate; if missing data, return source: "unknown".
User prompt (example-driven):
Example 1:
Input: {"user_preferences": ["sushi","vegan"], "group_size": 3, "location": "94043"}
Output: [{"name":"Sushi Place","cuisine":"Japanese","distance_meters":800,"estimated_price_level":2,"score":0.9,"reason":"High match for sushi and vegan options."}]
Now respond for Input: {input}
Store these templates in app/prompts/ and load them at runtime so you can update prompts through GitOps: a PR that updates the prompt files (or the ConfigMap manifest that mounts them) is all it takes to change behavior.
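Filling the {max_distance_meters}-style placeholders at runtime can be a few lines of stdlib Python. A sketch (the `render` helper is hypothetical; plain str.format would choke on the literal JSON braces in the example-driven user template, so this fills only named placeholders):

```python
def render(template: str, **values: object) -> str:
    """Fill {name} placeholders without touching literal JSON braces."""
    for key, val in values.items():
        template = template.replace("{" + key + "}", str(val))
    return template


system_prompt = (
    "Constraints: only include restaurants within {max_distance_meters} "
    "and price <= {max_price_level}."
)

# -> "Constraints: only include restaurants within 1500 and price <= 2."
print(render(system_prompt, max_distance_meters=1500, max_price_level=2))
```

Because rendering is deterministic, the rendered prompt can be logged alongside each LLM response for auditability.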
Example LLM call (Python)
import os
import requests

API_KEY = os.environ["LLM_API_KEY"]
SYSTEM_PROMPT = open("prompts/system.txt").read()
USER_TEMPLATE = open("prompts/user.txt").read()

# user_input is the JSON-encoded request described in the prompt templates above
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    json={
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": USER_TEMPLATE.replace("{input}", user_input)},
        ],
    },
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
# The model's answer is the message content, not the whole response envelope
recommendations = resp.json()["choices"][0]["message"]["content"]
Security & supply chain
- Image signing: Use cosign (Sigstore) to sign images during CI and verify signatures in ArgoCD or admission controllers.
- Secret management: Do NOT store LLM API keys in plain ConfigMaps. Use ExternalSecrets (AWS Secrets Manager, Vault) or sealed-secrets.
- SBOM & attestations: Generate an SBOM (Syft) and attach SLSA attestations. Keep provenance in the GitOps repo as artifacts.
- Policy enforcement: Use OPA/Gatekeeper or Kyverno to enforce resource limits, required signatures, and network policies for egress to LLM endpoints.
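The signature-verification policy mentioned above can be expressed as a Kyverno ClusterPolicy. A sketch (the policy name is arbitrary and the public key is a placeholder; adjust imageReferences to your registry path):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-dining-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/ORG/dining-microapp:*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <cosign public key>
                      -----END PUBLIC KEY-----
```

With this in place, unsigned images (including ones built locally and pushed by hand) are rejected at admission time.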
Cost, scaling and operational tips
- LLM costs: Batch calls and cache results using Redis or in-cluster cache — don’t call the LLM for every identical request.
- Autoscaling: Use HPA with CPU/memory metrics and consider queue-based autoscaling when using async workers for LLM calls.
- Preview environments: Use Argo Rollouts or ephemeral preview namespaces for PRs — each PR can point to a preview deployment using Image Automation and k8s overlays.
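The "don't call the LLM for every identical request" advice boils down to hashing the request into a stable cache key. A stdlib sketch with an in-memory dict (swap the dict for Redis in-cluster; `recommend_cached` and `cache_key` are hypothetical names, not part of the starter repo):

```python
import hashlib
import json
from typing import Callable

_cache: dict[str, list] = {}


def cache_key(user_preferences: list[str], group_size: int, location: str) -> str:
    """Stable key: identical requests (even with reordered prefs) hash the same."""
    payload = json.dumps(
        {"prefs": sorted(user_preferences), "size": group_size, "loc": location},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()


def recommend_cached(request: dict, call_llm: Callable[[dict], list]) -> list:
    key = cache_key(request["user_preferences"], request["group_size"], request["location"])
    if key not in _cache:  # only pay for the LLM call on a cache miss
        _cache[key] = call_llm(request)
    return _cache[key]
```

Sorting preferences before hashing means ["sushi", "vegan"] and ["vegan", "sushi"] share one cache entry, which matters for group requests.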
Advanced GitOps tips (2026)
- Image automation: Use ArgoCD Image Updater or Flux image automation to detect new signed images and open PRs to the GitOps repo automatically.
- Signed PRs: Configure your automation to attach cosign attestation metadata to the commit message or add an attestations/ directory to the GitOps repo.
- Policy as code: Ship OPA policies in the same repo and enforce them in CI so the GitOps repo only accepts compliant manifests.
Example workflows and decisions
Choose these patterns based on your needs:
- If you need fastest iteration: build images on push and let ArgoCD auto-sync. Use image automation to patch manifests.
- If you need strict compliance: require image signing and human approval before the GitOps repo is merged (use protected branches and required checks).
- For low-cost staging: use smaller instance types, fewer replicas, and a cheaper LLM model for staging vs. production models.
Case example: from prototype to production (inspired by the Where2Eat story)
Imagine a student builds a quick dining recommender in a weekend using LLMs and a local dataset. To make it reliable for her friend group and eventually a campus-wide roll-out, she:
- Extracts prompts to a versioned prompt directory and commits them to the GitOps repo.
- Wraps the prototype in a small API service, adds resource limits, and builds a signed container image.
- Configures ArgoCD to deploy from a repo where PRs are the only way to change production overlays.
- Adds cosign verification and an admission controller to allow only signed images, ensuring supply chain integrity.
"Micro-apps aren’t disposable — with a small amount of governance and GitOps, they’re repeatable, auditable, and production-ready."
Future predictions (2026 → 2027)
- More local runtime LLMs: Lightweight models running in edge containers will reduce API costs and latency for simple recommender tasks.
- Stronger supply chain standards: SLSA levels and automated attestations will be standard in GitOps flows.
- Agent-led deployments: Safe, approved agents will propose Git changes (image bumps, config tweaks) which humans will approve in PRs.
Actionable checklist — what to do now
- Clone the starter template: git clone https://github.com/deployed-cloud/starter-dining-microapp.git
- Set up a container registry (GitHub Container Registry, ECR, or GCR) and CI secrets
- Run the CI once to build and push an image
- Install ArgoCD and create the Application that points at your GitOps repo
- Add cosign signing to CI and enable an admission controller to validate signatures in cluster
- Store prompt templates in the repo and iterate via PRs — prompts are first-class, versioned artifacts
Where to extend this template
- Integrate a vector DB + RAG for contextualized recommendations
- Add rate limiting and cost controls around LLM calls
- Replace external LLMs with an on-prem or edge model for PII-sensitive usage
- Implement progressive delivery with Argo Rollouts (canary, blue/green)
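The rate-limiting extension above can start as a token bucket in front of the LLM client. A stdlib sketch (the `TokenBucket` class is illustrative; a production setup would more likely use Redis or an API gateway so limits are shared across replicas):

```python
import time


class TokenBucket:
    """Allow at most `rate` LLM calls per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Wrap the LLM call site in `if bucket.allow(): ...` and return a cached or fallback recommendation when the budget is exhausted.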
Final takeaways
Micro-apps born from rapid AI-assisted prototyping can and should follow production-grade patterns. The pattern in this guide — versioned prompts, signed images, GitOps-managed manifests, and ArgoCD-driven continuous delivery — is intentionally minimal and extendable. It helps teams reduce tool sprawl, enforce compliance, and make micro-apps repeatable.
Call to action
Ready to ship your dining micro-app? Clone the starter repo, run the CI, and point ArgoCD at the GitOps repo to deploy in minutes. If you want a walkthrough, workshop materials, or an enterprise template with SSO and secrets integration, fork the template and open an issue — we’ll publish step-by-step lab guides and ArgoCD appsets for multi-cluster deployments in early 2026.