**Upwork** · US · Budget not specified · Expert level
**Skills:** Node.js · Python · PostgreSQL · Docker · Terraform · LLM / Large Language Models · GitHub API · AI Agent Development · DevOps Engineering · REST API Development
# AI Platform Engineer — Agent Development Operating System (Node.js + LiteLLM + GitHub Automation)
## What we're building
We are building **OpenClaw Builder** — an AI-powered software development operating system that enables a two-person team to manage unlimited AI engineering capacity and ship products daily across multiple ventures simultaneously.
The platform orchestrates 18 specialized AI agents (QA, Security, Spec Compliance, Backend, Frontend, DevOps, ML, LLM, Data, and more) through a fully automated pipeline: spec → tasks → code → review → staging → production. No human writes code. No human reviews PRs. Agents do everything. The two humans set direction, maintain quality, and approve what ships.
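The stage flow above can be sketched as a tiny state machine. The stage names mirror the pipeline, but the transition rules (e.g. a failed review sending work back to code) are illustrative assumptions, not taken from the spec:

```typescript
// Pipeline stages as described: spec → tasks → code → review → staging → production.
const STAGES = ["spec", "tasks", "code", "review", "staging", "production"] as const;
type Stage = (typeof STAGES)[number];

// A work item advances one stage at a time; a failed review sends it back to "code".
// (Assumed rule for illustration — the real spec defines its own transitions.)
function nextStage(current: Stage, reviewPassed = true): Stage {
  if (current === "review" && !reviewPassed) return "code";
  const i = STAGES.indexOf(current);
  return i < STAGES.length - 1 ? STAGES[i + 1] : current;
}
```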
This is not a typical project. If you're looking for a standard web app build, this isn't it. If you're genuinely excited about AI-first engineering systems and want to build the platform that builds everything else — read on.
---
## The engagement
**Phase 1 — Fixed price build (15 days)**
Implement OpenClaw Builder v1 from a detailed, implementation-grade spec and Build Packet. The spec covers every component, schema, API contract, agent definition, GitHub enforcement rule, and acceptance criterion. You are not designing the system — you are building it precisely as documented, using Cursor and AI agents as your primary development tools.
**Phase 2 — Monthly retainer (if Phase 1 is strong)**
Ongoing Platform Engineer role. You keep the platform healthy, tune agent prompts, resolve escalations, and expand agent capabilities as we ship across our venture portfolio. This is a long-term relationship if Phase 1 demonstrates the right alignment.
---
## What you will build in Phase 1
- **PostgreSQL state machine** (Prisma + DO Managed DB) — the source of truth for all platform state
- **Telegram bot** — command interface for the two human operators
- **GitHub automation** — Checks API integration, builder-bot push identity, branch/PR automation, Ruleset enforcement via Terraform
- **LiteLLM Proxy** — model routing layer; all agents call the Proxy, no provider keys in agent code
- **5 AI review agents** as GitHub check-runs (QA, Security, Spec Compliance, Performance, Consistency)
- **AI PM + AI Engineering Manager** — spec-to-task decomposition pipeline
- **Linear sync** — bidirectional task state sync with webhook dedup
- **Multi-project orchestration** — parallel agent instances across simultaneous projects
- **Canary rollout + SLO monitoring** — Docker + Nginx weighted routing, auto-rollback on breach
- **Feature flag system + break-glass identity management**
- **Full observability stack** — Prometheus + Grafana + Loki
The full spec, Build Packet (28 issues with done-when checklists and BDD), data models, API contracts, and dependency map will be shared with shortlisted candidates.
---
## The stack
- **Backend:** Node.js 20 + Express + Prisma + PostgreSQL
- **Frontend:** React + Vite + shadcn/ui + Tailwind (minimal internal UI)
- **Mobile capability:** React Native + Expo (agent capability, not v1 scope)
- **AI/LLM:** Python 3.11 + LiteLLM Proxy + Anthropic SDK + Pydantic
- **DevOps:** DigitalOcean + Docker + Nginx + Terraform + GitHub Actions
- **Observability:** Prometheus + Grafana + Loki
- **Task management:** Linear (GraphQL API via @linear/sdk)
- **Human interface:** Telegram Bot API
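Tying the stack together, the LiteLLM Proxy is what keeps provider keys out of agent code: agents call a role-level alias and the proxy maps it to a concrete model. A minimal illustrative proxy config (the alias, model id, and env-var names are placeholders, not from the spec):

```yaml
model_list:
  # Agents request an alias; the proxy resolves it to a provider model.
  - model_name: review-agent                        # alias the agents call
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022   # placeholder model id
      api_key: os.environ/ANTHROPIC_API_KEY         # key lives with the proxy, never in agent code
litellm_settings:
  drop_params: true
```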
---
## Who we're looking for
**This role requires genuine alignment, not just technical skill.** The Platform Engineer's job is to make agents more capable — not to build a case for hiring more humans. If your instinct is to scale teams, this is not the right fit.
**You are a strong match if you:**
- Have built production systems involving LLM agents, GitHub automation, or orchestration pipelines
- Are comfortable reading a detailed spec and implementing it precisely — not redesigning it
- Think in systems: you understand state machines, webhook dedup, idempotency, retry logic, and why they matter
- Use Cursor, Aider, or similar AI coding tools as your primary development environment — you are not hand-writing every line
- Have worked with the GitHub Checks API (not just Statuses API)
- Have deployed infrastructure via Terraform — you do not configure DigitalOcean manually
- Understand LiteLLM, prompt engineering, and how to write eval harnesses for LLM outputs
- Can operate async and independently — daily brief, escalation via Telegram, no micromanagement
- Write clean commit history (feat/fix/chore convention) and treat the bot identity rule seriously
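On the webhook dedup / idempotency point above, the core idea is that Linear and GitHub redeliver webhooks, so handlers must key on the delivery id and make reprocessing a no-op. A minimal sketch (in the real platform the seen-set would live in Postgres, not memory):

```typescript
// Dedup by delivery id so a redelivered webhook applies its side effect exactly once.
const seenDeliveries = new Set<string>();

function handleWebhook(deliveryId: string, apply: () => void): "applied" | "duplicate" {
  if (seenDeliveries.has(deliveryId)) return "duplicate";
  seenDeliveries.add(deliveryId);
  apply(); // runs exactly once per delivery id
  return "applied";
}
```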
**Bonus (not required):**
- Experience with Linear API / @linear/sdk
- Prior work on multi-agent systems or AI orchestration platforms
- Familiarity with canary deployments and SLO-based rollback
---
## What disqualifies you
We want to be direct so you don't waste your time:
- You want to manage a team of human engineers
- You think AI tools are a shortcut for junior devs, not a force multiplier for seniors
- You need heavy PM oversight to make decisions
- You would rather rebuild the spec from scratch than implement it as written
- You have not built anything with LLM APIs in production
---
## How to apply
Do not send a generic proposal. We will not read it.
Send us three things:
**1. One system you've built that's most relevant to this project**
Describe what it did, what the hardest part was, and what you'd do differently. Two paragraphs maximum.
**2. Your honest read on the riskiest part of this build**
We have a detailed spec. Where do you think implementation is most likely to slip, and why? One paragraph.
**3. Your toolchain**
What AI coding tools do you use daily and how? (Cursor, Aider, Claude, Copilot, other — be specific.)
Applications that answer all three will be reviewed within 48 hours. Shortlisted candidates receive the full spec and Build Packet before any further conversation.
---
*Async-friendly. Any timezone. Right person wins on merit.*
Client: $3,239.32 spent · 4.6 rating · payment verified