AI Engineer – Vision + LLM Pipeline
Upwork · US · Budget: not specified · Intermediate · Score: 36
Python · Machine Learning · Computer Vision
We are looking for a senior AI / backend engineer to work on a production-grade vision + LLM inference pipeline. This role is focused on system design and implementation, not experimentation or prompt-only work.
The system combines:
Vision-enabled LLMs
Optional secondary models (OpenAI fallback)
Backend policy and validation layers
Progressive / staged responses for low-latency UX
This is a technical role for someone who understands how AI models behave in production, including failure modes, cost tradeoffs, and latency constraints.
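To make the routing-and-fallback requirement concrete, here is a minimal sketch of the kind of logic involved. The function names and payload shape are illustrative placeholders, not part of the posting: `primary` and `fallback` stand in for real client calls (e.g. a Groq vision model with an OpenAI secondary).

```python
def call_with_fallback(primary, fallback, payload, retries=1):
    """Route a request to the primary model, falling back on failure.

    `primary` and `fallback` are stand-ins for real API client calls;
    `retries` controls how many times the primary is attempted first.
    """
    last_err = None
    for _ in range(retries + 1):
        try:
            return {"model": "primary", "result": primary(payload)}
        except Exception as err:  # real code would catch specific API errors
            last_err = err
    try:
        return {"model": "fallback", "result": fallback(payload)}
    except Exception:
        raise RuntimeError("both models failed") from last_err
```

A production version would also record which route served the request, for the cost and error observability mentioned below.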
Scope of Work
You will help design and implement:
Vision-based image analysis
Multi-call LLM pipelines (vision + text generation)
Parallel and progressive inference flows
Backend validation and safety filters
Prompt control via system vs runtime instructions
Model routing and fallback logic
Token, latency, and error observability
Tool / function calling integration where appropriate
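The "parallel and progressive inference" item can be sketched with `asyncio`: a fast vision pass yields a preview stage immediately, and the slower text-generation pass yields the final stage afterwards. The stage names and the two pass functions are hypothetical placeholders for real model calls.

```python
import asyncio

async def vision_pass(image: str) -> str:
    # placeholder for a vision-LLM call (e.g. image captioning)
    await asyncio.sleep(0.01)
    return f"caption of {image}"

async def text_pass(caption: str) -> str:
    # placeholder for a follow-up text-generation call
    await asyncio.sleep(0.01)
    return f"answer based on {caption}"

async def staged_response(image: str):
    """Yield a low-latency partial result first, then the full answer."""
    caption = await vision_pass(image)
    yield {"stage": "preview", "data": caption}
    answer = await text_pass(caption)
    yield {"stage": "final", "data": answer}

async def collect(image: str):
    return [stage async for stage in staged_response(image)]
```

In a real backend the stages would be streamed to the client as they are produced rather than collected into a list.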
Required Skills
You should be strong in most of the following:
Python (FastAPI or similar backend frameworks)
Vision-enabled LLMs (Groq, OpenAI, or comparable)
Designing LLM pipelines, not just single prompts
Parallel execution and streaming responses (SSE, NDJSON, or equivalent)
Prompt control and structured outputs
Guardrails for hallucination, safety, and coherence
Performance optimization for inference latency and cost
Clean, maintainable backend architecture
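For the streaming-response skills above, the wire formats are small enough to show directly. The two helpers below illustrate the standard framing for Server-Sent Events (`text/event-stream`) and newline-delimited JSON; the event and field names are illustrative.

```python
import json

def to_sse(event: str, data: dict) -> str:
    """Format one Server-Sent Events frame: event line, data line, blank line."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def to_ndjson(records) -> str:
    """Serialize a sequence of dicts as newline-delimited JSON, one per line."""
    return "".join(json.dumps(record) + "\n" for record in records)
```

A FastAPI endpoint would typically return these frames through a `StreamingResponse` as each inference stage completes.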
Strong Plus If You Have
Experience with LLM tool / function calling
Prior work on image understanding systems
Familiarity with AWS or similar cloud environments
Experience debugging real-world LLM failures
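The tool / function calling item usually reduces to two pieces: a JSON-schema tool definition advertised to the model, and a dispatcher that validates and executes the call the model emits. The tool name, schema, and handler below are hypothetical examples, not an API from the posting.

```python
import json

# Hypothetical tool definition in the JSON-schema style most LLM APIs expect.
TOOLS = {
    "get_image_size": {
        "description": "Return width and height of a stored image.",
        "parameters": {
            "type": "object",
            "properties": {"image_id": {"type": "string"}},
            "required": ["image_id"],
        },
    },
}

def get_image_size(image_id: str) -> dict:
    # placeholder lookup; a real system would query object storage
    return {"image_id": image_id, "width": 1024, "height": 768}

HANDLERS = {"get_image_size": get_image_size}

def dispatch_tool_call(call_json: str) -> dict:
    """Parse a model-emitted tool call and route it to the matching handler."""
    call = json.loads(call_json)
    name = call["name"]
    if name not in HANDLERS:
        raise ValueError(f"unknown tool: {name}")
    return HANDLERS[name](**call["arguments"])
```

Keeping the schema and the handler registry side by side makes it harder for the advertised tools and the executable ones to drift apart.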
What This Role Is Not
Not prompt engineering only
Not chatbot development
Not academic research
Not a one-off script
This is systems-level AI engineering.
Client
Spent: $3,100 · Rating: 0.0 · Verified