AI Integration Engineer (MedGemma + On-Prem LLM) — Healthcare HIMS
Upwork · IN · Budget not specified · Expert · Score: 74
Skills: AI Model Integration, Python, FastAPI, PostgreSQL, Redis, Docker, Vector Database, Artificial Intelligence, API Integration
We are hiring an AI Integration Engineer to integrate MedGemma into our on-prem Hospital Information Management System (ZypoCare by Zyposoft).
You will build/extend an AI microservice (FastAPI preferred) to provide context-aware clinical drafting and a screen-aware AI Copilot across HIMS modules. The integration must be secure (RBAC + audit logs), reliable (streaming chat, retries, timeouts), and support RAG with our internal SOPs/templates.
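The reliability requirements (retries, timeouts) could be handled by a thin wrapper around the local inference call. A minimal sketch, assuming a hypothetical `call_model` function standing in for the real vLLM/Ollama client:

```python
import time

class ModelTimeout(Exception):
    """Raised when a single inference attempt overruns its deadline."""

def call_with_retries(call_model, prompt, retries=3, timeout_s=30.0, backoff_s=0.5):
    """Invoke a (hypothetical) local-inference function with retries,
    a per-attempt deadline, and exponential backoff between attempts."""
    last_err = None
    for attempt in range(retries):
        start = time.monotonic()
        try:
            result = call_model(prompt, timeout_s=timeout_s)
            if time.monotonic() - start > timeout_s:
                raise ModelTimeout(f"attempt {attempt + 1} exceeded {timeout_s}s")
            return result
        except Exception as err:  # in practice: connection/timeout errors
            last_err = err
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise last_err
```

In production this wrapper would sit between the FastAPI route handlers and the model server, so `/chat`, `/summarize`, and `/extract` all share one retry policy.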
Responsibilities
Integrate MedGemma for local inference (GPU-ready, CPU fallback)
Build RAG pipeline (embeddings + vector store + citations)
Implement secure APIs (/chat streaming, /summarize, /extract)
Enforce branch + role-based access, PHI-safe logs, full audit trails
Integrate with Next.js web app (screen-aware summaries, copilot panel)
Add observability + performance tuning + documentation/runbooks
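The RAG responsibility above (embeddings + vector store + citations) can be sketched with a toy in-memory retriever. The bag-of-words "embedding" and dict corpus are placeholders for a real embedding model and vector database; the point is that every returned chunk carries a citation id:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, top_k=2):
    """Return the top_k chunks, each tagged with its source id so the
    copilot's answers can cite the SOP/template they came from."""
    q = embed(query)
    scored = sorted(corpus.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return [{"citation": doc_id, "text": text} for doc_id, text in scored[:top_k]]
```

Swapping `embed` for a real model and `corpus` for a vector-store query leaves the citation-carrying shape of the results unchanged.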
Must have
Production LLM integration experience (HuggingFace/vLLM/Ollama/llama.cpp)
FastAPI + Python, Docker, Postgres/Redis
RAG + vector DB experience
Security mindset for sensitive data
Deliverables
Working AI copilot integrated in our HIMS with RBAC, audit logs, RAG citations, streaming chat, and deployable Docker setup.
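The PHI-safe logging deliverable might start with a redaction filter applied before anything reaches the audit trail. The patterns below are illustrative only, not a complete PHI catalogue:

```python
import re

# Illustrative patterns only; a real deployment needs a vetted PHI catalogue.
PHI_PATTERNS = [
    (re.compile(r"\b\d{10}\b"), "[REDACTED-PHONE]"),
    (re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"\bMRN[- ]?\d+\b", re.IGNORECASE), "[REDACTED-MRN]"),
]

def redact(message):
    """Strip likely PHI from a log line before it is written to the audit log."""
    for pattern, replacement in PHI_PATTERNS:
        message = pattern.sub(replacement, message)
    return message
```

Running log output through a filter like this keeps full audit trails possible without persisting identifiers in plain text.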