Overview
Experience: 4+ years
Salary: Confidential (based on experience)
Expected Notice Period: 7 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Contract for 6 Months (40 hrs a week / 160 hrs a month)
(*Note: This is a requirement for one of Uplers' clients - LL)
What do you need for this opportunity?
Must have skills required:
Anthropic, OpenAI, token budgeting, AI, LLM, MLOps, Vertex AI, GCP, Machine Learning, Python
LL is Looking for:
Overview
We are supporting an innovative early-stage SaaS company building a first-of-its-kind AI insights platform for commercial teams. The platform analyses customer conversations alongside sales and marketing data to generate actionable insights, automating strategy workflows that are traditionally manual. With the MVP built and entering its first customer trials, the team is now looking for an experienced Machine Learning Engineer to optimise and productionise LLM-driven workflows. This role offers the opportunity to shape how AI operates within the product: from prompt design and output safety to scalable, reliable inference in real-world environments.
About The Project
This role focuses on delivering practical, high-impact LLM integrations that power the platform’s core insight features. Responsibilities include designing robust prompts, deploying inference workflows, and ensuring safe, scalable model behaviour in production.
You will work closely with Product and Backend Engineering to embed AI logic into real user flows—enabling contextual, traceable insights with minimal friction. The current stack includes GCP, Vertex AI, Python-based scripting, and a pragmatic, delivery-focused approach.
This position is ideal for someone who thrives at the intersection of prompt engineering, MLOps, and production deployment, and who is comfortable shaping early-stage infrastructure while maintaining quality, performance, and safety.
Current Phase: Post-MVP Growth & Scaling
With the MVP complete, the focus has shifted from simply “getting it to work” to:
- Improving accuracy
- Reducing hallucination risk
- Optimising cost and performance
The goal is to evolve early prompt experimentation into a commercially robust, scalable AI engine.
The Role
You will own the AI/ML capability of the platform, ensuring insight generation is:
- Clear and consistently structured
- Cost-efficient and low-latency
- Production-grade and measurable
You will help transition the product from experimental prompts into a reliable, scalable system that supports long-term growth.
Must-Have Experience & Skills
Technical / Product
- 2–3+ years of prompt engineering or applied LLM integration experience
- Strong understanding of OpenAI, Anthropic, or Vertex AI APIs
- Experience deploying LLM inference pipelines in production
- Proficiency in Python and cloud-based backend functions
- Knowledge of token budgeting, latency constraints, and output control
- Experience with prompt testing, risk mitigation, and hallucination reduction
- Hands-on exposure to vector databases and RAG architectures (required)
- Exposure to embedding-based tagging (bonus)
- Understanding of safe, cost-efficient LLM design in early-stage products
- Ability to translate business language and taxonomies into model prompts
- Experience collaborating with Product Managers and domain experts
- Strong judgement around end-user expectations, tone, and insight utility
- Familiarity with privacy, governance, and enterprise data practices (preferred)
- Strong collaboration skills across Product, Backend, and Leadership teams
- High autonomy with a bias for iteration and experimentation
- Clear written and verbal communication
- Curious, organised, and highly user-focused
Key Responsibilities
- Prompt & Model Optimisation: Improve prompt quality, error handling, and structured outputs
- Performance Engineering: Optimise token usage, latency, grounding strategies, and hallucination safeguards
- Evaluation & Metrics: Define and implement metrics to assess insight quality and reliability
- Architecture: Contribute to future architecture decisions for content generation and RAG workflows
- Production Lifecycle: Partner with engineering on scalable deployment and model lifecycle management
What Success Looks Like
- LLM outputs are relevant, structured, and aligned with business tone
- Inference pipelines are performant, stable, and scalable
- Prompts support core insight categories (e.g., sentiment, trends, themes)
- Strong collaboration across product and engineering teams
- AI infrastructure enables rapid iteration beyond MVP
Practical Details
- Equipment: BYOD (bring your own device)
- Onboarding: Intro sessions with engineering, product, and delivery leads
- Eligibility: Candidates must not be based in regions subject to UK financial sanctions
How to apply for this opportunity?
- Step 1: Click on Apply! and register or log in on our portal.
- Step 2: Complete the screening form and upload your updated resume.
- Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual remote opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement.
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!