Overview
OpenText - The Information Company

OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.
AI-First. Future-Driven. Human-Centered.
At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.
YOUR IMPACT
We are seeking a highly skilled AI Systems Engineer to lead the design, development, and optimization of Retrieval-Augmented Generation (RAG) pipelines and multi-agent AI workflows within enterprise-scale environments.
The role requires deep technical expertise across LLM orchestration, context engineering, and production-grade deployment practices. You will work cross-functionally with data, platform, and product teams to build scalable, reliable, and context-aware AI systems that power next-generation enterprise intelligence solutions.
What The Role Offers
- Be part of an enterprise AI transformation team shaping the future of LLM-driven applications.
- Work with cutting-edge technologies in AI orchestration, RAG, and multi-agent systems.
- Opportunity to architect scalable, secure, and context-aware AI systems deployed across global enterprise environments.
- Collaborative environment fostering continuous learning and innovation in Generative AI systems engineering.
Your Responsibilities

- Architect, implement, and optimize enterprise-grade RAG pipelines covering data ingestion, embedding creation, and vector-based retrieval.
- Design, build, and orchestrate multi-agent workflows using frameworks such as LangGraph, CrewAI, or the Agent Development Kit (ADK) for collaborative task automation.
- Engineer prompts and contextual templates to enhance LLM performance, accuracy, and domain adaptability.
- Integrate and manage vector databases (pgvector, Milvus, Weaviate, Pinecone) for semantic search and hybrid retrieval.
- Develop and maintain data pipelines for structured and unstructured data using SQL and NoSQL systems.
- Expose RAG workflows through APIs using FastAPI or Flask, ensuring high reliability and performance.
- Containerize, deploy, and scale AI microservices using Docker, Kubernetes, and Helm within enterprise-grade environments.
- Implement CI/CD automation pipelines via GitLab or similar tools to streamline builds, testing, and deployments.
- Collaborate with cross-functional teams (Data, ML, DevOps, Product) to integrate retrieval, reasoning, and generation into end-to-end enterprise systems.
- Monitor and enhance AI system observability using Prometheus, Grafana, and OpenTelemetry for real-time performance and reliability tracking.
- Integrate LLMs with enterprise data sources and knowledge graphs to deliver contextually rich, domain-specific outputs.
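The core RAG loop named in the responsibilities above (ingest documents, embed them, retrieve by similarity, assemble a grounded prompt) can be sketched in miniature. This is an illustrative toy, not OpenText's stack: the bag-of-words `embed` and in-memory document list stand in for a real embedding model and a vector database such as pgvector or Milvus, and the assembled prompt would be sent to an LLM rather than printed.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production pipeline would call an
    # embedding model and store the vectors in pgvector/Milvus/Weaviate.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Vector-based retrieval step: rank all documents by similarity to
    # the query embedding and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Context-engineering step: ground the LLM in the retrieved passages.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "OpenText sells information management software.",
    "Kubernetes schedules containers across a cluster.",
    "RAG pipelines retrieve documents before generation.",
]
print(build_prompt("What do RAG pipelines retrieve?", docs))
```

In a real deployment, `build_prompt` would sit behind a FastAPI endpoint and the ranking would be delegated to the vector store's own similarity search.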
What You Need To Succeed

- Education: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or a related technical discipline.
- Experience: 5–10 years in AI/ML system development, deployment, and optimization within enterprise or large-scale environments.
- Deep understanding of Retrieval-Augmented Generation (RAG) architecture and hybrid retrieval mechanisms.
- Proficiency in Python with hands-on expertise in FastAPI, Flask, and REST API design.
- Strong experience with vector databases (pgvector, Milvus, Weaviate, Pinecone).
- Proficiency in prompt engineering and context engineering for LLMs.
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes, Helm) in production-grade deployments.
- Experience with CI/CD automation using GitLab, Jenkins, or equivalent tools.
- Familiarity with LangChain, LangGraph, Google ADK, or similar frameworks for LLM-based orchestration.
- Knowledge of AI observability, logging, and reliability engineering principles.
- Understanding of enterprise data governance, security, and scalability in AI systems.
- Proven track record of building and maintaining production-grade AI applications with measurable business impact.
- Experience in fine-tuning or parameter-efficient tuning (PEFT/LoRA) of open-source LLMs.
- Familiarity with open-source model hosting, LLM governance frameworks, and model evaluation practices.
- Knowledge of multi-agent system design and Agent-to-Agent (A2A) communication frameworks.
- Exposure to LLMOps platforms such as LangSmith, Weights & Biases, or Kubeflow.
- Experience with cloud-based AI infrastructure (AWS SageMaker, Azure OpenAI, GCP Vertex AI).
- Working understanding of distributed systems, API gateway management, and service mesh architectures.
- Strong analytical and problem-solving mindset with attention to detail.
- Effective communicator with the ability to collaborate across technical and business teams.
- Self-motivated, proactive, and capable of driving end-to-end ownership of AI system delivery.
- Passion for innovation in LLM orchestration, retrieval systems, and enterprise AI solutions.
OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws.
If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.