Information Technology
Full-Time
Brightly Software
Overview
Gen AI Engineer
Job Description
Brightly Software is seeking a high performer to join our Product team in the role of Gen AI Engineer to drive best-in-class, client-facing AI features by creating and delivering insights that inform clients' future decisions.
Role
As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions, faster. This will include the following:
- Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients.
- Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases.
- Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines.
- Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch (see the sketch after this list).
- Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks.
- Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building.
- Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling.
- Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines).
- Understand and develop state-management workflows using LangGraph.
- Engineer and evaluate prompts, including prompt chaining and output quality assessment
- Apply NLP and transformer model expertise to solve language tasks
- Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes
- Monitor and optimize model and pipeline performance for scalability and efficiency
- Communicate technical concepts clearly to cross-functional and non-technical stakeholders
- Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design
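For illustration only, below is a minimal, framework-agnostic sketch of the retrieval step behind a RAG pipeline of the kind referenced above. The embed_texts() helper is a hypothetical stand-in for a real embedding model, and in production the similarity search would be delegated to a vector store such as Pinecone, FAISS, or AWS OpenSearch rather than computed in NumPy.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# embed_texts() is a hypothetical placeholder for any embedding model
# (e.g., a SageMaker endpoint or a Hugging Face sentence encoder).
import numpy as np

def embed_texts(texts: list[str]) -> np.ndarray:
    # Placeholder: returns one random vector per text; a real
    # implementation would call the embedding model here.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))

def top_k(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    doc_vecs = embed_texts(documents)
    query_vec = embed_texts([query])[0]
    # Normalize, then use dot products as cosine similarities.
    doc_vecs = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    query_vec = query_vec / np.linalg.norm(query_vec)
    scores = doc_vecs @ query_vec
    best = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in best]

# The retrieved passages are then concatenated into the prompt that is
# sent to the LLM (the "augmented generation" step).
context = top_k("When is the next maintenance window?", ["doc a", "doc b", "doc c"])
prompt = "Answer using only this context:\n" + "\n".join(context)
```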
Qualifications
- Bachelor’s degree is required.
- 2-4 years of total experience with a strong focus on AI and ML, and 1+ years in core GenAI engineering.
- Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models.
- Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch.
- Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle.
- Experience working with agentic AI.
- Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS.
- Hands-on experience with leading AI/ML libraries such as Hugging Face Transformers and LangChain.
- Practical experience in working with vector databases and embedding methodologies for efficient information retrieval.
- Possess experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI (see the sketch after this list).
- Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies.
- Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures.
- Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes.
- Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training.
- Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders.
- Able to operate successfully in a lean, fast-paced organization, and to create a vision and organization that can scale quickly
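By way of illustration only (not part of the formal requirements), below is a minimal sketch of exposing a model capability through a FastAPI endpoint, as referenced above. The generate() function is a hypothetical placeholder for the real model invocation (e.g., a SageMaker inference endpoint or a Hugging Face pipeline), and the route name and request fields are arbitrary.

```python
# Minimal sketch of serving a model behind a FastAPI endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class PromptResponse(BaseModel):
    completion: str

def generate(prompt: str, max_tokens: int) -> str:
    # Placeholder: swap in the actual model call here.
    return f"(echo) {prompt[:max_tokens]}"

@app.post("/v1/generate", response_model=PromptResponse)
def generate_endpoint(req: PromptRequest) -> PromptResponse:
    # Validate input via Pydantic, call the model, and return JSON.
    return PromptResponse(completion=generate(req.prompt, req.max_tokens))

# Run locally with: uvicorn main:app --reload  (assuming the file is main.py)
```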