Ahmedabad, Gujarat, India
Information Technology
Full-Time
MagellanicCloud
Overview
Responsibilities
- Design, develop, train, and evaluate Large Language Models (LLMs) and Generative AI models for various applications.
- Implement and optimize Retrieval-Augmented Generation (RAG) pipelines to enhance the performance and accuracy of LLM-based systems (a minimal retrieval sketch follows this list).
- Utilize your expertise in Python and leading deep learning frameworks such as TensorFlow and PyTorch.
- Leverage tools and libraries like LangChain and Hugging Face Transformers to build and deploy AI solutions.
- Work extensively with cloud-based AI platforms and APIs, including AWS Bedrock, AWS Nova (if applicable), Azure AI, Vertex AI, and OpenAI APIs.
- Implement and maintain robust MLOps pipelines for the automation of AI model deployment, monitoring, and continuous integration/continuous delivery (CI/CD).
- Design and integrate with Vector Databases such as Pinecone, FAISS, and ChromaDB for efficient information retrieval and storage.
- Collaborate closely with cross-functional teams, including product managers, researchers, and other engineers, to define and deliver AI-powered solutions.
- Stay up-to-date with the latest research and advancements in LLMs, Generative AI, and the broader AI/ML landscape.
- Contribute to the development of best practices and standards for AI/ML development and deployment within the organization.
- Troubleshoot and resolve issues related to AI/ML models and infrastructure.
- Document all aspects of the development process, including model architecture, training procedures, and deployment strategies.
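For illustration only, the sketch below shows the core of a RAG-style retrieval step such as the one described above: documents are embedded with sentence-transformers, indexed in FAISS, and the closest matches are folded into an LLM prompt. The model name, the sample documents, and the omitted generation call are placeholder assumptions, not part of this role's stack.

```python
# Minimal RAG retrieval sketch (illustrative only).
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# Toy document store; in practice these would be chunked source documents.
documents = [
    "RAG grounds LLM answers in retrieved documents.",
    "FAISS performs efficient similarity search over dense vectors.",
    "Vector databases store embeddings for fast nearest-neighbour lookup.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_vectors = embedder.encode(documents, convert_to_numpy=True).astype(np.float32)

index = faiss.IndexFlatL2(doc_vectors.shape[1])  # exact L2 similarity index
index.add(doc_vectors)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    query_vec = embedder.encode([query], convert_to_numpy=True).astype(np.float32)
    _, indices = index.search(query_vec, k)
    return [documents[i] for i in indices[0]]

query = "How does RAG improve LLM accuracy?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# The final generation call (OpenAI, Bedrock, Vertex AI, etc.) is omitted
# because the provider and model are deployment-specific.
print(prompt)
```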
Requirements
- 6-8 years of hands-on experience in Artificial Intelligence and Machine Learning (AI/ML), with a strong focus on Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and Generative AI.
- Proven proficiency in Python programming.
- Deep understanding and practical experience with deep learning frameworks such as TensorFlow and PyTorch.
- Hands-on experience with LangChain and the Hugging Face ecosystem (Transformers library, Datasets, Accelerate, etc.).
- Significant experience working with cloud-based AI platforms and their respective APIs: AWS Bedrock, AWS Nova (if applicable), Azure AI, Vertex AI, and OpenAI APIs.
- Strong background in implementing MLOps practices and tools for automating the deployment, monitoring, and management of AI models.
- Practical knowledge and experience with Vector Databases, including at least one of the following: Pinecone, FAISS, ChromaDB.
- Experience with fine-tuning pre-trained language models (a minimal Trainer sketch follows at the end of this list).
- Knowledge of different generative modeling techniques (e.g., GANs, diffusion models).
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Familiarity with data engineering pipelines and tools.
- Contributions to open-source AI/ML projects.
- Advanced degree (Master's or Ph.D.) in Computer Science, Machine Learning, or a related field.
- Publications in relevant AI/ML conferences or journals.
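As a companion to the fine-tuning requirement above, here is a minimal sketch of adapting a pre-trained model with the Hugging Face Trainer API. The base model, the dataset, and the training hyperparameters are placeholder choices for illustration, not prescriptions for this role.

```python
# Minimal fine-tuning sketch (illustrative only) using Hugging Face Transformers.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny slice of a public dataset, just to keep the example fast.
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()  # fine-tunes the classification head and encoder weights
```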