Information Technology
Full-Time
BEO Software Private Limited
Role Snapshot
You’ll be a hands-on AI engineer who helps design, prototype, and deliver generative AI capabilities. You’ll work on practical research, build POCs, fine-tune open models, contribute to multimodal experiments, and help take solutions toward production — all while rapidly learning both modern and classical ML techniques.
What You’ll Do (Core Responsibilities)
Build and evaluate prototypes / POCs for generative AI features and ideas.
Fine-tune and adapt open-source LLMs and smaller generative models for targeted use cases.
Collaborate on multimodal experiments (text ↔ image ↔ audio) and implement training/evaluation pipelines.
Implement data preprocessing, augmentation, and basic feature engineering for model inputs.
Run experiments: design evaluation metrics, perform ablations, log results, and iterate on models (see the logging sketch after this list).
Optimize inference and memory footprint for models (quantization, batching, basic distillation).
Contribute to model training pipelines, scripting, and reproducible experiments.
Work with cross-functional teams (product, infra, MLOps) to prepare prototypes for deployment.
Write clear docs and present technical results to the team; participate in code reviews and knowledge sharing.
Continuously learn: read papers, try new tools, and bring fresh ideas into projects.
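The experiment bullet above mentions logging results with tools such as MLflow or Weights & Biases. As a minimal, illustrative sketch only (not a workflow prescribed by this role), tracking a run with MLflow could look like the following; the run name, parameters, and metric values are placeholder assumptions.

```python
# Illustrative sketch: logging an experiment run with MLflow.
# Run name, parameters, and metric values are placeholders, not
# settings defined by this role.
import mlflow

with mlflow.start_run(run_name="lora-ablation-r8"):       # hypothetical run name
    mlflow.log_params({"base_model": "mistral-7b",         # hypothetical config
                       "lora_rank": 8,
                       "learning_rate": 2e-4})
    for step in range(3):
        val_loss = 1.0 / (step + 1)                        # stand-in for a real eval metric
        mlflow.log_metric("val_loss", val_loss, step=step)
```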
Desired Candidate Profile
Mandatory Technical Skills
Strong Python programming skills and familiarity with ML tooling (NumPy, pandas, scikit-learn).
Hands-on experience (2+ years) with PyTorch and/or TensorFlow for model development and fine-tuning.
Solid grounding in classical ML & DL: supervised/unsupervised learning, optimization, CNNs, RNNs/LSTMs, Transformers.
Good understanding of algorithms & data structures, numerical stability, and computational complexity.
Practical experience fine-tuning open models (Hugging Face Transformers, LLaMA family, BLOOM, Mistral, or similar).
Familiarity with PEFT approaches (LoRA, adapters, QLoRA basics) and simple efficiency techniques (mixed precision, model quantization); a minimal LoRA sketch follows this list.
Comfortable running experiments, logging (e.g., Weights & Biases, MLflow), and reproducing results.
Exposure to at least one cloud ML environment (GCP Vertex AI, AWS SageMaker, or Azure AI) for training or deployment tasks.
Good communication skills for documenting experiments and collaborating with product/infra teams.
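For context on the PEFT item above, here is a minimal, illustrative sketch of attaching a LoRA adapter to an open checkpoint with Hugging Face Transformers and the peft library; the checkpoint name, rank, and target modules are assumptions chosen for the example, not requirements of the role.

```python
# Illustrative sketch: LoRA fine-tuning setup with Hugging Face peft.
# The checkpoint and hyperparameters below are example assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"    # placeholder open checkpoint (large download)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                    # adapter rank
    lora_alpha=16,                          # LoRA scaling factor
    target_modules=["q_proj", "v_proj"],    # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # wraps the base model with trainable adapters
model.print_trainable_parameters()          # only the small adapter weights are trainable
```

From here, the wrapped model trains like any Transformers model, which is the kind of fine-tuning workflow the responsibilities above describe.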
Highly Desirable / Preferred Skills
Experience with multimodal training pipelines or cross-modal loss functions.
Familiarity with MLOps concepts (model packaging, CI/CD for models, basic monitoring).
Experience with tooling like DeepSpeed, Accelerate, Ray, or similar distributed/efficiency libraries.
Knowledge of LangGraph / AutoGen / CrewAI, or interest in agentic systems.
Experience with BigQuery / Synapse or data warehousing for analytics.
Publications, open-source contributions, or sample projects demonstrating model work (GitHub, Colabs, demos).
Awareness of AI safety and responsible-AI best practices.
Behavioral & Growth Attributes
Fast learner & curious: picks up new algorithms, libraries, and research quickly.
Strong ownership: delivers features from prototype to handoff with minimal supervision.
Collaborative: works well with senior researchers, engineers, and product managers.
Coachable: accepts feedback and grows technical and soft skills.
Results-oriented: balances experimentation with delivering measurable outcomes.
Why This Role Is Great
Rapid exposure to state-of-the-art generative AI and real production use cases.
Mix of research mindset and product delivery — great for people who want to grow into senior/researcher roles.
Mentorship from senior researchers and opportunities to publish or demo prototypes.