About the Role

We build AI features for real products, not research papers. As our AI/ML Engineer, you will design and deploy models that power resume parsing (Skillety), autonomous DevOps agents (OpenClaw), and intelligent document processing across our product suite. You will have access to on-premise GPU infrastructure with CUDA acceleration.

You will work across the full ML lifecycle: data preparation, model training, evaluation, deployment, and monitoring in production. We use PyTorch, Hugging Face Transformers, and integrate with both local models and commercial APIs (OpenAI, Anthropic, Cohere).

This role is ideal for someone who loves building practical AI systems and can bridge the gap between research and production.

What You Will Do

  • Design and implement ML pipelines for NLP, computer vision, and recommendation systems
  • Train, fine-tune, and deploy models using PyTorch on GPU infrastructure
  • Build RAG (Retrieval-Augmented Generation) systems and vector search pipelines
  • Integrate LLMs (GPT-4, Claude, open-source models) into production applications
  • Develop data preprocessing and feature engineering pipelines
  • Monitor model performance, detect drift, and implement retraining workflows
  • Collaborate with product and engineering teams to identify high-impact AI opportunities
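To give a flavor of the RAG and vector-search work listed above, here is a minimal retrieval sketch using cosine similarity over toy embeddings. This is an illustrative example only, not code from our products; the vectors and document names are made up, and production systems would use a real embedding model and a vector database such as FAISS or Qdrant.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, k=2):
    """Return the top-k (doc_id, score) pairs ranked by cosine similarity."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in corpus.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Toy 3-dimensional "embeddings" standing in for real model output.
corpus = {
    "resume_a": [0.9, 0.1, 0.0],
    "resume_b": [0.1, 0.9, 0.0],
    "resume_c": [0.7, 0.3, 0.1],
}
top = retrieve([1.0, 0.0, 0.0], corpus)
print(top[0][0])  # resume_a ranks highest against this query
```

In practice the retrieved documents would be fed into an LLM prompt as context; this sketch only covers the ranking step.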

What We Are Looking For

  • Strong Python proficiency with experience in PyTorch or TensorFlow
  • Hands-on experience deploying ML models to production environments
  • Understanding of NLP fundamentals: embeddings, transformers, tokenization
  • Experience with vector databases (FAISS, Pinecone, Qdrant, or similar)
  • Familiarity with MLOps practices: experiment tracking, model versioning, CI/CD for ML
  • Solid grasp of statistics, linear algebra, and probability theory
  • Experience with Docker and containerized model serving

Nice to Have

  • Experience with CUDA programming or GPU optimization
  • Published research or contributions to ML open-source projects
  • Experience with LangChain, LlamaIndex, or similar LLM frameworks
  • Knowledge of graph neural networks or knowledge graphs

Quick Facts

Department: AI/ML
Location: Bengaluru / Remote
Type: Full-time
Experience: 2-5 years
Salary: 12-22 LPA

Benefits

  • Competitive salary (12-22 LPA) with meaningful equity
  • On-premise GPU infrastructure (NVIDIA GTX 1650, upgrades planned)
  • Remote-first culture — work from anywhere in India
  • Learning budget for courses, conferences, and certifications
  • Comprehensive health insurance for you and your family
  • Flexible working hours — outcomes over hours logged

Why TechSaaS?

We are not just another IT services company.

Ship Real Products

No throwaway prototypes. You will build and ship products that real users depend on daily across HR-Tech, Ed-Tech, and FinTech.

AI-First Culture

We use AI in everything — from AI-powered recruitment tools to autonomous DevOps agents. Every team member gets GPU access and AI tools.

Small, Expert Team

No layers of management. You will work directly with the founder and senior engineers. Your ideas ship in days, not quarters.

Cutting-Edge Stack

Next.js 15, PyTorch, Docker, self-hosted GPU infrastructure, Cloudflare Zero Trust. We use the best tools for every problem.

Equity for Everyone

Every team member gets meaningful equity. When TechSaaS wins, you win. We believe in building wealth together.

Remote-First

Work from anywhere in India. We optimize for async collaboration, deep work, and flexible hours. Outcomes over hours.

Ready to Apply?

Send your resume and a short note about why this role excites you. We typically respond within 48 hours.