About the Role
Job Summary:
We are looking for a passionate and innovative Generative AI Engineer to join our AI team. You will work on developing, fine-tuning, and deploying generative AI models such as Large Language Models (LLMs), diffusion models, and multimodal architectures for a wide range of real-world applications.
This role requires strong expertise in machine learning, deep learning, and model deployment, with a focus on generative technologies. You will collaborate cross-functionally with product, engineering, and research teams to build cutting-edge generative AI solutions that power next-gen user experiences.
Key Responsibilities:
* Design, train, fine-tune, and deploy generative models (e.g., GPT, LLaMA, Stable Diffusion, DALL·E).
* Evaluate and improve model performance using techniques such as RLHF (Reinforcement Learning from Human Feedback), prompt tuning, LoRA, and other parameter-efficient fine-tuning methods.
* Collaborate with data engineers and MLOps teams to manage data pipelines and scalable training infrastructure.
* Integrate generative models into applications (e.g., chatbots, content generation tools, copilots).
* Stay up to date with the latest advancements in generative AI and NLP research.
* Optimize models for latency, throughput, and cost-efficiency in production.
* Ensure safety, bias mitigation, and ethical use of generative models.
* Contribute to reusable components, tools, and documentation for internal and external use.
Requirements
* Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or a related field.
* 5 years of experience in machine learning or deep learning, with a focus on NLP or generative models.
* Hands-on experience with modern AI frameworks (e.g., PyTorch, TensorFlow, Hugging Face Transformers).
* Strong experience working with or deploying LLMs (e.g., OpenAI GPT, Claude, Mistral, LLaMA, Falcon, Gemini).
* Familiarity with fine-tuning, prompt engineering, and inference optimization.
* Proficiency in Python and ML development tooling (e.g., Jupyter, Weights & Biases, MLflow).
* Experience with cloud platforms (AWS, Azure, or GCP) and GPU compute environments.
* Experience working with multimodal models (e.g., text-to-image, speech-to-text, video generation).
* Understanding of tokenization, attention mechanisms, and transformer architecture internals.
* Exposure to vector databases (e.g., Pinecone, Weaviate, FAISS) and retrieval-augmented generation (RAG) systems.
* Familiarity with ethical AI principles and methods for bias detection and mitigation.
* Contributions to open-source projects or research papers in the generative AI space.
Please submit your resume, GitHub/portfolio (if applicable), and a brief statement of interest.
About the Company
Mapping Metrics is a next-generation strategic management firm, founded by industry leaders to streamline business operations through innovative strategies. We specialize in efficient project management, enabling automation, and crafting powerful growth strategies with sound governance tailored to modern enterprises, harnessing the transformative power of automation to enable seamless business scaling.