AI/ML Engineer

Hybrid work: one day per week in the office (Gliwice)
Experience: 3+ years
English: C1
B2B: 24 000 - 31 600 PLN + VAT
UoP: 21 500 - 27 000 PLN gross
Tags: AI, ML, Python

About the role

Emporix is a next-generation Autonomous Commerce Intelligence platform, built for modern B2B and sophisticated B2C enterprises. We uniquely combine orchestration, automation, and AI to streamline commerce processes across systems. We’re looking for a skilled AI Python Developer / Machine Learning Engineer to join our team. You’ll work on cutting-edge backend systems and AI models, leveraging large and small language models (LLMs/SLMs), retrieval-augmented generation (RAG), and agentic frameworks to power intelligent, production-ready commerce solutions.

If you're excited by scalable AI systems, LLM fine-tuning, modern MLOps, and working in a cloud-native environment—this role is for you.

Requirements

  • 3+ years of hands-on experience in Python development, ideally in AI, ML, or backend systems.
  • Proven experience with LLM/SLM fine-tuning and deployment (e.g., LoRA, QLoRA).
  • Hands-on experience with agentic AI frameworks like LangChain, LangGraph, or FastMCP.
  • Strong knowledge of transformer models and of the Hugging Face ecosystem, including open models such as Mistral, LLaMA, or Gemma.
  • Familiarity with vector databases, semantic search, and prompt engineering.
  • Experience designing and deploying RAG pipelines in real-world applications (a minimal sketch follows this list).
  • Skilled in building and maintaining event-driven microservices and async APIs.
  • Cloud-native engineering experience, preferably on GCP (Cloud Run, Pub/Sub, Storage).
  • Familiarity with secure coding practices and data protection in AI systems.
  • Strong communication skills in English (B2/C1 level).
  • Bonus: Experience with Go, Java, or orchestrating cloud functions.
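
For context on the RAG requirement above, here is a minimal retrieve-then-generate sketch, not the Emporix implementation: it assumes sentence-transformers for embeddings, the sample documents are purely illustrative, and generate() is a hypothetical stub for whatever LLM/SLM endpoint is in use.

import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative documents only; a real deployment would pull these from a vector DB.
documents = [
    "Orders above 10 000 EUR require manual approval.",
    "Returns are accepted within 30 days of delivery.",
    "B2B customers negotiate prices per contract.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    # With normalized embeddings, cosine similarity is a plain dot product.
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(doc_vectors @ query_vector)[::-1][:k]
    return [documents[i] for i in top]

def generate(prompt: str) -> str:
    # Hypothetical stub: replace with a call to the deployed LLM/SLM.
    raise NotImplementedError

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

In production the in-memory list would be replaced by a vector database and the stub by the hosted model, but the retrieve-then-generate shape stays the same.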

Responsibilities

AI & Backend Development

  • Design and build intelligent backend services using Python and FastAPI.
  • Develop agent-based systems using LangChain, LangGraph, and FastMCP.
  • Integrate LLMs, RAG pipelines, and custom tools into microservices architecture.
  • Implement orchestration logic for multi-agent workflows and real-time SSE-based interactions (see the sketch after this list).
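
As a rough illustration of the SSE point above (not the actual Emporix service), a minimal async FastAPI endpoint that streams agent progress as server-sent events could look like this; the agent_events() generator is a placeholder for a real LangChain/LangGraph run.

import asyncio
import json
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def agent_events(query: str):
    # Placeholder for a multi-agent workflow; here we just emit three fake steps.
    for step in ("planning", "retrieving", "answering"):
        await asyncio.sleep(0.1)
        yield f"data: {json.dumps({'step': step, 'query': query})}\n\n"

@app.get("/agent/stream")
async def stream_agent(query: str):
    # text/event-stream is what makes clients treat the response as SSE.
    return StreamingResponse(agent_events(query), media_type="text/event-stream")

Run it with uvicorn and consume it with any EventSource client; the endpoint shape stays the same once the placeholder is replaced by a real agent stream.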

LLM/SLM Model Engineering

  • Fine-tune and optimize LLMs/SLMs on domain-specific datasets (a LoRA sketch follows this list).
  • Develop custom models tailored to business use cases and workflows.
  • Evaluate models for performance, latency, cost-efficiency, and robustness in production.
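
To make the fine-tuning point concrete, here is a minimal LoRA setup with Hugging Face transformers and peft; the base model name and hyperparameters are illustrative assumptions, not project settings.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed base model; substitute whichever LLM/SLM fits the use case and licence.
model_name = "google/gemma-2b"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank adapters
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base weights

# From here, training on the domain dataset proceeds with the standard
# transformers Trainer (or trl's SFTTrainer); QLoRA additionally loads the
# base model in 4-bit before attaching the adapters.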

Production-Grade MLOps & Cloud Deployment

  • Deploy and monitor models and services on Google Cloud Platform (GCP) using Kubernetes, Cloud Run, and Pub/Sub.
  • Build reproducible training and inference pipelines with CI/CD and MLOps best practices.
  • Track performance with logging, monitoring, and feedback loops for continuous improvement.

System Integration & Collaboration

  • Work closely with frontend, backend, and product teams to deliver scalable, production-ready AI systems.
  • Ensure event-driven, fault-tolerant communication between AI modules and commerce services (see the Pub/Sub sketch after this list).
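
As one hedged example of the event-driven pattern on GCP (the project and subscription names are made up, and handle_event() is a hypothetical handler), a streaming Pub/Sub consumer that acks only after successful handling looks roughly like this:

from google.cloud import pubsub_v1

def handle_event(payload: bytes) -> None:
    # Hypothetical handler: route the commerce event into the relevant AI module.
    print(payload.decode("utf-8"))

subscriber = pubsub_v1.SubscriberClient()
# Assumed project and subscription names, for illustration only.
subscription_path = subscriber.subscription_path("example-project", "order-events")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    handle_event(message.data)
    message.ack()  # ack only after successful handling so failures are redelivered

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull_future.result()   # block and keep processing messages
except KeyboardInterrupt:
    streaming_pull_future.cancel()   # shut down the streaming pull cleanly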

Documentation & Knowledge Sharing

  • Maintain clean and well-structured documentation for APIs, ML pipelines, and system logic.
  • Contribute to knowledge bases and architecture diagrams for long-term scalability and onboarding.

Our Benefits

  • Startup atmosphere based on a partnership approach
  • Independence and real influence on the company
  • No corporate rules
  • Regular team integration events
  • Private medical care (Medicover)
  • Choice of contract type (B2B or UoP)
  • Development in modern technologies
  • Knowledge sharing, Agile/Scrum
  • Flexible working hours – you choose
  • Multisport card
  • Modern equipment
  • Free fruit, snacks and drinks

This process is handled by

Anna Kowalczyk
IT recruiter

Apply now