Staff Software Engineer

Snowplow Analytics

Software Engineering
Poland
Posted on Mar 30, 2026

About Snowplow

Snowplow is the real-time customer context layer that collects, validates, enriches, and delivers behavioral data for advanced analytics, ML, and AI agent decisioning. Snowplow's event tracking is used across 2M+ websites and applications globally, processing over one trillion events per month.

More than 250 companies, including Samsung, Experian, AutoTrader, Strava, Condé Nast, and HelloFresh, rely on Snowplow to build a well-governed, first-party data foundation that powers their customer-facing AI agents, in-session personalization and recommendations, and real-time analytics.

About the Team

The Snowplow Signals team builds the intelligence layer on top of Snowplow's behavioral data infrastructure. Signals enables customers to derive real-time scores, predictions, and AI-powered interventions directly from their behavioral event streams. The team works at the intersection of data engineering, machine learning, and product development — shipping capabilities that make Snowplow's behavioral data actionable in real time.

This is a high-impact, high-ownership role. You will work alongside a small, senior team and have direct influence over architecture, ML model design, and the roadmap direction of one of Snowplow's most strategically important product areas.

The Role

As a Staff Software Engineer on the Snowplow Signals team, you will be a technical leader responsible for the full lifecycle of ML-powered features — from prototype to production to ongoing operation. You will design, build, release, and support machine learning models that process real-time behavioral event streams at scale, and own the SQL data models that power downstream analytics and customer-facing Signals outputs.

At the Staff level, we expect you to operate across team boundaries, set technical direction, mentor engineers, and drive engineering excellence through architecture decisions, code quality, and delivery practices.

Responsibilities

Machine Learning & AI Engineering

  • Design, train, deploy, and operate machine learning models in production environments, with a focus on reliability, performance, and observability
  • Build real-time and batch ML pipelines on top of Snowplow's behavioral event streams
  • Own the operational health of ML models, including monitoring, retraining pipelines, and incident response
  • Define and enforce ML model standards — versioning, evaluation frameworks, feature engineering, and deployment practices
  • Collaborate with Product and Data teams to translate business objectives into ML model specifications

Data Engineering

  • Design and maintain SQL data models that power Signals outputs and customer-facing analytics
  • Own dbt model development, testing, documentation, and deployment within the Signals domain
  • Ensure data model correctness, performance, and scalability as customer event volumes grow
  • Partner with Analytics Engineers and data consumers to evolve schemas and enrichments

Required Skills

  • Proven experience building, releasing, and supporting machine learning models in production — not just prototype or research environments
  • Strong command of SQL and experience supporting SQL-based data models at scale
  • Hands-on experience with dbt (data build tool) for data modeling, testing, and pipeline management
  • Proficiency in Python and/or Scala for ML and data engineering workloads
  • Experience with cloud-native data infrastructure (AWS, GCP, or Azure) and streaming or event-driven architectures
  • Strong software engineering fundamentals: version control, CI/CD, testing, code review, and observability
  • Excellent written and verbal communication; able to explain complex technical trade-offs to varied audiences

Nice to Have

  • Experience with agentic coding workflows and LLM tooling such as Claude Code, GitHub Copilot, or similar AI-assisted development environments
  • Familiarity with behavioral event data, clickstream analytics, or customer data platforms
  • Contributions to open source projects, technical writing, or public engineering work
  • Experience working in early-stage or scaling product companies where ownership and autonomy are the norm

Benefits

💰 A competitive package, including share options

🧘 Flexible working

🏖 A generous holiday allowance no matter where you are in the world

💻 MacBook and home office equipment allowance

👪 Enhanced maternity, paternity, shared parental and adoption leave