Why Hirevector?
Hirevector is a world‑renowned leader in entertainment, media, and technology, constantly pushing the boundaries of storytelling and digital experiences. With a portfolio that spans streaming services, sports broadcasting, and immersive interactive platforms, Hirevector leverages cutting‑edge data solutions to create magical moments for millions of fans worldwide. As the company continues to expand its digital footprint, the demand for visionary data professionals who can design, build, and operate massive data pipelines has never been higher.
Our Remote Data Engineering team sits at the heart of this transformation. Working from the comfort of your own home, you will partner with product innovators, content strategists, and data scientists to turn raw data into actionable insight that powers everything from personalized recommendations to real‑time sports analytics.
Position Overview
Hirevector is seeking an experienced Senior Data Engineer to join the Item Execution and Instrumentation Group (IEIG). In this role, you will lead the design, development, and operational excellence of large‑scale data platforms that serve the entire Hirevector ecosystem. Your expertise in cloud technologies, data lake‑house architecture, and modern programming languages will enable the organization to deliver high‑performance data solutions that drive business value across multiple verticals.
This is a full‑time, 100% remote position based in the United States, offering a competitive salary range of $35,000 – $40,000 per year plus a comprehensive benefits package.
Key Responsibilities
• Architect & Build Scalable Data Pipelines: Design, implement, and maintain robust ETL/ELT workflows in Scala, Python, and PySpark that process terabytes of data daily across cloud (AWS) and on‑premise environments.
• Lakehouse Engineering: Drive the migration to a lakehouse‑driven data platform using Snowflake, Delta Lake, and Databricks, ensuring seamless integration with existing data marts.
• Collaborate with Cross‑Functional Teams: Partner with Data Product Managers, Data Scientists, and Business Intelligence analysts to translate business requirements into technical solutions.
• Maintain SLA Compliance: Monitor pipeline health, troubleshoot incidents, and continuously improve system uptime to meet strict Service Level Agreements (SLAs).
• Documentation & Governance: Produce clear, up‑to‑date documentation of data models, pipeline architecture, and operational procedures to support data quality and governance initiatives.
• Agile Participation: Actively contribute to Scrum ceremonies, sprint planning, and retrospectives, fostering a culture of continuous improvement.
• Problem Solving & Innovation: Investigate emerging data challenges, propose automation opportunities, and optimize cost‑efficiency across the data stack.
• Stakeholder Engagement: Build strong relationships with internal customers, translating complex technical concepts into understandable business value.
Essential Qualifications
• Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, or a related technical field.
• 5+ years of professional experience designing and operating large‑scale data pipelines.
• Deep proficiency in SQL, with the ability to craft performant queries for complex analytical workloads.
• Extensive hands‑on experience with Apache Spark (including PySpark) and Flink for real‑time stream processing.
• Strong programming skills in Scala and Python, including best practices for code modularity and testing.
• Solid background in AWS services such as S3, EMR, EC2, and IAM.
• Demonstrated expertise with at least one major MPP or cloud data warehouse technology (Snowflake, Redshift, BigQuery, etc.).
• Familiarity with data lakehouse concepts, Delta Lake, and Databricks orchestration.
• Experience working within Agile/Scrum frameworks and a commitment to collaborative delivery.
• Excellent communication and interpersonal skills, capable of influencing cross‑functional teams.
Preferred Qualifications & Additional Skills
• Master’s degree or advanced certifications in data engineering, cloud architecture, or big data technologies.
• Hands‑on experience with data visualization tools (Tableau, Looker, Power BI) and supporting data pipelines for analytics.
• Knowledge of CI/CD pipelines for data engineering (e.g., GitHub Actions, Jenkins, CircleCI).
• Familiarity with containerization (Docker, Kubernetes) and infrastructure‑as‑code (Terraform, CloudFormation).
• Exposure to machine learning workflows and model‑serving pipelines.