
  • Posted: Dec 2, 2025
    Deadline: Not specified
  • BURN designs, manufactures, and distributes aspirational fuel-efficient cooking products that save lives and forests in the developing world. BURN has revolutionized the global cookstove sector by proving the business case for selling high-quality, locally manufactured, and unsubsidized cookstoves. Since 2013, BURN has sold 200,000+ high quality, locally manu...

     

    Data Engineer

    About the role 

    • BURN is seeking a Data Engineer to own the continued maintenance, optimization, and governance of our cloud data infrastructure. This role ensures the reliability and performance of our data warehouse, ETL pipelines, and self-service platforms while enabling new data integrations and supporting our growing AI and analytics initiatives. The ideal candidate excels at operations, cost optimization, and improving data models and processes as our data ecosystem scales.

    Key Responsibilities
    Data Platform Maintenance & Monitoring

    • Monitor daily ETL workflows, data pipelines, and scheduled jobs to ensure high availability and timely delivery.
    • Troubleshoot pipeline failures, performance bottlenecks, and data quality issues.
    • Ensure adherence to SLAs for data freshness and system availability.
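
    As a concrete illustration of this monitoring work, the sketch below shows a daily ETL job defined in Airflow (one of the orchestration tools listed under Technical Skills) with retries, a freshness SLA, and a failure callback. The DAG name, schedule, and alert hook are hypothetical, not a description of BURN's actual pipelines.

```python
# Minimal Airflow DAG sketch: retries, a per-task SLA, and a failure
# callback so missed loads surface as alerts rather than stale data.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # In production this would post to Slack, PagerDuty, or CloudWatch;
    # here we only log the failed task instance.
    ti = context["task_instance"]
    print(f"ALERT: task {ti.task_id} failed in DAG {ti.dag_id}")


def load_daily_sales():
    # Placeholder for the real extract/transform/load logic.
    print("loading daily sales...")


with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="0 4 * * *",     # daily at 04:00 UTC
    catchup=False,
    default_args={
        "retries": 2,
        "retry_delay": timedelta(minutes=10),
        "sla": timedelta(hours=2),     # data-freshness SLA per task
        "on_failure_callback": notify_on_failure,
    },
) as dag:
    PythonOperator(task_id="load_daily_sales", python_callable=load_daily_sales)
```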

    Data Warehouse Modelling & Optimization

    • Review, optimize, and update the existing data warehouse schema as business requirements evolve.
    • Improve table partitioning, clustering, and indexing to boost performance. 
    • Ensure the warehouse follows best practices in dimensional modelling, data vault, or hybrid approaches.
    • Refactor legacy tables or models to improve clarity, performance, and usability.
    • Collaborate with analysts to design semantic models that improve analytics and self-service adoption.
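
    For instance, range-partitioning a large fact table lets the planner prune whole partitions and makes archival cheap. Below is a minimal sketch against PostgreSQL (RDS appears in the stack below); the table and column names are hypothetical, and the same idea maps onto Redshift sort/dist keys or BigQuery partitioned tables.

```python
# Sketch: create a range-partitioned fact table on PostgreSQL and add
# a quarterly partition plus a local index. Names are hypothetical.
import psycopg2

DDL = """
CREATE TABLE fact_sales (
    sale_id   BIGINT,
    sale_date DATE NOT NULL,
    store_id  INT,
    amount    NUMERIC(12, 2)
) PARTITION BY RANGE (sale_date);

CREATE TABLE fact_sales_2025_q1 PARTITION OF fact_sales
    FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');

-- Index the partition on a common filter column.
CREATE INDEX ON fact_sales_2025_q1 (store_id);
"""

with psycopg2.connect("dbname=warehouse") as conn:  # hypothetical DSN
    with conn.cursor() as cur:
        cur.execute(DDL)
```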

    Performance Optimization & Cost Management

    • Continuously optimize queries, transformations, storage, and processing schedules for efficiency.
    • Monitor AWS (or other cloud) costs and implement cost-saving strategies (e.g., lifecycle rules, storage tiering, compute optimization, query tuning).
    • Conduct periodic performance audits of the warehouse and ETL layers.
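
    One of the cheapest levers listed above is S3 lifecycle tiering. The boto3 sketch below moves cold raw exports to infrequent-access and archive tiers; the bucket name, prefix, and retention windows are hypothetical.

```python
# Sketch: S3 lifecycle rule that tiers cold data and expires it after
# two years. Bucket, prefix, and day counts are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-raw-exports",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/exports/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```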

    Data Quality, Governance & Documentation

    • Implement and maintain validation rules, automated quality checks, and alerting.
    • Work with the data governance team to improve data lineage, metadata management, access controls, and dataset documentation.
    • Maintain structured documentation for pipelines, data flows, and schemas.
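
    In practice a validation rule can be a scheduled script that asserts freshness and null-rate thresholds and fails loudly, so the orchestrator's alerting takes over. A sketch assuming a PostgreSQL warehouse; tables, columns, and thresholds are hypothetical.

```python
# Sketch: run simple data-quality assertions and raise on failure so a
# scheduler (e.g. Airflow) can alert. All names are hypothetical.
import psycopg2

CHECKS = [
    ("sales loaded within the last 24h",
     "SELECT max(loaded_at) > now() - interval '24 hours' FROM fact_sales"),
    ("customer_id null rate below 1%",
     "SELECT avg((customer_id IS NULL)::int) < 0.01 FROM fact_sales"),
]

failures = []
with psycopg2.connect("dbname=warehouse") as conn:
    with conn.cursor() as cur:
        for name, sql in CHECKS:
            cur.execute(sql)       # each query returns a single boolean
            if not cur.fetchone()[0]:
                failures.append(name)

if failures:
    raise RuntimeError(f"Data-quality checks failed: {failures}")
print("all data-quality checks passed")
```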

    ETL Enhancements & New Data Integrations

    • Integrate new data sources into the existing pipelines.
    • Refactor and modernize ETLs as business requirements evolve.
    • Develop reusable, modular components following best practices.
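
    A reusable ingestion component might look like the sketch below: each new source supplies only its extract function, while naming, logging, and loading stay shared. Everything here is a hypothetical pattern, not an existing internal API.

```python
# Sketch: one parametrized ingestion path so new sources plug in with
# minimal new code. Function and table names are hypothetical.
from typing import Callable, Iterable


def ingest(source_name: str,
           extract: Callable[[], Iterable[dict]],
           load: Callable[[str, list[dict]], None]) -> None:
    """Pull records from a source and load them into a staging table."""
    records = list(extract())
    print(f"{source_name}: extracted {len(records)} records")
    load(f"staging_{source_name}", records)


# Usage: ingest("erp_orders", extract=pull_erp_orders, load=copy_into_warehouse)
```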

    Self-Service Platform (Metabase) Administration

    • Manage Metabase backend performance: resource usage, query tuning, cache configuration, and scaling.
    • Ensure stable connections to the warehouse and optimize slow dashboards.
    • Support analysts and business users in building efficient reports.
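
    A lightweight probe against Metabase's health endpoint, run from cron or a CloudWatch canary, is one way to watch backend availability and response time. The hostname below is hypothetical.

```python
# Sketch: uptime/latency probe for a Metabase instance. Host is
# hypothetical; /api/health is Metabase's standard health endpoint.
import time

import requests

URL = "https://metabase.example.org/api/health"

start = time.monotonic()
resp = requests.get(URL, timeout=10)
elapsed = time.monotonic() - start

if resp.status_code != 200 or resp.json().get("status") != "ok":
    raise RuntimeError(f"Metabase unhealthy: {resp.status_code} {resp.text}")
print(f"Metabase healthy in {elapsed:.2f}s")
```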

    External Donor System Integrations

    • Maintain and optimize pipelines that push data to external donor systems as part of grant reporting obligations.
    • Ensure outgoing data meets required formats, completeness, and SLA timelines.
    • Troubleshoot sync issues and collaborate with external technical teams.
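
    Because these submissions are SLA-bound, validating the outgoing file before upload catches format errors on our side. A sketch for a CSV hand-off; the required columns and file path are hypothetical, since each donor system defines its own format.

```python
# Sketch: check an outgoing donor extract for required columns and
# completeness before it is pushed. All names are hypothetical.
import csv

REQUIRED = ["stove_serial", "sale_date", "country", "verified_usage_hours"]

with open("exports/donor_report.csv", newline="") as f:
    reader = csv.DictReader(f)
    missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
    if missing:
        raise ValueError(f"Export missing required columns: {missing}")
    rows = list(reader)

# start=2 maps row indices to file line numbers (header is line 1).
incomplete = [i for i, row in enumerate(rows, start=2)
              if any(not row[c] for c in REQUIRED)]
if incomplete:
    raise ValueError(f"Incomplete rows at lines: {incomplete[:10]}")
print(f"donor_report.csv: {len(rows)} rows validated")
```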

    AI & Advanced Analytics Support

    • Prepare and maintain training datasets, feature pipelines, and model-serving data flows.
    • Collaborate with Data Scientists to transform raw and semi-structured data into AI-ready formats.
    • Maintain data pipelines that support experimental and production AI workloads.
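
    A typical feature pipeline aggregates raw events into a model-ready table. Below is a pandas sketch; the input schema and the feature definitions are hypothetical.

```python
# Sketch: aggregate raw stove telemetry into daily per-device features
# for model training. Paths, columns, and features are hypothetical.
import pandas as pd

events = pd.read_parquet("raw/stove_events.parquet")

features = (
    events
    .assign(event_date=lambda d: pd.to_datetime(d["event_ts"]).dt.date)
    .groupby(["stove_id", "event_date"], as_index=False)
    .agg(sessions=("event_ts", "count"),
         total_minutes=("duration_min", "sum"))
)

features.to_parquet("features/stove_daily.parquet", index=False)
```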

    Stakeholder Collaboration

    • Work closely with analysts, scientists, product teams, and business units to understand data needs.
    • Provide guidance on data access, modelling, and best practices.

    Success Measures

    • Pipeline Reliability: 95%+ reduction in avoidable pipeline failures within the first 3–6 months.
    • Performance Gains: ≥20% improvement in warehouse and Metabase query performance.
    • Cost Efficiency: Achieve measurable cloud cost reduction through storage, compute, and ETL optimization.
    • Data Quality: Automated validation in place for all critical datasets, with significant reduction in recurring data-quality issues.
    • Donor Integrations: 100% on-time and error-free data submissions to external donor systems.
    • Documentation: Complete and updated documentation for major ETLs, data models, and integrations.
    • Model Readiness: AI/ML pipelines and datasets consistently meet quality and performance standards for analytics initiatives.

    Skills & Experience Required
    Technical Skills

    • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
    • 4–6 years of hands-on data engineering experience (maintenance + optimization heavy roles preferred).
    • Strong SQL skills and experience with warehouse modelling (star schema, dimensions/facts, normalization/denormalization).
    • Experience optimizing warehouse performance via partitioning, indexing, clustering, and storage tuning.
    • Proficiency with ETL/ELT tools (Airflow, AWS Glue).
    • Deep understanding of AWS cloud services (S3, Glue, Lambda, IAM, CloudWatch, RDS).
    • Experience maintaining data integrations: APIs, batch exports, and external partner systems.
    • Experience administering or maintaining BI/self-service platforms (Metabase is a strong plus).
    • Familiarity with data governance: cataloguing, lineage, access control, quality frameworks.
    • Familiarity with monitoring tools (Grafana, Datadog, Prometheus).
    • Experience supporting AI/ML data operations (feature stores, data prep, inference datasets).

    Soft Skills

    • Strong troubleshooting and root-cause-analysis ability.
    • Effective communicator who collaborates well with technical and non-technical teams.
    • Detail-oriented with a strong documentation mindset.
    • Proactive and highly accountable.

    Method of Application

    Interested and qualified? Apply through BURN's careers portal at burnmanufacturing.applytojob.com.
