Organizational Context

The International Federation of Red Cross and Red Crescent Societies (IFRC) is the world’s largest humanitarian network, with 191 member National Red Cross and Red Crescent Societies. IFRC uses the Triple R – response, resilience and respect – to deliver on Strategy 2030. IFRC responds to disasters and crises, ensuring timely, coordinated and locally led humanitarian action. IFRC supports its members in building community resilience in the areas of climate and environment, health and wellbeing, and migration and displacement. IFRC promotes respect for our fundamental principles of humanity, impartiality, neutrality, independence, voluntary service, unity, and universality, including in our work on values, power and inclusion. The IFRC focuses throughout on our core mandate – our raison d’être – of strategic and operational coordination, humanitarian diplomacy, National Society development, and accountability.

IFRC is led by its Secretary General and has its Headquarters in Geneva, Switzerland, with five regional offices in Africa (Nairobi), the Americas (Panama), Asia Pacific (Kuala Lumpur), Europe (Budapest) and the Middle East and North Africa (Beirut), as well as country cluster delegations, country delegations, representation offices and service centres across the globe. Together, the Geneva Headquarters and the field structure (regional, cluster and country) comprise the IFRC Secretariat.

IFRC has a zero-tolerance policy on conduct that is incompatible with the aims and objectives of the Red Cross and Red Crescent Movement, including sexual exploitation and abuse, sexual harassment and other forms of harassment, abuse of authority, discrimination, and lack of integrity (including but not limited to financial misconduct). IFRC also adheres to strict child safeguarding principles.

Background to the position

In virtually all countries, people increasingly rely on and expect a diverse range of digital services (e.g., through their mobile devices) to interact with local government, companies, and community organizations. This shift is already reshaping humanitarian assistance. Yet the digital divide remains a persistent and significant challenge at both national and local levels.

The need for a successful, large-scale digital transformation is urgent. Digitally transforming the IFRC and its 191 member National Societies is a complex process that requires collaborative action and support across the membership. The IFRC therefore developed a Digital Transformation Strategy, which was approved by the IFRC Governing Board in May 2021.

The Digital Transformation Department (DTD) leads the implementation of the Digital Transformation Strategy and is accountable for the positive impact it will have on the 191 National Society members of the IFRC. The DTD provides strategic leadership and guides the IFRC Secretariat and the membership network in adapting and innovating humanitarian services, drawing on digital services, data-enabled decision-making, and other opportunities for digital transformation in support of the IFRC’s Strategy 2030. In addition, the DTD is responsible for the development and implementation of business transformation, information technology and digitalization services throughout the IFRC Secretariat, thereby supporting the same transformation across the 191 National Societies, setting the vision, and drawing stakeholders together on this digital journey.

The incumbent reports to the AI and Data Management unit (“AI & Data unit”), which is accountable to the Director of the DTD. The other units and teams reporting to the DTD Director include the Enterprise Architecture, Strategy & Planning unit; the Digital Development and Management unit; the Infrastructure & Security team; and the Global Service Desk and Endpoint Management team. Under the leadership of the Director, these units and teams, together with the CISO and the regional DT managers, form the global DTD management team.

The AI & Data unit is responsible for delivering greater value from IFRC’s data and ensures the development and implementation of IFRC’s data and AI governance and strategy, in close collaboration with the IFRC’s Data Protection Office and through indirect management of data professionals in other departments. The unit oversees all data operations and data product lifecycles, manages the organisation’s data platform, and coordinates and supports AI initiatives.

The Data Platform DevOps Engineer will also interact with internal and external AI and data experts, the private sector, academia, and external suppliers, vendors and consultants sourced to deliver AI-related activities.

Job Purpose

The Data Platform DevOps Engineer will design, implement, maintain, and optimize IFRC’s enterprise data platform built on Microsoft Fabric, Azure, and related Microsoft products. This critical role will bridge platform engineering, DevOps practices, and data infrastructure management to ensure reliable, secure, and scalable data operations supporting IFRC's global humanitarian mission. The ideal candidate will possess deep expertise in the Microsoft Fabric and Azure ecosystem, cloud infrastructure, automation, and platform engineering principles while demonstrating commitment to IFRC's humanitarian values.  

Job Duties and Responsibilities

Platform Engineering & Architecture

  • Design, build, and maintain the Microsoft Fabric data platform infrastructure, including OneLake, Data Warehouse, Lakehouse, Data Factory pipelines, Real-Time Intelligence, and Power BI environments.
  • Implement and manage platform architecture supporting data engineering, data science, analytics, and business intelligence workloads across the organization.
  • Develop and maintain Infrastructure as Code (IaC) solutions using Terraform, Azure Resource Manager templates, or similar tools for consistent, repeatable deployments.
  • Architect multi-region, scalable solutions within Microsoft Fabric to support global humanitarian operations.
  • Design and implement OneLake storage structure, shortcuts, mirroring configurations, and data lake optimization strategies.

CI/CD & Deployment Automation

  • Build and maintain comprehensive CI/CD pipelines for Microsoft Fabric workspaces using Azure DevOps.
  • Implement automated deployment strategies utilizing Fabric REST APIs, Git integration, Fabric deployment pipelines, and fabric-cicd tools.
  • Manage source control integration with Microsoft Fabric workspaces, establishing branching strategies and deployment workflows across development, test, and production environments.
  • Automate infrastructure provisioning, configuration management, and deployment processes for all Fabric components.
  • Develop and maintain deployment rules, parameterization strategies, and environment-specific configurations.
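Purely as an illustration of the kind of deployment automation described above (this sketch is not part of the formal requirements): workspace automation typically goes through the public Microsoft Fabric REST API at api.fabric.microsoft.com. The token and workspace ID below are placeholders, and error handling is omitted for brevity.

```python
# Illustrative sketch only -- the endpoint shape follows the public
# Microsoft Fabric REST API; tokens and IDs are placeholders.
from urllib.request import Request, urlopen
import json

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def items_url(workspace_id: str) -> str:
    """URL for listing the items (lakehouses, pipelines, reports) in a workspace."""
    return f"{FABRIC_API}/workspaces/{workspace_id}/items"

def auth_headers(token: str) -> dict:
    """Bearer-token headers expected by the Fabric REST API."""
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

def list_items(workspace_id: str, token: str) -> list:
    """Fetch workspace items; returns the 'value' array from the API response."""
    req = Request(items_url(workspace_id), headers=auth_headers(token))
    with urlopen(req) as resp:  # network call -- requires a valid Entra ID token
        return json.load(resp).get("value", [])
```

In a CI/CD pipeline, helpers like these would be called from an Azure DevOps stage, with the token obtained via a service principal or managed identity rather than hard-coded.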

Platform Maintenance & Operations

  • Perform ongoing maintenance of Microsoft Fabric platform components including capacity management, workspace administration, and item lifecycle management.
  • Monitor platform health, performance metrics, and resource utilization using Azure Monitor, Prometheus, Grafana, or integrated Fabric monitoring capabilities.
  • Implement comprehensive observability and alerting framework covering logs, metrics, and traces across the entire data platform.
  • Conduct regular platform updates, patches, and upgrades while minimizing disruption to users and maintaining system stability.
  • Manage disaster recovery procedures, backup strategies, and business continuity planning for critical data assets.
  • Optimize platform performance through capacity planning, resource allocation, and cost management strategies.

Security, Governance & Compliance

  • Implement and enforce security controls using Microsoft Purview for data governance, classification, and protection across the Fabric ecosystem.
  • Configure and maintain role-based access controls (RBAC), row-level security, column-level security, and OneLake folder/table-level permissions.
  • Establish and enforce data sensitivity labeling, information protection policies, and data loss prevention (DLP) measures.
  • Integrate Microsoft Entra authentication, manage service principals, and implement managed identities for secure platform access.
  • Ensure compliance with global regulatory frameworks including GDPR, HIPAA, and ISO standards through automated controls and auditing.
  • Implement network security measures including conditional access, private endpoints, customer-managed encryption keys, and secure data sharing protocols.
  • Monitor and respond to security incidents, conducting regular security assessments and vulnerability management.

Automation & Scripting

  • Develop automation scripts using Python, PowerShell, Bash, and Azure CLI for routine operational tasks and platform management.
  • Create custom tooling and utilities to enhance platform capabilities and improve developer experience.
  • Automate data pipeline orchestration, monitoring, alerting, and incident response workflows.
  • Build self-service capabilities enabling data engineers and analysts to work autonomously while maintaining governance standards.
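As a purely illustrative example of the alerting automation mentioned above (not an IFRC implementation; the metric shape and thresholds are hypothetical):

```python
# Illustrative sketch only: a tiny alerting rule of the kind this role would
# automate. Thresholds and the metric structure are hypothetical examples.
from dataclasses import dataclass

@dataclass
class CapacityMetric:
    name: str           # e.g. a Fabric capacity name
    utilization: float  # 0.0 .. 1.0 utilization over the sample window

def alerts(metrics, warn=0.75, critical=0.90):
    """Classify capacities whose utilization crosses the example thresholds."""
    out = []
    for m in metrics:
        if m.utilization >= critical:
            out.append((m.name, "critical"))
        elif m.utilization >= warn:
            out.append((m.name, "warning"))
    return out
```

In practice a rule like this would sit behind Azure Monitor or a Grafana alert channel rather than run standalone.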

Collaboration & Support

  • Work closely with data engineers, data scientists, business analysts, and application developers to understand requirements and optimize platform functionality.
  • Provide technical guidance and support for Fabric workspace management, data pipeline development, and platform best practices.
  • Collaborate with IT security and compliance teams to ensure platform aligns with organizational security policies.
  • Participate in incident response and troubleshooting efforts, conducting root cause analysis and implementing preventive measures.
  • Foster a culture of continuous improvement, sharing knowledge, and promoting DevOps practices across teams.

Documentation & Knowledge Management

  • Create and maintain comprehensive technical documentation for infrastructure configurations, deployment procedures, and operational processes.
  • Develop runbooks, standard operating procedures, and troubleshooting guides for common platform operations.
  • Document architecture decisions, security controls, and governance frameworks for audit and compliance purposes.
  • Maintain up-to-date inventory of platform components, dependencies, and integration points.

Duties applicable to all staff

  1. Work actively towards the achievement of IFRC’s goals.
  2. Abide by and work in accordance with the Red Cross and Red Crescent principles.
  3. Perform any other work-related duties and responsibilities that may be assigned by the line manager.

Education

Required

  • Degree in computer science, information security, data science, or a related field.

Preferred

  • Certifications for Microsoft Azure (Azure Solutions Architect Expert, Azure Administrator, Azure Data Engineer).
  • Certifications for Microsoft Fabric and Power BI.

Experience

Required

  • 5+ years of experience in DevOps, platform engineering, or site reliability engineering roles.
  • 3+ years of hands-on experience with Microsoft Azure cloud services and Azure DevOps.
  • Strong experience with Microsoft Fabric or related Microsoft data platforms (Power BI, Azure Synapse Analytics, Azure Data Factory).
  • Proficiency in scripting and programming languages including Python, PowerShell, Bash, and SQL.
  • Deep understanding of CI/CD principles and tools (Azure DevOps, GitHub Actions).
  • Experience with cloud blob data services (Azure Storage, AWS S3, GCP Cloud Storage) and cloud managed relational databases (Azure SQL, Amazon RDS, GCP Cloud SQL).
  • Hands-on experience with monitoring and observability tools (Azure Monitor, Prometheus, Grafana, ELK Stack, Datadog).
  • Understanding of storage ACLs, network security, encryption, identity management, and cloud security best practices.
  • Experience deploying and managing Microsoft Fabric workspaces, capacities, and items.
  • Knowledge of OneLake architecture, shortcuts, mirroring, maintenance, and security.
  • Proficiency in designing scalable ETL/ELT architectures and data pipeline orchestration strategies.

Preferred

  • Expertise in Infrastructure as Code tools (Terraform, Ansible, ARM templates, CloudFormation).
  • Experience with containerization and orchestration technologies (Docker, Kubernetes) and cloud services (AKS, EKS, GKE).
  • Strong knowledge of data warehousing, data lakes, lakehouse architectures, and ETL/ELT processes.
  • Familiarity with Fabric REST APIs and SDK for automation and integration.
  • Experience with Fabric Git integration and deployment pipelines.
  • Knowledge of Power BI administration and deployment best practices.
  • Background in multi-region or multi-cloud data architecture deployments.
  • Experience in humanitarian, international development or non-profit work.

Knowledge, Skills and Languages

Required

  • Knowledge of Delta Lake, Parquet and other modern data formats, and of optimization techniques such as partitioning and clustering.
  • Technical Excellence: Deep understanding of data lakehouse architecture, medallion patterns, and data mesh organizational frameworks.
  • Strong expertise in data security, encryption, identity management, and privacy controls (GDPR, HIPAA, row/column-level security).
  • Knowledge of monitoring, observability, and performance optimization techniques for data platforms.
  • Knowledge of Microsoft Entra, including service principals.
  • Knowledge of Fabric notebooks (PySpark and Python) and pipelines (and relevant APIs).
  • Familiarity with big data technologies (e.g., Hadoop), database design and optimization, and relational/non-relational data models.
  • Understanding of Microsoft Purview for data governance, cataloging, and lineage management.
  • Problem-Solving: Ability to break down complex challenges into solvable problems.
  • Communication: Skilled at translating complex technical concepts for executive and non-technical stakeholders.
  • Collaboration: Excellent interpersonal skills and ability to work and influence effectively across organizational boundaries and disciplines.
  • Continuous Learning: Drive to stay current with the rapidly evolving cloud, security, data and AI fields.
  • Attention to Detail: Precision in code quality, data validation, and model documentation.
  • Demonstrated strategic thinking and ability to align technical decisions with business objectives, and to maintain hands-on credibility while leading.
  • Project and portfolio management capabilities, sequencing complex initiatives with dependencies.

Preferred

  • Knowledge of real-time analytics, streaming architectures, and event-driven data processing.
  • Familiarity with machine learning operations (MLOps) and AI workload integration.
  • Familiarity with agile development practices and iterative model improvement.

Languages

Required

  • Fluent spoken and written English.

Preferred

  • Good command of another IFRC official language (French, Spanish or Arabic).

Competencies, Values and Comments

Values: Respect for diversity; Integrity; Professionalism; Accountability.

Core competencies: Communication; Collaboration and teamwork; Judgement and decision making; National Society and customer relations; Creativity and innovation; Building trust.

Application Instruction

Please submit your application in English only.

