Big Data Platform Operations Engineer

Nairobi, KE

Full-time

Bachelor's degree

Posted: 06/20/2025 · Deadline: 07/20/2025

- Accepting Applications

Key Responsibilities

  • Platform Architecture: Design and implement scalable and fault-tolerant big data platforms using distributed computing technologies such as Apache Hadoop, Apache Spark, or Apache Flink. Architect data storage solutions, including distributed file systems, NoSQL databases, and data warehouses. 
  • Data Ingestion and Integration: Develop data ingestion pipelines to collect, process, and ingest data from various sources, including streaming data sources, databases, APIs, and log files. Implement ETL (Extract, Transform, Load) processes to preprocess and cleanse raw data for analysis. 
  • Data Processing and Analysis: Optimize data processing workflows for efficiency, performance, and scalability. Develop and maintain data processing jobs, queries, and analytics workflows using distributed computing frameworks and query languages such as SQL, Hive, or Spark SQL. 
  • Scalability and Performance: Design and implement strategies for scaling big data platforms to handle large volumes of data and diverse workloads. Optimize resource utilization, data partitioning, and parallel processing techniques to maximize performance and minimize latency. 
  • Monitoring and Optimization: Develop monitoring and alerting solutions to track the health, performance, and availability of big data platforms. Implement automated scaling, load balancing, and resource management mechanisms to optimize platform utilization and performance. 
  • Security and Governance: Ensure compliance with data governance policies, security requirements, and regulatory standards. Implement access controls, encryption, and auditing mechanisms to protect sensitive data and ensure data privacy and confidentiality. 
  • Infrastructure as Code (IaC): Implement infrastructure automation using tools such as Terraform, Ansible, or CloudFormation. Define infrastructure configurations, provisioning scripts, and deployment pipelines to enable reproducible and consistent deployments of big data platforms. 
  • Documentation and Training: Document platform architecture, configurations, and best practices. Provide training and support to data engineers and data scientists to ensure effective use of big data platforms and tools.

Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Solid understanding of big data technologies, distributed systems, and cloud computing principles.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Experience with big data frameworks such as Apache Hadoop, Apache Spark, or Apache Flink.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure. 
  • System administration skills.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Ability to work independently and manage multiple priorities in a fast-paced environment.

Interested and qualified? Go to Safaricom Kenya on egjd.fa.us6.oraclecloud.com to apply

Elevolt does not charge job seekers any fees for job applications or consideration. Do not make any payments without doing your due diligence. If you think this posting is not genuine, please flag it below or contact us.


Safaricom

Safaricom is the leading provider of converged communication solutions in Kenya. In addition to providing a broad range of first-class products and services for Telephony, Broadband Internet and Finan...