Absa Bank Needs a Lead Hadoop Engineer

by KMax

Lead Hadoop Engineer (KE)

Remote type: Remote
Location: Absa Headquarters (KE)
Time type: Full time
Posted: Yesterday
End date: February 14, 2025 (6 days left to apply)
Job requisition ID: R-15971730

Empowering Africa’s tomorrow, together…one story at a time.

With over 100 years of rich history, and strongly positioned as a local bank with regional and international expertise, we offer a career with our family that is an opportunity to be part of this exciting growth journey, to reset our future, and to shape our destiny as a proudly African group.

My Career Development Portal: Wherever you are in your career, we are here for you. Design your future. Discover leading-edge guidance, tools and support to unlock your potential. You are Absa. You are possibility.

Job Summary

This role provides an exciting opportunity to roll out a new strategic initiative within the firm: the Enterprise Infrastructure Big Data Service. The Big Data Developer serves as a development and support expert, responsible for the design, development, automation, testing, support, and administration of the Enterprise Infrastructure Big Data Service.

The role requires experience with both Hadoop and Kafka. It involves building and supporting a real-time streaming platform used by Absa's data engineering community.

You will be responsible for feature development, ongoing support and administration, and documentation for the service. The platform provides a messaging queue and a blueprint for integrating with existing upstream and downstream technology solutions.
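
By way of illustration only, a minimal Scala sketch of publishing an event to such a messaging queue with the standard Kafka producer client might look as follows; the broker address, topic name, and payload are hypothetical placeholders, not details of the actual Absa service.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    object StreamingPlatformExample {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        // Hypothetical broker address; a real deployment points at the cluster.
        props.put("bootstrap.servers", "localhost:9092")
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

        val producer = new KafkaProducer[String, String](props)
        try {
          // Hypothetical topic, key, and message: publish one event to the queue.
          val record = new ProducerRecord[String, String]("transactions", "txn-001", """{"amount":100}""")
          producer.send(record).get() // block until the broker acknowledges the write
        } finally {
          producer.close()
        }
      }
    }

A production service would add schema management, retries, and monitoring around this core.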

Job Description

You will have the opportunity to work directly across the firm with developers, operations staff, data scientists, architects, and business constituents to develop and enhance the big data service.

  • Development and deployment of data applications 
  • Design and implementation of infrastructure tooling, including work on horizontal frameworks and libraries 
  • Creation of data ingestion pipelines between legacy data warehouses and the big data stack (a minimal sketch appears after these lists) 
  • Automation of application back-end workflows 
  • Building and maintaining back-end services built on multiple service frameworks 
  • Maintaining and enhancing applications backed by Big Data computation engines 

We are looking for:

  • Eagerness to learn new approaches and technologies 
  • Strong problem-solving skills 
  • Strong programming skills 
  • A background in computer science, engineering, physics, mathematics, or an equivalent field 
  • Experience with Big Data platforms (vanilla Hadoop, Cloudera, or Hortonworks) 
  • Preferred: experience with Scala or other functional languages (Haskell, Clojure, Kotlin, Clean) 
  • Preferred: experience with some of the following: Apache Hadoop, Spark, Hive, Pig, Oozie, ZooKeeper, MongoDB, Couchbase, Impala, Kudu, Linux, Bash, version control tools, continuous integration tools 
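
As a rough sketch of the ingestion pipelines mentioned above, the following Spark job copies one table from a legacy warehouse into the big data stack; the JDBC URL, table name, credentials, and output path are illustrative assumptions, not references to Absa systems.

    import org.apache.spark.sql.SparkSession

    object WarehouseIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("legacy-warehouse-ingest")
          .getOrCreate()

        // Read a table from the legacy warehouse over JDBC.
        // URL, table, and credential variables are hypothetical placeholders.
        val customers = spark.read
          .format("jdbc")
          .option("url", "jdbc:oracle:thin:@//legacy-dwh:1521/DWH")
          .option("dbtable", "CORE.CUSTOMERS")
          .option("user", sys.env("DWH_USER"))
          .option("password", sys.env("DWH_PASSWORD"))
          .load()

        // Land the data as Parquet on HDFS for downstream Hive/Spark/Impala use.
        customers.write
          .mode("overwrite")
          .parquet("hdfs:///data/raw/customers")

        spark.stop()
      }
    }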

Education: Bachelor's Degree in Information Technology

Apply
