Are you looking for an employer who promotes individual excellence and mutual respect in a team-driven culture with a key focus on social empowerment? The Co-operative Bank of Kenya is the place for those looking for new horizons.
We seek a fearless inventor with an artistic streak who can find creative solutions to tough problems and sculpt brilliant strategies from a mountain of data, someone who can create the digital equivalent of the Mona Lisa with only an algorithm and a smile.
Reporting to the Head – Business Intelligence, the Data Engineer will be responsible for designing, building, and maintaining high-quality data pipelines, data integration layers, and automated machine learning operations (MLOps) pipelines. This role ensures the reliable, secure, and scalable movement of data across enterprise systems and supports the full lifecycle of AI/ML models.
The Role
Specifically, the successful jobholder will be required to:
- Gather information from business users to understand their detailed requirements and expectations; analyze business/use-case requirements from BI analysts to identify operational problems; and define data modeling requirements and develop data structures that support the generation of business insights and strategy.
- Analyze requirements and recommend solutions that address user needs.
- Assist users in preparing system definitions/specifications that highlight technical requirements, and roll out BI solutions to stakeholders.
- Identify, analyze and interpret trends or patterns in complex data sets using statistical techniques and provide reports.
- Create, schedule, test, deploy, and maintain data pipelines (ETL) that move data from the various sources to the required destinations, applying the transformations needed for reporting.
- Design, build, and optimize data ingestion pipelines from structured and unstructured sources.
- Ensure data quality, lineage, and governance through automated checks and metadata management.
- Implement CI/CD for data pipelines including automated testing, version control, and rollbacks.
- Create reusable pipeline components and templates to accelerate onboarding of new data sources.
- Develop and maintain data models, warehouse layers, and lakehouse zones.
- Build and automate end-to-end ML pipelines integrating training, validation, deployment, and monitoring.
- Create feature pipelines, model training pipelines, and batch/real-time prediction services.
- Manage ML model versioning, metadata tracking, and reproducibility.
- Build visualizations that summarize data and present them to business and other key stakeholders. Filter and clean data, and review reports, printouts, and performance indicators to locate and correct code problems.
- Secure BI solutions by putting adequate controls in place and restricting user access to programs in accordance with the requirements of the Bank.
- Guide the business in drawing up report formats and wireframes, advise on the best approach to transforming data and automating reports, and design and code reports/dashboards according to user specifications, with the key objective of delivering reports that assist in decision-making and control.
- Develop and maintain documentation/manuals on system configuration and setup, carry out technical user training as required to enable users to interpret BI reports, and handle data, dashboard, and report queries from users, resolving them or advising accordingly.
Skills, Competencies and Experience
The successful candidate will be required to have the following skills and competencies:
- A Bachelor of Science degree in Computer Science, IT, Software Engineering, or another related field.
- A minimum of 3 years’ experience in data engineering, BI, and software development using Oracle.
- Strong knowledge of and experience with ETL tools (Oracle ODI, Microsoft SSIS, Talend), query languages (Oracle PL/SQL, SQL), and programming languages (Java, Python, Scala).
- Experience with dimensional data modeling, data management, and data processing. Knowledge of statistics and experience using statistical packages to analyze large data sets (Python, R, SPSS, SAS, Excel, etc.).
- Experience in CI/CD and automation tools (GitLab CI, Jenkins, Argo, etc.).
- Experience with big data tools (Hadoop, Apache Hive, Scala, Kafka, Apache Spark, NoSQL databases).
- Knowledge of visualization tools (Oracle Analytics Server, Power BI, SSRS, Tableau, QlikView).
- Technical expertise in data models, database design and development, data mining, and segmentation techniques is desired.
- Very good knowledge of Windows operating systems and fair knowledge of Unix and Linux.
How to apply:
If you fit the profile, then apply today! Please forward your application, enclosing a detailed curriculum vitae, to [email protected], indicating the job reference number DE/IID/2025 as the subject of your email, by 31st December 2025.