Strathmore University is a Chartered University located in Nairobi, Kenya. It was the first multiracial and multi-religious educational institution in English-speaking Eastern Africa and, more recently, in 2004 became the first institute of higher learning in East and Central Africa to be ISO certified. Our mission is to provide all-round quality education in an atmo...
Basic job summary:
- The role is responsible for leading advanced data science workstreams related to AI model development, benchmarking, safety testing, and applied analytics.
Duties & Responsibilities:
Data Pipelines and Reporting
- Contribute significantly to the creation of the data architecture in terms of projected and expected data needs and performance and efficiency KPIs. Scope and stage work into well-defined milestones to avoid monolithic deliverables.
- Serve as the go-to expert in one or more areas of the codebase. Understand the architecture of the entire system and provide technical advice, weighing in on technical decisions that impact the whole project.
- Design and build end-to-end solutions successfully with guidance from experts in the field.
Data Science Strategy and Planning
- Contribute to the development and implementation of data science strategies.
- Work with cross-functional teams to identify and prioritize data science requirements.
- Support in recommending and implementing new technologies to enhance data science capabilities.
Data Quality and Governance
- Ensure the accuracy and reliability of data through data profiling, cleansing, and validation.
- Collaborate with data governance teams to establish and maintain data quality standards.
- Acquire data from primary or secondary data sources, filter, and clean data, maintain databases/data systems, and ensure data quality.
- Research governance trends and best practices and improve on existing implementations, constantly looking for improvements over previous iterations.
Advanced Analytics and Modeling
- Model, design, and implement AI algorithms using diverse sources of data.
- Design and implement rigorous evaluation pipelines for AI models including large language models (LLMs), retrieval-augmented systems, and task-specific models.
- Support in the development and maintenance of benchmarking datasets (e.g. agricultural Q&A, edge cases, contextual prompts) to support standardized model assessment.
- Lead technical safety testing of AI advisory systems, including hallucination detection, inappropriate content identification, and escalation logic.
- Support the development and testing of guardrails, disclaimers, and fallback mechanisms for farmer-facing advisory use cases.
- Design and analyse experiments (e.g. A/B testing, persona-based trials) to assess AI output quality, usability, and performance across different contexts.
- Work closely with Data Engineers and MLOps Engineers to ensure AI pipelines are reproducible, auditable, and well-documented.
- Explore, learn, and deliver more complex tasks, including robust scheduled code execution, building frameworks and APIs for the rest of the team, and building event-based data processing.
Collaboration and Stakeholder Management
- Collaborate with internal and external stakeholders to gather business requirements and understand analytical needs.
- Provide support and training to end-users on utilizing data science tools and interpreting analytical outputs.
- Support in writing reports, documentations, and publications related to business intelligence.
- Liaise and work effectively with the software development team to ensure all data needs are well addressed in projects.
- Research new and emerging trends in data science to grow skills and facilitate client projects.
Project Management and Follow-up
- Achieve results by following projects through to completion.
- Monitor project progress, manage priorities, commit to achieving quality outcomes, adhere to documentation procedures, and seek feedback from stakeholders to gauge satisfaction.
Minimum Academic Qualifications:
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Artificial Intelligence, or a closely related quantitative field.
Experience:
- Applicants should possess at least 5 years of progressive experience in data science, advanced analytics, or applied research roles, with demonstrated responsibility for complex analytical or modelling tasks.
Basic job summary:
- The Senior Data Engineer will be responsible for designing, building, and maintaining data infrastructure and pipelines, including implementing data ingestion frameworks.
Duties & Responsibilities:
Data Pipeline Design and Implementation
- Design, implement, and maintain robust data ingestion and processing pipelines for heterogeneous data sources, including soil, weather, agronomic, geospatial, and related contextual datasets.
- Develop scalable ETL/ELT workflows to transform raw data into structured, validated, and analytics-ready formats.
- Ensure pipelines support both batch and, where required, near-real-time data processing.
- Implement data versioning and lineage tracking to support reproducibility and auditability.
Cloud-Based Data Infrastructure
- Design and manage cloud-native data architectures, including data lakes, data warehouses, and analytical storage solutions.
- Optimize data storage and processing for performance, cost efficiency, and scalability.
- Support deployment of data pipelines across development, testing, and pilot environments.
- Collaborate with platform teams to ensure infrastructure aligns with DPI principles and interoperability standards.
Data Quality, Governance, and Reliability
- Implement automated data quality checks, validation rules, and monitoring to ensure accuracy, completeness, and consistency.
- Support enforcement of data governance requirements, including access controls, permissions, and audit logging.
- Work with policy and governance partners to ensure technical implementations align with data protection and consent frameworks.
- Proactively identify and remediate data reliability risks or bottlenecks.
Enablement of AI and LLM-Based Systems
- Prepare and serve data in formats optimized for AI and LLM-based advisory systems, including retrieval-augmented generation (RAG) pipelines and structured knowledge services.
- Support model evaluation, benchmarking, and experimentation workflows.
MLOps Support and Operational Readiness
- Contribute to MLOps workflows by supporting data versioning, pipeline automation, and integration with model deployment and evaluation processes.
- Implement monitoring and logging for data pipelines to support observability and issue diagnosis.
- Support reproducible experimentation through consistent data environments and pipeline automation.
Documentation, Collaboration, and Delivery
- Produce clear technical documentation covering data architectures, pipeline logic, and operational procedures.
Minimum Academic Qualifications:
- Bachelor’s degree in Computer Science, Software Engineering, Information Systems, or a closely related technical field
Experience:
- Applicants should possess at least 5 years of professional experience in data engineering, with demonstrated responsibility for designing and operating complex data pipelines and data platforms.
- Strong experience designing and implementing data ingestion, transformation, and processing pipelines (ETL/ELT) for large and heterogeneous datasets.
- Proficiency in Python and SQL, and experience with data processing frameworks and tools commonly used in modern data engineering environments
Basic job summary:
The Senior Software Engineer will be responsible for designing, building, and operationalizing software infrastructure. This role will lead the full-stack development and system integration of backend services, APIs, and data pipelines.
Responsibilities:
Software Development and Design
- Collaborate with the project technical lead and other team members to analyze requirements and design software solutions for AI applications.
- Develop, test, and debug software components for data exchange gateways, and cloud platforms.
- Assist in implementing data management, analytics, and visualization features for AI applications.
- Implement engineering frameworks that enable LLM-based advisory systems, including retrieval-augmented generation (RAG), structured knowledge integration, and prompt orchestration.
- Integrate soil, weather, and agronomic datasets into retrieval and reasoning pipelines to support contextualized and actionable advisory outputs.
- Support experimentation with different GenAI architectures and system configurations in collaboration with data science teams.
- Develop or support frontend and interface components (e.g. dashboards, admin tools, sandbox interfaces) required for internal testing, monitoring, and partner integration.
- Implement technical controls to support data governance requirements, including consent-aware data access, role-based permissions, and audit logging.
- Participate in code reviews and maintain coding standards and best practices.
Quality Assurance and Testing
- Identify, debug, and address any software-related issues, anomalies, or performance bottlenecks.
- Collaborate with the Quality Assurance team to ensure high-quality, optimized code solutions.
- Ensure the security and integrity of AI software systems, implementing encryption, authentication, and access control mechanisms as necessary.
- Perform code reviews, testing, and debugging activities to maintain high quality and reliability in software deliverables.
- Ensure secure handling of sensitive or regulated data in line with Kenya’s Data Protection Act and project governance frameworks.
- Embed responsible AI considerations into system design, including safeguards, escalation pathways, and human-in-the-loop mechanisms where required.
Documentation and Reporting
- Create and maintain comprehensive repository documentation for software designs, iterations, specifications, and testing procedures.
- Develop standard operating procedures (SOPs) for software MVP development and testing.
- Generate simulation and evaluation code reports before the final release version is deployed.
Collaboration and Support
- Collaborate with research, data science and engineering teams to meet project timelines and deliverables.
- Provide technical guidance and mentorship to junior software developers, fostering a culture of innovation and continuous learning.
Minimum Academic Qualifications:
- Bachelor’s degree or Master’s degree in Computer Science, Software Engineering, Information Systems, Data Science, or a closely related technical field from a recognized institution
Experience:
- Applicants should possess a minimum of 7 years’ experience in software development
Method of Application
Are you qualified for this position and interested in working with us? We would like to hear from you. Kindly send us a copy of your updated resume and letter of application (ONLY) quoting “Senior Data Scientist” on the subject line to recruitment@strathmore.edu by 27th February 2026.