Responsibilities:
- Design, build, and maintain Azure data services for internal and client platforms, including Azure SQL Database, Azure Data Lake Storage, Azure Data Factory, Azure Stream Analytics, Azure Analysis Services, Azure Databricks, and Microsoft Fabric.
- Develop and implement ETL processes, data pipelines, and data integration solutions using PySpark on Spark clusters.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Collaborate with the team to design optimised data models for traditional data warehouses as well as modern delta lakes.
- Ensure data security, compliance, and best practices are always maintained.
- Optimise system performance to ensure fast data retrieval and efficient data updates.
- Keep up-to-date with emerging database technologies and trends.
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field. A Master's degree is preferred.
- Proven work experience as a Data Engineer or similar role.
- Expertise in Azure data services and tools, including Azure Databricks and Microsoft Fabric.
- Proficiency in PySpark and SQL, with experience in other programming languages such as Python and Java.
- Experience in designing optimised data models in traditional warehouses and delta lakes.
- Strong analytical skills and the ability to produce useful data insights and visualisations.
- Excellent problem-solving and communication skills.
- Knowledge of other cloud platforms is a plus.
- Certifications in Azure Data Engineering or related fields would be a plus.