- Design and develop Perl-based data processing solutions to ingest, transform, and load data from various sources into our data infrastructure.
- Collaborate with stakeholders to understand data requirements and translate them into efficient and scalable data pipelines.
- Implement data validation and quality assurance processes to ensure data integrity and accuracy.
- Optimize data processing and storage strategies to improve system performance and scalability.
- Identify, troubleshoot, and resolve data-related issues, proposing and implementing solutions as needed.
- Collaborate with software engineers to integrate data pipelines with existing systems and applications.
- Develop and maintain documentation related to data pipelines, data models, and processes.
- Stay updated with industry trends and best practices in data engineering, and contribute to the continuous improvement of our data infrastructure.
- Proficiency in the Perl programming language, with strong experience in developing data processing applications.
- Solid understanding of data structures, algorithms, and database concepts.
- Experience with relational databases (e.g., MySQL, PostgreSQL) and SQL query optimization.
- Familiarity with data integration and ETL (Extract, Transform, Load) processes.
- Strong problem-solving skills and the ability to analyze and troubleshoot complex data-related issues.
- Experience working with large datasets and distributed computing frameworks (e.g., Hadoop, Spark) is a plus.
- Knowledge of data warehousing concepts and tools is desirable.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Attention to detail and a commitment to delivering high-quality data products.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Pension scheme
- Home office
- Stock options
- Shopping discounts