Responsibilities
Design, develop, and maintain cloud-based data pipelines and data processing systems.
Collaborate with stakeholders to gather requirements and define data engineering solutions.
Implement data integration and ETL processes to extract, transform, and load data from various sources.
Ensure data quality, integrity, and consistency across different data sets.
Optimize and fine-tune data processing and data storage systems for performance and scalability.
Monitor and troubleshoot data pipelines and resolve any issues that arise.
Collaborate with cross-functional teams to implement data governance and security measures.
Requirements
Minimum of 5 years' experience in Cloud Data Engineering, Machine Learning, and Visualization.
Proficiency in the AWS Boto3 Python SDK.
Experience with big data processing frameworks such as Apache Spark or Hadoop.
Familiarity with data modeling, data warehousing, and database design principles.
Knowledge of cloud data warehousing solutions such as Amazon Redshift and Google BigQuery.
Experience with ETL tools such as AWS Glue.
Knowledge of SQL and NoSQL databases.
Experience with data integration tools and technologies (e.g., Apache Kafka, Apache NiFi).
Understanding of data governance, data security, and privacy best practices.
Knowledge of data visualization tools (e.g., Data Studio, QuickSight, Tableau, Power BI).
Relevant Solution Architecture, Data Engineering, and Machine Learning certifications.
Experience with containerization technologies (e.g., Docker, Kubernetes).
Interested and qualified candidates should send their CV to: apply@alfred-victoria.com using the Job Title as the subject of the mail.