(T-447) TRANSFORMATIVE DATA EXPERT

Bebeedataengineer


Unlock the Power of Data Engineering

Blend is a leading provider of AI services, dedicated to driving meaningful impact for its clients through cutting-edge data science, technology, and innovation.

Job Description:

We are seeking an experienced Senior Data Engineer with a strong foundation in Python, SQL, and Azure, and hands-on expertise in Databricks. In this role, you will design and maintain scalable data pipelines and architecture to support analytics, data science, and business intelligence initiatives.

Key Responsibilities:
- Migrate legacy data infrastructure while ensuring performance, security, and scalability.
- Develop and optimize data systems, pipelines, and analytical tools, leveraging a bronze-silver-gold (medallion) architecture.
- Configure Databricks notebooks and pipelines to ingest data from both on-premise and cloud environments, following native Databricks standards.
- Collaborate with stakeholders to translate business needs into robust data solutions, and communicate technical progress, risks, and recommendations effectively.
- Implement strong data governance and access-control mechanisms to ensure data quality, security, and compliance.
- Conduct advanced data analysis to support decision-making and reporting needs.
- Apply QA testing practices within data workflows, particularly in healthcare environments.
- Drive initiatives in Data Discovery, Data Lineage, and Data Quality across the organization.

Requirements:
- Degree in computer science, engineering, or a related field.
- 3+ years of hands-on experience in data engineering, including at least 2 years working with Azure or Databricks.
- English proficiency (B2 or higher).
- Experience with Azure Cloud, CI/CD pipelines, and Agile methodologies.
- Proficiency in developing and managing Databricks notebooks and implementing data engineering frameworks.
- Strong programming skills in Python for data processing and automation.
- Advanced proficiency in SQL for querying and transforming large datasets.
- Solid understanding of data modelling, warehousing, and performance optimization techniques.
- Proven experience in cataloging and inventorying large-scale datasets.
- PySpark experience is a plus.

Perks and Benefits:

Learning Opportunities:
- Certifications in AWS, Databricks, and Snowflake.
- Access to AI learning paths to stay up to date with the latest technologies.
- Study plans, courses, and additional certifications tailored to your role.
- Access to Udemy Business, offering thousands of courses to boost your technical and soft skills.
- English lessons to support your professional communication.

Mentoring and Development:
- Career development plans and mentorship programs to help shape your path.

Celebrations & Support:
- Special day rewards to celebrate birthdays, work anniversaries, and other personal milestones.
- Company-provided equipment.

Flexible Working Options:
- To help you strike the right balance.
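For candidates unfamiliar with the bronze-silver-gold architecture named in the responsibilities, the idea can be sketched in plain Python (the record fields here are hypothetical; a production pipeline on Databricks would typically use PySpark DataFrames and Delta tables instead of plain dicts):

```python
# Minimal sketch of bronze-silver-gold (medallion) layering.
# Field names ("patient_id", "visit_date", "amount") are illustrative only.

def to_silver(bronze):
    """Clean the raw (bronze) records: drop incomplete rows and duplicates."""
    seen = set()
    silver = []
    for rec in bronze:
        if rec.get("patient_id") is None or rec.get("amount") is None:
            continue  # discard incomplete records
        key = (rec["patient_id"], rec["visit_date"])
        if key in seen:
            continue  # discard duplicate records
        seen.add(key)
        silver.append(rec)
    return silver

def to_gold(silver):
    """Aggregate cleaned (silver) records into a reporting-ready (gold) view."""
    totals = {}
    for rec in silver:
        totals[rec["patient_id"]] = totals.get(rec["patient_id"], 0) + rec["amount"]
    return totals

bronze = [
    {"patient_id": "p1", "visit_date": "2024-01-02", "amount": 120.0},
    {"patient_id": "p1", "visit_date": "2024-01-02", "amount": 120.0},  # duplicate
    {"patient_id": "p2", "visit_date": "2024-01-03", "amount": None},   # incomplete
    {"patient_id": "p2", "visit_date": "2024-01-05", "amount": 80.0},
]

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'p1': 120.0, 'p2': 80.0}
```

Each layer refines the one before it: bronze holds raw ingested data, silver holds validated and deduplicated data, and gold holds aggregates fit for analytics and BI.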

trabajosonline.net © 2017–2021