Job Description

As a Manager/Architect Data Engineer, you will lead large-scale digital transformation projects by designing and implementing robust, cloud-native data platforms. Your role spans strategic architecture and hands-on delivery, collaborating with stakeholders to develop data ecosystems that foster business innovation and growth.

Responsibilities

Your Impact

Architecture & Strategy
- Define end-to-end data architecture strategies using Azure and Databricks, ensuring scalability, security, and compliance with enterprise standards.
- Lead the selection of appropriate data technologies, frameworks, and patterns based on use-case considerations such as cost, performance, and maintainability.
- Develop and maintain architectural roadmaps for data platform modernization.

Solution Design & Delivery
- Translate complex business requirements into scalable, modular data architectures.
- Design and implement data ingestion, processing, storage, and consumption layers following modern data engineering practices.
- Create reusable components and frameworks to streamline future development efforts.

Technical Leadership
- Provide technical guidance to engineering teams, reviewing solutions to ensure architectural principles are followed.
- Assist in estimating project scope, timelines, and resources.
- Promote best practices in data engineering, including CI/CD, testing, and automation.

Client Engagement & Collaboration
- Partner with business and technical stakeholders to understand objectives and co-develop data-driven solutions.
- Lead architecture reviews, design workshops, and technical deep dives with clients.

Operational Excellence
- Monitor and optimize the performance and reliability of data solutions.
- Lead initiatives in automation, observability, and platform health monitoring.

Qualifications

Your Skills & Experience
- Proven experience designing and implementing scalable data pipelines and architectures with Azure and Databricks.
- Expertise in Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure SQL Database.
- Strong Python programming skills for data transformation and pipeline development.
- Experience with various data storage solutions: columnar (e.g., BigQuery, Redshift, Vertica), NoSQL (e.g., Cosmos DB, DynamoDB), and relational databases (e.g., SQL Server, Oracle, MySQL).
- Solid understanding of ETL/ELT, data modeling (dimensional, star/snowflake schemas), and distributed frameworks such as Spark.
- Familiarity with CI/CD, Git, and deployment automation tools.
- Excellent communication and stakeholder management skills.

Additional Information

Set Yourself Apart With
- Knowledge of DevOps and DataOps practices in cloud environments.
- Experience with multi-cloud (AWS, GCP) and hybrid cloud architectures.

This position is full-time and open to both permanent and contractor candidates.