Responsibilities:
- Design and implement a centralized Data Fabric ensuring governance, trust, and cost efficiency.
- Optimize data ingestion pipelines and ETL workflows for batch and real-time processing.
- Develop and maintain an enterprise data catalog enriched with metadata to enhance discoverability.
- Implement data governance frameworks using Unity Catalog and define policies for centralized and self-governed domains.
- Enable self-service analytics with scalable Databricks and ThoughtSpot solutions.
- Manage and integrate internal and external data sources, ensuring quality and eliminating redundancies.
- Collaborate with engineering, QA, and business teams to align data strategies with business goals.
- Support ML/AI, analytics, and martech initiatives by providing a robust data infrastructure.

What we're looking for:
- Extensive experience with Databricks (Azure) and its ecosystem.
- Strong expertise in data governance, metadata management, and enterprise data catalogs.
- Proficiency in SQL, Spark, and Python for data processing and transformation.
- Experience with Azure Data Factory or similar orchestration tools (e.g., Airflow).
- Knowledge of BI and self-service analytics tools such as ThoughtSpot or Tableau.
- Hands-on experience with cloud data platforms (Azure, GCP preferred).
- Solid understanding of batch and real-time data processing architectures.
- Ability to assess and optimize data infrastructure costs.
- Strong problem-solving skills and the ability to translate business needs into scalable solutions.
- Excellent communication and collaboration skills in cross-functional environments.