[H-298] | SENIOR CLOUD AUTOMATION ENGINEER

Bebeedevops


Job Title

A senior DevOps engineer position is available to manage and operate scalable and robust Kubernetes environments, design complex data pipelines using Argo Workflows, and develop infrastructure as code with Terraform.

Job Description

As a mid-senior level professional, you will be responsible for:
- Designing, deploying, and operating scalable and robust Kubernetes environments (EKS or similar) supporting data and analytics workloads;
- Developing and managing infrastructure with Terraform and related tools, implementing infrastructure automation and repeatable deployments in AWS and Kubernetes;
- Supporting high-availability S3-based data lake environments and associated data tooling, ensuring robust monitoring, scalability, and security;
- Instrumenting and monitoring Kubernetes clusters, Argo Workflows, and data platforms, and creating actionable alerts and dashboards to quickly surface and resolve operational issues;
- Participating in incident, problem, and change management processes, proactively driving improvements in reliability KPIs (MTTD, MTTR, availability);
- Collaborating cross-functionally with Data Engineering, SRE, Product, and Business teams to deliver resilient solutions and support key initiatives such as Git migration and cloud modernization;
- Applying best practices in networking (Layer 4-7), firewalls, VPNs, IAM, and data encryption across the cloud and data stack;
- Engaging in capacity planning, forecasting, and performance tuning for large-scale cloud and Kubernetes-based workloads.

Requirements

To succeed in this role, you should have:
- A Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience;
- 5+ years of production experience operating and managing Kubernetes clusters (preferably in AWS, EKS, or similar environments);
- Strong hands-on experience with AWS cloud services;
- Deep hands-on experience with Argo Workflows, including developing, deploying, and troubleshooting complex pipelines;
- Experience with Git, GitLab, and CI/CD, including leading or supporting migration projects and the adoption of GitOps practices;
- Skill in developing infrastructure as code with Terraform and related automation tools;
- Practical experience automating data workflows and orchestration in a cloud-native environment;
- Proficiency in SQL and basic scripting (Python or similar);
- A solid understanding of networking (Layer 4-7), security, and IAM in cloud environments;
- Proficiency in Linux systems administration (RedHat/CentOS/Ubuntu/Amazon Linux);
- Strong written and verbal communication skills;
- The ability to collaborate in cross-functional environments;
- A track record of delivering reliable, secure, and scalable data platforms in rapidly changing environments;
- Experience working with S3-based data lakes or similar large, cloud-native data repositories;
- An Upper-Intermediate English level.

Nice to Have

Additionally, it would be beneficial if you have:
- Exposure to regulated or healthcare environments;
- Familiarity with data modeling, analytics/BI platforms, or DBT;
- Experience leading software/tooling migrations (e.g., Bitbucket to GitLab) or managing large-scale CI/CD consolidations.

Benefits

This role offers:
- Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps;
- Competitive compensation: Matching your ever-growing skills, talent, and contributions with competitive USD-based compensation;
- Flextime: Tailor your schedule for an optimal work-life balance, with the option of working from home or going to the office – whatever makes you happiest and most productive.
