The Senior Software Data Engineer is responsible for the design, implementation, management, and automation of the data value stream in the Azure public cloud.
This person will assist with the initial design, establish best practices, and provide proofs of concept for Azure Cloud Service managed and unmanaged database deployments, operating in a hybrid cloud environment that spans multi-tenant private data centers and the public cloud.
Design and implement continuous-delivery database pipelines that provision non-production and production environments for data-centric microservices, data access services, and ETL services, among other DataOps and AIOps capabilities.
Develop deployment automation for any type of workload (application, database, caching, data movement, eventing, etc.) on Azure DevOps/AWS CloudFormation using industry-standard deployment and configuration tools.
Develop tools and solutions to integrate, automate, and orchestrate operational cloud database needs, leveraging automation, Azure/AWS managed data services, Azure/AWS DevOps, and ARM templates.
Develop a data process-to-KPI map that clearly communicates the data value stream and the KPIs associated with each data subprocess, as an indicator of continuous improvement in productivity, quality, reliability, and availability.
Automate data quality checks using statistical process control (SPC) techniques.
Automate data asset deployment.
Automate data monitoring and alerting.
Automate data infrastructure provisioning and deprovisioning.
Automate all facets of data access, data persistence, data infrastructure, and data reliability/quality/availability management services.
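One of the automation responsibilities above, data quality monitoring with SPC techniques, can be sketched as follows. This is a minimal illustration only, assuming a 3-sigma individuals control chart over a daily row-count metric; the function names, metric, and sample values are hypothetical and not taken from this posting.

```python
# Minimal sketch of SPC-style data-quality monitoring: flag a metric
# value that falls outside 3-sigma control limits derived from history.
from statistics import mean, stdev

def control_limits(history, sigmas=3.0):
    """Return (lower, upper) control limits from historical metric values."""
    mu = mean(history)
    sd = stdev(history)
    return mu - sigmas * sd, mu + sigmas * sd

def out_of_control(history, current):
    """True if the current metric value lies outside the control limits."""
    lower, upper = control_limits(history)
    return not (lower <= current <= upper)

# Example: daily row counts from a nightly ETL load (illustrative data).
history = [10_120, 9_980, 10_050, 10_200, 9_900, 10_010, 10_130]
print(out_of_control(history, 10_060))  # an in-control day
print(out_of_control(history, 4_200))   # likely a broken load
```

In practice the flagged result would feed the monitoring and alerting automation listed above rather than a `print` call.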
Document deployment automation and pipeline details, and socialize them effectively with development, infrastructure, security, and operations teams to educate those teams and promote use of the DevOps and DataOps pipelines.
Partner with development, infrastructure, security, and operations teams to identify workload patterns and deliver suitable deployment automation.
Minimum of 8 years of experience in software development, design, and analytics.
Minimum of 5 years of hands-on Kafka experience implementing clusters and topics and integrating services through the Producer, Consumer, Streams, Connect, and Admin APIs.
Minimum of 5 years designing and administering various cloud platforms in a large-scale environment; preferred platform: Azure.
Minimum of 5 years of Oracle DBMS design, implementation, management, administration, and replatforming of persistence, business intelligence, and analytics services for continuity of operations.
Minimum of 5 years implementing distributed data processing, event-driven data processing, and data integration patterns (ETL, ELT).
Experience with Azure/AWS Managed Database technologies and services for SQL and NoSQL.
Experience with Azure/AWS services, including managed Kubernetes (EKS/AKS).
Experience with Public Cloud automation using Azure DevOps and ARM templates.
Experience with scripting languages and process automation: Helm charts, YAML, JSON, and Python.
Experience designing and developing CI/CD pipelines for automated application deployments, using Azure DevOps, Jenkins, Harness, uDeploy, Artifactory, Bitbucket, and Docker container tools.
Certification in Azure or AWS Cloud
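As a concrete illustration of the pipeline work described in the qualifications above, a minimal Azure DevOps pipeline of this kind might look like the following config sketch. The stage layout, the `data-service` repository name, and the ARM template path are all hypothetical, and service-connection inputs are omitted for brevity.

```yaml
# Minimal sketch: build a container image, then provision database
# infrastructure from an ARM template (illustrative names throughout).
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildImage
        steps:
          - task: Docker@2
            inputs:
              command: buildAndPush
              repository: data-service
              dockerfile: Dockerfile
  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: ProvisionAndDeploy
        steps:
          # Provision the database infrastructure from an ARM template.
          - task: AzureResourceManagerTemplateDeployment@3
            inputs:
              deploymentScope: Resource Group
              csmFile: infra/database.json
```

A real pipeline would add the Azure service connection, subscription, resource group, and environment-specific parameter files for non-production versus production deployments.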
About NTT DATA Services
NTT DATA Services is a global business and IT services provider specializing in digital, cloud and automation across a comprehensive portfolio of consulting, applications, infrastructure and business process services. We are part of the NTT family of companies, a partner to 85% of the Fortune 100.
NTT DATA Services is an equal opportunity employer and will consider all qualified applicants for employment without regard to race, gender, disability, age, veteran status, sexual orientation, gender identity, or any other class protected by law.