The Data Engineer is an experienced data pipeline builder and data wrangler who optimizes data systems and builds them from the ground up. The candidate will support software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that optimal data delivery architecture is consistent across ongoing projects. The candidate must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The candidate will have experience working with application architects and engineers to optimize data access paths while maintaining solid structural integrity and performance when integrating with backend data stores. The candidate will also have the excellent communication and presentation skills required to socialize technical concepts and build consensus.
Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability.
- Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources.
- Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications:
- Advanced working SQL knowledge and experience with relational databases, including query authoring and familiarity with a variety of database systems.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience building processes that support data transformation, data structures, metadata, dependency and workload management.
- 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience with big data tools such as Hadoop, Spark, and Kafka.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with object-oriented and functional scripting languages such as R, Python, Java, C++, and Scala.
Job Type: Full-time
Experience: 1 year of relevant experience (required)