We are in search of a technologist and problem solver who can wear many hats in our growing organization. The ideal candidate is a self-starter who works well both independently and with others, knowing when to figure things out on their own and when to ask questions and seek assistance. Strong written and verbal communication skills are a must, along with the ability to gather information from a variety of sources.
The role involves providing monitoring, support, and system administration centered on predictive maintenance and reliability engineering. The ideal candidate will possess expertise in traditional data science analysis processes as well as the software engineering skills needed to build and deploy decision support aids. The team is looking for candidates with a full range of skills in statistical analysis and programming, and with a data science background (e.g., machine learning and artificial intelligence). Candidates experienced with process automation and optimization are also highly encouraged to apply.
- Designing schemas, data models, and data architecture for Hadoop and HBase environments
- Implementing data flow scripts using Unix / HiveQL / Oozie scripting
- Designing, building, and supporting data processing pipelines that transform data using Hadoop technologies
- Designing and building data assets in Hive (an illustrative sketch follows this list)
- Developing and executing quality assurance and test scripts
- Working with business analysts to understand business requirements and use cases
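
To give a flavor of the Hive work described above, here is a minimal HiveQL sketch of building a partitioned data asset from a staging table. It is illustrative only; the database, table, and column names are hypothetical and not taken from our systems.

    -- Hypothetical names throughout; shown only to illustrate the kind of work involved.
    -- Build a partitioned, ORC-backed Hive table as a reusable data asset.
    CREATE TABLE IF NOT EXISTS analytics.sensor_readings (
        asset_id      STRING,
        reading_ts    TIMESTAMP,
        temperature_c DOUBLE
    )
    PARTITIONED BY (reading_date STRING)
    STORED AS ORC
    TBLPROPERTIES ('orc.compress' = 'SNAPPY');

    -- Enable dynamic partitioning, then load from a raw staging table,
    -- deriving the partition column from the event timestamp.
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;

    INSERT OVERWRITE TABLE analytics.sensor_readings PARTITION (reading_date)
    SELECT
        asset_id,
        reading_ts,
        temperature_c,
        to_date(reading_ts) AS reading_date
    FROM staging.raw_sensor_readings
    WHERE reading_ts IS NOT NULL;

In practice, a script like this would typically be scheduled by Oozie or Airflow as one step in a larger data processing pipeline.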
- A Bachelor's degree in Statistics, Industrial and Systems Engineering, Machine Learning, Applied Analytics, Computer Science, or Computer Engineering
- A minimum of 2 years of experience with best practices for designing and building ETL code
- Strong SQL experience, with the ability to develop, tune, and debug complex SQL applications
- At least 2 years of hands-on experience with object-oriented programming in Python
- Knowledge of schema design and data modeling, and a proven ability to work with complex data
- Hands-on experience with Hadoop, MapReduce, Hive, Oozie, Airflow, and Elasticsearch
- Understanding of Hadoop file formats and compression
- Familiarity with the MapR distribution of Hadoop
- Understanding of best practices for building a data lake and analytical architecture on Hadoop
- Scripting/programming with UNIX shell, Java, Python, Scala, etc.
- Knowledge of real-time data ingestion into Hadoop
- Experience working in large environments such as RDBMS, EDW, and NoSQL
- Experience with test-driven development and SCM/CI tools such as Git and Jenkins
- Experience with graph databases
This position works out of our headquarters in the Sandy Springs area of Atlanta. This is a full-time salaried position with full health benefits, a 401(k), paid vacation, and bonus opportunity.
Mather Economics is a business consultancy specializing in applied economics. Our work uses econometric analysis to develop implementable business solutions such as optimal pricing recommendations, suggested inventory levels, sales forecasts, asset valuation, subscription pricing, and customer retention. We employ leading-edge econometric approaches to solve complex business problems and assist our clients as they implement these solutions to maximize operating margins, grow revenue, or lower costs. We work with extremely large, highly technical data sets captured digitally from the web and provided from our customers' data systems. We help companies gain actionable insights from their own data to develop and maintain competitive advantages.
Mather Economics has worked in many industries, including but not limited to Lottery, Publishing & Media, Energy & Utilities, Technology, Telecommunications, Banking, Litigation Support, and Environmental Valuation Services.