- Machine Learning
- Big Data
- Distributed Systems
RII develops cutting-edge software for the government and military. We use agile development practices and user-centered design to create innovative software solutions for complex real-world problems. We're breaking through the big, slow status quo with transformative technology that fundamentally changes and improves the world.
Our team is currently seeking a Big Data Engineer to work at INDOPACOM. The Engineer will work directly with INDOPACOM staff to drive the development of program analytic concepts and solutions.
If you are a sharp, experienced engineer with demonstrated capabilities in implementing machine learning and analytical solutions on Big Data stacks, we want to hear from you. Joining RII not only provides unique challenges and opportunities, but also directly and positively impacts many of our Defense and Homeland Security end users.
WHAT YOU WILL BE DOING
Identify, develop, and evaluate machine learning capabilities for deployment on our best-of-breed Big Data tech stack in support of INDOPACOM AI/ML initiatives
Design, develop, train, tune, and deploy learning models in current ML frameworks (e.g., TensorFlow, PyTorch, Theano, Caffe, Spark MLlib, etc.)
Maintain awareness of current and emerging capabilities in machine learning and analytical technologies and how these apply to solving our customers’ challenges
Develop compelling solutions by collaborating with customers and the development team
Document use cases, solutions, and recommendations for customers
WHAT YOU HAVE DONE
BS in Computer Science, or equivalent degree or work experience
Experience designing, developing, training, tuning, and deploying learning models in current ML frameworks (e.g., TensorFlow, PyTorch, Theano, Caffe, Spark MLlib, etc.)
Strong software development skills in Python or Java
Experience performing exploratory analysis – cleaning, joining, enriching, statistical modeling, and prototyping visualizations to identify latent trends and patterns
Experience curating training and analysis datasets – cleaning, enriching, joining, annotating, and crowdsourcing data for creating and evaluating production models
Understanding of Linux operating systems, distributed systems, and databases (NoSQL and relational)
Excellent understanding of ETL and data analytics platforms
Experience understanding and decomposing system level requirements into discrete and measurable tasks
Experience designing, developing, and deploying analytics in streaming and cross-corpus frameworks such as Spring Cloud, Kafka, Spark, MapReduce, and/or other related Big Data technologies
Experience with Agile Methodologies
Experience with natural language processing techniques and technologies
Strong knowledge of the installation, configuration, and maintenance of cloud computing and Big Data infrastructure, including Hadoop, Accumulo, Kafka, Spark, Elasticsearch, Puppet, Ansible, Lucene, and related technologies
Experience with continuous integration and continuous deployment using Atlassian products
Extensive experience with model transfer techniques and methodologies
Experience applying machine learning techniques to sparse data sets
Knowledge of cloud computing infrastructure such as Amazon Web Services EC2 & Elastic
Experience creating Big Data solutions using public, private, and hybrid cloud approaches
Experience with integration methodologies and tools for Big Data applications and services
Experience with data quality and data profiling tools
Active Top Secret (TS) clearance required; TS/SCI preferred
Research Innovations, Inc. is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.