Sr. Big Data Developer - No H-1B Sponsorship
Technology Ventures
McLean, VA

Title: Sr. Big Data Developer
Job Type: Salaried
Company Type: Fortune 50 Financial

An exciting opportunity for a Sr. Big Data Developer to contribute to the development and maintenance of Big Data applications. We are seeking a motivated individual who can translate business needs into solutions and design and develop enterprise software applications on Big Data platforms.

Responsibilities and desired skills:

  • Work with initiative leads, business areas, technical leads, architects, subject matter experts, developers, technical support groups and test teams to deliver value to the business.
  • Deliver Hadoop components as specified in the design, functional and non-functional requirements, within agreed upon cadence based on Scaled Agile practices.
  • Design, prototype, develop, test, and document product features. Make recommendations for enhancements that result in a cost-effective product delivery and/or operational efficiency.
  • Develop processes to extract, transform, and load unstructured data.
  • Cleanse, manipulate and analyze large datasets (Structured and Unstructured data – XMLs, JSONs, PDFs) using Hadoop platform.
  • Develop Hive scripts to filter, map, and aggregate data; use Sqoop to transfer data to and from Hadoop.
  • Use Spark to cleanse and analyze large datasets.
  • Manage and implement data processes (Data Quality reports)
  • Develop data profiling, deduplication, and matching logic for analysis.
  • Experience with MPP databases like Vertica/Redshift
  • Has experience loading data from disparate structured and unstructured data sources.
  • Has experience pre-processing data with Hive and Pig.
  • Perform analysis of vast data stores and uncover insights.
  • Maintain security and data privacy.
  • Has experience with high-speed querying.
  • Has experience managing and deploying HBase.
  • Good knowledge of back-end programming, specifically Java, JavaScript, Node.js, and OOAD.
  • Expertise in writing high-performance, reliable and maintainable code.
  • Ability to write MapReduce jobs.
  • Good knowledge of database structures, theories, principles, and practices.
  • Hands on experience in HiveQL.
  • Experience with data loading tools like Spark, Sqoop.
  • Knowledge of workflow/schedulers like Oozie.
  • Analytical and problem-solving skills applied to the Big Data domain.
  • Good aptitude in multi-threading and concurrency concepts.
  • Perform Exploratory Data Analysis
  • Experience with Unix shell scripting and administration.
  • Experience with configuration management tools & Unix shell.
  • Extensive knowledge and experience with Java EE patterns.
  • Extensive programming experience in JavaScript, Python, R, etc.
  • Hands-on experience with Build and Deployment tools and languages – Maven/Jenkins and Shell scripting.
  • Perform unit testing and document test results.
  • Complete code reviews, complete documentation of issues identified and action items.
  • Correct testing defects and support all testing, including but not limited to: Development Integration Testing, System Testing, User Acceptance Testing, End-to-End Testing, and Performance Testing.
  • Identify application bottlenecks and opportunities to optimize performance.
  • Perform troubleshooting of production issues.
  • Provide resolution to an extensive range of complicated problems, proactively and as issues surface.
  • Perform code version control activities.
  • Minimum of 2 to 4 years of experience working in an Agile, Lean/Kanban, or Scaled Agile organization. Demonstrated ability to use Lean/Agile delivery practices to improve teams, quality, and reliability.
  • Understand business needs and processes, identify solutions, and guide project teams/sponsors to the best solution in a simplified, meaningful way.
  • Engage with Architecture. Partner with Enterprise Architecture to define technical solutions to complex business issues that align with the target-state architecture and conform with corporate best practices.
  • Quick learner of new technologies and tools.
  • Possess a can-do attitude and be able to work under minimal supervision.
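
The MapReduce and aggregation duties above follow the classic map → shuffle → reduce pattern. As an illustrative sketch only (plain Python with no Hadoop dependency; the function names here are invented for this example, not part of any specific toolchain):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(line):
    """Emit (word, 1) pairs, mirroring a Hadoop mapper."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(word, counts):
    """Sum the counts for one key, mirroring a Hadoop reducer."""
    return word, sum(counts)

def word_count(lines):
    # Shuffle/sort: group intermediate pairs by key, as the framework would.
    pairs = sorted(p for line in lines for p in map_phase(line))
    return dict(
        reduce_phase(word, (count for _, count in group))
        for word, group in groupby(pairs, key=itemgetter(0))
    )

result = word_count(["big data big plans", "data pipelines"])
# result == {"big": 2, "data": 2, "pipelines": 1, "plans": 1}
```

In a real Hadoop job the shuffle/sort step is handled by the framework; here it is simulated with `sorted` and `groupby`.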

Qualifications:

  • Minimum of 2 to 4 years of experience in Big Data technologies (Spark, Hadoop, HDFS, MongoDB, HBase, Vertica, Redshift, NiFi, Kafka, Oozie, MapReduce, Solr).
  • Minimum of 8 to 10 years of overall development experience.
  • Minimum of 2 years of programming experience in JavaScript, Python, R, etc.
  • Minimum of 2 years of experience in analyzing unstructured data.
  • Be self-driven, actively look for ways to contribute, and know how to get things done.
  • Possess good communication and reasoning skills, including the ability to make a strong case for technology choices
  • Help identify risks/impediments, escalate issues, bring transparency to deliverables and ensure Agile delivery
  • Bachelor’s degree in Computer Science or Engineering or equivalent work experience.
  • Excellent communication skills that will allow the candidate to successfully document processes as well as interact with business owners, translating technical details for a non-technical audience.
  • Strong problem-solving skills.
  • Innovative in providing solutions and eager to take on challenges.
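
Analyzing unstructured data (XML, JSON, PDFs) typically starts by flattening nested records into tabular columns. A minimal sketch, assuming a nested JSON source (the `flatten` helper and the sample record are made up for illustration):

```python
import json

def flatten(record, parent_key="", sep="."):
    """Recursively flatten nested dicts into dotted column names."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

raw = '{"id": 7, "customer": {"name": "Ana", "region": "VA"}}'
flat = flatten(json.loads(raw))
# flat == {"id": 7, "customer.name": "Ana", "customer.region": "VA"}
```

The dotted keys map naturally onto columns in a Hive table or an MPP database such as Vertica or Redshift.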

Job Type: Full-time

Experience:

  • Green Card or US Citizenship (Required)
  • Unix: 5 years (Preferred)
  • Programming in JavaScript, Python, R, etc.: 5 years (Required)
  • Hadoop: 3 years (Preferred)
  • Big Data: 5 years (Required)
  • ETL: 4 years (Required)
  • Data Analysis: 5 years (Preferred)

Education:

  • Bachelor's (Required)

License:

  • Relevant (Preferred)

Work authorization:

  • United States (Required)

Work Location:

  • One location

Benefits:

  • Health insurance
  • Dental insurance
  • Vision insurance
  • Retirement plan
  • Parental leave
  • Tuition reimbursement

Visa Sponsorship Potentially Available:

  • No: Not providing sponsorship for this job

Schedule:

  • Monday to Friday