We are EA, the world’s largest video game publisher. You’re probably familiar with many of our titles: Madden, FIFA, The Sims, Need for Speed, Dead Space, Battlefield and Star Wars, to name a few. But maybe you don’t know how committed we are to creating games for every platform, from social to mobile to console, to give our consumers the anytime, anywhere access they demand. What does that mean for you? It means more opportunities to unleash your creative genius, be inspired by those around you and ignite your path in any direction you choose.
The Challenge Ahead:
The EADP Data Group is responsible for developing a new unified Big Data pipeline across all franchises at Electronic Arts. This platform will incorporate data collection, ingestion, processing, access and visualization, all built on a modern, cloud-based tech stack with best-in-class tools. The Data Group will provide the tools and platform that power the future state of game development, marketing, sales, accounting and customer experience.
We are looking for developers who are interested in working on a large-scale distributed data system from the ground up for one of the most creative, innovative companies in technology.
Help us build a unified data platform across EA, spanning 20+ game studios as data sources
Develop infrastructure software that slices and dices data using Hadoop and MapReduce
Develop reporting systems that inform on key metrics, detect anomalies, and forecast future results
Develop complex queries to solve data mining problems
Write reliable and efficient programs scaling to massive (petabyte) datasets and large clusters of machines
Flexibility to work with both SQL and NoSQL solutions
Work with data modelers, data analysts, and BI developers to understand requirements, develop ETL processes, validate results, and deliver to production
Ensure the efficiency, scalability, and stability of data collection, extraction, and storage processes
What We’re Looking For:
Working towards an MS in Computer Science or a related technical discipline (or equivalent experience)
Experience working with large-scale systems and data platforms/warehouses
A solid foundation in computer science, with competencies in algorithms, data structures, and software design
Software development experience: writing clean, reusable code, test-driven development, and continuous integration
Experience with MapReduce, Hadoop, Hive, or other NoSQL stacks is a plus
Fluency with Java, SQL, Perl/Python, or C++
Fast prototyping skills; familiarity with scripting languages such as Bash, Perl, awk, Python
Experience working with columnar analytics databases or relational databases is a plus
Experience with data modeling and BI tools is a plus