Corporate

Big Data Architect, Lead

Job Summary:

This position will lead Data Architecture efforts and will focus on our new big data environment, which will help us become a data-driven organization. Duties will include building the appropriate big data architecture and recommending the technologies required to support it. This role will conduct proofs of concept by building small environments and demonstrating the value of these technologies. The Big Data Architect will help socialize and evangelize the value and benefits of the big data platform to our business and IT stakeholders.

Fundamental Job Tasks:

  • 6+ years of experience building solution designs and architecture for enterprise Big Data Solutions
  • 2+ years of experience in technology consulting preferred
  • Experience in the Consumer Packaged Goods/Retail domains is preferred
  • Working with all organizational levels to understand requirements and provide thought leadership related to Big Data Solutions
  • Ability to facilitate, guide, and influence decision makers and stakeholders toward the proper IT architecture
  • Ability to create presentation materials and simplify complex ideas
  • Ability to present technology architecture and solution overviews to executive audiences
  • Drive innovation through hands-on proofs of concept and prototypes to help illustrate approaches to technology and business problems

Functional Experience:

  • Full Software Development Life Cycle (SDLC) of Big Data Solutions
  • Experience with data integration and streaming technologies for EDW and Hadoop
  • Data modeling and database design
  • Data warehousing and Business Intelligence systems and tools
  • Open source Hadoop stack
  • Administration, configuration, monitoring, and performance tuning of Hadoop/distributed platforms
  • Big Data and real time analytics platforms
  • ETL for Big Data
  • Migration of legacy data warehouses to a Data Lake
  • Develop guidelines, standards, and processes to ensure the highest data quality and integrity
  • Understanding of CI/CD in relation to Big Data platform
  • Understanding of container technologies is a plus
  • Knowledge of and experience with cloud computing infrastructure (e.g., Amazon Web Services EC2, Elastic MapReduce, Azure)

Combination of Technical Skills:

  • Hadoop (HDFS, MapReduce, Hive, HBase, Pig, Spark)
  • Cloudera, Hortonworks, MapR
  • NoSQL (Cassandra, MongoDB, HBase)
  • Git, Nexus
  • Enterprise scheduler
  • Kafka, Flume, Storm
  • Appliances (Teradata, Netezza)
  • Languages and platforms (Java, Linux, Apache, Perl/Python/PHP)
  • Data Virtualization

Education / Experience:

  • Bachelor's degree in Computer Science or a related field preferred
  • Master's degree in a related field preferred

Application Procedures:

To apply for this position, please submit your application using our online job portal.