Responsibilities
· Build data pipelines that align with solution architect designs and project specifications
· Fulfill data modeling requests and implement the resulting models
· Perform unit and integration testing of data pipelines
· Create repeatable data pipeline patterns/templates
Required Skills/Experience
· Open-source knowledge and expertise in Hadoop (Hortonworks), HDFS, Hive, Spark, NiFi, Sqoop, Pig, Flume, and MapReduce
· Experience implementing production data pipelines and creating repeatable ingestion patterns (a minimal sketch of one such pattern follows this list)
· Experience with various databases and platforms, including but not limited to DB2, Oracle, and SQL Server
· Demonstrated knowledge and use of the following languages and formats: Python, Scala, shell scripting, JSON, SQL
· Familiarity with general data modeling concepts and processes that support business intelligence solutions
· Demonstrated performance across all phases of the SDLC, particularly for ETL solutions
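
For illustration only: a minimal PySpark sketch of the kind of repeatable ingestion pattern the role calls for. Every name in it is an assumption made for the example (the ingest_table helper, the table list, the JDBC URL, and the HDFS paths); none of it comes from the role description itself.

# Hypothetical sketch of a parameterized ingestion template.
# All identifiers, the JDBC URL, and the paths are illustrative
# assumptions, not details taken from the posting.
from pyspark.sql import SparkSession

def ingest_table(spark, jdbc_url, table, target_path, partition_col=None):
    # Read one source table over JDBC (DB2, Oracle, SQL Server, etc.).
    df = (spark.read.format("jdbc")
          .option("url", jdbc_url)
          .option("dbtable", table)
          .load())
    # Land it in HDFS as Parquet, partitioned when a column is given.
    writer = df.write.mode("overwrite")
    if partition_col:
        writer = writer.partitionBy(partition_col)
    writer.parquet(target_path)

if __name__ == "__main__":
    spark = SparkSession.builder.appName("repeatable-ingest").getOrCreate()
    # One entry per source table: adding a source is a config change,
    # not a code change, which is the point of a repeatable pattern.
    tables = [
        {"table": "SALES.ORDERS", "target": "/data/raw/orders",
         "partition_col": "order_date"},
        {"table": "SALES.CUSTOMERS", "target": "/data/raw/customers",
         "partition_col": None},
    ]
    jdbc_url = "jdbc:db2://db2host:50000/PRODDB"  # hypothetical DB2 source
    for t in tables:
        ingest_table(spark, jdbc_url, t["table"], t["target"],
                     t["partition_col"])
    spark.stop()

JDBC driver class, credentials, and error handling are omitted for brevity; in a production pipeline those would typically come from configuration or a secrets store, and the loop would be scheduled by an orchestrator or fed by a tool such as NiFi.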