We are a large Sydney-based IT services business. We operate a number of technology-centric brands, including our IT services arm, which focuses on delivering data transformation services for clients large and small.
At our company, we have a passion for data and integration. Fully booked with projects for the foreseeable future, we are growing our Data team significantly, with a variety of roles across our business. This is an opportunity for multiple Data Consultants - from mid to principal level - to join our consulting community.
We are an open-minded and inclusive business and welcome applications from people seeking Permanent, Fixed-Term Contract, or Day-Rate Contract roles, as well as Visa Sponsorship (if you are already in Australia).
Are you passionate about Spark and Hadoop?
Would you like to work with technologies such as Scala, Impala, AWS EMR and Athena?
We are specifically searching for Big Data Engineers and have several assignments that can further your career and that you'd be proud to put your name to.
Successful Engineers in our team are passionate about unlocking and exploring data, helping our customers understand the possibilities it holds. Technically, we are searching for Big Data Engineers with extensive experience working with Hadoop, Spark, and AWS (GCP or Azure experience will also be considered).
Consultants at our company provide sleek and elegant solutions to our partners and customers, so your experience needs to be both technical and consultative.
You will take our customers on transformation journeys, leading meetings and workshops and helping them reach a leading-edge solution. As such, a consultant is expected to have outstanding communication skills and the ability to proactively identify issues and present solutions.
Please apply if you have experience with most of the following:
Designing and implementing data pipelines in Spark, including tuning and optimisation (Impala and Presto experience preferred)
Exposure to CI/CD tools to test and deploy data pipelines
Deep familiarity with AWS core services such as EC2, S3 and IAM (EMR and Athena a bonus)
Proficiency in Hadoop and related technologies, including HDFS, Spark, Impala and Hive
Coding in Java, Scala and/or Python
Experience with data modelling (Data Vault methodology preferred)