Job Description
- MS/BS degree in Computer Science or related discipline
- Strong experience in Java, Scala, Go, Python, etc.
- Experience building domain-driven microservices
- Experience with data modeling in different data stores and the Hadoop ecosystem
- Experience working with NoSQL data stores such as HBase, DynamoDB, etc.
- Experience working with Big Data processing frameworks such as Spark, Hive, etc.
- Experience working with Big Data streaming frameworks such as NiFi, Spark Streaming, Flink, etc.
- Experience working with Big Data streaming services such as Kinesis, Kafka, etc.
- Experience working with schema evolution, serialization, and validation with file formats such as JSON, Parquet, Avro, etc.
- Experience working in a public cloud environment, particularly AWS
- Familiarity with practices such as Continuous Development, Continuous Integration, and Automated Testing
- Familiarity with infrastructure tools such as CloudFormation and automation tools such as Jenkins
- Familiarity with Git and version control
- Agile/Scrum application development experience
- An interest in artificial intelligence and machine learning