About the company
Kraken, the trusted and secure digital asset exchange, is on a mission to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion. Our 2,350+ Krakenites are a world-class team ranging from the crypto-curious to industry experts, united by our desire to discover and unlock the potential of crypto and blockchain technology. As a fully remote company, we already have Krakenites in 70+ countries (speaking 50+ languages). We're one of the most diverse organizations on the planet and this remains key to our values. We continue to lead the industry with new product advancements like Kraken NFT, on- and off-chain staking and instant bitcoin transfers via the Lightning Network.
Job Summary
The opportunity
📍 Build scalable and reliable data pipelines that collect, transform, load, and curate data from internal systems
📍 Augment the data platform with pipelines that ingest data from external systems
📍 Ensure high data quality for the pipelines you build and make them auditable
📍 Drive data systems to be as near real-time as possible
📍 Support the design and deployment of a distributed data store that will serve as the central source of truth across the organization
📍 Build data connections to the company's internal IT systems
📍 Develop, customize, and configure self-service tools that help our data consumers extract and analyze data from our massive internal data store
📍 Evaluate new technologies and build prototypes for continuous improvements in data engineering
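To give a flavor of the batch pipeline work described above, here is a minimal, hypothetical PySpark sketch of a curation step: read raw events, apply a basic quality rule, and write an auditable curated table. The bucket paths, column names, and rules are illustrative assumptions only, not Kraken's actual systems or schema.

```python
# Hypothetical sketch of a curation step in a batch data pipeline.
# Paths, columns, and quality rules are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_trades").getOrCreate()

# Read raw events from a hypothetical landing zone.
raw = spark.read.parquet("s3://example-bucket/raw/trades/")

curated = (
    raw
    .filter(F.col("amount") > 0)                       # basic data-quality rule
    .dropDuplicates(["trade_id"])                      # keep re-runs idempotent
    .withColumn("ingested_at", F.current_timestamp())  # audit column
)

# Write the curated table, partitioned for downstream consumers.
curated.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/trades/"
)
```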
Skills you should HODL
📍 5+ years of work experience in a relevant field (Data Engineer, DWH Engineer, Software Engineer, etc.)
📍 Experience with data-lake and data-warehousing technologies and relevant data modeling best practices (Presto, Athena, Glue, etc.)
📍 Proficiency in at least one of our main programming languages, Python or Scala; expertise in additional languages is a big plus!
📍 Experience building data pipelines/ETL in Airflow, and familiarity with software design principles
📍 Excellent SQL and data manipulation skills using common frameworks such as Spark/PySpark or similar
📍 Expertise in Apache Spark or similar Big Data technologies, with a proven record of processing high-volume, high-velocity datasets
📍 Experience gathering business requirements for data sourcing
📍 Bonus: Kafka and other streaming technologies such as Apache Flink
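As a concrete, purely illustrative example of the Airflow experience listed above, the sketch below defines a two-task daily DAG. The DAG id, schedule, and task bodies are assumptions for illustration, not a description of Kraken's pipelines.

```python
# Hypothetical two-task Airflow DAG: extract, then load, on a daily schedule.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder for pulling data from an internal or external system.
    print("extracting source data")


def load():
    # Placeholder for loading curated data into the warehouse / data lake.
    print("loading curated data")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the load only after extraction has succeeded.
    extract_task >> load_task
```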