About the company
Creating a culture of collaboration in the U.S. From Austin and Atlanta to Highlands Ranch and our space in Mission Rock, our coast-to-coast hubs offer benefits, amenities, and flexibility that complement each office, its vibe, and its unique people.
Job Summary
Responsibilities include, but are not limited to:
- Build and optimize Spark or Hive queries for both batch and interactive data processing on Hadoop.
- Develop and automate ETL pipelines using Apache Airflow to schedule, monitor, and manage dependencies and data flows across multiple systems.
- Support the integration of data sources and facilitate reporting solutions for business stakeholders using Tableau and Power BI.
- Enhance and maintain existing data workflows, collaborating with operations and analytics teams to ensure reliable, high-performance data delivery.
- Troubleshoot issues related to data pipeline performance or job orchestration on Spark, Hadoop, and Airflow platforms.
Qualifications
Basic Qualifications:
- Pursuing a Bachelor's or Master's degree in Computer Science, Data Science, Software Engineering, Information Systems, or a related field, graduating December 2026 - August 2027
- Strong communication skills: clear, concise written and spoken communication, free of repeated grammatical or typographical errors, that demonstrates professional judgment