Gridsight is a rapidly growing Grid/CleanTech startup on a mission to accelerate global electrification and decarbonization. We are building a vertical SaaS platform for electricity utilities, enabling them to modernize grid operations and unlock transformational flexibility capabilities such as dynamic operating envelopes and flexible interconnections. Having recently raised our Series A funding from Airtree Ventures, Energy Transition Ventures and Area VC, we are poised for rapid growth and are seeking talented individuals to join us on our mission.
As a Data Engineer at Gridsight, you’ll combine your passion for distributed renewables with hands-on data pipeline experience to help scale our platform to hundreds of thousands of meters across the globe. By designing, developing, and optimizing data pipelines and architectures for electricity distribution networks, you’ll play a critical role in revolutionizing the way utilities enable the decentralization and decarbonization of the grid.
This is an entry-level position open to 2024/2025 graduates.
Key Responsibilities
•Create scalable and efficient production-quality ETL pipelines to handle large volumes of data from various sources.
•Integrate and transform a variety of meter and GIS data sources into common Gridsight schemas.
•Manage the end-to-end execution of customer data pipelines and remediate any failures.
•Ensure data accuracy, consistency, and integrity by implementing robust data validation and governance practices.
•Monitor and optimize data pipelines and queries to ensure high performance and low latency.
•Work closely with data scientists, customer success engineers, and other key stakeholders to understand ongoing data needs and deliver solutions that support their requirements.
Qualifications
•Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related discipline.
•Experience with SQL relational databases; exposure to data engineering tools such as dbt preferred.
•Hands-on experience building production-quality ETL pipelines.
•Proficiency with big data frameworks (Spark preferred).
•Familiarity with at least one major cloud computing provider (AWS preferred).
•Fluency in Python, the command line, and Git.
•Previous experience as a data engineering intern, ideally delivering projects to external stakeholders or clients.
•Strong analytical and problem-solving skills, with a proactive attitude towards identifying and resolving technical challenges.
•Self-starter mentality; you’re able to independently prioritize tasks and manage time effectively with minimal oversight.
•Excellent communication skills and the ability to collaborate effectively in a start-up environment.
•Currently based in Austin, TX, or willing to relocate (required).
What We Offer
•Join a rapidly scaling venture-backed company on the ground floor.
•Highly competitive salary and equity package.
•Flexible, hybrid working environment with a high-performing, mission-driven team.