Core Natural Resources
Title: Cloud & Data Engineer – On-site Only
On-site in Canonsburg, PA, 4 days per week
Role Summary
The Cloud & Data Engineer supports the design, build, and maintenance of Core’s cloud infrastructure, data lake environment, and backend service integrations. Reporting to the Technology Strategist & Cloud Engineering Manager, this role delivers production-grade systems that enable analytics, automation, and reliable operations across enterprise platforms.
Key Responsibilities
- Accept, embrace, and promote the core values of Core Natural Resources: Safety, Sustainability & Continuous Improvement
- Implement and maintain cloud infrastructure using infrastructure-as-code and CI/CD pipelines (Terraform, Git-based workflows)
- Develop, test, and monitor ETL/ELT data pipelines for ingestion, transformation, and analytics-ready data flows
- Maintain and scale Core’s cloud-native data lake to support business reporting, data quality, and financial operations
- Build, manage, and operationalize backend APIs and service integrations, ensuring secure and stable data exchange
- Support deployments of internal applications and operational tooling, focusing on automation, availability, and performance
- Implement observability tools (e.g., logging, tracing, monitoring, alerts) across infra and data systems
- Participate in incident resolution, root-cause analysis, and ongoing ops improvement
- Create and maintain technical documentation: pipeline architecture, API contracts, infrastructure diagrams, and data lineage
- Collaborate with internal teams to deliver consistent and auditable cloud solutions aligned to enterprise standards
Qualifications
- Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, or a related technical discipline
- Professional experience in cloud or data engineering roles, with demonstrated ownership of production-grade deliverables
- AWS Certified Solutions Architect – Associate (or higher) required; applicants without the certification must earn it within 60 days of hire
- Experience working in environments with multi-account AWS architectures, enterprise security controls, and cost optimization practices
- Ability to independently deliver end-to-end cloud/data solutions from architecture to production with minimal oversight
- Demonstrated ability to integrate AWS data pipelines with at least one ERP or enterprise financial platform in production
- Hands-on experience with AWS services including S3, Lambda, RDS, IAM, and ETL tools such as Glue and Step Functions
- Proficiency in Python and SQL for data processing, transformation, and scripting
- Demonstrated experience developing, integrating, or consuming RESTful APIs in a backend context
- Proven experience deploying infrastructure using Terraform in a team-based GitOps workflow
- Solid understanding of data lake architecture, data modeling principles, and pipeline orchestration (batch and streaming)
- Experience with monitoring and observability tooling (e.g., CloudWatch, Prometheus, Grafana), including alerting and dashboarding
- 1–3 years of professional experience in production-grade cloud, data, or backend engineering roles
- Hands-on experience with pipeline orchestration platforms such as Apache Airflow (or AWS-native equivalents) and transformation frameworks such as dbt for automated pipeline management
- Proficiency in containerization and backend service deployment workflows, including Docker and orchestration on ECS or EKS, with knowledge of CI/CD integration
- Experience developing and maintaining backend applications using Python frameworks such as Django or FastAPI, or Node.js frameworks such as Express.js
- Familiarity with microservices architecture and service-to-service communication using REST or gRPC
- Understanding of event-driven architectures using AWS SNS, SQS, Kinesis, or Kafka
- Experience with unit testing, integration testing, and continuous testing practices (e.g., PyTest, Jest)
- Knowledge of relational and NoSQL databases (e.g., PostgreSQL, DynamoDB, MongoDB) and ORM frameworks (e.g., Django ORM, SQLAlchemy)
- Experience deploying and integrating machine learning models into production systems using frameworks such as scikit-learn, TensorFlow, or PyTorch
- Familiarity with MLOps workflows: model packaging, versioning (e.g., MLflow), and monitoring in a cloud environment