Machine Learning Scientist

Arva Intelligence | Berkeley, CA
Posted 23h ago | $100,000 - $130,000 | Hybrid

About The Position

The Modeling Scientist, Uncertainty Quantification leads the development and application of statistical, probabilistic, and machine learning approaches that quantify confidence in Arva’s ecosystem model predictions. This role is central to advancing Arva’s monitoring, reporting, and verification platform for greenhouse gas emission reductions and removals. Working at the intersection of statistics, machine learning, and process-based ecosystem modeling, the Modeling Scientist collaborates closely with ecosystem modelers and data engineers to design robust uncertainty frameworks that support transparent, decision-ready outputs for customers, partners, and environmental markets. This role plays a critical part in translating scientific rigor into real-world impact through credible, auditable modeling systems.

Requirements

  • 5+ years of demonstrated experience in uncertainty quantification, probabilistic modeling, and data-model integration
  • MUST HAVE: Advanced proficiency in Python and scientific computing, with experience building reproducible modeling pipelines
  • Strong software engineering practices, including writing modular, testable, and well-documented code
  • Deep commitment to scientific rigor, transparency, and integrity
  • Master’s or PhD (or equivalent experience) in Statistics, Applied Mathematics, Environmental Science, Earth System Science, Biology, or a related quantitative field

Nice To Haves

  • Experience integrating machine learning with process-based or mechanistic models
  • Familiarity with ecosystem or Earth system models such as DayCent or CESM
  • Familiarity with cloud platforms and data systems, including AWS and relational or spatial databases

Responsibilities

  • Design and implement uncertainty quantification frameworks for ecosystem and biogeochemical models, including parameter, input, and structural uncertainty
  • Apply sensitivity analysis, multivariate testing, and cross-validation to evaluate model robustness and generalizability across space and time
  • Quantify and communicate model confidence, uncertainty bounds, and performance metrics
  • Develop hierarchical and Bayesian calibration approaches to support distributed and iterative model optimization
  • Apply probabilistic methods to integrate data, models, and uncertainty across scenarios
  • Analyze model outputs to diagnose limitations and inform model improvement strategies
  • Integrate machine learning techniques with process-based or mechanistic models to improve predictive performance and scalability
  • Partner with data engineers to implement reproducible, scalable modeling pipelines
  • Contribute to the design of model evaluation and optimization workflows
  • Communicate uncertainty, confidence intervals, and model performance clearly to internal teams and external stakeholders
  • Contribute to scientific reports, transparent model documentation, and peer-reviewed publications as appropriate
  • Support defensible, auditable model outputs suitable for regulatory and credit market review