Flood-managed aquifer recharge (Flood-MAR) is a promising solution for mitigating flood and drought risks while enhancing groundwater supply. The key challenge lies in optimally allocating water and land for recharge across space and time. Traditional planning approaches rely on static policies and perfect foresight, making them inadequate for dynamic and uncertain environments. Here, we propose a spatially explicit planning framework using deep reinforcement learning (DRL) for adaptive agricultural and water management. Our approach employs Proximal Policy Optimization (PPO) to train an actor network that generates pixel-level decisions and a critic network that evaluates policy performance. Through a case study in California’s San Joaquin Valley, we demonstrate Flood-MAR’s potential for risk mitigation and DRL’s effectiveness in developing adaptive management strategies.
Agricultural landscapes face mounting pressure from intensifying conflicts over water and land use. Meanwhile, unsustainable water and land use practices have led to serious environmental problems, including groundwater depletion and biodiversity loss. Climate change further exacerbates these challenges by altering precipitation patterns and increasing the frequency and severity of extreme events such as floods and droughts.
These agricultural landscapes urgently need to transition toward sustainable practices that:
• Deliver multiple socio-economic and environmental benefits
• Minimize trade-offs in water and land use under deep uncertainty
• Adapt water and land use management to keep pace with a changing climate
Water storage infrastructure is a key adaptation option for sustaining agricultural landscapes under uncertainty. However, conventional storage solutions alone no longer suffice to meet future water storage demands. Given the vast storage capacity of groundwater aquifers, managed aquifer recharge (MAR) offers a largely untapped opportunity to augment groundwater supply while supporting agricultural development and ecosystem health.
Traditional planning approaches often assume perfect system information and rely on static policies, which adapt poorly to dynamic and uncertain environments. Recent advances in artificial intelligence, specifically deep reinforcement learning (DRL), offer a scalable alternative: an agent learns effective strategies through iterative interactions with its environment, which allows it to handle large state and action spaces.
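To make the agent-environment loop concrete, the following is a minimal, self-contained Python sketch. FloodMAREnv is a hypothetical toy stand-in of our own invention, not STELLAR's actual environment: states are per-pixel water-availability maps, actions are binary per-pixel recharge decisions, and the reward credits recharging pixels where water is available.

import numpy as np

class FloodMAREnv:
    """Toy grid environment: state is per-pixel water availability."""
    def __init__(self, grid=4, horizon=12, seed=0):
        self.rng = np.random.default_rng(seed)
        self.grid, self.horizon = grid, horizon

    def reset(self):
        self.t = 0
        self.state = self.rng.random((self.grid, self.grid))
        return self.state

    def step(self, action):
        # action: binary map, 1 = divert floodwater onto this pixel for recharge
        reward = float((action * self.state).sum())  # recharge where water exists
        self.t += 1
        self.state = self.rng.random((self.grid, self.grid))
        return self.state, reward, self.t >= self.horizon

env = FloodMAREnv()
state, done, episode_return = env.reset(), False, 0.0
while not done:
    action = (state > 0.5).astype(int)  # placeholder rule, not a learned policy
    state, reward, done = env.step(action)
    episode_return += reward
print(f"episode return: {episode_return:.2f}")

A DRL agent replaces the placeholder rule with a trained policy network and updates that network from the rewards collected over many such episodes.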
We present STELLAR (Spatial-Temporal Learning for LAndscape Repurposing), a spatial-temporal adaptive planning framework that leverages DRL to optimize the joint management of agricultural landscapes and groundwater resources, with particular emphasis on Flood-MAR. The optimization problem is formulated as a sequential Markov Decision Process (MDP) and solved with an actor-critic DRL architecture.
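For reference, the standard MDP formulation reads as follows (this is textbook notation; the concrete state, action, and reward definitions in STELLAR come from its coupled agricultural-water environment):

\[
\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, r, \gamma),
\qquad
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t) \right],
\]

where $s_t \in \mathcal{S}$ is the system state at step $t$ (e.g., land-use and groundwater conditions), $a_t \in \mathcal{A}$ the decision map, $P$ the transition dynamics, $r$ the reward, and $\gamma \in [0, 1)$ a discount factor.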
STELLAR consists of the following essential components:
• Agent: The central decision-maker responsible for regional land-use planning
• Environment: The coupled agricultural-water systems that model dynamic interactions between water demand and supply above and below ground
• Actor Network: Generates pixel-level probabilistic land-use decisions based on current system states
• Critic Network: Evaluates policy performance through value estimation
• Policy Optimization: Utilizes the Proximal Policy Optimization (PPO) algorithm to ensure stable learning within trust regions (a minimal sketch of the clipped objective follows this list)
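To ground the actor-critic and PPO components, here is a minimal PyTorch sketch of the clipped surrogate objective. PixelActorCritic and ppo_loss are illustrative names of our own, and approximating the pixel-level actor with independent per-pixel Bernoulli logits is an assumption for this sketch, not necessarily STELLAR's exact architecture.

import torch
import torch.nn as nn

class PixelActorCritic(nn.Module):
    def __init__(self, grid=4):
        super().__init__()
        n = grid * grid
        self.actor = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, n))
        self.critic = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, state):                   # state: (batch, n) flattened pixels
        logits = self.actor(state)              # per-pixel action logits
        value = self.critic(state).squeeze(-1)  # critic's state-value estimate
        return logits, value

def ppo_loss(model, state, action, old_logp, advantage, ret, clip=0.2):
    # action: float tensor of 0/1 per-pixel decisions collected during rollouts
    logits, value = model(state)
    dist = torch.distributions.Bernoulli(logits=logits)
    logp = dist.log_prob(action).sum(-1)        # joint log-prob over pixels
    ratio = torch.exp(logp - old_logp)          # new-vs-old policy probability ratio
    # Clipping the ratio keeps each update inside a trust region around the old policy
    surrogate = torch.min(ratio * advantage,
                          torch.clamp(ratio, 1 - clip, 1 + clip) * advantage)
    return -surrogate.mean() + 0.5 * (ret - value).pow(2).mean()

In practice, advantages would be estimated from rollouts of the coupled environment (e.g., with generalized advantage estimation), and an entropy bonus is commonly added to the loss to encourage exploration.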
This integrated, AI-driven approach not only minimizes trade-offs in water and land use within the water-agriculture nexus, but also provides policymakers and stakeholders with a robust tool for systematic decision-making in the face of increasing climatic uncertainty.
@inproceedings{li2024stellar,
author = {Li, Meilian and He, Xiaogang},
title = {Spatial-temporal Adaptive Planning of Flood Managed Aquifer Recharge Guided by Deep Reinforcement Learning},
year = {2024},
booktitle = {AGU Fall Meeting},
location = {Washington, D.C.},
}