TY - GEN
T1 - Reinforcement Learning Based EV Charging Scheduling
T2 - 2021 IEEE PES Innovative Smart Grid Technologies - Asia
AU - Qian, Kun
AU - Adam, Rebecca
AU - Brehm, Robert
PY - 2021
AB - In recent years, several optimization techniques have been proposed for electric vehicle (EV) charging scheduling. A common approach to intelligent scheduling is day-ahead planning, which assumes full knowledge of arrival times, departure times, and energy demands, or relies on forecasts of them. However, day-ahead schedules are of limited applicability due to the uncertainty of charging behavior. With the deployment of the EV charging communication protocol defined in ISO 15118, it is realistic to assume that an EV will publish its departure time and energy demand upon arrival. Real-time scheduling, which makes a decision at each timeslot, can therefore adapt to this new information and improve scheduling performance. Traditional model-based approaches such as model predictive control (MPC) still require models, for example of future arrival times, to solve the scheduling problem. Reinforcement learning (RL), a model-free approach, has also been applied successfully to real-time scheduling; RL can learn to make decisions without relying on any system knowledge. This paper proposes a new action space construction method for the RL approach introduced in a preceding work, which significantly reduces the action space size compared to the original approach. Further, we compare the performance of a novel prioritized RL method to the original method, using a publicly available charging session dataset in contrast to the original work. It is shown that the prioritized RL performs better.
KW - action space
KW - charging scheduling
KW - electric vehicles
KW - reinforcement learning
DO - 10.1109/ISGTAsia49270.2021.9715603
M3 - Article in proceedings
T3 - IEEE PES Innovative Smart Grid Technologies - Asia
BT - 2021 10th IEEE PES Innovative Smart Grid Technologies - Asia (ISGT Asia)
PB - IEEE
Y2 - 5 December 2021 through 8 December 2021
ER -