The introduction of millions of electric vehicles (EVs) onto the power grid will create a transformational opportunity for America's decarbonization efforts. However, it also brings a significant challenge. Scientists and engineers are looking for the best way to ensure that vehicles can be charged smartly, efficiently, cheaply, and cleanly by a grid that may not be able to accommodate them all at once or all the time.
Researchers at the U.S. Department of Energy's Argonne National Laboratory and graduate students at the University of Chicago are collaborating on an exciting new project to tackle that challenge. This project will use a particular combination of computational rewards and punishments, a technique called reinforcement learning, to train an algorithm to help schedule and manage the charging of a diverse set of electric vehicles.
The first group of vehicles the team is studying consists of those charged by Argonne employees at the laboratory's Smart Energy Plaza, which offers both standard AC chargers and DC fast chargers. Because employees don't typically need their vehicles during the workday, there can be some flexibility in when each car gets charged.
"There's a certain total amount of power that can be allocated, and different people have different needs in terms of when they need to have their cars available at the end of the day," said Argonne principal electrical engineer Jason Harper. "Being able to train a model to work within the constraints of a particular employee's departure time while being cognizant of peak demands on the grid will allow us to provide efficient, low-cost charging."
"When you have a lot of EVs charging at the same time, they can create a peak demand on the power station. This introduces increased charges, which we're trying to avoid," added Salman Yousaf. Yousaf is a graduate student in applied data science at the University of Chicago, working on the project with three other students.
The reinforcement learning in the algorithm works by incorporating feedback from positive results, like an EV having the desired amount of charge at the designated departure time. It also incorporates negative results, like having to draw power past a certain peak threshold. Based on this data, the charge scheduling algorithm can make more intelligent decisions about which cars to charge and when.
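In broad strokes, that kind of feedback can be expressed as a reward signal the algorithm tries to maximize. The sketch below, in Python, is purely illustrative: the article does not publish the Argonne team's actual reward design, so the variable names, thresholds, and weights here are assumptions.

```python
# Illustrative reward function for an EV charge-scheduling agent.
# All names, thresholds, and weights are assumed for the example,
# not details published by the Argonne team.

from dataclasses import dataclass

@dataclass
class ChargingState:
    soc: float                # current state of charge, 0.0 to 1.0
    target_soc: float         # charge level the driver requested
    hours_to_departure: float # time left before the driver needs the car
    site_power_kw: float      # total power currently drawn at the site

PEAK_THRESHOLD_KW = 150.0     # assumed demand-charge threshold
PEAK_PENALTY_WEIGHT = 0.5     # assumed penalty scaling

def reward(state: ChargingState) -> float:
    """Positive feedback when a vehicle meets its target charge by departure;
    negative feedback when site demand exceeds the peak threshold."""
    r = 0.0
    if state.hours_to_departure <= 0 and state.soc >= state.target_soc:
        r += 1.0
    if state.site_power_kw > PEAK_THRESHOLD_KW:
        overage = state.site_power_kw - PEAK_THRESHOLD_KW
        r -= PEAK_PENALTY_WEIGHT * overage / PEAK_THRESHOLD_KW
    return r

# Example: a car that left fully charged while the site stayed under the cap.
print(reward(ChargingState(soc=0.9, target_soc=0.8,
                           hours_to_departure=0.0, site_power_kw=120.0)))
```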
"Smart charge scheduling is really an optimization problem," Harper said. "In real time, the charging station is constantly having to make tradeoffs to make sure that each car is being charged as efficiently as possible."
Although the Argonne charging stations are the first location where the project's researchers are performing reinforcement learning, there is the potential to expand far beyond the laboratory's gates. "There's a lot of flexibility when it comes to charging at home, where overnight charging would allow for some ability to move around how the charging load is distributed," Yousaf said.
"True smart charging is really taking into consideration all of the actors in the ecosystem," Harper added. "That means the utility, the charging station owner and the EV driver or homeowner. We want to meet the needs of everyone while still being mindful of the restrictions that everyone faces."
Future work with the model will involve a simulation of a much more extensive charging network that will initially be based on data collected from Argonne's chargers.
Harper and his colleagues have also developed a mobile app called EVrest that allows users of networked charging stations (in this case, initially Argonne employees) to reserve stations and participate in smart charge scheduling. The EVrest platform collects data on charging behavior and will use that data to train future AI models to aid in smart charge management and vehicle grid integration.