Distributed Resource AGGregation (DRAGG) implements centralized model predictive control (MPC) for residential buildings using an aggregator and residential building owner (RBO) model.
DRAGG can be run in two ways: against a local Redis server, or by deploying through Docker.
- Download data from the NSRDB for the location / year of your choice. The only variable that needs to be selected is `GHI`. Select `Half Hour Intervals` for the most accurate simulation, as simulations start on the hour. (Note that DRAGG will repeat environmental data for sub-30-minute intervals. For example, with 15-minute intervals the temperature at 0:00 is reused at 0:15, and the temperature at 0:30 is reused at 0:45.) Copy the file and rename it `data/nsrdb.csv`, or change the default file name in `.env`.
- Copy `data/config-template.toml` to a new file: `config.toml`
- Change the parameters in the config file (a sketch of an example file follows this list):
    - `community`
        - `total_number_homes` - int, total number of homes in study
        - `homes_battery` - int, number of homes with battery only
        - `homes_pv` - int, number of homes with pv only
        - `homes_pv_battery` - int, number of homes with pv and battery
    - `home`
        - `home.hvac`
            - `r_dist` - list, [lower, upper] bound for home resistance, kWh/K
            - `c_dist` - list, [lower, upper] bound for home capacitance, K/kW
            - `p_cool_dist` - list, [lower, upper] bound for hvac cooling power, kW
            - `p_heat_dist` - list, [lower, upper] bound for hvac heating power, kW
        - `home.wh`
            - `r_dist` - list, [lower, upper] bound for water heater resistance, kWh/K
            - `c_dist` - list, [lower, upper] bound for water heater capacitance, K/kW
            - `p_dist` - list, [lower, upper] bound for water heater power, kW
        - `home.battery`
            - `max_rate` - float, maximum rate of charge / discharge for battery, kW
            - `capacity` - float, energy capacity of battery, kWh
            - `cap_bounds` - list, [lower, upper] proportional bounds on battery capacity, proportion btw. [0, 1]
            - `charge_eff` - float, battery charging efficiency, proportion btw. [0, 1]
            - `discharge_eff` - float, battery discharging efficiency, proportion btw. [0, 1]
        - `home.pv`
            - `pv_area` - float, area of pv array, m^2
            - `pv_efficiency` - float, pv efficiency, proportion btw. [0, 1]
        - `home.hems`
            - `prediction_horizons` - list, hours for the MPC prediction horizon, 0 = no MPC
            - `discomfort` - deprecated
            - `disutility` - deprecated
            - `price_uncertainty` - float
    - `simulation`
        - `start_datetime` - str, "%Y-%m-%d %H" format for when to start the experiment
        - `end_datetime` - str, "%Y-%m-%d %H" format for when to end the experiment
        - `random_seed` - int, set the seed variable for the experiment
        - `load_zone` - str, the ERCOT load zone from which to pull the TOU pricing info
        - `check_type` - str, choice of 'pv_only', 'base', 'battery_only', 'pv_battery', 'all'; defines which homes to run, 'all' runs all homes (typical)
        - `run_rbo_mpc` - bool, runs homes using MPC Home Energy Management Systems (HEMS), no reward price signal
        - `run_rl_agg` - bool, runs homes using MPC HEMS with an RL-designed reward price signal
        - `run_rl_simplified` - bool, runs homes against the simplified RL response model
    - `rl`
        - `rl.parameters`
            - `learning_rate` - float, controls the update rate of the policy and critic networks
            - `discount_factor` - float, discount rate of future expected rewards
            - `batch_size` - int, number of replay episodes
            - `exploration_rate` - float, standard deviation of the selected action from mu (best action according to policy)
            - `twin_q` - bool, whether or not to run two competing critic ("Q") networks
        - `rl.utility`
            - `rl_agg_action_horizon` - list, number of hours in advance to forecast the reward price signal
            - `rl_agg_forecast_horizon` - int, number of timestep iterations to forecast the home energy use
            - (OPTION A) `base_price` - float, price for electricity
            - (OPTION B) `shoulder_times` - list: int (len=2), electric utility/aggregator time of use times for "shoulder price" tier (time of day - 24hr clock)
            - (OPTION B) `peak_times` - list: int (len=2), electric utility/aggregator time of use times for "peak price" tier (time of day - 24hr clock)
            - (OPTION B) `offpeak_price` - float, electric utility/aggregator time of use price for "offpeak price" tier ($/kWh)
            - (OPTION B) `shoulder_price` - float, electric utility/aggregator time of use price for "shoulder price" tier ($/kWh)
            - (OPTION B) `peak_price` - float, electric utility/aggregator time of use price for "peak price" tier ($/kWh)
            - `action_space` - list, max/min action taken by the RL agent in designing the price signal
            - `action_scale` - float, scale of the reward price signal to the action space (e.g. for action_space = [-5, 5] and reward_price = [-0.05, 0.05], action_scale = 100)
            - `hourly_steps` - number of price signals per hour
        - `rl.simplified`
            - `response_rate` - float, determines the response rate of the simplified (linear) response model to the RL price signal
            - `offset` - not implemented
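Taken together, a trimmed-down `config.toml` might look something like the sketch below. This is only an illustration of the layout described above: the numeric values are placeholders rather than recommendations, and the `rl` tables are omitted; defer to `data/config-template.toml` for the authoritative structure and defaults.

```toml
# Illustrative layout only -- values are placeholders; see data/config-template.toml
[community]
total_number_homes = 10
homes_battery = 2
homes_pv = 2
homes_pv_battery = 2

[home.hvac]
r_dist = [6.0, 8.0]        # [lower, upper] home resistance
c_dist = [4.0, 6.0]        # [lower, upper] home capacitance
p_cool_dist = [3.0, 4.0]   # hvac cooling power, kW
p_heat_dist = [3.0, 4.0]   # hvac heating power, kW

[home.wh]
r_dist = [18.0, 20.0]
c_dist = [4.0, 6.0]
p_dist = [4.5, 5.0]

[home.battery]
max_rate = 5.0             # kW
capacity = 13.5            # kWh
cap_bounds = [0.15, 0.85]
charge_eff = 0.95
discharge_eff = 0.97

[home.pv]
pv_area = 32.0             # m^2
pv_efficiency = 0.20

[home.hems]
prediction_horizons = [1, 6, 12]   # hours; 0 = no MPC
price_uncertainty = 0.0

[simulation]
start_datetime = "2015-01-01 00"
end_datetime = "2015-01-07 00"
random_seed = 12
load_zone = "LZ_HOUSTON"
check_type = "all"
run_rbo_mpc = true
run_rl_agg = false
run_rl_simplified = false
```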
- Install and run a local Redis server (a quick way to verify it is reachable is sketched after this list).
- It is best to do this in a virtualenv; install the requirements:
$ cd /wherever/dragg
$ pip install -r requirements.txt
- cd into the root directory for this project and pip install it as a local package. It is recommended to install it as an editable package using the `-e` flag:
$ cd /wherever/dragg
$ pip install -e .
- Run `main.py` from the lower dragg directory:
$ cd /wherever/dragg/dragg
$ python main.py
- It is advised to use the caffeine package to keep the Python process running. (Otherwise Python pauses when the Mac goes idle.)
$ brew install --cask caffeine
- Run `main.py` using the caffeinate command:
$ caffeinate -i python main.py
- The `-s` argument will keep Python running even when the Mac is asleep (lid closed):
$ caffeinate -s python main.py
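If `main.py` cannot connect, a quick way to confirm that a default local Redis server (localhost:6379) is reachable, assuming the standard Redis CLI tools were installed alongside the server, is:

$ redis-server --daemonize yes   # start a background server if one is not already running
$ redis-cli ping                 # should reply PONG if the server is up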
You will need to have Docker and docker-compose installed, but you do not need a local Redis server running.
- Add a `.dragg` directory to your home directory:
$ mkdir ~/.dragg
- From the root dragg directory, run Docker Compose:
$ cd /wherever/dragg
$ docker-compose build
$ docker-compose up
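Once the containers are up, the usual Docker Compose commands can be used, for example to follow the simulation output or tear the stack back down:

$ docker-compose logs -f
$ docker-compose down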
- Hope to make this into a Dash / Plotly webapp
- Not a distributed architecture - it runs in a single process, although it still communicates across Redis. Threading, Celery, etc. could be implemented to scale workers based on the number of tasks; the separate classes for the aggregator and the MPC solver help with this.
- Separate the weather forecasting for the MPC solver so that houses can forecast weather in real time rather than reading a historical JSON
- Although a MongoDB is included in the compose setup, it is not utilized.
- Introduction of Reinforcement Learning Aggregator (in addition to negotiating aggregator)
- Change of MPC houses to a duty cycle run time decision variable
- Change of MPC houses to include a discomfort objective (and hard constraints on system parameters)