Demand Response (DR) is finally starting to pick up momentum in Australia. We have an opportunity to get it right the first time: to learn from others and not miss vital steps.

The idea of Demand Response is to reduce or shift energy use during a particular period. We can do this by collaborating with consumers: influencing them to change their behaviour, or having them allow us to remotely control their devices.

But how do you know if a customer changes their energy use because of your program? How do you know their energy use would not have been the same, even without the DR event?

How do you know what caused the customer’s change in energy use? Was it the financial incentive you offered? Was it the information you made available? Was it the awareness programs you ran? Or did they just go out to meet a friend and turn all their appliances off?

In areas such as academia, it is widespread practice to design experiments that test cause and effect using well-proven methods. It is a practice our industry must embrace. Recently I was talking to an industry peer who was describing the results of a behavioural DR study. The study aimed to show a change in energy use when customers had access to certain information. Taken at face value, the results could easily lead you to believe the study had proven its hypothesis. But because the program was not well designed, cause and effect could not be linked, so it could not truly be shown that the behaviour change was caused by the provision of information. There could have been many other contributing factors behind the reduction in energy use.

It reminded me of two of my earlier DR projects. Both projects tested whether time-based pricing would change consumption behaviour. The projects sat at opposite ends of the spectrum for program design quality. One spent months planning what became recognised as a gold-standard pilot. Why? Because the program had a control group, randomised assignment of participants, specific treatments testing specific outcomes, and so on. The other project was pretty much the opposite: no control group, no randomised assignment, and so on.
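To make that 'gold standard' concrete, here is a minimal Python sketch of the underlying idea. The household count, usage figures and assumed 0.6 kWh response are made up purely for illustration: participants are randomly split into treatment and control groups, and the difference in average peak-period usage estimates the effect of the DR treatment.

```python
import random
import statistics

# Illustrative sketch only (all numbers are hypothetical): randomly assign
# households to a treatment group (receives the DR price signal) and a
# control group (does not), then compare average peak-period usage.

random.seed(42)

households = [f"household_{i}" for i in range(200)]
random.shuffle(households)                      # randomised assignment
treatment, control = households[:100], households[100:]

def peak_usage_kwh(household, treated):
    """Stand-in for metered peak-period consumption during the DR event."""
    base = random.gauss(5.0, 1.0)               # hypothetical baseline load
    return base - (0.6 if treated else 0.0)     # hypothetical response to pricing

treated_usage = [peak_usage_kwh(h, True) for h in treatment]
control_usage = [peak_usage_kwh(h, False) for h in control]

# With randomisation, the control group estimates what treated households
# would have used anyway, so the difference in means can be attributed to
# the DR treatment rather than to weather, habits or chance.
effect = statistics.mean(treated_usage) - statistics.mean(control_usage)
print(f"Estimated average reduction: {abs(effect):.2f} kWh per household")
```

The key point sits in the final comment: the control group provides the counterfactual, which is exactly what the poorly designed project could not offer.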

Decision makers in both of these projects used the results to help prioritise future investments. The issue is that the latter Utility made decisions based on meaningless results. That Utility may end up investing millions of dollars in future programs that deliver little to no value.

Some Utilities are starting to move beyond DR pilots. The performance of the grid will become more reliant on influencing the energy use of our customers, especially during peak times. To be successful, we need to be confident that our measurement of this behaviour change is accurate. We need to know which of the measures we are taking will (and will not) create behaviour change.

Australia is at the start of its DR journey and has the opportunity to get it right. Is your demand response program following the ‘gold standard’?


Contact me to find out how I can help you design and deliver the right demand response program.

#connectedera #digitalutility #smartmeters #demandresponse #opendata #demandsidemanagement #DSM
