National Grid has now published National Grid ESO’s initial report into the 9 August blackout. The extremely repetitive 26-page report sheds some new light on the events of the day (and I was rather gratified to see that my hunch that Hornsea tripped first was in fact supported by the report); however, a number of questions remain unanswered.

First of all, I agree with National Grid’s assessment that the simultaneous generation losses were exceptional and that the system operator was meeting its statutory obligations at the time in terms of the amount of available frequency response. As I said in my previous post, National Grid is at the sharp end of managing the trilemma, and the fact that the last such blackout was eleven years ago is a good indicator that it is managing the challenges of the transition well.

However, that is not to say that this will be enough going forwards, and the 9 August blackout has highlighted a number of areas for improvement in the way in which the transmission system is managed. (There are also lessons to be learned across the distribution networks and large electricity consumers, but those are for another day.)

The questions I have after reading this report, and some of today’s commentary, are:

  • Were the systems at Hornsea sufficiently robust? This is a new facility, still in the commissioning phase, using relatively new 7 MW turbines. The report notes that adjustments have since been made to its configuration – to what extent should the potential for problems in the event of lightning strikes, which are hardly unusual, have been anticipated and mitigated ahead of time?
  • Why are the data provided to the market in the form of BM Reports and REMIT notices not fully accurate after a reasonable period of time for revisions? The REMIT notices relating to this event were all inaccurate in terms of the start time of the various events, and had not been revised at the time of writing. Consideration should also be given to publishing 1-second frequency data and providing data on system inertia.
  • What if any role did transmission constraints play in the efforts to restore power, and would the outcome have been different if the outages had happened at different geographic locations on the network?

Sequence of events leading to the blackout

According to the report, the system was operating normally up until 16:52 on Friday 9 August – the weather was warm and windy, and there was some heavy rain and lightning, but nothing unusual for the time of year. Demand for the day was forecast to be similar to the previous Friday, and around 30% of generation was coming from wind, 30% from gas, 20% from nuclear and 10% from interconnectors.

At 16:52 there was a lightning strike on or near the overhead line connecting two substations (Eaton Socon near St Neots and Wymondley near Hitchin). Large currents were seen in the substations, and circuit breakers opened to disconnect one of the two lines on that route (in this case it only took 70 milliseconds). The circuit breakers re-closed automatically after about 20 seconds and remained closed, since the large current caused by the lightning strike had ceased – had a fault such as damage to the line persisted, they would have locked open.

Three things then occurred almost simultaneously:

  1. The Hornsea off-shore windfarm suddenly de-loaded its supply to the grid from 799 MW to 62 MW (only units 2 and 3 were affected – unit 1 continued to operate at 50 MW throughout the event);
  2. The steam turbine at the Little Barford CCGT which is connected to Eaton Socon 400kV substation at one end of the affected line, tripped; and
  3. The sudden shift in the angle of the voltage caused some distributed generators, mainly solar and some small gas and diesel plant, to detect “loss of mains” and automatically disconnect from the system.

[Figure: map from the blackout report]

The reduction in generation from these events was:

  • 737 MW – Hornsea
  • 244 MW – Little Barford
  • c 500 MW – embedded generation

The cumulative amount of lost generation was 1,481 MW, 48% above the 1,000 MW single loss protection level at which the system was running at the time. (The system is designed so that it can continue to operate if the single largest generation unit trips at any time, which might be a large nuclear plant, or an interconnector.)
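For reference, the arithmetic behind these figures is simple enough to check – a minimal sketch in Python, using the loss figures from the report:

```python
# Generation losses immediately following the lightning strike (MW), per the report
losses = {"Hornsea": 737, "Little Barford steam turbine": 244, "embedded generation": 500}

total_loss = sum(losses.values())            # 1,481 MW
protection_level = 1_000                     # MW - the single-loss level being secured
excess = total_loss / protection_level - 1   # ~0.48, i.e. 48% above the secured level

print(f"{total_loss} MW lost, {excess:.0%} above the {protection_level} MW protection level")
```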

[Figure: timeline from the blackout report]

Over 1,000 MW of frequency response was deployed (including 200 MW of generation, 450 MW of batteries and 350 MW of demand response), and the frequency began to recover after an initial dip. However, at that point the two gas turbines at Little Barford went offline, since they cannot operate without the steam turbine – the first tripped due to excess steam build-up, and the second was manually disconnected about 28 seconds later for the same reason.

[Figure: LFDD chart from the blackout report]

At this point the cumulative generation loss was 1,878 MW and frequency fell below 48.8 Hz, the level at which load-shedding protection measures activate. This system automatically disconnected customers on the distribution network in a controlled way and in line with parameters pre-set by the Distribution Network Operators – around 5% of GB’s electricity demand was disconnected (c 1 GW) to protect the remaining 95%.
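To illustrate how LFDD works in principle, here is a minimal sketch of staged, frequency-triggered demand disconnection. Only the 48.8 Hz first stage comes from the report; the lower stages, block sizes and the demand figure are illustrative assumptions, since the actual settings are pre-set by each DNO:

```python
# Illustrative LFDD scheme: each stage sheds a pre-set block of demand once
# frequency falls below its threshold. Only the 48.8 Hz first stage is taken
# from the report; the other thresholds and percentages are assumptions.
LFDD_STAGES = [
    (48.8, 0.05),   # first stage: ~5% of demand, as on 9 August
    (48.6, 0.075),  # illustrative only
    (48.4, 0.10),   # illustrative only
]

def demand_to_shed_mw(frequency_hz: float, total_demand_mw: float) -> float:
    """Cumulative demand (MW) disconnected once frequency has fallen to frequency_hz."""
    shed_fraction = sum(frac for threshold, frac in LFDD_STAGES if frequency_hz < threshold)
    return shed_fraction * total_demand_mw

print(demand_to_shed_mw(48.7, 20_000))  # 1000.0 - roughly the c 1 GW shed on the day
```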

The rate at which frequency changed during this event – 0.16 Hz/s – was high in historic terms but not unexpected in a low-inertia system.
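The link between the size of the loss, the stored kinetic energy (inertia) on the system and the rate of change of frequency can be approximated with the swing equation. A rough sketch – the kinetic energy figure here is an assumed, illustrative value, not one taken from the report:

```python
def initial_rocof_hz_per_s(power_imbalance_mw: float, kinetic_energy_mws: float,
                           f0_hz: float = 50.0) -> float:
    """Approximate initial rate of change of frequency from the swing equation:
    df/dt ≈ ΔP × f0 / (2 × E_k), with E_k the total stored kinetic energy (MW·s)."""
    return power_imbalance_mw * f0_hz / (2 * kinetic_energy_mws)

# Illustrative: a 1,481 MW loss against an assumed 230 GVA·s of system kinetic energy
print(f"{initial_rocof_hz_per_s(1481, 230_000):.2f} Hz/s")  # ≈ 0.16 Hz/s, as observed
```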

Did the system behave as it should on the day?

There are six different responses to consider in respect of this question:

  • The initial local response on the transmission infrastructure to the lightning strike;
  • The response of the transmission-connected generation at Hornsea;
  • The response of the Little Barford CCGT;
  • The response of distribution-connected generators;
  • The initial frequency response; and
  • The Low Frequency Demand Disconnection process.

The initial local response to the lightning strike appears to have been adequate with the circuits being restored in around 20 seconds. Lightning strikes on the electricity system are common, and I have seen no suggestion that this response was below that which would be expected.

Was Hornsea appropriately configured?

The response of the Hornsea windfarm on the other hand may not have been as expected. Orsted has subsequently confirmed that equipment at Hornsea saw a system voltage fluctuation with unusual characteristics coincident with the lightning strike, and reacted as expected in attempting to accommodate and address the system condition. However, as the reaction expanded throughout the plant – which covers a large geographic area – the protective safety systems activated.

“Following an initial review, adjustments to the wind farm configuration, and fine tuning its controls for responding to abnormal events, the wind farm is now operating robustly to such millisecond events.”

This is a new windfarm that is actually still in the process of being built, so there are definitely questions to be asked as to whether its internal systems were appropriately configured. The “fine-tuning” referred to in the report suggests that they were not.

Orsted also did not cover itself in glory in its communications to the market about the event, with its REMIT notices being comparatively late, and incorrectly stating the start time of the event.

Did Little Barford behave as expected and is the substation upgrade relevant?

The situation at Little Barford would appear to be more in line with expectations. The plant detected an anomalous speed measurement and the steam turbine automatically cut out. As the plant cannot run without the steam turbine, the two gas turbines also went down/were taken down shortly after. Commissioned in 1996, the plant was upgraded in 2013 with new gas turbines, and is expected to remain operational until 2026.

It is interesting to note that the substation at Little Barford is currently being upgraded – the existing 1960s equipment is being replaced, and the new installation should be operational by 2023. It is unclear whether replacing this equipment earlier would have made any difference to the events of 9 August.

Was the loss of embedded generation in line with expectations for this type of event?

Around 500 MW of embedded generation disconnected after detecting a change in the angle of the voltage. Distributed generators are required to ensure they can safely shut down in the event of a disruption to their local network, and use “loss of mains” protection to achieve this. Loss of mains protection systems respond either to the rate of change of frequency or, as in this case, to “vector shift”, which is triggered by a sudden shift in the voltage phase angle created by a fault on the transmission circuits. This behaviour is not uncommon with lightning strikes; however, a local loss of 500 MW of embedded generation is not insignificant.
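As a rough illustration of how these two forms of loss of mains protection behave – the threshold values below are assumptions for illustration, not the settings used by any particular generator or DNO:

```python
# Illustrative loss-of-mains relay logic: trip on either a high rate of change of
# frequency (RoCoF) or a sudden jump in voltage phase angle ("vector shift").
# Threshold values are assumptions for illustration only.
ROCOF_THRESHOLD_HZ_PER_S = 0.125
VECTOR_SHIFT_THRESHOLD_DEG = 6.0

def loss_of_mains_trip(rocof_hz_per_s: float, phase_jump_deg: float) -> bool:
    """Return True if either protection element would disconnect the generator."""
    return (abs(rocof_hz_per_s) > ROCOF_THRESHOLD_HZ_PER_S
            or abs(phase_jump_deg) > VECTOR_SHIFT_THRESHOLD_DEG)

# A nearby transmission fault can produce a large local phase jump even while
# frequency is still close to 50 Hz - which is how vector shift relays tripped here.
print(loss_of_mains_trip(rocof_hz_per_s=0.05, phase_jump_deg=10.0))  # True
```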

In its first Operability Strategy report in November 2018, National Grid identified that the losses from vector shift protection could be very large:

“We identified that for some faults on the network, the amount of generation that could disconnect due to operation of vector shift protection was larger than the largest loss we normally secure. This issue was localised to networks in the south of England due to the large volumes of generation using this type of protection.”

Initiatives are currently underway to make embedded generators less sensitive to changes in system voltage – in its latest Operability Strategy update, National Grid describes managing the rate of change of frequency to prevent the tripping of loss of mains protection as the main stability challenge it faces in the short term. The cost of managing the system to prevent this tripping has risen from £60 million in 2017/18 to £150 million in 2018/19.

Whether these initiatives would have made a difference on 9 August is something I cannot answer – clearly there will be levels of system disturbance which will need to trigger the auto-disconnection of embedded generation, and I cannot say whether the effects of the lightning strikes on 9 August would fall into that category.

Was the initial frequency response adequate?

The initial frequency response systems that came online appeared to have worked, in that system frequency did begin to recover until the further loss of the Little Barford gas turbines sent the cumulative loss significantly above the level of the reserve for the day. Of course, questions must be asked about the sizing of the reserve, and this is an area that is getting a lot of focus in the press.

National Grid currently spends about £300 million a year on frequency response and reserve products, and is exploring a range of new ways of procuring these services, both to extend access to new technologies/types of participants, and to improve the transparency and competitiveness of the procurement processes.

On 9 August, the system was running with a 1,000 MW protection level – this is actually quite low given there could be 2,000 MW of imports on IFA, or 1,200 MW of output at Sizewell B. On the day, neither of these was running close to capacity, so the 1,000 MW sizing reflected the largest potential single loss on the system at the time, and was no doubt not larger in order to control costs.

National Grid is required to take account of the potential loss of embedded generation in its determination of the protection level required; however, this loss is considered to be independent of the largest infeed loss – the response holding should cover the larger of the two but does not need to cover both.
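In sketch form, my reading of that sizing rule is as follows (a simplification for illustration, not National Grid’s actual methodology):

```python
def required_response_mw(largest_infeed_loss_mw: float, potential_embedded_loss_mw: float) -> float:
    """The response holding covers the larger of the two risks, not their sum -
    a simplified reading of the report, not the actual sizing methodology."""
    return max(largest_infeed_loss_mw, potential_embedded_loss_mw)

# On 9 August the secured level was 1,000 MW, yet the combined loss reached 1,878 MW
print(required_response_mw(1_000, 500))  # 1000
```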

It is easy to point to the fact of the blackout and say that it simply wasn’t enough, but it is not unreasonable for National Grid to manage the size of the reserves it holds in response to a changing maximum potential single loss in order to be cost effective. Of course it could increase the protection level, and potentially look at 1.5x the maximum single loss, or the two largest single losses, or any other parameter, but the costs would increase as a result and may simply not be justified particularly given that the last major blackout was eleven years ago.

In the event, power was restored to the transmission system by 17:06, 14 minutes after the lightning strike. The fact that it took longer for some customers to be re-connected, and that the railways were severely disrupted is not really the fault of the transmission system operator.

Did the Low Frequency Demand Disconnection process operate as expected?

At 16:53:49, between the disconnection of the two gas turbines at Little Barford, the system frequency breached the 48.8 Hz trigger level resulting in LFDD – 931 MW of demand was disconnected from the system by the Distribution Network Operators (“DNOs”). The DNOs determine the order in which loads in their networks are disconnected, so the downstream impact of these systems is not the responsibility of National Grid. The report does describe some of these impacts, but as this post relates to the performance of the transmission systems, I will not include them here.

What does this tell us about system stability?

In its Operability Strategy report, National Grid sets out quite clearly the challenges it is facing in managing the electricity system transition, both in terms of de-carbonisation and de-centralisation:

“De-carbonisation has produced high levels of renewable generation which has different operating characteristics, plant dynamics, data quality, flexibility and inertia contribution. This has increased reserve and response requirements and the nature of intermittent renewable generation means that the requirements are more volatile and less predictable…

…As the inertia on the system reduces, the rate of change of frequency increases. The existing frequency response services are specified to deliver full output within a set time. When frequency moves quicker, these services become less effective due to the time it takes them to deliver and a larger volume is required”

The changing generation mix, with new forms of generation connected at different locations and at different voltage levels, is inevitably causing power flows on the networks to change. National Grid identifies five areas of potential concern: frequency control, voltage control, restoration, stability and thermal constraints.

Frequency control: increasing levels of intermittent generation will lead to more volatile and less predictable, shorter-term requirements for reserve and response. As the inertia on the system reduces, the rate of change of frequency increases. The existing frequency response services are specified to deliver full output within a set time. Existing procurement of balancing services is becoming less effective in meeting these requirements due to the relatively long-term nature of the products, and the delivery times specified under existing service agreements.
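To see why delivery speed matters, consider how quickly the 48.8 Hz LFDD threshold is reached at the rate of change of frequency seen on 9 August – a purely illustrative calculation (the real event included a partial recovery before the second loss):

```python
def time_to_threshold_s(rocof_hz_per_s: float, start_hz: float = 50.0,
                        threshold_hz: float = 48.8) -> float:
    """Seconds to fall from start_hz to threshold_hz at a constant rate of change of frequency."""
    return (start_hz - threshold_hz) / rocof_hz_per_s

print(f"{time_to_threshold_s(0.16):.1f} s")  # 7.5 s at 0.16 Hz/s
# A response service specified to deliver full output within 10 seconds would not
# have fully arrived in that time - lower inertia demands faster (or more) response.
```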

Voltage control: reactive power is required for voltage control, and the requirement is set to increase as network loading becomes more volatile and many conventional generators (which provide reactive power) run less predictably and less often. More absorption is needed to manage pre-fault high voltages, and more injection is needed to support post-fault low voltage.

Restoration: the current restoration approach relies on large, transmission-connected synchronous generators – as there are fewer of these on the network, in the future, a wider range of technology types, connected at different voltage levels, will need to be considered.

Stability: stability is the ability of the system to quickly return to acceptable operation following a disturbance and is supported by synchronous generation. The electricity network has been designed based on the assumption that there will always be a large amount of synchronous generation, so without intervention, the system will become less stable as non-synchronous renewable generation replaces traditional forms of generation. Stability is also threatened by the decline in short circuit levels (the amount of current that will flow on the system in a fault), with regional differences in the extent of this challenge.

Thermal constraints: in the past, the system operator’s ability to instruct the output of a large number of transmission-connected generators met almost all its constraint management needs, however, the number of transmission-connected generators has fallen, and the locations of thermal constraints are changing. There are now more occasions when the options for managing constraints are limited.

[Figure: short circuit level (SCL) map]

Was it all just bad luck, or are changes needed?

In response to this initial report, Ofgem has somewhat pompously said there were “still areas where we need to use our statutory powers to investigate these outages”. The regulator’s executive director of systems and networks, Jonathan Brearley, said Ofgem’s own investigation would “ensure the industry learns the relevant lessons and clearly establish whether any firm breached their obligations to deliver secure power supplies to consumers”.

My initial reaction to this was that it was somewhat self-important, and possibly a reaction to the press interest (and possibly to deflect some of the criticisms that have been levelled at it in terms of its regulation of the country’s electricity networks). However, on reflection, I think that the events at Hornsea in particular bear closer investigation.

Why did a generator so far from the location of the initial system disruption trip when no others did apart from the nearest power station? What were these adjustments to its configuration that have subsequently been made, and was the plant overly sensitive to fluctuations in the system voltage (and was this done through lack of care, or a deliberate attempt to protect the new facility from external threats)? And why were Hornsea’s REMIT notices so late and so inaccurate (or even misleading, implying the windfarm disconnected after the Little Barford CCGT)?

The events have highlighted several inadequacies in the provision of market data, and the BM Reports system could do with an overhaul to make it more user-friendly and to increase the granularity, accuracy and breadth of available data. It should be much easier to plot BMU-level generation (without needing to know the code for each one first!), frequency data should be more granular, and there should be some representation of system inertia and transmission constraints (in the case of inertia, National Grid’s measurement methodology is still in development, but a beta version could still be useful). A common theme in the system operator’s work on the development of its balancing markets is a desire to increase transparency – this would be a good addition to that aim.
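As an example of the current friction: even something as basic as plotting system frequency around the event means downloading and reshaping extracts by hand. A minimal sketch, assuming a locally downloaded frequency CSV – the filename and column names here are hypothetical:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumes a CSV extract of high-resolution system frequency data has already been
# downloaded; the filename and column names are hypothetical.
df = pd.read_csv("system_frequency_2019-08-09.csv", parse_dates=["timestamp"])
event = df[(df["timestamp"] >= "2019-08-09 16:50") & (df["timestamp"] <= "2019-08-09 17:10")]

plt.plot(event["timestamp"], event["frequency"])
plt.axhline(48.8, linestyle="--", label="LFDD trigger (48.8 Hz)")
plt.ylabel("Frequency (Hz)")
plt.legend()
plt.show()
```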

Otherwise, these events have highlighted the importance of National Grid’s ongoing work to re-shape the system in response to the de-carbonisation and de-centralisation of generation. The initiatives described in the Operability Strategy reports indicate that National Grid is both aware of the challenges and has plans in place to address them. However, these are not trivial problems and solutions will take time to emerge. It’s not as simple as “build more batteries” as several commentators – often developers of battery projects – might claim.

The final version of National Grid’s report is due early next month, and investigations are also underway by Ofgem and BEIS. This issue isn’t going away any time soon, and no doubt all the affected parties are hoping that lightning won’t strike twice, at least until the furore dies down.
