National Grid ESO has now published its initial report into the 9 August blackout. The extremely repetitive 26-page report sheds some new light on the events of the day (and I was rather gratified to see that my hunch relating to Hornsea tripping first was in fact supported by the report); however, a number of questions remain unanswered.
First of all, I agree with National Grid’s assessment that the simultaneous generation losses were exceptional and that the system operator was meeting its statutory obligations at the time in terms of the amount of available frequency response. As I said in my previous post, National Grid is at the sharp end of managing the trilemma, and the fact that the last such blackout was eleven years ago is a good indicator that it is managing the challenges of the transition well.
However, that is not to say that this will be enough going forwards, and the 9 August blackout has identified a number of areas of improvement in the way in which the transmission system is managed. (There are also lessons to be learned across the distribution networks and large electricity consumers, but those are for another day.)
The questions I have after reading this report, and some of today’s commentary, are:
- Were the systems at Hornsea sufficiently robust – this is a new facility, still in the commissioning phase, using relatively new 7 MW turbines. The report notes that adjustments have since been made to its configuration – to what extent should the potential for problems in the event of lightning strikes, which are hardly unusual, have been anticipated and mitigated ahead of time?
- Why are the data provided to the market in the form of BM Reports and REMIT notices not fully accurate after a reasonable period of time for revisions? The REMIT notices relating to this event were all inaccurate in terms of the start time of the various events, and have not been subsequently revised at the time of writing. Consideration should also be given to publishing 1-second frequency data and providing data on system inertia.
- What if any role did transmission constraints play in the efforts to restore power, and would the outcome have been different if the outages had happened at different geographic locations on the network?
Sequence of events leading to the blackout
According to the report, the system was operating normally up until 4:52pm on Friday 09 August – the weather was warm and windy, and there was some heavy rain and lightning, but nothing unusual for the time of year. Demand for the day was forecast to be similar to the previous Friday, and around 30% of generation was coming from wind, 30% from gas, 20% from nuclear and 10% from interconnectors.
At 16:52 there was a lightning strike on or near the overhead line connecting two substations (Eaton Socon near St Neots and Wymondley near Hitchin). Large currents were seen in the substations, and circuit breakers opened to disconnect one of the two lines on that route (in this case it only took 70 milliseconds). The circuit breakers re-closed automatically after about 20 seconds and remained closed, since the large current caused by the lightning strike had ceased – had a fault such as damage to the line persisted, they would have locked open.
Three things then occurred almost simultaneously:
- The Hornsea off-shore windfarm suddenly de-loaded its supply to the grid from 799 MW to 62 MW (only units 2 and 3 were affected – unit 1 continued to operate at 50 MW throughout the event);
- The steam turbine at the Little Barford CCGT, which is connected to the Eaton Socon 400kV substation at one end of the affected line, tripped; and
- The sudden shift in the angle of the voltage caused some distributed generators, mainly solar and some small gas and diesel plant, to detect “loss of mains” and automatically disconnect from the system.
The reduction in generation from these events was:
- 737 MW – Hornsea
- 244 MW – Little Barford
- c 500 MW – embedded generation
The cumulative amount of lost generation was 1,481 MW, 48% above the 1,000 MW single loss protection level at which the system was running at the time. (The system is designed so that it can continue to operate if the single largest generation unit trips at any time, which might be a large nuclear plant, or an interconnector.)
Over 1,000 MW of frequency response was deployed (including 200 MW of generation, 450 MW of batteries and 350 MW of demand response), and the frequency began to recover after an initial dip; however, at that point the two gas turbines at Little Barford went offline since they cannot operate without the steam turbine – the first tripped due to excess steam build-up, and the second was manually disconnected about 28 seconds later for the same reason.
At this point the cumulative generation loss was 1,878 MW and frequency fell below 48.8 Hz, the level at which load-shedding protection measures activate. This system automatically disconnected customers on the distribution network in a controlled way and in line with parameters pre-set by the Distribution Network Operators – around 5% of GB’s electricity demand was disconnected (c 1 GW) to protect the remaining 95%.
The rate at which frequency changed during this event – 0.16 Hz/s – was high in historic terms but not unexpected in a low-inertia system.
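As a rough cross-check of that figure, the standard swing-equation approximation links the initial rate of change of frequency to the size of the lost infeed and the kinetic energy stored in the machines still spinning on the system. Here is a minimal Python sketch using the numbers quoted above – indicative only, since response, load relief and the slightly staggered timing of the losses all blur the picture within a few seconds:

```python
# Swing-equation approximation (before any frequency response acts):
#   RoCoF ≈ ΔP × f0 / (2 × E_k)
# where ΔP is the lost infeed (MW), f0 the nominal frequency (Hz) and
# E_k the system kinetic energy (MW·s, often quoted as GVA·s).

f0 = 50.0         # nominal frequency, Hz
delta_p = 1481.0  # initial cumulative generation loss, MW (from the report)
rocof = 0.16      # observed rate of change of frequency, Hz/s

# Rearranged to infer the kinetic energy consistent with the observed RoCoF
implied_kinetic_energy_mws = delta_p * f0 / (2 * rocof)
print(f"Implied system kinetic energy ≈ {implied_kinetic_energy_mws / 1000:.0f} GVA·s")
# ≈ 230 GVA·s – a back-of-the-envelope figure only, not one taken from the report
```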
Did the system behave as it should on the day?
There are six different responses to consider in respect of this question:
- The initial local response on the transmission infrastructure to the lightning strike;
- The response of the transmission-connected generation at Hornsea;
- The response of the transmission-connected generation at Little Barford;
- The response of distribution-connected generators;
- The initial frequency response; and
- The Low Frequency Demand Disconnection process.
The initial local response to the lightning strike appears to have been adequate with the circuits being restored in around 20 seconds. Lightning strikes on the electricity system are common, and I have seen no suggestion that this response was below that which would be expected.
Was Hornsea appropriately configured?
The response of the Hornsea windfarm on the other hand may not have been as expected. Orsted has subsequently confirmed that equipment at Hornsea saw a system voltage fluctuation with unusual characteristics coincident with the lightning strike, and reacted as expected in attempting to accommodate and address the system condition. However, as the reaction expanded throughout the plant – which covers a large geographic area – the protective safety systems activated.
“Following an initial review, adjustments to the wind farm configuration, and fine tuning its controls for responding to abnormal events, the wind farm is now operating robustly to such millisecond events.”
This is a new windfarm that is actually still in the process of being built, so there are definitely questions to be asked as to whether its internal systems were appropriately configured. The “fine-tuning” referred to in the report suggests that they were not.
Orsted also did not cover itself in glory in its communications to the market about the event, with its REMIT notices being comparatively late, and incorrectly stating the start time of the event.
Did Little Barford behave as expected and is the substation upgrade relevant?
The situation at Little Barford would appear to be more in line with expectations. The plant detected an anomalous speed measurement and the steam turbine automatically cut out. As the plant cannot run without the steam turbine, the two gas turbines also went down/were taken down shortly after. Commissioned in 1996, the plant was upgraded in 2013 with new gas turbines, and is expected to remain operational until 2026.
It is interesting to note that the substation at Little Barford is currently being upgraded – the existing 1960s equipment is being replaced, and the new installation should be operational by 2023. It is unclear whether replacing this equipment earlier would have made any difference to the events of 9 August.
Was the loss of embedded generation in line with expectations for this type of event?
Around 500 MW of embedded generation disconnected after detecting a change in the angle of the voltage. Distributed generators are required to ensure they can safely shut down in the event of a disruption to their local network, and use “loss of mains” protection to achieve this. Loss of mains protection systems respond either to the rate of change of frequency or, as in this case, to “vector shift”, which is triggered by sudden changes in the voltage phase angle created by a fault on the transmission circuits. This behaviour is not uncommon with lightning strikes; however, a local loss of 500 MW of embedded generation is not insignificant.
In its first Operability Strategy report in November 2018, National Grid identified that the losses from vector shift protection could be very large:
“We identified that for some faults on the network, the amount of generation that could disconnect due to operation of vector shift protection was larger than the largest loss we normally secure. This issue was localised to networks in the south of England due to the large volumes of generation using this type of protection.”
Initiatives are currently underway to make embedded generators less sensitive to changes in system voltage – in its latest Operability Strategy update National Grid describes managing the rate of change of frequency to prevent the tripping of loss of mains protection as the main stability challenge it faces in the short term. The cost of managing the system to prevent the tripping of loss of mains protection has risen from £60 million in 2017/18 to £150 million in 2018/19.
Whether these initiatives would have made a difference on 9 August is something I cannot answer…clearly there will be levels of system disturbance which will need to trigger the auto-disconnection of embedded generation, and I cannot say whether the effects of the lightning strikes on 9 August would fall into that category.
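For readers unfamiliar with how loss of mains relays actually make these decisions, here is a minimal sketch of the two detection methods described above. The thresholds are purely illustrative placeholders – the real settings are specified in the relevant engineering recommendations and are being revised under the loss of mains change programme:

```python
# Illustrative loss-of-mains checks for an embedded generator's protection relay.
# The thresholds below are placeholders, NOT actual Engineering Recommendation settings.

ROCOF_LIMIT_HZ_PER_S = 0.125   # illustrative legacy-style RoCoF setting
VECTOR_SHIFT_LIMIT_DEG = 6.0   # illustrative vector-shift setting

def rocof_trip(freq_now_hz: float, freq_prev_hz: float, dt_s: float) -> bool:
    """Trip if the measured rate of change of frequency exceeds the relay setting."""
    rocof = abs(freq_now_hz - freq_prev_hz) / dt_s
    return rocof > ROCOF_LIMIT_HZ_PER_S

def vector_shift_trip(angle_now_deg: float, angle_prev_deg: float) -> bool:
    """Trip if the voltage phase angle jumps by more than the relay setting
    between successive measurements (the kind of step a nearby fault causes)."""
    return abs(angle_now_deg - angle_prev_deg) > VECTOR_SHIFT_LIMIT_DEG

# On 9 August it was the vector-shift style of protection that operated:
print(vector_shift_trip(angle_now_deg=10.0, angle_prev_deg=0.0))  # True
```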
Was the initial frequency response adequate?
The initial frequency response systems that came online appeared to have worked, in that system frequency did begin to recover until the further loss of the Little Barford gas turbines sent the cumulative loss significantly above the level of the reserve for the day. Of course, questions must be asked about the sizing of the reserve, and this is an area that is getting a lot of focus in the press.
National Grid currently spends about £300 million a year on frequency response and reserve products, and is exploring a range of new ways of procuring these services, both to extend access to new technologies/types of participants, and to improve the transparency and competitiveness of the procurement processes.
On 9 August, the system was running with a 1,000 MW protection level – this is actually quite low given there could be 2,000 MW of imports on IFA, or 1,200 MW of output at Sizewell B. On the day, neither of these was running close to capacity, so the 1,000 MW sizing reflected the largest potential single loss on the system at the time, and no doubt was not larger in order to control costs.
National Grid is required to take account of the potential loss of embedded generation in its determination of the protection level required, however, this loss is considered to be independent of the largest infeed loss – the response holding should cover the larger of the two but does not need to cover both.
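Put at its simplest, my reading of that rule is that the response holding is sized against the larger of the two risks rather than their sum – a sketch of that simplification (mine, not National Grid’s actual methodology):

```python
def response_to_hold_mw(largest_infeed_loss_mw: float, potential_embedded_loss_mw: float) -> float:
    """Simplified reading of the sizing rule described above: cover the larger
    of the two independent risks, not both at once (illustrative only)."""
    return max(largest_infeed_loss_mw, potential_embedded_loss_mw)

# Roughly the 9 August position: a 1,000 MW largest infeed risk and c. 500 MW
# of embedded generation at risk from vector shift...
print(response_to_hold_mw(1000, 500))   # -> 1000 MW held
# ...yet the actual simultaneous loss was 1,481 MW, rising to 1,878 MW.
```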
It is easy to point to the fact of the blackout and say that it simply wasn’t enough, but it is not unreasonable for National Grid to manage the size of the reserves it holds in response to a changing maximum potential single loss in order to be cost effective. Of course it could increase the protection level, and potentially look at 1.5x the maximum single loss, or the two largest single losses, or any other parameter, but the costs would increase as a result and may simply not be justified particularly given that the last major blackout was eleven years ago.
In the event, power was restored to the transmission system by 17:06, 14 minutes after the lightning strike. The fact that it took longer for some customers to be re-connected, and that the railways were severely disrupted is not really the fault of the transmission system operator.
Did the Low Frequency Demand Disconnection process operate as expected?
At 16:53:49, between the disconnection of the two gas turbines at Little Barford, the system frequency breached the 48.8 Hz trigger level resulting in LFDD – 931 MW of demand was disconnected from the system by the Distribution Network Operators (“DNOs”). The DNOs determine the order in which loads in their networks are disconnected, so the downstream impact of these systems is not the responsibility of National Grid. The report does describe some of these impacts, but as this post relates to the performance of the transmission systems, I will not include them here.
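To illustrate how LFDD works in principle, here is a minimal sketch of a staged scheme of the kind the DNOs implement. The thresholds and block sizes are illustrative placeholders only – the actual schedule is set out in the Grid Code and the allocation of loads to blocks is a matter for each DNO:

```python
# Illustrative staged low-frequency demand disconnection (LFDD) scheme.
# Stages are (threshold_hz, cumulative share of demand to disconnect);
# the values are placeholders, not the actual Grid Code schedule.
ILLUSTRATIVE_STAGES = [
    (48.8, 0.05),
    (48.7, 0.10),
    (48.6, 0.15),
    # ...further stages continue down towards 47.8 Hz in the real scheme
]

def demand_to_shed_mw(frequency_hz: float, total_demand_mw: float) -> float:
    """Return the cumulative demand to be disconnected once frequency has
    fallen to the given level."""
    shed_fraction = 0.0
    for threshold_hz, cumulative_fraction in ILLUSTRATIVE_STAGES:
        if frequency_hz <= threshold_hz:
            shed_fraction = cumulative_fraction
    return shed_fraction * total_demand_mw

# On 9 August only the first stage was reached: roughly 5% of c. 19 GW of
# demand (the total implied by 5% being about 1 GW in the report).
print(f"{demand_to_shed_mw(48.79, 19_000):.0f} MW")   # ≈ 950 MW
```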
What does this tell us about system stability?
In its Operability Strategy report, National Grid sets out quite clearly the challenges it is facing in managing the electricity system transition, both in terms of de-carbonisation and de-centralisation:
“De-carbonisation has produced high levels of renewable generation which has different operating characteristics, plant dynamics, data quality, flexibility and inertia contribution. This has increased reserve and response requirements and the nature of intermittent renewable generation means that the requirements are more volatile and less predictable…
…As the inertia on the system reduces, the rate of change of frequency increases. The existing frequency response services are specified to deliver full output within a set time. When frequency moves quicker, these services become less effective due to the time it takes them to deliver and a larger volume is required”
The changing generation mix, with new forms of generation connected at different locations and at different voltage levels, is inevitably causing power flows on networks to change. National Grid identifies five areas of potential concern: frequency control, voltage control, restoration, stability and thermal constraints.
Frequency control: increasing levels of intermittent generation will lead to more volatile and less predictable, shorter-term requirements for reserve and response. As the inertia on the system reduces, the rate of change of frequency increases. The existing frequency response services are specified to deliver full output within a set time. Existing procurement of balancing services is becoming less effective in meeting these requirements due to the relatively long-term nature of the products, and the delivery times specified under existing service agreements (a short sketch after this list illustrates why delivery time matters more as inertia falls).
Voltage control: reactive power is required for voltage control, and the requirement is set to increase as network loading becomes more volatile and many conventional generators (which provide reactive power) run less predictably and less often. More absorption is needed to manage pre-fault high voltages, and more injection is needed to support post-fault low voltage.
Restoration: the current restoration approach relies on large, transmission-connected synchronous generators – as there are fewer of these on the network, in the future, a wider range of technology types, connected at different voltage levels, will need to be considered.
Stability: stability is the ability of the system to quickly return to acceptable operation following a disturbance and is supported by synchronous generation. The electricity network has been designed based on the assumption that there will always be a large amount of synchronous generation, so without intervention, the system will become less stable as non-synchronous renewable generation replaces traditional forms of generation. Stability is also threatened by the decline in short circuit levels (the amount of current that will flow on the system in a fault), with regional differences in the extent of this challenge.
Thermal constraints: in the past, the system operator’s ability to instruct the output of a large number of transmission-connected generators met almost all its constraint management needs, however, the number of transmission-connected generators has fallen, and the locations of thermal constraints are changing. There are now more occasions when the options for managing constraints are limited.
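Returning to the frequency control point, a quick illustration of why delivery speed matters more as inertia falls: the faster frequency is moving, the further it travels in the time it takes a contracted service to reach full output. A minimal sketch with made-up service parameters (the real products and their delivery times are defined in National Grid’s service terms):

```python
def frequency_fall_before_full_delivery(rocof_hz_per_s: float, delivery_time_s: float) -> float:
    """Crude approximation: how far frequency falls before a response service
    reaches full output, assuming the pre-response RoCoF persists that long."""
    return rocof_hz_per_s * delivery_time_s

# Hypothetical service that takes 10 seconds to reach full output:
for rocof in (0.08, 0.16, 0.30):   # higher RoCoF = lower-inertia system
    fall = frequency_fall_before_full_delivery(rocof, delivery_time_s=10.0)
    print(f"RoCoF {rocof:.2f} Hz/s -> frequency falls {fall:.1f} Hz "
          f"(to {50 - fall:.1f} Hz) before full delivery")
```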
Was it all just bad luck, or are changes needed?
In response to this initial report, Ofgem has somewhat pompously said there were “still areas where we need to use our statutory powers to investigate these outages”. The regulator’s executive director of systems and networks, Jonathan Brearley, said Ofgem’s own investigation would “ensure the industry learns the relevant lessons and clearly establish whether any firm breached their obligations to deliver secure power supplies to consumers”.
My initial reaction to this was that it was somewhat self-important, and possibly a reaction to the press interest (and possibly to deflect some of the criticisms that have been levelled at it in terms of its regulation of the country’s electricity networks). On reflection, however, I think that the events at Hornsea in particular bear closer investigation.
Why did a generator so far from the location of the initial system disruption trip when no others did apart from the nearest power station? What were these adjustments to its configuration that have subsequently been made, and was the plant overly sensitive to fluctuations in the system voltage (and was this done through lack of care, or a deliberate attempt to protect the new facility from external threats)? And why were Hornsea’s REMIT notices so late and so inaccurate (or even misleading, implying the windfarm disconnected after the Little Barford CCGT)?
The events have highlighted several inadequacies in the provision of market data, and the BM Reports system could do with an overhaul to make it more user-friendly and to increase the granularity, accuracy and breadth of available data. It should be much easier to plot BMU level generation (without needing to know the code for each one first!), frequency data should be more granular and there should be some representation of system inertia and transmission constraints (in the case of inertia, National Grid’s measurement methodology is still in development, but a beta version could still be useful). A common theme in the system operator’s work on the development of its balancing markets is a desire to increase transparency – this would be a good addition to that aim.
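By way of example, once per-BMU physical data has been downloaded from BM Reports as a CSV, something along these lines ought to be all that is needed to plot it – the column names here are my assumptions about the file layout rather than a documented schema, and will likely need adjusting:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: one row per BMU per settlement period, with columns roughly
# like "settlementPeriod", "bmUnitID" and "quantity" (MW). Adjust to the file
# actually downloaded from BM Reports.
df = pd.read_csv("bmu_output_20190809.csv")

pivot = df.pivot_table(index="settlementPeriod",
                       columns="bmUnitID",
                       values="quantity",
                       aggfunc="sum").fillna(0)

pivot.plot(kind="area", stacked=True, linewidth=0, figsize=(10, 5), legend=False)
plt.xlabel("Settlement period")
plt.ylabel("Output (MW)")
plt.title("BMU-level generation, 9 August 2019 (illustrative)")
plt.show()
```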
Otherwise, these events have highlighted the importance of National Grid’s ongoing work to re-shape the system in response to the de-carbonisation and de-centralisation of generation. The initiatives described in the Operability Strategy reports indicate that National Grid is both aware of the challenges and has plans in place to address them. However, these are not trivial problems and solutions will take time to emerge. It’s not as simple as “build more batteries” as several commentators – often developers of battery projects – might claim.
The final version of National Grid’s report is due early next month, and investigations are also underway by Ofgem and BEIS. This issue isn’t going away any time soon, and no doubt all the affected parties are hoping that lightning won’t strike twice, at least until the furore dies down.
Thank you – very thoughtful. I am suspicious that perhaps the commissioning process at Hornsea was indeed trying to insulate itself from outside influences, perhaps by setting some parameters too tightly. It begs the question of how much influence NG has at this stage and whether it should have more input or control, rather than just coming in to witness final tests.
I have always wondered why ROCOF is used to take off embedded generation so easily. Some of it may be able to help in the event of such a fault as this. I hope NG will identify how much capacity of what was taken off had inertia that could have helped.
Thirdly, am I alone in thinking that since the fault there has been consistently more coal generation on the system, up to 1 GW, than in the few weeks before? Perhaps I am being cynical, but there is no mention of this in the report, yet I doubt it is a coincidence. Please correct me if I am wrong.
On Hornsea – I’ve no idea how much influence NG has over those issues, but there are certain requirements in the Grid Code that generators have to meet, and Ofgem is talking about making sure everyone was complying with their licence conditions, so I would expect there to be some rules to prevent generators setting their failsafes too conservatively. Hopefully Ofgem’s investigation will look at that in some detail.
Interesting comment on coal…I noticed there was one unit running on the day, and so kept a bit of an eye on the ESO tweets about generation mix, and coal seems to be at 1.4%-ish most days. I quickly plotted the generation levels since 1 June and can see that coal ran quite a bit in June, very little in July, and more in August than in June.
In the first week of August it ran at similar levels to now, while on 9th it was actually a bit less. It’s basically running during the day in weekdays, mostly at 700-800 MW, but as much as 1100-1200 in a small number of SPs. I think coal is absolutely being run for stability reasons at the moment, and find it very interesting given the whole “coal-free days” hype a few months ago.
You only have to look at the location of Drax on the transmission map to understand that it is key for providing stability to all the Dogger Bank offshore wind connected into Creyke Beck on the North of the Humber and indirectly to Keadby on the South. It’s evident that Drax and the Grid understand that there is inadequate system strength in the lines to London:
http://www.millbrookpower.co.uk/press_release/new-rapid-response-gas-power-station-approved-bedfordshire/
Here’s Keadby’s generation profile by settlement period – nothing all day until
SP    MW
31    72.86
32    98.06
33    187.76
34    313.47
35    494.96
36    565.36
37    392.46
38    399.16
It appears Killingholme didn’t operate all day
I am aware that NG does have an awful lot of input over final settings etc and witnessing of final testing, having been subject to it on a number of projects. However in my experience this focuses almost exclusively on the final set up, and I don’t recall much being discussed or required over settings during the commissioning period, which Hornsea was in. During commissioning I find it is usually left to the manufacturers to provide appropriate settings … and it would be no great surprise to find them tweaking them or erring on the side of overprotecting their own equipment.
Interestingly enough it looks like a similar thing is what was largely responsible for the train disruption; motors cut out due to low frequency protection (set too tightly) and then couldn’t be reset … all manufacturer issues I strongly suspect, which err on the side of protecting their equipment. All on relatively new train units which clearly hadn’t tested these (extreme?) features during their commissioning (but nothing to do with NG of course).
The lightning strike theory leaves me unconvinced as the prime reason for this major disruption.
Such natural events are frequent & dealt with as a matter of course by National Grid (NG).
High speed overhead line protection plus the all important auto reclose usually deals with these events in milliseconds.
This has been the case over many years, a mere blip on the network.
Being cynical, could this be an attempt to divert adverse publicity away from rechargeables?……Barry Wright.
I think the reasons for Hornsea tripping need looking into further, but it would seem to be highly coincidental if the lightning strike wasn’t responsible eg sudden wind gusting at exactly the same time. I suspect in this case it was down to overly sensitive protection measures at the windfarm, as noted above by David, with the settings during commissioning being at the manufacturer’s/operator’s discretion and not where they would typically be set in use.
There are certainly challenges with the high mix of renewables, but I think the primary cause of this incident was lightning, and the grid responded as it would be expected to. The combined loss of Little Barford and the embedded generation could still have been problematic as that was also higher than the 1 GW protection level on the day, but that’s not to say it would have led to load shedding, so I do think the windfarm is responsible for this incident, but not because of the specific technology.
Ooops! rechargeables should of course read renewables…..apologies…..Barry Wright.
Rechargeables is perhaps apposite. There is a table in the report showing how various categories of frequency response performed. Grid batteries providing Enhanced Frequency Response only provided 165 MW out of the 227 MW they were contracted for – the lowest proportion.
The Australian Energy Regulator (AER) is suing some wind farms in South Australia over the “system-black” in that state in 2016, for “failing to ensure their plants met requirements to be able to ride through system disturbances”. Requirements are fine, but it seems odd to me that the system operator AEMO is allowed to simply assume that the requirements are being met; surely they should know for sure exactly what the situation is. Maybe this lack of system design control applies to NG.
https://uk.reuters.com/article/us-australia-windfarm-lawsuit/australian-watchdog-sues-four-wind-farm-operators-over-2016-blackout-idUKKCN1UX0AP
That’s very interesting – thanks for the link. I agree that there should be ongoing monitoring by the SO and not an assumption that requirements are met. From the comments above, it seems NG sets requirements for plant post commissioning but possibly not during…Hornsea is still being built so could be “commissioning” for quite some time. Probably an area that needs more attention in both markets.
Nothing to say except the best report on the incident to date.
I hope someone will email a link to Andrea Leadsom
Tip my hat to you, Kathryn
Thanks Leo, much appreciated!
I note that the report into the 2008 incident https://web.archive.org/web/20100206093023if_/http://nationalgrid.com/NR/rdonlyres/E19B4740-C056-4795-A567-91725ECF799B/32165/PublicFrequencyDeviationReport.pdf complained about the protection settings for embedded generation. It seems that 11 years later those lessons haven’t been learned.
I wonder if the trouble was caused at Hornsea’s offshore reactive power platform.
https://hornseaprojectone.co.uk/News/2018/06/Worlds-first-offshore-Reactive-Compensation-Station-installed-for-Hornsea-Project-One
Certainly there is evidence that they had shut down the whole wind farm the day before for tie-in work. Bank 2 wasn’t operating on 1st/2nd; bank 1 (actually with the least installed capacity) was out on 5th/6th, and bank 3 was out part way through the 7th, with the whole farm out from early afternoon on the 8th, before all 3 banks came back late in the evening as the wind started to pick up.
I still think there are fundamental questions to be asked about grid inertia and the role of spinning reserve. It is quite evident that Dinorwig was only used at about half capacity. Maybe they had some turbines out for maintenance. Maybe transmission constraints limited the use that could be made of its capacity. But it is supposed to be capable of getting to 1.7GW in 75 seconds. The LFDD trips occurred 76 seconds after the incident started. With more inertia on the grid they would have had more time. Even if they weren’t paying for Dinorwig to be on standby at full capacity, perhaps they need a different kind of contract where they are paid extra if they are called on over and above contracted levels and they perform.
The various incidents earlier in the year that dipped within 0.1Hz of the statutory minimum were surely a warning. In every case there was much more inertia on the grid, without which the drop would have been twice as far. The Grid got lucky through those events. It got caught out with this one.
A different market, but in some ways similar issues:
https://www.energymagazine.com.au/aemo-issues-stark-warning-for-summer-months/
“Wind power will leave us in the dark” in the Mail on Sunday this week is refreshing in that there are those in the media who have a measure of insight into what is really happening on the network, highlighted by the events of August 9th, & are prepared to write about it.
Peter Hitchens like me is unconvinced that the cause was a lightning strike on the network, a common occurrence & dealt with on a regular basis by the UK network operator National Grid.
I am impressed that some one in the media has done his homework on the real problems facing the over reliance on renewables (wind/solar) to provide our energy needs & presents his findings in a simple, non technical article aimed at a wide audience…….Barry Wright.
PS The number of major fires caused by the ubiquitous “electrical fault” springs to mind.
It is worrying that OFGEM seem to be reactive rather than proactive.
Any control system that was being tasked to manage more heterogeneous and smaller (faster reacting) systems would need to be smarter – and deal with a more complex system which might have, effectively, more oscillatory or positive feedback modes – or just be more sensitive to “noise”; thus the grid does need to be “smarter” – especially if it is dealing with derivative based sensing (such as rate of change of frequency).
In line with software system testing, a model-based control could also be used to test against such scenarios (i.e. the model could be run using a mix of historic data and scenario data…). This would (if they are using proper testing techniques) allow the grid managers to evaluate the robustness/sensitivity of the system and the control – and would be useful evidence to OFGEM.
I finally got around to reading National Grid’s technical report and appendices. Of course the real nitty gritty is hidden in the latter. I think Oersted have been rather economical in their explanation of how the wind farm shut down in a tenth of a second. The Grid have gone overboard in stressing that the transmission network performed properly, while brushing over several issues. It seems that despite the recommendations from the 2008 event, fault protection ride through remains inadequate on far too much embedded generation. They fail to acknowledge that a 1GW loss becomes a 1.5GW loss if embedded generation shuts down all too easily. The Grid seem very keen to avoid the inertia question, by pretending that inertia would be very costly to provide: doubtless it is if you limit yourself to rarely used battery capacity to provide synthetic inertia. I also noted that they threatened to move LFDD to areas where there was little embedded generation, so as not to lose the generation. More power cuts for London can only be a good thing to concentrate minds! Allowing Hornsea to produce at 100% of available capacity before it had run the 100% proving programme looks to have been another faux pas: the proving programme might have sniffed out the technical problem that lay behind its shutdown, and limiting the farm to the 70% level it had already passed would have been enough to prevent the blackout trips.
Read with a very critical eye, and a view to what they want to cover up. The same will apply to OFGEM and the E3C reports, and the Select Committee.
The report bears out my preliminary conclusions, namely that grid performed OK (to the standard required, though there is room for improvement), but that the Hornsea windfarm and the train failures were victims of over-protective manufacturers (and/or clients).
Oersted I would agree has been economical with its findings, perhaps because they appreciate the failing is likely a breach of their generation licence and don’t want to magnify the failure. It will be instructive to see if Ofgem properly picks up on this. NG’s report is technical and as such is not about assigning blame for licence issues.
In my view NG have correctly stated that the Grid performed to expectation – it pretty much did. However they have in my view been economical about the level of oversight provided by NG of the generators, especially during the design and commissioning phases of the Hornsea project.
Fault protection ride through on embedded generation is an old chestnut. Way back in 1988 I encountered resistance when objecting to implementing ROCOF on a steam turbine generator of < 10 MWe. I noted that of course this would likely take inertia off the system (and I know it did exactly that!) just when it could help, but the grid were wary of the theoretical case of leaving this generator on supply to consumers without any grid connection, and then being liable for the consequences. A short-sighted worry, which can be worked around another way if they were so minded to be sensible. It is now recognised that this feature is not very helpful and where embedded generation can cope (and the system!) then use of ROCOF should not be so tight. However this issue is a contributor to the magnitude of the problem, not its cause.
The most troubling aspect to me of the whole episode is not the grid reserve (which can be reset at a higher level, and cost), but why the manufacturers (leading on their clients most likely) are not challenged enough on their design settings. There are clear examples noted when plant internal protection (Hornsea, trains, hospital) took the plant off at 49 Hz when clearly they were supposed to be able to operate satisfactorily to a lower cut-off. This is fundamentally a specification compliance issue which has lain unchecked by the manufacturers, the clients and NG. Had these been picked up before operation, as they should have been, then the only issue we would be talking about would be Little Barford, which would have had minor consequences only.
Clearly NG has to base its acceptance of fault ride through on model simulations, but there is little point in doing that if the manufacturers are allowed to assume what their spec says, i.e. cut-off for internal protection is < 49 Hz, only to not set it correctly in practice! More rigorous oversight is needed of this process – unfortunately the NG approach tends to be somewhat bureaucratic, which means real issues can get lost. However none of this excuses the manufacturers/clients for whom Grid Code compliance is an obligation. So it is essential that internal protection settings are reviewed as correct to specification BEFORE commissioning and not left to later (as appears to be the case now, with the wind turbine supplier and train supplier looking at operation below 49 Hz as if it were never a specification requirement in the first place).
If the manufacturer can make a good technical case for higher plant settings during commissioning, then for the prolonged commissioning period of a large wind farm NG should set a higher reserve accordingly (and charge the windfarm accordingly in the NG connection agreement).
In my personal experience over recent years, I have found that much of this kind of issue is led by the manufacturers (not the clients), and some foreign manufacturers in particular are so confident in their own design strengths that they can tend to pay lip service to Grid Code compliance, treating it as an exercise rather than one they think really applies to their plant. Of course they will deny this officially, but I have observed the culture first hand. Perhaps NG should ask how their continental colleagues fare on these kinds of issues.
Little Barford is a different issue. Once the steam turbine has tripped (the root cause of which has not really been established), the gas turbine can theoretically ride through using the steam turbine bypass and/or boiler safety valves, PROVIDING the plant has been specified to do so, and that it has been tested as able to do so. I hope that more information will be forthcoming on whether this capability has ever been tested at Little Barford, but I feel sure that other similar plants may also find riding through the loss of the steam turbine challenging for the same reasons. It usually takes a knowledgeable and persistent client/client’s engineer to ensure these are carried through from specification to operation. The loss of the gas turbine was not the main issue here though, but why the steam turbine did not ride through the original fault. Of this we know little.
I wonder what happened to this project?
http://watt-logic.com/2017/10/12/inertia/
Incidentally an excellent write-up of the issue. NG claim they are not monitoring inertia as a matter of course.
I’m finally getting the chance to read through all of this – I’m going to set out my thoughts in a new post, but I can reply to this one quickly….National Grid and Reactive Technologies announced a deal on 5 August for the use of the GridMatrix inertia measurement technology.
I discovered that National Grid finally published 1 second resolution frequency data for May and June just before the weekend. At least they are now zipping the data, but you still need file editors that can handle 2.6 million lines and preferably a spreadsheet that does a million rows or more. I found the May incident which lasted about 5 minutes, dropping to a nadir of 49.553Hz for two successive seconds. More work to do.
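For anyone repeating this exercise, pandas handles files of that size comfortably – a minimal sketch, with the caveat that the column names are assumed and may need adjusting to the actual headers in the published files:

```python
import pandas as pd

# Assumes the published 1-second frequency file has a timestamp column and a
# frequency column – rename these two to whatever the actual headers are.
df = pd.read_csv("f_may_2019.zip", compression="zip",
                 names=["timestamp", "frequency_hz"], header=0,
                 parse_dates=["timestamp"])

nadir = df["frequency_hz"].min()
seconds_below_49_8 = (df["frequency_hz"] < 49.8).sum()  # 1 row per second
print(f"Nadir: {nadir:.3f} Hz, seconds below 49.8 Hz: {seconds_below_49_8}")
```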
The inertia chart at your link shows some very rapid changes in levels and particularly low figures at the weekend. I should look back to see the composition of generation over the chart period to see how they compare. Two years on maximum penetration of no inertia sources has surely increased.