In the past few years, we have been repeatedly told by the system operator, now known as NESO (National Energy System Operator), that the GB power grid can be underpinned by a combination of renewables (wind and solar) and imports from other countries. NESO and its predecessor organisations have been huge cheerleaders for interconnectors. The Department for Energy Security and Net Zero (“DESNZ”) is on the same page. The idea is that when wind and solar output are low in Britain, the shortfall can be largely made up with imports.
As I have described previously, there are some significant problems with this assumption, primarily the high weather correlation we have with our connected markets, which means that they may experience shortages of weather-based renewable generation at the same time that we do. This was the case in early November, when Britain and many of its neighbours experienced dunkelflaute – dull, still weather which is terrible for wind and solar generation.
Another key problem is that exporting electricity in general causes electricity prices in the exporting country to rise. Ofgem has said that once GB becomes a net exporter of electricity, there will be a consumer dis-benefit and as a result it has rejected almost all of the proposed new interconnector projects with Continental Europe in Window 3 of the Cap and Floor regime.
The problems go further than this. The contribution of both renewable generation and interconnectors to meeting demand in GB is often lower than advertised.
Wind load factors are low and are not rising
In its Electricity Generation Costs 2023 report, DESNZ claims that the load factors for offshore wind will be 61% in 2025 and higher in future years. This is in direct conflict with other Government figures quoted in the Digest of UK Energy Statistics (“DUKES”) reports, which show that offshore wind has a load factor of around 40%.
However, a lot depends on what is understood by “load factor”. Some analysts look at the output of the windfarm at the asset level and define the load factor as the amount of electricity generated per unit of capacity. So a 10 MW wind turbine that generates, on average, 35% of its maximum possible output has a load factor of 35% and produces, on average, 3.5 MWh/h. However, what matters to consumers is not how much electricity a wind turbine generates, but how much of that electricity is delivered to them (or to the grid more generally). This is a lower figure because Britain lacks the grid infrastructure to efficiently utilise this electricity, so wind turbines are often forced to curtail – ie reduce – their output. Analysis of the actual contribution of wind to demand illustrates this.
I analysed the contribution of wind to meeting demand from the start of 2024 to the end of November, using data from BMRS. Assuming 6.5 GW of embedded wind (as set out in NESO’s Winter Outlook), and total installed wind capacity of 30.163 GW (the year-end figure quoted in DUKES 2023 – clearly more has been added during 2024, but using the figure for the end of 2023 is conservative for the purposes of this calculation), I found that the amount of transmission-connected wind meeting demand in 2024 was just 30% of nameplate capacity.
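For anyone wanting to replicate this, the calculation itself is straightforward. Below is a minimal sketch assuming half-hourly transmission-level wind output from BMRS in a CSV; the file and column names are hypothetical, and the real exercise involves pulling and cleaning the data from the BMRS API:

```python
import pandas as pd

# Hypothetical file and column names: half-hourly transmission-connected wind
# output (MW) from BMRS for 1 January to 30 November 2024
wind = pd.read_csv("bmrs_wind_output_2024.csv", parse_dates=["period_start"])

EMBEDDED_WIND_GW = 6.5      # NESO Winter Outlook figure for embedded wind
TOTAL_WIND_GW = 30.163      # DUKES 2023 year-end installed wind capacity
TX_CONNECTED_GW = TOTAL_WIND_GW - EMBEDDED_WIND_GW   # approx. transmission-connected capacity

# Average metered output divided by nameplate capacity gives the effective load factor
avg_output_gw = wind["wind_output_mw"].mean() / 1000.0
print(f"Transmission-connected wind load factor: {avg_output_gw / TX_CONNECTED_GW:.0%}")
```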
In September, Net Zero Watch wrote to DESNZ requesting an explanation for the 61% figure used in its Electricity Generation Costs 2023 report. A response was finally received in mid-November, after the matter had been raised in a House of Lords debate, offering the following explanation:
“To enable comparison across technology classes it is standard for LCOE estimates to be calculated assuming that they operate at their technical maximum. This differs from actual annual operation that accounts for all reasons for wind plants to be operating at less than maximum capacity. This will include periods of maintenance and curtailment for example, which varies across years and across projects. Another factor is that the Generation Costs Report considers new turbines where improvements in turbine design and larger turbines (higher hub height) enable increases in load factors. Hence, we expect a higher load factor for the newer models assumed in the Generation Costs Report than for the range of turbines in the existing fleet that you refer to in your letter,”
– Jenny Inwood, Energy Infrastructure and Markets Analysis Team, DESNZ
Oh dear! There is so much wrong with the explanation.
Firstly, suggesting that the technical maximum must be used to allow for technological comparisons is nonsense – what matters, particularly in the context of costs to consumers, is what the technology ACTUALLY produces, not its technical maximum.
Secondly, as Net Zero Watch points out, Levelised Cost is supposed to denote the lifetime cost of the generation divided by the lifetime amount of generation. By using the technical maximum, DESNZ assumes the windfarms always produce at their maximum level, which is clearly nonsense and counter to the definition of “levelised cost” in the first place (the toy calculation below illustrates the effect).
Thirdly, the idea that larger wind turbines will be developed producing greater load factors is highly speculative – the trend for larger turbines has stalled in the face of significant warranty issues faced by turbine manufacturers as larger turbines fail to perform. This difficulty is easy to understand – the larger a turbine, the greater the distance between the tips of the blades and the higher the chance that they will experience different wind speeds. These different speeds impose significant stresses on the blades, causing them and their mountings to fail. Although some larger turbines are being developed in China, Western OEMs are shying away from increasing turbine size, so a conservative approach would be to wait for signs of this changing rather than simply assuming that it will.
Net Zero Watch has pointed some of these errors out to DESNZ and awaits a response.
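To illustrate the second point, here is a toy levelised-cost calculation. The cost figures are purely illustrative, not DESNZ's actual inputs, but they show how much the load factor assumption moves the answer:

```python
# Toy LCOE comparison: illustrative cost figures, not DESNZ's actual inputs
CAPACITY_MW = 1000
LIFETIME_YEARS = 25
DISCOUNT_RATE = 0.07
CAPEX_PER_MW = 2_500_000          # assumed capital cost, £/MW, incurred in year 0
OPEX_PER_MW_YEAR = 75_000         # assumed operating cost, £/MW/year
HOURS_PER_YEAR = 8760

def lcoe(load_factor: float) -> float:
    """Lifetime discounted cost divided by lifetime discounted generation, £/MWh."""
    costs = CAPACITY_MW * CAPEX_PER_MW
    energy = 0.0
    for year in range(1, LIFETIME_YEARS + 1):
        discount = (1 + DISCOUNT_RATE) ** year
        costs += CAPACITY_MW * OPEX_PER_MW_YEAR / discount
        energy += CAPACITY_MW * HOURS_PER_YEAR * load_factor / discount
    return costs / energy

print(f"LCOE at a 61% load factor: £{lcoe(0.61):.0f}/MWh")
print(f"LCOE at a 40% load factor: £{lcoe(0.40):.0f}/MWh")
# Assuming a higher load factor directly deflates the quoted levelised cost
```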
Interconnector availability is lower than expected and falls with age
The problems I have previously described regarding reliance on interconnectors relate to the availability of electricity for export to GB, and the willingness of exporting countries to continue exporting. But there is another factor in play: the reliability of the interconnectors themselves, which is worth examining in its own right.
The table below shows the de-rating factors for interconnectors in the next capacity auction. There is the range which ESO (now NESO) suggests, and the values chosen by DESNZ for the auction. The Capacity Market de-rating factors for interconnectors take account of both the physical capacity of the cables and the availability of spare generation in the connected markets for export to GB.
It’s particularly interesting to look at the values for IFA and IFA2, which are virtually identical despite IFA being decades older than its newer cousin and actually being available a lot less. It’s reasonable to assume that the spare generation parameters for both are the same, since they both connect to the French market, so the difference in de-rating factors should reflect the difference in physical availability – but this does not appear to be the case.
I analysed REMIT availability data for all GB-connected interconnectors (excluding Ireland, which I consider to be an additional source of demand on the GB grid) over the past three years. These data are difficult to find since the interconnectors can choose where to report their REMIT notices, and can also change where they report, as Eleclink has done in the past year. So some of these interconnectors report to Elexon (BMRS) and some to NordPool. ENTSO-E also has data, but they are not always consistent with the other two, which are the ones I relied on for this analysis.
As an aside, I have asked Ofgem to consider requiring all GB-connected interconnectors to report their availability to Elexon, a request I understand Elexon has also made, in order to improve the transparency of market data relating to the GB market. It would also be good if Elexon would report availability data in percentage terms, since extracting the data for my analysis was very time-consuming. I would have liked to go back further but the data gathering simply takes too long. If any of my readers have these data and are willing to share, I would be happy to update this post to include them.
The table shows the percentage of capacity that was available – ie not reported as unavailable on REMIT – for each interconnector.
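For transparency, the availability percentages were built up along the following lines. This is a simplified sketch with hypothetical file and column names; the real REMIT records also need de-duplication and handling of overlapping or revised notices:

```python
import pandas as pd

# Hypothetical REMIT extract: one row per unavailability notice for a single
# interconnector, with start/end times and unavailable capacity in MW
outages = pd.read_csv("remit_outages_ifa.csv", parse_dates=["start", "end"])
NAMEPLATE_MW = 2000                      # e.g. IFA
year_start, year_end = pd.Timestamp("2024-01-01"), pd.Timestamp("2025-01-01")

# Clip each notice to the year of interest and sum the unavailable MWh
start = outages["start"].where(outages["start"] >= year_start, year_start)
end = outages["end"].where(outages["end"] <= year_end, year_end)
hours_out = ((end - start).dt.total_seconds() / 3600).clip(lower=0)
unavailable_mwh = (hours_out * outages["unavailable_mw"]).sum()

total_mwh = NAMEPLATE_MW * (year_end - year_start).total_seconds() / 3600
availability = 1 - unavailable_mwh / total_mwh
print(f"Availability (capacity-weighted): {availability:.0%}")
```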
Of course, selected years can be affected by specific improvement works. For example, a synchronous condenser is being built at the Sellindge connection point for the French interconnectors, which has led to reduced availability over the past year as various cables must be disconnected while the equipment is tested. And in its first months of operation, the capacity of Viking was restricted to half due to constraints on the Danish grid.
However, it is clear that IFA has much lower availability than IFA2, which is to be expected, given its greater age, so it is difficult to understand why its Capacity Market de-rating factor is just 1% lower than that of the newer IFA2.
Two other things jump out from these data. The first is just how variable some of the interconnectors have been over the past three years. While some of this may be due to projects such as the synchronous condenser at Sellindge which should improve overall availability, it seems that there are always ad hoc “issues” affecting flows.
In 2016 a ship’s anchor severed half of the eight cables making up IFA, putting them out of action for over a year. Around the same time, a large part of the French nuclear fleet was offline for inspections following the discovery of carbon segregation problems and falsified steel quality records. In 2022 large parts of the French nuclear fleet went offline again as a result of the stress-corrosion problem. In each case, France went from being a net power exporter to an importer, changing the balance of flows to and from GB. This suggests it would be unreasonable to discount apparent “one-off” disruptions, since these “one-offs” occur with some regularity.
Another thing which jumps out is the narrow range of de-rating factors determined by DESNZ for the Capacity Market. Despite the differing ages of the links and the market differences, and the very wide range supplied by ESO, DESNZ determined all interconnectors should be de-rated at roughly two thirds of their nameplate capacity. This feels more like a rule of thumb than something scientific. Of course, the Capacity Market should indicate availability in times of system stress and not average availability. It is quite likely that system stress would coincide with both ad hoc availability problems and limited spare generation capacity in the connected markets eg a widespread dunkelflaute.
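To put the de-rating factors in context, they simply scale the nameplate capacity credited towards security of supply. The sketch below uses real nameplate capacities but illustrative de-rating factors, not the actual auction parameters:

```python
# Illustrative only: real nameplate capacities (GW) but hypothetical de-rating
# factors, not the actual Capacity Market auction parameters
interconnectors = {
    "IFA":     (2.0, 0.66),
    "IFA2":    (1.0, 0.67),
    "BritNed": (1.0, 0.65),
    "NSL":     (1.4, 0.64),
}

nameplate_total = sum(cap for cap, _ in interconnectors.values())
derated_total = sum(cap * factor for cap, factor in interconnectors.values())
print(f"Nameplate: {nameplate_total:.1f} GW, credited after de-rating: {derated_total:.1f} GW")
```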
Trying to model spare generation capacity across the European markets is beyond what I have time for, but I have looked at the extent to which GB exports during times of high GB system demand.
When I looked at this in 2019, I found that GB exported during 12% of the hours with the top 5% of demand. In 2022-23 this rose to 23%, reflecting the extensive French nuclear outages which saw France switch from being a net electricity exporter to a net importer. So far in 2024 (using 5-minute data rather than hourly), the frequency of exports during the periods of highest demand has been more consistent with Winter 2019, at 13%. (2020 and 2021 have not been analysed due to the impact of covid on demand.)
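The export-frequency figure comes from an analysis along these lines – a sketch assuming a table of demand and net interconnector flow per 5-minute period, with hypothetical file and column names, where a positive flow denotes imports into GB:

```python
import pandas as pd

# Hypothetical dataset: one row per 5-minute period with GB demand (MW) and net
# interconnector flow (MW, positive = import into GB, negative = export from GB)
df = pd.read_csv("gb_demand_and_ic_flows_2024.csv", parse_dates=["timestamp"])

# Identify the periods falling in the top 5% of demand
threshold = df["demand_mw"].quantile(0.95)
peak_periods = df[df["demand_mw"] >= threshold]

# Share of those peak periods in which GB was a net exporter
export_share = (peak_periods["net_ic_flow_mw"] < 0).mean()
print(f"GB exporting during {export_share:.0%} of top-5% demand periods")
```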
In order for interconnectors to genuinely support security of supply it would be better if exports during periods of high demand were minimal, but 13% of the time is almost the equivalent of one day per week (14%) which is not insignificant.
So what does all of this mean? Clearly both renewables and interconnectors contribute less than the Government would like us to believe, and in the case of interconnectors, even Ofgem has warned that being an electricity exporter is not good for consumers. Perhaps the countries that currently export to us will realise this as well and reduce their enthusiasm for cross-border trading.
The approach to renewables is arguably worse. Overstating load factors benefits no-one since the contribution to the grid cannot be faked – either windfarms are generating electricity or they are not. If they are not, other generation or imports need to fill the gap. Expecting windfarms to contribute twice as much as they actually do means these other supplies may not be procured in the required quantities, threatening security of supply.
It also means that the cost of windfarms is higher per unit of electricity generated than the Government would have us believe. If they are only running half as much as advertised, the capital cost per unit of the electricity they generate doubles. This translates into poor value for consumers, undermining the “cheap renewables” narrative still further.
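The arithmetic is simple: a fixed capital cost recovered over half the output means double the capital cost per MWh. A quick illustration, with assumed (not actual) cost figures:

```python
# Illustrative figures only: a fixed annualised capital cost recovered over annual output
CAPEX_PER_MW_YEAR = 250_000       # assumed annualised capital cost, £/MW/year
HOURS_PER_YEAR = 8760

for load_factor in (0.61, 0.30):
    cost_per_mwh = CAPEX_PER_MW_YEAR / (HOURS_PER_YEAR * load_factor)
    print(f"Load factor {load_factor:.0%}: capital cost ≈ £{cost_per_mwh:.0f}/MWh")
# Halving the load factor roughly doubles the capital cost per MWh generated
```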
It’s ironic that the term for the psychological manipulation of people to make them believe something they would otherwise think is false is known as “gaslighting”, since these failures mean we will continue to rely on actual gas for lighting for many years to come.
Greg Jackson closed down a critic of renewables recently on Question Time, suggesting that ‘EV batteries can store enough electricity to power an average home for the best part of a week’ and ‘we have distributed storage that can last us days on end without wind’.
https://www.youtube.com/watch?v=TJQU5ATZPt8
I’d be interested in your thoughts on that statement. I’m not sure the technology is available as yet to make this work, but in principle what would you say on this?
Your answers await at this link, hot off the press today.
https://open.substack.com/pub/chrisbond/p/im-sorry-they-havent-a-clue?r=om40y&utm_campaign=post&utm_medium=web
And has the Government factored in the extra demand on the grid that heat pumps would generate, or electric cars and all these new houses too?
There seems to be a prevailing madness amongst Labour MPs, which is sleepwalking us into a cold, miserable future.
And if we stock up with cheap Chinese EVs, what will happen if Taiwan gets invaded?
We should never have shut our remaining coal-fired power stations down, and in fact should have kept quite a few running.
Very good analysis. I just wish there was someone in government, NESO or DESNZ who could differentiate between the different renewable power sources and show their economic modelling more clearly. The key renewables that need backing up due to variability (wind and solar), being available only some of the time, need to cost only as much as the gas (fuel only) used in CCGT if they are installed centrally. Different metrics apply if they are installed by the end-user, where a financial benefit is achieved if the price of the generated electricity is less than the cost of the electricity from the supplier.
With the increasing detriment of curtailment payments, the wind capacity now being installed really needs to have storage costs taken into account, and the technology installed concomitantly to deal with the intermittency. You can see that on some days doubling the wind power would help significantly with reducing the amount of methane burnt, but are they installing cost-effective wind turbines that are only displacing the cost of gas, or are we getting far more costs added due to inaccurate or incompetent financial modelling and financial contractual arrangements?
Renewable electricity does not have to cost more than gas/coal centralised electricity generation, but it has to be recognised that there are differences in the economics when installation is centralised or done by the end-user.
If we are paying curtailment payments, why isn’t that money being forced into investment in the installation of storage technologies? I would put a legal obligation onto all companies that receive curtailment payments that the money has to be spent on installation of storage technology. If it is not, it is a stupid waste of money on companies that are not creating a power generation system that works for the UK population.
An end user that installs solar PV or wind never receives curtailment payments. Why should it be different for centralised power? It just makes the economics of centralised power generation worse for the end-user.
If enough end-users install solar PV with battery storage, minimising demand on centralised power generators, then there should be plenty of time over the summer for maintenance to be done on any CCGT generators or interconnectors.
Centralised gas-fired CCGT capacity can be reduced if sufficient people install their own battery storage to be recharged overnight, shifting demand from the daytime peak. Renewables do allow for fewer CCGT facilities/capacity, but only if the end-users have enough battery capacity installed. What is the difference in peak-trough power generation in winter during dunkelflaute? If you can shift 10GW of peak power demand from daytime to overnight………10GW less CCGT capacity is required!!!!
Perhaps a few blackouts might encourage this to happen?
This idea of displacing the cost of gas is pretty short-termist. For 2 decades, gas prices were low and stable. Then suddenly they shot up for a couple of years. The question is, if we analysed this century as a whole, are consumers better or worse off with renewables? The answer, which can be clearly seen from charts of wholesale vs retail price data, is worse off. Because intermittent renewables are expensive: higher balancing costs, higher network costs, the need for backup generation, curtailment costs because grid investment failed to keep pace, and of course, subsidy costs.
Surely the fundamental economic answer is that, where we need back-up facilities burning gas, or battery storage, renewables – to be cost-effective with no detrimental effect – have to be priced at a generation price that represents the avoided fuel cost, i.e. the price of gas or coal if installed centrally, or the price of electricity if added by the end-user. We can then afford to keep all the back-up of 50GW of whatever source, and run the renewables whenever they are available, whatever source they are. This means all the CAPEX and OPEX costs for the renewables need to be equal to the cost of the fuel going into the CCGT.
The renewables are costing more at the moment because of where they are being installed – centralised, 100s of miles from where they are needed – causing grid upgrades, and curtailment charges, and subsidies. This isn’t a problem with renewables, but the installation strategy that is being followed. I can believe that costs are being added into the system currently, but this is because there appears to be no overview and no regard to the economic way to do it.
When they were subsidising solar PV 15 years ago at the start, I refused to join the bandwagon because economically it was inappropriate for the country. But now solar PV installed by the end user is economically viable with no subsidies, so too is onshore wind, next to a factory or end user, farmer, etc.
Kathryn you are absolutely right about centralised renewables – the worst economics of renewable installation possible. Unfortunately we have politicians hell-bent on showing how much they have achieved, but missing the point that it’s the end-users that should be doing the installations when it is economic to do so. Perhaps that would remove the political aspects of the whole debate, and avoid the problem of centralised renewable power generators gaming the system for maximum profit, increasing the costs for the UK population and businesses.
It isn’t just about displacing the cost of gas. If you can get a system with end-user resilience built in at no extra cost, it’s win-win-win for the environment, economically and for resilience…….greater than the sum of its parts!!!!
Location isn’t the only challenge with renewables – the low energy density is also a problem, since it means more wires are needed to connect them. You also have to deal with backup, since the capex and opex costs of renewables do not represent their full costs to consumers – you need to include the cost to make them firm.
Exactly, which is why the renewables should be installed by the end-user where they’re needed and can get an economic benefit, and the centralised generation is then there as the back-up, burning gas, nuclear, or renewables with storage (only when economically feasible). Workers in factories have never moaned about wind turbines, chimneys etc, where their livelihoods depend on it. It’s only with the massive expansion of wind adjacent to residential areas and covering the landscape in turbines where there is the resistance.
Unfortunately, as far as I can see, governments of all shades have been acting as a cartel/sole supplier for their own benefit, where profit from the generators is needed to bolster the tax receipts, building costs into the system because it means they can get more tax out…….off-shore wind on Crown Property……nice little earner, but that income goes on the electricity bills. We are paying more for electricity so that the government can get more income to pay for the borrowing.
As the government makes it more expensive, the economic answer is to install end-user power generation and battery storage to counter the poor economics. We have to game the system for our own benefit, if that is what everyone else is doing………governments and centralised power generators, and yes, the end result is poorer economics for the poorer people………if a government isn’t capable of thinking of their best interests.
Interesting stuff, Kathryn.
It’s a concern of mine that the breakdown of generation at any moment refers to interconnector input to the grid as “Imports”. While this is, of course, perfectly accurate, it unfortunately obscures for the general public that most of the biomass generated relies on imported wood chips and a significant proportion of the gas used for electricity generation is imported by pipeline or as LNG.
It would be interesting to know the full extent of our day to day reliance on imported energy and what this tells us about energy security.
That can equally apply to gas, as most of the gas we burn in our power stations is now imported. However, I am referencing direct electricity imports, not wider energy imports, although your point is a good one.
Hello Kathryn,
Re: your “Assuming 6.5 GW of onshore wind (as set out in NESO’s Winter Outlook), and a total amount of installed offshore wind of 30.163 GW (which was the year-end capacity quoted in DUKES 2023”
I’ve just been referring to DUKES_6.2 “Capacity of, generation from renewable sources and shares of total generation” where by end 2023:
Wind installed capacity (MW) = 30,163 total comprising
15,418 MW Onshore, and
14,745 MW Offshore.
I’m not familiar with that 6.5 GW NESO number, looks low: and it looks like you’re using the total Wind capacity as your Offshore number.
Best regards.
Sorry, I should edit this to make it clearer – I was trying to extract transmission-connected wind rather than the onshore/offshore split, since the contribution to demand is measured at the TS level. Just under half of the onshore wind is embedded and therefore needs to be excluded from the TS data.
Kathryn as you know Putin (and Xi) just love the idea that we rely on undersea inter-connectors and undersea gas pipes for our power. And we only need to look back to WW2 to see why we need our farms to produce food. The last thing we should be doing is destroying the land for the benefit of unreliable diffuse energy technologies. Nuclear power and drilling for our own shale oil & gas are what we urgently need in this increasingly unstable and dangerous new world order.
Good to see you having time to produce another insightful article.
On wind installed capacity, the NESO data set supporting this winter's Outlook report quotes onshore as 13.797 GW and offshore as 16.165 GW. The 6.5 GW is a reference to embedded wind, I believe.
Also, looking at Energy Trends table 5.4 and using installed capacity from ET table 6.1 gives a load factor of 35% for offshore over the first 9 months of 2024. However, I've separately looked at actual generation for some of the more recent sites on Elexon and they are into the mid-40s. So a modest improvement in load factor, i.e. c.5%, may come through as the likes of Dogger Bank are commissioned.
Anyhow, in terms of de-rated capacity, the NESO assumption is that wind, on or offshore, has a de-rated capacity of 14.4% vs 91% for CCGTs!
Interesting that prices get pulled up when countries export, although given that the inadequacies of our transmission system aren't going to come close to being fixed before the end of the decade, I'm not sure that will be a regular occurrence anytime soon.
On the i/cs, IFA1 suffered a fire in late 2021 that knocked out one leg; it was partly repaired in late 2022 and fully restored in 2023, so that presumably lowered its availability. That said, I'm still wary of the political factor with the i/cs becoming a risk.
Yes, I meant embedded wind and have corrected the blog to show this. My analysis was to show the load factors of transmission-connected wind.
The number of ad hoc issues with interconnectors appears to be quite large – there is always a fire, a ship's anchor, a grid issue or some other fault impacting availability, so in the end it's not worth correcting for them. Ad hoc interruptions appear to be part of the deal when relying on interconnectors.
“Ofgem has said that once GB becomes a net exporter of electricity, there will be a consumer dis-benefit and as a result it has rejected almost all of the proposed new interconnector projects with Continental Europe in Window 3 of the Cap and Floor regime. ”
They announced that they were minded to accept only one out of seven but then at IPA stage that’s gone up to three out of seven.
“It’s particularly interesting to look at the values for IFA and IFA2, which are virtually identical despite IFA being decades older than its newer cousin and actually being available a lot less. It’s reasonable to assume that the spare generation parameters for both are the same, since they both connect to the French market, so the difference in de-rating factors should reflect the difference in physical availability – but this does not appear to be the case.”
Yes and no. There is obviously logic in assuming that, over the long term, any physical asset will deteriorate, however slowly. However, as you note, interconnector availability is mostly about ad hoc failures such as anchor damage to cables or the IFA fire. There's also a cyclical factor, as there will be longer planned outages at certain points to do larger replacement works. In theory there would be a long-term, age-related reliability decrease due to component ageing, but when I've modelled this before for clients, the effect ends up completely lost in the noise of over-running planned outages and cable strikes.
I haven't looked at how the Capacity Market de-rating factors were actually calculated, but I can tell right away that the longer I/Cs are slightly more de-rated, which will reflect an availability estimate driven by cable failures, calculated as a risk per unit length.
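To illustrate that length-driven approach, a toy model (with assumed fault rates, repair times and approximate route lengths, none of which come from the actual de-rating methodology) might look like this:

```python
# Toy model: unavailability driven by subsea cable faults assumed proportional to route length
FAULTS_PER_100KM_YEAR = 0.1       # assumed fault rate per 100 km of cable per year
REPAIR_MONTHS = 3                 # assumed average repair time per fault

def expected_unavailability(route_km: float) -> float:
    faults_per_year = FAULTS_PER_100KM_YEAR * route_km / 100
    return faults_per_year * REPAIR_MONTHS / 12   # expected fraction of the year unavailable

for name, km in [("IFA (~70 km)", 70), ("North Sea Link (~720 km)", 720)]:
    print(f"{name}: fault-driven unavailability ≈ {expected_unavailability(km):.1%}")
```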