Cyber-physical reliability of dynamic line rating ICT failures in OHL networks

The integration of information and communication technologies in power networks enables greater flexibility through Smart Grids and increased network adequacy and reliability. Information and communication technology's functionality and design diversity within the cyber-physical power system should be explicitly defined to quantify the risk of information and communication technology failures. The purpose of this paper is to quantify the cyber-physical reliability risk introduced by the information and communication technology deployed for dynamic line rating implementations, which are usually installed in adverse environments and have a higher failure risk than indoor information and communication technology installations. This cyber-physical reliability study employs a modified sequential Monte Carlo approach with a Markov state space to capture the dynamic line rating-information and communication technology functionality. The method integrates the additional dynamic line rating states within the overhead line operating states, which allows quantifying the risk of dynamic line rating failures against the risk of traditional probabilistic line rating uprating practices. Results from IEEE 24-bus and 14-bus network reliability studies indicate that dynamic line rating-information and communication technology failures (a) increase generation costs in the 24-bus transmission network with high generation flexibility, while (b) they reduce the reliability of the 14-bus network with small generation diversity and multiple load points. Such results can provide insightful recommendations on the quality of dynamic line rating-information and communication technology infrastructure and inform utilities' investment planning processes and maintenance practices.

as well as permanently at normal operation [7,8]. It is this thermal limit increase, afforded by ICT approaches such as dynamic line rating (DLR), on which this paper focuses.
Some studies have attempted to capture these cyber-physical interdependencies and their effect on system reliability [9,10]. Others have demonstrated improvements in network reliability through smart monitoring systems for transformers and circuit breakers in substations [11], the use of current and potential transformers for predicting failures on distribution systems [12], and the integration of Smart Grid technologies, including demand response, storage systems and electric vehicles [13]. The contribution of ICT architectures to wide-area monitoring and control systems has also been investigated [14]. Additional contributions of smart technologies such as DLR systems to network adequacy and flexibility are discussed in [15,16], under different technology implementation scenarios [17], and in relation to measurement accuracy [18]. Experimental evidence of ICT unreliability, examining both the instrumentation and communication components of different DLR systems, is provided by EPRI [19]. However, little attention has been paid to quantifying the effect of ICT for DLR on the power network [20], and consequently to identifying the benefits of improving DLR-ICTs.
This work builds on and extends previous work [21-23], developing a novel reliability modelling practice that considers the benefit of DLR applications and, most importantly, includes the risk of their ICT failures, which can affect the network's cyber-physical reliability. In particular, [21] introduced modelling ICT in power networks, proposing the integration of ICT within DLR instrumentation on both OHL and UGC using a black-box concept; [22] uses a similar approach whilst focusing on the ageing of the OHLs. An evolution of the black-box ICT modelling that captures the failures of information and communication separately is presented in [23], but without capturing the modelling details of the OHL ageing.
The methodological extension, which is comprehensively covered in this paper, effectively measures the impact of information and communication failures on power network operation. Hence, the cyber-physical system reliability, when DLR is implemented within the complete network, can be quantified and also compared against the risk of the simpler probabilistic line rating (PLR) uprating practices, which do not require ICTs.
Section 2 describes the reliability modelling framework that can be implemented on OHL power networks with DLR-ICT components' operating states. Section 3 presents the implementation of the methodology on standard IEEE networks. Section 4 summarizes the main impact of DLR-ICT failures on power network reliability and discusses future work.

MODELLING OF NETWORK RELIABILITY WITH DLR-ICTs
Time-varying line rating is a common ICT application for reducing thermal constraints on network overhead lines (OHLs). It is based on real-time measurements, provided by DLR systems [8,24,25], and thermodynamic calculations to determine the line's current rating at a suitable time interval (minutes, hours, seasons, etc.) specified by the operator. This process is dynamic because the calculations are repeated at every time interval; hence, it is frequently referred to in the literature as DLR or real-time line rating [8]. Time-varying line rating is not a new concept, as it was initially implemented through seasonal ratings without the inclusion of any ICT [26]. A two-level reliability modelling approach is developed here to capture the cyber-physical interdependencies between the failures of the DLR-ICT and the physical OHL network in a Smart Grid paradigm: a. the impact of the failures of the employed DLR-ICT architecture on the OHLs; b. the impact of the failures of the power components on the power network. This approach can assess the risk of DLR-ICT information failures (I-failures) and communication failures (C-failures), in addition to the risk associated with traditional network component failures. Hence, cyber reliability is measured through I-failures, which indicate any kind of measuring instrument malfunction, and C-failures, which indicate the inability of the control centre to receive the measurements.

Computational framework outline
The proposed framework for modelling a power network with OHLs having DLR capability is depicted in Figure 1. The flowchart illustrates the main tasks and key input and output data. It should be noted that the power network and ICT network are interlinked in this methodology via DLR-ICT modelling. Therefore, both power network topology and ICT network architecture need to be defined in order to capture their interdependencies.
The network modelling data comprise three main groups: (a) the power network topology, which includes the network configuration, generation constraints and load profiles; (b) the reliability data (e.g. failure, λ, and repair, μ, rates) of the power network components; and (c) the OHL design details, which include the maximum conductor temperature, total length, installation tension, conductor bundle configuration, conductor technology, and size.
The DLR-ICT modelling data determine the overall ICT system design and operation. They include the DLR-ICT architecture, which details the topology of the communication network's interconnection to the operation control centre (OCC); hence they contain the lengths of the wired communications, the available wireless communication paths, and the specific locations of the required instrumentation installations. They also include the reliability data (λ and μ) of the ICTs, as well as the weather measurements obtained from the DLR devices (i.e. wind speed and direction, solar radiation and ambient temperature). Thus, the DLR-ICT modelling data determine the OHL adequacy for a pre-specified time interval, set by the operator, and are updated in the next time interval.
The System Simulation Design block performs the calculations required for unifying the time-steps of the different input data implemented in the network's sequential modelling. Thereby, input data with different sampling frequencies, mainly related to the weather recordings of DLR systems, are 'synchronized' to the specified network modelling Step-time (Δt). This synchronization is executed by averaging the values recorded during the previous Δt to produce a single value. This value is used for the present Δt, while the recordings within the present Δt are used to produce the new average value for the next Δt. Consequently, the System Simulation Design block determines the Step-time (Δt), Analysis-duration (t_An), and Simulation-duration (t_Sim) parameters used for the system modelling. Δt defines the segment (minutes or hours) of the time domain for which the weather and network loading data are used for the branch components' rating calculation. t_An refers to the duration (month, season or year) for which the network reliability analysis is performed. t_Sim determines the maximum number of repetitions (Nr_max) for the analysis, with each repetition (Nr) lasting for t_An. Hence, t_Sim defines the accuracy level of the reliability assessment.
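The Δt synchronization step described above can be sketched as a small helper; the function below is purely illustrative (names and sample values are ours, not the paper's) and averages all samples recorded within each Δt window into the single value used for the following window:

```python
def synchronize(samples, dt):
    """Average (timestamp, value) samples into one value per Δt window.

    samples: list of (t, value) pairs, t in the same units as dt.
    Returns {window_index: mean value}; the mean of window i is the
    single value used for the following Δt, as described in the text.
    """
    windows = {}
    for t, v in samples:
        windows.setdefault(int(t // dt), []).append(v)
    return {i: sum(vs) / len(vs) for i, vs in windows.items()}

# Hypothetical wind-speed samples every 15 min, synchronized to Δt = 1 h
wind = [(0.00, 4.0), (0.25, 6.0), (0.50, 5.0), (0.75, 5.0),
        (1.10, 8.0), (1.60, 10.0)]
avg = synchronize(wind, 1.0)  # one averaged value per hourly window
```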
The reliability modelling is implemented using the sequential Monte Carlo (SMC) approach to develop the cyber-physical network transition state mapping (from operating to failed) history for both the OHL and DLR-ICT components. This process is based on time-to-fail (TTF) and time-to-repair (TTR) values obtained from random variables of the exponential distribution, described by (1) and (2), with U being a uniformly distributed random number ranging from 0 to 1 [27]:

TTF = −(1/λ) ln(U)   (1)
TTR = −(1/μ) ln(U)   (2)

In (1) and (2), the λ and μ values are obtained from the Reliability Data blocks to capture the TTFs and TTRs of each network and DLR-ICT component transition. This produces the Cyber-Physical Network Transition State Mapping for the reliability analysis in an Nr.
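The transition sampling of (1) and (2) amounts to inverse-transform sampling of the exponential distribution; a minimal sketch (the helper names are ours, not the paper's):

```python
import math
import random

def time_to_event(rate, u=None):
    """Draw a TTF (rate = λ) or TTR (rate = μ) from Exp(rate) by
    inverse-transform sampling: t = -ln(U)/rate, U ~ Uniform(0, 1)."""
    if u is None:
        u = random.random()
    return -math.log(u) / rate

def component_history(lam, mu, t_an, rng=random.random):
    """Alternate fail/repair transitions for one component until the
    analysis horizon t_An, building its state-transition history."""
    t, up, events = 0.0, True, []
    while t < t_an:
        t += time_to_event(lam if up else mu, rng())
        events.append((min(t, t_an), "fail" if up else "repair"))
        up = not up
    return events
```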
The Network State Characterization is then implemented to determine the operating state of every OHL (detailed in Section 2.3) and ICT (detailed in Section 2.4) component. Thus, it determines each OHL's adequacy at every Δt, which is also weather dependent when the DLR-ICT is operational (i.e. in the 'up' state). The simulation is completed when a pre-set covariance value of an output index is reached (e.g. 5% in Figure 1); otherwise, it is limited by Nr_max. Finally, the cyber-physical reliability metrics (detailed in Section 2.5) are calculated as expected values from the distributions produced by the multiple repetitions.

Current rating calculations of OHLs
Three standard methods calculate OHL ratings under different assumptions [28]. The IEEE current-temperature calculations described in [29] are used in this paper and modified as shown by (3). This allows the implementation of the OHL current rating in a semi-steady-state at every Δt. Thus, (3) provides the DLR using the weather at every Δt to compute the convective heat loss q_c(Δt), radiated heat loss q_r(Δt), and solar heat gain q_s(Δt):

I_DLR(Δt) = sqrt( [q_c(Δt) + q_r(Δt) − q_s(Δt)] / R(T_C,pre-Δt) )   (3)

The conductor electrical resistance R(T_C,pre-Δt) is determined by the OHL operating conditions and is calculated using its previous-Δt temperature T_C,pre-Δt. The DLR weather-dependent data, recorded within every Δt, are reduced to a single value for each OHL in the system simulation design block (Figure 1).

FIGURE 2 Decision flowchart for an OHL_j system with DLR_j operation states

However, the conductor temperature is system dependent and hence depends on the cyber-physical network state. Therefore, the conductor temperature calculated at the previous Δt is used in the present Δt to perform the AC optimal power flow (ACOPF) considering the present network state. This approach allows a more accurate calculation of the new OHL temperature, resistance, rating and current flow at the present Δt. Furthermore, it improves the calculation accuracy of power flows and losses, as it considers a temperature-dependent resistance, which common network analyses neglect [30].
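The semi-steady-state rating follows the IEEE Std 738 heat balance; the sketch below assumes the heat-loss/gain terms q_c, q_r, q_s (W/m) have already been evaluated for the present Δt, and uses an illustrative linear resistance interpolation (the reference temperatures and values are assumptions, not the paper's data):

```python
import math

def dlr_ampacity(q_c, q_r, q_s, r_tc):
    """Current rating (A) from the heat balance I²·R = q_c + q_r - q_s.

    q_c, q_r: convective and radiated heat losses (W/m)
    q_s:      solar heat gain (W/m)
    r_tc:     AC resistance at the previous-Δt conductor temperature (Ω/m)
    """
    return math.sqrt((q_c + q_r - q_s) / r_tc)

def resistance(t_c, r_low, r_high, t_low=25.0, t_high=75.0):
    """Linear interpolation of conductor AC resistance between two
    reference temperatures, as in IEEE Std 738."""
    return r_low + (r_high - r_low) * (t_c - t_low) / (t_high - t_low)

# Hypothetical values: 60 W/m convective, 20 W/m radiated losses,
# 16 W/m solar gain, 1e-4 Ω/m resistance -> rating of about 800 A
rating = dlr_ampacity(60.0, 20.0, 16.0, 1e-4)
```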
The static line rating (SLR) is based on a single (time-independent) use of (3) with conservative weather (i.e. a small probability of exceedance), while the PLR is essentially an SLR using less conservative weather conditions [26].
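The SLR/PLR distinction can be illustrated with ampacity quantiles: given a history of ratings computed from (3) over historical weather, a static rating is chosen so that the real ampacity falls below it only with the accepted excursion probability. A sketch under these assumptions (the numbers are illustrative, not the paper's data):

```python
def static_rating(ampacities, excursion):
    """Pick the rating equal to the `excursion` quantile of the
    historical ampacity distribution: the real ampacity is then below
    the chosen rating for roughly that fraction of the time."""
    xs = sorted(ampacities)
    k = int(excursion * len(xs))   # index into the lower tail
    return xs[min(k, len(xs) - 1)]

amps = list(range(500, 1000, 5))   # 100 hypothetical hourly ampacities (A)
slr = static_rating(amps, 0.03)    # conservative SLR-style choice
plr = static_rating(amps, 0.15)    # riskier PLR-style choice
# plr > slr: accepting more excursions buys extra rated capacity
```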

Operation states modelling of DLR-OHL
The power network state analysis in this paper considers three different states for every smart OHL_j to capture its DLR capability. Figure 2 illustrates the flowchart of the three DLR-OHL_j operation states: (a) the OHL_j failure; (b) the ICT_j failure, which describes the OHL_j operating at SLR; and (c) the intact DLR-ICT_j, which describes the OHL_j operating at DLR. Consequently, the DLR-OHL state is determined at every Δt, and when both the OHL and its ICT are fully operational, the OHL's adequacy is produced by (3) and [29]. When only the DLR-ICT_j component of the OHL_j fails, its predetermined SLR_j dictates its adequacy. It should be noted that the SLR is determined by the utility, which may wish to implement less conservative values during DLR failures.
In Figure 2, I-failures and C-failures are separated due to the different causes of their occurrence. In principle, an instrumentation failure cannot be identified while the communication system is failed (i.e. hidden failures): even if the instrument is operational, it is impossible to communicate with it to verify its state, and the communication repair needs to be performed first. This is captured in the flowchart of Figure 2. In addition, this separation is also based on the assumption that instrumentation repairs might require OHL interruption, resulting in different repair actions (and costs) compared to communication failures. It also allows measuring network performance resulting from the DLR instrumentation and communication system unavailability levels recorded by others [8]. The flowchart in Figure 2 considers that in the presence of a C-failure the measurements cannot be transmitted to the control centre and the operation decisions are made using SLR. However, this strategy can be modified based on the DLR architecture and operator practices.
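The decision logic of Figure 2 can be sketched as a small function (state and parameter names are ours, not the paper's); note the hidden-failure rule: under a C-failure the instrument state cannot be verified, so the line falls back to SLR regardless of the instrument's actual state:

```python
def ohl_rating(ohl_up, i_up, c_up, slr, dlr):
    """Return the adequacy of OHL_j for the present Δt, following the
    three DLR-OHL operating states of Figure 2."""
    if not ohl_up:
        return 0.0     # (a) OHL failure: no transfer capacity
    if not c_up:
        return slr     # C-failure: measurements unreachable, so the
                       # instrument state is hidden; operate at SLR
    if not i_up:
        return slr     # (b) I-failure: operate at the predetermined SLR
    return dlr         # (c) intact DLR-ICT: weather-based DLR applies

# Hypothetical ratings: SLR = 600 A, current-Δt DLR = 810 A
assert ohl_rating(True, True, True, 600.0, 810.0) == 810.0
assert ohl_rating(True, False, True, 600.0, 810.0) == 600.0
assert ohl_rating(True, True, False, 600.0, 810.0) == 600.0
assert ohl_rating(False, True, True, 600.0, 810.0) == 0.0
```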

2.4 Cyber system architecture modelling

The cyber system architecture needed for the network-wide DLR realization is shown in Figure 3. It defines the necessary links that capture the interactions between the physical OHL network and the cyber ICT network employed for the DLR implementation. For the purpose of the modelling, this is simplified by assuming a single 'multi-purpose' instrument installed on the OHL_j (i.e. I_OHLj) to obtain weather and conductor data. The measurements are transmitted via a wireless communication system (e.g. Global System for Mobile communications) from the multi-purpose instrument (installed on the conductor or tower) to the nearest data logger at bus k (DL_k), which is physically located inside the substation. The DL transmits the recorded signals to the central server via a communication wire that is assumed, for simplicity, to follow the OHLs' path, as shown in Figure 3 with the blue dashed lines (although different communication cable lengths can be used). The server is next to the OCC, which makes the network operation decisions. A DL could receive data from multiple OHLs and thus link more than one OHL with the OCC (e.g. DL_4), while some substations might not have a DL (e.g. bus 5). The exact cyber network architecture can vary; nevertheless, the approach described here captures the main DLR-ICT network components [8,31,32].

FIGURE 4 Markov state space of the DLR-ICT_j cyber system monitoring the OHL_j

Each DLR-ICT_j system for the network's OHL_j is modelled using the Markov state-space approach of Figure 4. Consequently, all components in Figure 4 have to be functional for an operational DLR-ICT system. The repairs are assumed to be done by different teams (in parallel), while cyber failures are associated with DLR-ICT hardware and software faults, assuming no available recovery system [33]; that is, there is no backup hardware or software system to provide redundancy, and in a failure event a repair is necessary.

Information component state transitions
The information component I_OHLj is modelled in such a way that it requires all instruments to be intact for the DLR on the OHL_j to be operational. This assumption simplifies the modelling and reduces the simulation time, which makes the model applicable to different DLR systems and real (large) networks. Thus, the information component of the DLR-ICT_j system (for OHL_j) has transition-to-failure rates of λ_I,c,OHLj and λ_I,tw,OHLj for instruments installed on the conductor and tower, respectively, calculated by (4) and (5). N_Inst,c,OHLj and N_Inst,tw,OHLj indicate the total numbers of instruments installed on the conductor and tower, respectively.
The mean time between failures (MTBF) of the sensors (from the manufacturers) can be used to calculate their λ_I,c,i,j and λ_I,tw,i,j, but these values might not consider failures caused by the adverse conditions that OHLs often experience (e.g. lightning strikes, switching impulses, temporary overvoltages). A single repair transition rate, μ_I,c,DLRj, is used for all OHL instruments connected on the conductor, while another repair rate, μ_I,tw,DLRj, is used for all tower instruments. This modelling approach assumes the complete replacement of the conductor instrumentation, to minimize line interruption time, and the simpler replacement of sensors installed on towers. This is shown in Figure 4 in the information component block, with two distinct repair rates calculated from the mean-time-to-repair (MTTR) values based on the utility's maintenance method.
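Since all instruments must be intact for the information component to be operational, it behaves as a series system whose failure rate is the sum of the member rates, with each member rate λ_i = 1/MTBF_i. A sketch of this aggregation (the MTBF figures are purely illustrative, not the paper's data):

```python
def series_failure_rate(mtbfs_hours):
    """λ of a series system: the sum of member rates, λ_i = 1 / MTBF_i.
    Any one member failing fails the whole component."""
    return sum(1.0 / m for m in mtbfs_hours)

# Hypothetical DLR instruments on one OHL (values are assumptions)
conductor_mtbfs = [40_000.0, 40_000.0]  # e.g. two clamp-on sensors
tower_mtbfs = [60_000.0]                # e.g. one weather station
lam_i_c = series_failure_rate(conductor_mtbfs)   # λ_I,c,OHLj (per hour)
lam_i_tw = series_failure_rate(tower_mtbfs)      # λ_I,tw,OHLj (per hour)
```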

The communication infrastructure is modelled using three distinct groups: the communication cable, the server system, and the wireless communication from the substation to the OHL, shown in Figure 4 within the communication infrastructure block.
Communication cable failures are related to cable cuts (CC) and depend on the cable length [31]. Server (Sv) failures are related to the availability of the central DLR calculation platform; the model assumes a delay before the server restarts, due to the lack of dedicated monitoring personnel, and a redundant power supply that does not, however, protect against facility-wide outages [8]. Wireless communication (WC) failures refer to failures of the local router, the DL acquisition system, and the antenna required for each DLR-ICT_j, as shown in Figure 3 [8].
Therefore, the failure rate of the communication infrastructure, λ_C,OHLj, that connects the DLR instrumentation system of the jth OHL to the OCC is calculated by (6):

λ_C,OHLj = λ_CC,k + λ_Sv + λ_WC,k   (6)

From (6), it is evident that three main causes dictate these failures: a. the failure of the communication cable (λ_CC,k) that connects the OCC server to the kth DL (i.e. DL_k), which locally collects the data from the DLR instrumentation on the jth OHL; b. the failure of the server (λ_Sv) that performs the DLR calculations for the complete network; c. the failure of the wireless communication system (λ_WC,k) that connects the instruments on the jth OHL with the kth DL.
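The series composition of the communication failure rate can be sketched as below; the per-km cable term mirrors the length dependence of cable cuts noted in [31], and the numeric rates are illustrative assumptions, not the paper's data:

```python
def lam_comm(length_km, lam_cc_per_km, lam_sv, lam_wc):
    """λ_C,OHLj for the jth OHL using DL_k: cable, server and wireless
    failure rates added in series, with the cable term scaling with
    the communication cable length."""
    return length_km * lam_cc_per_km + lam_sv + lam_wc

# Hypothetical rates (per hour); all values are assumptions
lam_c = lam_comm(length_km=50.0,
                 lam_cc_per_km=1e-6,   # assumed cable cuts per km-hour
                 lam_sv=2e-5,          # assumed server failure rate
                 lam_wc=5e-5)          # assumed wireless-link failure rate
```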
Three different repair rates, μ_CC, μ_Sv, and μ_WC, are used for the cable, server and wireless communication, respectively, as shown in Figure 4. These capture the different nature of the repair work and expertise required. Hence, the communication infrastructure model is represented by three independent events that may occur depending on the outcomes of (1) and (2).

Cyber-physical reliability performance and ageing metrics
The standard network performance indices are calculated based on the recorded network performance at every Δt. These are the expected duration of load curtailment (EDLC), the expected frequency of load curtailment (EFLC), the expected frequency of line failures (EFLF), the expected energy not supplied (EENS), and the total cost of generation (TCOG), with their formulations adopted from [34]. Each OHL's performance is recorded using the expected equivalent line ageing (EELA) and the expected annual line losses (EALL), using (7) and (8), while the expected equivalent network ageing (EENA) and the expected annual network losses (EANL) are the sums of the EELA and EALL values, respectively, over all network lines [35]. The OHL conductor ageing at conductor temperature T_C is calculated using the method in [36].
The DLR-ICT_j performance on the OHL_j is recorded using the expected frequency of information failures (EFIF_j) and communication failures (EFCF_j), as well as the expected duration of information failures (EDIF_j) and communication failures (EDCF_j). The values produced for each OHL are summed to form a single value (of each metric) for the complete network.
The number of failures (NoF) for the information and communication components in (9) and (10) depends on the DLR-ICT topology that links each OHL_j to the OCC. In (11), D_Ij,Nr is the duration of the instrumentation failure calculated by μ_Inst,c or μ_Inst,tw, while in (12) the durations D_CCk,Nr, D_WCk,Nr, and D_Svk,Nr are determined by μ_CC, μ_WC, and μ_Sv, respectively, for the jth OHL that utilizes the kth DL. In (12), the maximum repair time of the three main components (i.e. cable repair, wireless communication system repair, and server repair) is used to determine the EDCF in a single Δt when multiple failures co-exist, thus assuming that repairs can be performed simultaneously (by different teams). Nr_max is the total number of repetitions performed for the same network analysis.
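The treatment of co-existing communication failures, where the longest of the parallel repairs determines the outage duration contributing to EDCF within a Δt, can be sketched as (names ours):

```python
def comm_outage_duration(d_cc=0.0, d_wc=0.0, d_sv=0.0):
    """Duration contributing to EDCF in one Δt: repairs run in parallel
    (different teams), so the slowest repair dominates."""
    return max(d_cc, d_wc, d_sv)

# Hypothetical co-existing failures: a cable cut (8 h repair) overlaps
# a server outage (2 h repair); the 8 h cable repair sets the duration
outage = comm_outage_duration(d_cc=8.0, d_sv=2.0)
```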

EVALUATION OF CYBER FAILURES ON POWER NETWORKS
To quantify the impact of ICT failures of DLR systems on the network's cyber-physical reliability, the IEEE RTS-96 24-bus reliability test system (IEEE-24) and the IEEE 14-bus test system (IEEE-14) are used [37,38]. These networks are selected due to their different topologies, with the IEEE-14 simulating a meshed distribution-like network with limited generation flexibility and the IEEE-24 a transmission-like network with larger generation diversity.

Power network components topology
All IEEE-24 and IEEE-14 network lines are assumed to be OHLs with their conductor properties shown in Table 1. These are typical UK network conductors obtained from [29,39]. To allow harnessing the extra transmission capacity provided by the DLR systems, both the generation and demand are increased evenly to 1.8 p.u. and 1.

Cyber components topology
The cyber network topology is based on the DLR-ICT architecture described in Figure 3. The ICT system installed on each OHL is simplified to one I-component and three C-components. This is a realistic assumption, as the cyber failures can practically be identified under four conditions: 1. a failure of the instrumentation, when the system is unable to record values at the local DL while the DL is functional; 2. a failure of the DL system at a (substation) bus, when the central server is not able to read the DL's status while the substation's router is functional and visible to the server (this condition includes failures of the wireless infrastructure connection required to receive the data from the instrumentation, i.e. the DL's wireless interface with the OHL instrumentation, router and antenna, and to transmit the data to the server); 3. a failure of the communication wire, when the server cannot establish a connection with the specific DL; 4. a server failure, when the operator cannot see any operational DLs in the whole network.
The DLR-ICT topology for both the IEEE-24 and IEEE-14 networks is described in Table 2, with OHL lengths from [37,38], while λ_CC is calculated from (13) and μ_CC is given in Table 3. The DLR data acquisition and communication components are modelled with the same failure and repair rates for all OHLs. Their availabilities are obtained from [8,31] and, with the MTTRs assumed due to the lack of data, the corresponding MTBFs are derived, as shown in Table 3.

OHL rating and network operating scenarios
The SLR and PLR values are based on the weather data from [40] and the calculations described in [26]. PLR allows for a risk of exceeding an OHL's maximum operating temperature when considering historical weather data and an excursion probability [41]. Figure 6 (top) illustrates the current rating curves for different excursion probabilities of the three conductors described in Table 1. These current rating curves are produced assuming a 75 °C thermal rating. The SLRs (in Table 1) are calculated based on a 3% excursion risk, as is common practice for SLR [39]. When PLR is used for calculating OHL adequacy, a 15% excursion is implemented, indicating a higher-risk practice. This excursion risk is highlighted (for Araucaria) in Figure 6 (bottom) by the area formed between the frequency distribution curves and the 75 °C line. Table 4 summarizes the scenarios explored to quantify the effect of DLR-ICT unavailability on power network reliability. This is quantified in two aspects: the network performance and the risk of OHL ageing, which is associated with current SLR and PLR practices. Specifically, Sc-1 and Sc-2 represent the low- and high-risk current rating practices (Figure 6 top), with correspondingly low and high risks of conductor ageing [35]. Sc-3 to Sc-6 model the cyber-physical reliability of the networks with increased adequacy due to ICT-based DLR implementation, spanning the different risk levels associated with ICT failures, from the ideal Sc-3 to the realistic Sc-6 that considers all possible cyber failures (Table 4).

3.4 Impact of DLR systems on power network reliability

All scenarios employ the described modelling approach (Figure 1), with a 5% covariance on EENS and 1500 yearly repetitions as stopping criteria.
The comparison of Sc-1, the base case, against Sc-2 and Sc-3 highlights the benefits of increased network adequacy (through current uprating) from SLR to PLR and DLR practices on network reliability, while at the same time quantifying the ageing of the DLR deployment (Sc-3) against the selected higher-risk PLR (Sc-2) uprating [26,35]. This comparison, however, does not quantify the risk of DLR deployment associated with ICT failures on network reliability. This cyber-physical reliability risk is captured by comparing Sc-3 with Sc-4 to Sc-6, which consider different ICT unavailability options of the DLR systems.

3.4.1 Physical network reliability assessment

Table 5 summarizes the outputs produced by the two most popular OHL uprating methods, PLR and DLR, in addition to the base case SLR [8,42,43], for the IEEE-24 and IEEE-14 networks. The comparison of PLR and DLR against SLR indicates a significant improvement either in network reliability (i.e. EDLC, EFLC, EENS) or in reducing the total cost of generation (TCOG). The increase in reliability is prominent on the IEEE-14 (distribution-like) network, which has limited generation flexibility, while the TCOG improvement is dominant on the IEEE-24 (transmission-like) network with its larger generation disparity.
There is no doubt that PLR is a good and simple solution for improving the overall network performance, and most utilities are implementing this uprating option [42,44]. However, the increased conductor operating temperatures under PLR result in an increased expected network ageing (EENA) risk. This EENA increase is evident in Table 5, with the IEEE-24 being severely affected (18 h), while the effect on the IEEE-14 remains insignificant (0.02 h).
DLR has reduced the ageing of the network to zero, while all other network metrics remained similar to the PLR scenario. Consequently, DLR has negated the OHL overloading risk introduced by PLR while providing equivalent network performance, owing to the ICTs' 'real-time' control of conductor temperatures. The PLR overloading risk may result in additional failures and perhaps rare but severe cascading events [45].
The high network ageing (EENA), under PLR scenario, indicates the need for assessing individual OHLs in the network. The most loaded OHLs are shown in Table 6, along with their losses (EALL) and ageing (EELA) values. EALL is used as an additional metric to indicate a critically loaded line when the EELA is zero and it can help the operator to identify the available adequacy in the network. From Table 6 the OHL loading asymmetry in the IEEE-24 network is indicated by the high EELA on L28 and L23, which is due to the generation diversity, load distribution, and network topology. This asymmetry is not observed in the IEEE-14 network.
The unequal ageing among the OHLs in the studied networks (Table 6), even when all OHLs have equal design risk (i.e. 15% excursion), indicates that PLR does not always introduce a high risk. Hence, deploying DLR selectively on L28 of the IEEE-24 and L14 of the IEEE-14 can be an effective method to reduce ageing. The operating temperature profiles of these two OHLs for the SLR, PLR and DLR scenarios are shown in Figure 7. The shaded area captures the ageing, as manifested by the frequency of exceedance events (i.e. operation above 75 °C) for L28 and L14 when PLR is used. These exceedance events produced the EELA in Table 6 (Sc-2). The PLR resulted in a maximum recorded temperature of 118 °C (one event) for L28 of the IEEE-24 network and 79 °C for L14 of the IEEE-14 network. As expected, the conservative SLR and DLR are designed with zero excursion probability, and thus no events exceeding a 75 °C conductor temperature are recorded (Figure 7) [46].
Implementing DLR to minimize this overloading risk introduced by PLR could be justified. However, the ICT failure risk on the DLR-OHL system has to be quantified to thoroughly assess the risk involved with these two popular uprating practices. This risk is quantified in the next section.

Cyber-physical reliability of network with DLR systems
The studies of the cyber-physical network performance consider the DLR-ICT architecture in Table 2 and the ICT components and their availabilities in Table 3. The data produced from the analysis are shown in Table 7, with instrument failures (Sc-4), communication interface failures (Sc-5), and combined ICT failures (Sc-6). The physical network reliability performance is described by the results of Sc-3 (also shown in Table 5). It is evident from Table 7 that no ageing occurs during ICT failures, which constrain the OHL operation to the predetermined and conservative SLR state (Figure 2).
ICT failures (Sc-3 vs. Sc-6) have a significant impact on network reliability (EENS) and operation costs (TCOG), due to the network adequacy constraints they introduce. The IEEE-24 and IEEE-14 results in Table 7 indicate that network topology, available generation and load disparity have a significant impact on the cyber-physical reliability of the system. However, a closer observation of Sc-4, Sc-5, and Sc-6 in Table 7 indicates that I-failures (Sc-4) are less critical (in increasing EENS) than C-failures (Sc-5).
The EENS results of Sc-2 and Sc-6 indicate that, when the poor ICT availability reported in [8] is used, the cyber-physical reliability of DLR (Sc-6 in Table 7) is worse than that of the simpler PLR uprating approach (Sc-2 in Table 5). However, the PLR introduces a conductor degradation risk, which is small in most cases, while the DLR introduces a customer interruption risk (due to ICT failures) that cannot be neglected. As a result, installing DLR on line 28 of the IEEE-24 while all other lines operate with PLR can reduce the ageing risk by 16.5 h/y. The DLR-ICT architecture has an insignificant effect on network performance, which is evident from the small (±5%) variation among the individual DLR-ICTs deployed for each OHL in Table 8. This is in contrast with the individual OHL performance in Table 6, which shows high variation between the different OHLs and hence the significant effect of the power network topology on network performance. The almost identical effect of ICT failures on all the network lines could be due to the high ICT unavailability recorded in [8] and their 'secondary' role with regard to network power delivery. The EENS effect of the ICT failures under the different scenarios and networks varies from approximately 15 to 30 MWh/line, with the higher EENS recorded for the 14-bus network.

To investigate the effect of DLR-ICT unavailability on cyber-physical reliability, a sensitivity study is performed with DLR-ICT reliability up to a thousand times better than the values provided by EPRI [8]. The results of this study, in Figure 8, show that increasing the ICT availability a hundredfold (from the values recorded in [8]) results in a significant improvement of the 14-bus network performance; however, increasing the ICT availability further, above 99.9%, has a small effect on both networks. Consequently, DLR-ICT reliability is not as critical as the reliability of the 'copper-based' power components, which usually has to be in the range of five nines (i.e. 99.999%).
It should also be noted that C-failures are heavily affected by the communication infrastructure, whose impact could be mitigated by a meshed, multi-path communication architecture.

SUMMARY AND CONCLUSIONS
A cyber-physical methodological advancement for network reliability evaluation is developed, allowing the risk involved in DLR to be quantified against the traditional PLR and SLR rating approaches used for OHL networks. The proposed method is used to quantify the risk (EENS and TCOG) of DLR-ICT failures on the power system and to provide insightful recommendations on the quality of DLR-ICT infrastructure and on maintenance practices for OHLs, so as to increase network transmission adequacy and flexibility. An additional application of this method is to help identify the ICT components of OHLs that have the greatest effect on the network, informing the utility's investment planning processes as well as its maintenance priorities for instrumentation and communication devices. Finally, it can be used to inform utilities when selecting between DLR and PLR practices, based on prioritizing either the reduction of TCOG with an increase in EENA, or vice versa.
The cyber-physical reliability studies on the IEEE 24-bus and IEEE 14-bus networks indicated that: a. DLR-ICT failures can increase generation costs (TCOG) in networks with high generation flexibility. b. DLR-ICT failures reduce the reliability of networks with small generation diversity and multiple load points. c. Increasing the availability (investing in higher quality components) of the DLR-ICT systems beyond the low values reported in [8] improves the distribution network performance. However, there is no need for more than three nines (i.e. ≈99.9%) availability.
Further work is needed to advance the modelling approach to assess issues related to non-repairable failures (e.g. measurement error) and cybersecurity, such as unauthorized access to DLR systems, and to investigate approaches to optimize the DLR-ICT architecture. This will enable a more holistic assessment of the cyber-physical system reliability associated with DLR practices.