Early View
Open Access

Evaluation method for electromagnetic interference complexity of high voltage switch based on feature extraction and GCC-GRU network

Wenchao Lu
School of Electrical Engineering, Xi'an University of Technology, Xi'an, China

Jiandong Duan (Corresponding Author)
School of Electrical Engineering, Xi'an University of Technology, Xi'an, China
Email: [email protected]

Lin Cheng
Power Research Institute of State Grid Shaanxi Electric Power Co., Ltd, Xi'an, China

Jiangping Lu
Power Research Institute of State Grid Shaanxi Electric Power Co., Ltd, Xi'an, China

Jiaxin Tao
School of Electrical Engineering, Xi'an University of Technology, Xi'an, China

Jianning Yin
School of Electrical Engineering, Xi'an University of Technology, Xi'an, China
First published: 25 March 2024

Associate Editor: Xiaoxing Zhang

Wenchao Lu and Jiaxin Tao contributed equally to this work.


When new types of sensors are subjected to electromagnetic interference (EMI), traditional standards can no longer guide their EMI protection. It is therefore necessary to evaluate the complexity of the electromagnetic interference suffered by new sensing equipment. A hybrid method for feature extraction and complexity evaluation of EMI signals based on the generalised correntropy criterion (GCC) is proposed. First, a variational mode decomposition algorithm for adaptive extraction of decomposed signals is constructed using GCC, and the generalised S-transform is used for processing and recombination. Then, based on the time–frequency–space model, a new quantitative evaluation criterion, the interference factor, is proposed, which overcomes the shortcomings of traditional qualitative classification and its inability to handle the cross-classification of indicators. Next, the gated recurrent unit (GRU) is introduced, and a GCC-GRU evaluation model with GCC as the loss function is derived. The authors conduct EMI tests and measured-signal evaluation with medium voltage and high voltage switches, and the results show that the proposed method is correct and effective.


With the development of intelligent sensors in the power system, new types of power sensing equipment face increasingly complex electromagnetic interference (EMI), especially during switching operations. Due to the coupling and superposition of thermal and mechanical forces and the electromagnetic field, failures of power sensing devices occur frequently, causing great damage to the safe operation of the power system [1, 2].

Typically, these sensors undergo rigorous anti-interference design before leaving the factory, such as very high withstand voltage at integrated circuit device ports and very high shielding efficiency of the external structure; some manufacturers' designs even exceed the maximum tolerance level specified by the standard [3]. However, these measures have not significantly reduced the equipment damage rate. Taking digital communication systems as an example, especially serial communication systems whose signals can be regarded as time series, they may be strongly affected by such transient electromagnetic environments [4]. Some researchers have pointed out that, compared with a single excitation, repetitive low-amplitude excitation may also generate interference [5], indicating deep-seated causes such as energy saturation or cumulative effects. The threat faced by a system can no longer be measured with a single physical quantity such as voltage or current, because modern electronic systems typically exhibit complex non-linear behaviour [6]. Therefore, scholars have studied extensively how to use complexity to uniformly characterise the severity of the electromagnetic environment. Evaluating the complexity of electromagnetic interference environments has become an urgent issue in current research [7].

The complexity of electromagnetic interference signals has been studied extensively in electronic warfare, where it is divided into objective complexity and subjective complexity. Indicators are generally divided into four levels: mild, general, moderate and severe complexity. The evaluation results depend on the selection of evaluation indicators and evaluation methods, and a large number of scholars have studied both in depth. Krug and Russell [8] proposed a novel real-time time-domain electromagnetic interference measurement system and calculated the evaluation parameters using the Fast Fourier Transform. Azpurua et al. [9] used empirical mode decomposition (EMD) and instantaneous mode decomposition algorithms to decompose measured EMI signals in the time domain and successfully separated interference components with multiple characteristics. Jaekel [10] adopted signal frequency and power as the main indicators of a graded evaluation method for the electromagnetic environment, thus giving a grading standard for electromagnetic signals. Yin [11] and Wang et al. [12] proposed a 'four domain method' based on the time domain, frequency domain, energy domain and space domain, selecting time occupation, spectrum occupation, average power density spectrum and space coverage as complexity evaluation indicators; they used the four domain method to define the complexity level of the electromagnetic environment [13, 14]. Zeddam et al. [15] added electromagnetic signal density and frequency coincidence as complexity indicators on the basis of the 'four domain method'. Building on Ref. [16], Dong et al. added the abnormal signal rate as an indicator to evaluate the impact of radio signals on the electromagnetic environment in specific areas. Yin et al. [17] conducted a complexity evaluation study using time–frequency–space coincidence and root mean square as indicators and used the Fast S-Transform algorithm to extract signal features for an interference test in wireless transmission.

From the perspective of evaluation methods, many approaches have been studied, such as the analytic hierarchy process (AHP) [18], the Dempster-Shafer (D-S) evidence theory method [19], the multiple connection number method [20], the cloud barycentre evaluation method and neural networks [21, 22]. Dawson [23] used AHP to study the complexity of the electromagnetic environment, established a hierarchical structure and gave specific evaluation criteria. Zhou [24] used AHP to assess electromagnetic environment complexity in order to address the problem that monitoring systems only record the electromagnetic signals of an area without evaluating its complexity in real time. However, AHP is based on expert experience and is strongly affected by human subjectivity. Dai et al. [25] used fuzzy mathematics guided by D-S evidence theory to evaluate electromagnetic environment complexity; this compensates for the subjectivity of purely expert-based evaluation, but the fuzzy algorithm relies on prior knowledge and has weak adaptive adjustment ability. Wang et al. [26] introduced the multiple connection number method, which is similar to D-S evidence theory and likewise too subjective. Cheng et al. [27] used a cloud barycentre evaluation method on six indicators of electromagnetic signals to realise quantitative evaluation of the environmental complexity of electronic targets; however, the method requires weights to be set manually. Li et al. [28] analysed and evaluated the complex electromagnetic environment using a Pilot Neural Network (PNN) and achieved good results. However, the PNN structure is relatively complex, the model is not easy to build, and it places high demands on computer hardware.

Although many complex factors affect the EMI signal, scholars have carried out active research from different perspectives. Evaluation indicators have evolved from a single indicator to multiple indicators, and evaluation methods have evolved from traditional calculation and comparison to intelligent methods and mathematical models. However, these methods still have problems. The first is parameter extraction: many methods do not calculate parameters synchronously, and the cross evaluation of parameters prevents consistent qualitative and quantitative evaluation of electromagnetic signals. Secondly, there is a lack of application groundwork for the electromagnetic interference of power equipment: EMI in the power system has evolved into a problem of multi-field, multi-circuit superposition, yet many existing methods can only analyse a single signal. Finally, the evaluation indicators lack a corresponding overall evaluation model, and accuracy is poor when grading the evaluation object.

From the perspective of engineering applications, the more physical quantities involved in the evaluation indicator of electromagnetic environment complexity, the more faithfully it reproduces the actual engineering site and the better it can guide the design of sensors with strong anti-interference performance. This is mainly due to three reasons. (1) The multidimensional complexity indicator comes from the actual application scenarios of the equipment. (2) Sensor states expand from the previous binary 'failed' or 'normal' to complexity levels corresponding to multiple states, providing a representation of critical states that helps engineers deeply understand the development of equipment failures. (3) The evaluation indicators can be adjusted according to the different kinds of interference acting on the sensors, enabling a reasonable evaluation.

Based on the generalised S-transform (GST) algorithm proposed in Ref. [17], combined with the emerging time-domain signal processing method variational mode decomposition (VMD) [29] and the generalised correntropy criterion (GCC) [30, 31] from information theory, this paper proposes a GCC-VMD-GST method to extract the characteristics of electromagnetic interference signals. Its advantage is that it can simultaneously extract the complexity of the target in the time, frequency and energy domains, with strong robustness and adaptability to highly complex signals. To address the traditional four domain method's inability to deal with the intersection of indicators, this paper introduces a new indicator called the Interference Factor (IF), which combines with the traditional four domain method to form a comprehensive electromagnetic interference complexity evaluation indicator, and uses a gated recurrent unit (GRU) based on GCC for training and learning. The proposed model is verified with interference test data from Medium Voltage (MV) and High Voltage (HV) switches.

To summarise, the main contributions of this article are as follows.
  • 1)

    Based on GCC, VMD and GST, a feature extraction method for electromagnetic interference complexity is established, which significantly enhances the extraction accuracy of target signals in the time–frequency domain and improves robustness to highly complex signals and outliers.

  • 2)

    To overcome the traditional evaluation system's inability to explain the intersection of indicators, and considering the interference characteristics of power equipment, we propose a new evaluation indicator called IF, which unifies the strength, complexity and high-frequency characteristics of the signal. Combined with the traditional time–frequency domain and energy domain indicators, it forms a new comprehensive evaluation indicator.

  • 3)

    The GRU electromagnetic interference complexity identification model based on GCC is constructed using the proposed comprehensive evaluation indicator. Taking GCC as the loss function improves the model's ability to handle complex signals. Test data from MV and HV switch interference tests verify the effectiveness and correctness of the method for evaluating the electromagnetic interference complexity of power equipment.

The rest of this paper is organised as follows. The second part introduces the feature extraction method of the electromagnetic interference signal. The third part introduces the construction of the complexity evaluation indicator and evaluation model. The fourth part evaluates and verifies the measured data of MV and HV switches. The fifth part is the conclusion.


2.1 Generalised correntropy criterion (GCC)

Information theoretic learning is an adaptive non-parametric learning framework based on entropy and divergence measures, which transforms data from the input space into an infinite-dimensional reproducing kernel Hilbert space through a non-linear mapping by means of the kernel method. The correntropy criterion has adaptive non-parametric characteristics and can learn more information from large amounts of data, which gives it superior performance in processing non-Gaussian signals. In recent years, some scholars have successfully applied it to similarity measurement, and others have constructed correntropy into a fitness function to solve optimisation problems. When VMD decomposes non-Gaussian EMI signals, correntropy can likewise be used to obtain the optimal combination of K and α. The correntropy criterion $V_{\theta}(\cdot,\cdot)$ between random variables X and Y is given in Equation (1).
$V_{\theta}(X,Y)=E\left[k_{\theta}(X,Y)\right]=\iint k_{\theta}(x,y)\,f_{XY}(x,y)\,\mathrm{d}x\,\mathrm{d}y$ (1)
where $E[\cdot]$ denotes mathematical expectation; $k_{\theta}(\cdot,\cdot)$ is a kernel function with kernel width θ; and $f_{XY}(\cdot,\cdot)$ is the joint probability density function.
In practice, the joint density $f_{XY}$ is usually unknown, and most data collected by sensors are finite discrete samples $\{x_{i},y_{i}\}_{i=1}^{N}$, where $x_{i}$ and $y_{i}$ represent two sets of discrete variables of length N. The correntropy criterion for discrete variables can therefore be estimated from the samples:
$\hat{V}_{\theta}(X,Y)=\frac{1}{N}\sum_{i=1}^{N}k_{\theta}\left(x_{i}-y_{i}\right)$ (2)
When the kernel function $k_{\theta}(\cdot,\cdot)$ is chosen as the generalised Gaussian density function $G_{\xi,\beta}(\cdot,\cdot)$ and used as the learning criterion for adaptive system training, the result is called the GCC [32], which can be expressed as follows:
$\hat{V}_{\xi,\beta}(X,Y)=\frac{1}{N}\sum_{i=1}^{N}G_{\xi,\beta}\left(x_{i}-y_{i}\right)=\frac{1}{N}\sum_{i=1}^{N}\gamma_{\xi,\beta}\,\exp\left(-\lambda\vert x_{i}-y_{i}\vert^{\xi}\right)$ (3)
where $\gamma_{\xi,\beta}$ is the normalisation constant, $\xi$ the shape parameter, $\beta$ the proportional bandwidth and $\lambda$ the kernel parameter.
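As a quick numerical illustration (a sketch, not from the paper), the sample GCC of Equation (3) can be implemented directly. The parameterisation $\gamma_{\xi,\beta}=\xi/(2\beta\,\Gamma(1/\xi))$ and $\lambda=1/\beta^{\xi}$ follows the generalised Gaussian density; treat it as an assumption about the constants.

```python
import numpy as np
from math import gamma


def generalized_correntropy(x, y, shape=2.0, bandwidth=1.0):
    """Sample estimate of the GCC, Eq. (3).

    shape (xi) and bandwidth (beta) set the generalised Gaussian kernel;
    shape=2 recovers the Gaussian-kernel correntropy (MCC)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    lam = 1.0 / bandwidth**shape                           # kernel parameter lambda
    norm = shape / (2.0 * bandwidth * gamma(1.0 / shape))  # gamma_{xi, beta}
    return norm * np.mean(np.exp(-lam * np.abs(x - y) ** shape))
```

Consistent with the text, the estimate is maximal when the two samples coincide (X = Y) and decays as they diverge.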

In Ref. [31], GCC is proposed as a new similarity measurement criterion, which makes it applicable to heavy-tailed non-Gaussian distribution data. From Equation (3), the generalised correntropy criterion can be flexibly adapted to different types of non-Gaussian distributions by adjusting the shape parameter $\xi$ and the kernel parameter $\lambda$. When $\xi=2$, GCC reduces to the maximum correntropy criterion (MCC) with a radial basis kernel. When X = Y, GCC attains its maximum value [33].

2.2 VMD and GST

The EMD method proposed by Norden E. Huang et al. is the most classical method [34]; it adaptively decomposes a signal into K intrinsic mode function (IMF) sequences and a residual sequence. However, the method is sensitive to noise and sampling, prone to frequency spectrum mixing, and lacks a rigorous mathematical basis [35]. Relevant scholars have therefore developed many EMD-like algorithms to address the low robustness of EMD to noisy signals [36].

VMD is a signal processing method with a multi-resolution, non-recursive variational structure combining the Wiener filter, the Hilbert transform and the alternating direction method of multipliers, proposed by Dragomiretskiy et al. in 2014 [29]. Briefly, it decomposes an initial signal into multiple sub-signals, each with its own time–frequency characteristics, whose algebraic sum is still the initial signal. Thanks to its reliable theory and excellent decomposition effect, it has strong anti-aliasing performance. Compared with EMD, the short-time Fourier transform and wavelet methods, VMD has become a widely used signal processing method [37].

The mechanism is that the original EMI signal f is decomposed into sub-modes $u_{k}$ by solving the constrained variational problem in Equation (4).
$\min_{\{u_{k}\},\{\omega_{k}\}}\left\{\sum_{k=1}^{K}\left\Vert\partial_{t}\left[\left(\delta(t)+\frac{\mathrm{j}}{\pi t}\right)\ast u_{k}(t)\right]\mathrm{e}^{-\mathrm{j}\omega_{k}t}\right\Vert_{2}^{2}\right\}$ (4)
$\mathrm{s.t.}\ \sum_{k=1}^{K}u_{k}=f$
where $\{u_{k}\}:=\{u_{1},\ldots,u_{K}\}$ and $\{\omega_{k}\}:=\{\omega_{1},\ldots,\omega_{K}\}$ represent the set of all sub-modes and their corresponding centre frequencies, $\partial_{t}$ is the partial derivative with respect to time, $\delta(t)$ is the unit impulse function, j is the imaginary unit, $\ast$ denotes convolution and f is the input signal.

The alternating direction method of multipliers alternately iterates and updates the sub-mode $u_{k}$, the centre frequency $\omega_{k}$ and the Lagrange multiplier λ, completing the iteration quickly and finding the optimal solution. However, VMD also has an obvious defect: the number of decomposition levels K and the penalty factor α must be preset. Different values of K and α have a huge impact on the decomposition effect, and mode aliasing caused by improper values is very common.

S-transform: Assuming a target signal h(t), the traditional S-transform is defined as Equation (5) [38].
$S(\tau,f)=\int_{-\infty}^{\infty}h(t)\,w(t-\tau)\,\mathrm{e}^{-\mathrm{j}2\pi ft}\,\mathrm{d}t$ (5)
where h(t) is the signal, f is the frequency, $\tau$ is the time shift factor and $w(t-\tau)$ is the Gaussian window function:
$w(t-\tau,f)=\frac{\vert f\vert}{\sqrt{2\pi}}\,\mathrm{e}^{-\frac{f^{2}(\tau-t)^{2}}{2}}$ (6)
Later, Pinnegar et al. [39] introduced an adjustment factor δ into the Gaussian window function, rewriting Equation (6) as Equation (7).
$w(t-\tau,f,\delta)=\frac{\vert\delta f\vert}{\sqrt{2\pi}}\,\mathrm{e}^{-\frac{\delta^{2}f^{2}(\tau-t)^{2}}{2}}$ (7)
Substituting Equation (7) into Equation (5) gives Equation (8), the so-called GST. It balances time and frequency resolution and has superior performance in time–frequency distribution.
$G(\tau,f,\delta)=\int_{-\infty}^{\infty}h(t)\,w(t-\tau,f,\delta)\,\mathrm{e}^{-\mathrm{j}2\pi ft}\,\mathrm{d}t$ (8)

The introduction of δ controls how fast the window width varies with f, making the time–frequency analysis more adaptive and flexible.
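A minimal discrete sketch of the GST follows, using the standard frequency-domain formulation of the Stockwell transform: row n is the inverse FFT of the spectrum shifted by n and weighted by the Fourier transform of the window in Equation (7). The function name and sampling conventions are illustrative, not from the paper.

```python
import numpy as np


def generalized_s_transform(h, delta=1.0):
    """Discrete GST of a real signal h, Eq. (8), computed in the frequency domain.

    Returns a (N//2 + 1, N) complex matrix: rows = frequency bins, cols = time."""
    h = np.asarray(h, dtype=float)
    N = len(h)
    H = np.fft.fft(h)
    k = np.fft.fftfreq(N) * N                  # symmetric integer frequency offsets
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(h)                       # zero-frequency row: signal mean
    for n in range(1, N // 2 + 1):
        # Gaussian localisation window: the Fourier transform of Eq. (7)'s window
        W = np.exp(-2.0 * np.pi**2 * k**2 / (delta**2 * n**2))
        S[n, :] = np.fft.ifft(np.roll(H, -n) * W)
    return S
```

For a pure sinusoid at frequency bin n0, the magnitude of S peaks in row n0; increasing delta narrows the time window (better time localisation) at the cost of frequency resolution, matching the role of δ described above.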

2.3 Extraction method

As discussed in Section 2.2, the research purpose of this paper (improving the time–frequency aggregation of the VMD-decomposed signals) requires a GCC-based constraint. The constraint can be characterised as a function reflecting the frequency correlation among the K sub-modes IMFk obtained from VMD decomposition. The framework of this method is described in detail below.

Applying GCC to the similarity between two variables, the loss function is expressed as follows:
$J=\gamma_{\xi,\lambda}\frac{1}{N}\sum_{i=1}^{N}\mathrm{e}^{-\lambda\Vert x_{i}-y_{i}\Vert^{\xi}}$ (9)
In Equation (9), $x_{i}$ and $y_{i}$ represent two different groups of discrete signals. Let $x_{i}-y_{i}=\Delta d_{i}$ and use $\Delta d_{i}$ to measure the correlation between the dominant frequencies of the K sub-modal signals; Equation (9) can then be rewritten as Equation (10).
$J=\gamma_{\xi,\lambda}\frac{1}{N}\sum_{i=1}^{N}\mathrm{e}^{-\lambda\Vert\Delta d_{i}\Vert^{\xi}}$ (10)

According to the nature of GCC, the smaller the absolute value of $\Delta d_{i}$, the larger max J, and J has an optimal value. It is therefore necessary to find a criterion that maps to $\Delta d_{i}$ and whose absolute value is positively correlated with the frequency correlation between each pair of sub-modes, so that J can be used as the constraint.

In other words, when the dominant-frequency correlation between the sub-modes is weakest, the time–frequency resolution is higher and the corresponding max J is larger; conversely, the heavier the frequency spectrum mixing, the smaller the corresponding max J.

Name the spectral data of each decomposed IMF as IMFf. Supposing each IMFf has length N, the correlation between the IMFf can be mapped to the $\Delta d_{i}$ of GCC in Equation (10).

Construction of the constraint function: When analysing the spectrum of the VMD-decomposed signals, each modal component IMFi (i = 1, 2, …, K) yields a corresponding spectrum curve containing 1–3 spectral peaks (the number depends on the decomposition performance). For data of length N, VMD decomposition yields K sub-modes; each sub-mode IMFi has length N, so its spectrum IMFf also has length N. Therefore, the correlation between each pair of IMFf can likewise be treated as the $\Delta d_{i}$ of the GCC criterion in Equation (10).

The correlation between the sub-modes is calculated as matrix C.
$\boldsymbol{C}=\begin{bmatrix}C_{11}&C_{12}&C_{13}&\cdots&C_{1K}\\C_{21}&C_{22}&C_{23}&\cdots&C_{2K}\\C_{31}&C_{32}&C_{33}&\cdots&C_{3K}\\\vdots&\vdots&\vdots&\ddots&\vdots\\C_{K1}&C_{K2}&C_{K3}&\cdots&C_{KK}\end{bmatrix}$ (11)
where Cij represents the correlation value between IMFfi and IMFfj (i, j = 1, 2, …, K). Since the correlation of each sub-mode with itself is meaningless and the matrix is symmetric, matrix C in Equation (11) can be rewritten as Equation (12).
$\boldsymbol{C}^{\prime}=\begin{bmatrix}0&C_{12}&C_{13}&\cdots&C_{1K}\\0&0&C_{23}&\cdots&C_{2K}\\0&0&0&\ddots&\vdots\\\vdots&\vdots&\vdots&\ddots&C_{K-1,K}\\0&0&0&\cdots&0\end{bmatrix}$ (12)

Matrix C′ is essentially an upper triangular matrix with zero diagonal. Compared with the full matrix of Equation (11), it retains all the meaningful entries while reducing the subsequent computational cost.

For correlation measurement, this paper selects the Spearman correlation coefficient (SCC), which effectively measures the similarity between two groups of data x and y. Its expression ρ is given in Equation (13); see Table 1 for the standard interpretation of its magnitude.
$\rho=1-\frac{6\sum d_{i}^{2}}{N\left(N^{2}-1\right)}$ (13)
where N is the data length and di is the difference between the ranks of the ith values of the two data sets.
$\left\vert\rho\langle\mathbf{IMFf}_{i},\mathbf{IMFf}_{j}\rangle\right\vert=\left\vert\frac{\sum_{n=1}^{N}\left(\mathrm{IMFf}_{in}-\overline{\mathrm{IMFf}_{i}}\right)\left(\mathrm{IMFf}_{jn}-\overline{\mathrm{IMFf}_{j}}\right)}{\sqrt{\sum_{n=1}^{N}\left(\mathrm{IMFf}_{in}-\overline{\mathrm{IMFf}_{i}}\right)^{2}\sum_{n=1}^{N}\left(\mathrm{IMFf}_{jn}-\overline{\mathrm{IMFf}_{j}}\right)^{2}}}\right\vert$ (14)
TABLE 1. Spearman correlation coefficient evaluation criteria.
|ρ| | 0.8–1.0 | 0.6–0.8 | 0.4–0.6 | 0.2–0.4 | 0–0.2
Degree of relevance | Extremely strong correlation | Strong correlation | Moderate correlation | Weak correlation | Irrelevant
Accordingly, Cij in Equation (12) can be recorded as |ρ<IMFfi, IMFfj>| (i ≠ j). Equation (14) gives the calculation formula for the SCC between two IMFf; substituting Equation (14) into Equation (10) yields Equation (15), whose optimal solution is used to constrain the parameters of the VMD decomposition.
$\max J=\gamma_{\xi,\lambda}\frac{1}{\sum_{n=1}^{K-1}n}\sum_{i=1}^{K-1}\sum_{j=i+1}^{K}\mathrm{e}^{-\lambda\left\Vert\vert\rho\langle\mathbf{IMFf}_{i},\mathbf{IMFf}_{j}\rangle\vert\right\Vert^{\xi}}$ (15)
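The pairwise terms of matrix C′ and the loss of Equation (15) can be sketched as follows. This is a simplified reading: the constant γ is dropped (it does not change where the maximum lies), and the SCC is computed from ranks assuming no tied values; function names are illustrative.

```python
import numpy as np


def spearman(x, y):
    """Spearman rank correlation, Eq. (13), assuming no tied values."""
    rx = np.argsort(np.argsort(x))   # ranks of x (argsort of argsort)
    ry = np.argsort(np.argsort(y))   # ranks of y
    d = rx - ry                      # rank differences d_i
    N = len(x)
    return 1.0 - 6.0 * np.sum(d**2) / (N * (N**2 - 1))


def gcc_loss(imf_spectra, lam=1.0, xi=2.0):
    """J of Eq. (15) averaged over the upper triangle of C' (gamma dropped).

    Weakly correlated sub-mode spectra (|SCC| near 0) give a larger J."""
    K = len(imf_spectra)
    terms = []
    for i in range(K - 1):
        for j in range(i + 1, K):
            rho = abs(spearman(imf_spectra[i], imf_spectra[j]))
            terms.append(np.exp(-lam * rho**xi))
    return float(np.mean(terms))
```

As the text states, the less the sub-mode spectra correlate with one another, the larger J becomes, so maximising J over (K, α) discourages mode aliasing.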
Therefore, the main process of the GCC-based VMD method can be simply constructed. The whole process is shown in Figure 1.
  • Step 1: Initialise electromagnetic interference signal data, initialise K = 2, α = 2000

  • Step 2: K ← K + ΔK, α ← α + Δα

  • Step 3: Perform VMD decomposition

  • Step 4: Calculate the loss function value according to GCC.

  • Step 5: The loss function value reaches the optimal value or the iteration ends to get max J.

  • Step 6: Obtain the optimal K and α values, denoted Kbest and αbest.

  • Step 7: Perform VMD decomposition under the specific <Kbest, αbest> combination to obtain the Kbest group of IMFs.

  • Step 8: Perform the GST transformation on the Kbest IMFs one by one, and obtain the complete time–frequency distribution of the signal by superposing the Kbest time–frequency spectra. This distribution is then used to calculate the indicators.
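Steps 1–6 amount to a grid search over (K, α). A minimal sketch is given below; `decompose` is a stand-in for any VMD implementation (e.g. a wrapper around a VMD library that returns the sub-mode spectra) and `loss` for the GCC criterion of Equation (15). Both callables and their names are illustrative assumptions.

```python
import numpy as np


def select_vmd_parameters(signal, decompose, loss,
                          k0=2, k_max=40, alpha0=2000, alpha_max=30000, d_alpha=500):
    """Scan K and alpha (Steps 1-6), score each decomposition with the GCC
    loss and return the combination <K_best, alpha_best> maximising J."""
    best_k, best_alpha, best_j = None, None, -np.inf
    for K in range(k0, k_max + 1):                           # Step 2: K <- K + 1
        for alpha in range(alpha0, alpha_max + 1, d_alpha):  # alpha <- alpha + 500
            spectra = decompose(signal, K, alpha)            # Step 3: VMD decomposition
            J = loss(spectra)                                # Step 4: GCC loss value
            if J > best_j:                                   # Step 5: keep the optimum
                best_k, best_alpha, best_j = K, alpha, J
    return best_k, best_alpha, best_j                        # Step 6: K_best, alpha_best
```

The defaults mirror the text: K starts at 2 with ΔK = 1, α starts at 2000 with Δα = 500, and the upper limits are 40 and 30,000.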

FIGURE 1. Schematic diagram of target jamming signal time–frequency feature extraction.

Among them, due to the VMD rule, K is an integer and ΔK is 1; the value of Δα is chosen by the researcher. In this paper Δα = 500, and the upper iteration limits are generally 40 for K and 30,000 for α [40].


3.1 Objective complexity evaluation indicator

IEC standard 61000-2-5:2017 [41] and some mature military standards make corresponding provisions on the complexity of the electromagnetic environment. Mainstream methods mainly focus on classic time-domain, frequency-domain and energy indicators, which can be expressed by Equations (16)–(18), where TP is the time-domain occupancy, FP the frequency-domain occupancy and EP the energy occupancy.
$T_{\mathrm{P}}=\int_{t_{1}}^{t_{2}}\frac{1}{t_{2}-t_{1}}\int_{f_{1}}^{f_{2}}\frac{\max\left[\vert S(t,f)\vert-S_{0}\right]}{\left(f_{2}-f_{1}\right)\times\max[S(t,f)]}\,\mathrm{d}f\,\mathrm{d}t$ (16)
$F_{\mathrm{P}}=\int_{f_{1}}^{f_{2}}\frac{1}{f_{2}-f_{1}}\int_{t_{1}}^{t_{2}}\frac{\max\left[\vert S(t,f)\vert-S_{0}\right]}{\left(t_{2}-t_{1}\right)\times\max[S(t,f)]}\,\mathrm{d}t\,\mathrm{d}f$ (17)
$E_{\mathrm{P}}=\min\left\{\frac{\vert S(t,f)\vert-S_{0}}{\max\left[\vert S(t,f)\vert-S_{0}\right]},100\%\right\}$ (18)
where S(t, f) represents the current power of the evaluation object in the time–frequency domain.
The objective complexity indicator COC is then defined as follows:
$C_{\mathrm{OC}}=T_{\mathrm{P}}\times F_{\mathrm{P}}\times E_{\mathrm{P}}$ (19)
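A discrete sketch of Equations (16)–(19) for a sampled time–frequency matrix follows. This is an interpretation, not the paper's implementation: occupancy is read as the share of time (resp. frequency) samples in which |S| exceeds the threshold S0, and the pointwise EP of Equation (18) is averaged to a single scalar; all names are illustrative.

```python
import numpy as np


def objective_complexity(S, S0):
    """Discrete reading of Eqs. (16)-(19).

    S: time-frequency matrix (rows = frequency bins, columns = time samples),
    S0: background threshold. Returns (TP, FP, EP, COC)."""
    A = np.abs(S)                            # |S(t, f)|
    above = A > S0
    TP = np.mean(above.any(axis=0))          # time occupancy: share of active instants
    FP = np.mean(above.any(axis=1))          # frequency occupancy: share of active bins
    excess = np.clip(A - S0, 0.0, None)      # (|S| - S0), floored at zero
    EP = min(excess.mean() / (excess.max() + 1e-12), 1.0)  # capped at 100%
    COC = TP * FP * EP                       # Eq. (19)
    return TP, FP, EP, COC
```

A signal confined to a single time–frequency cell thus yields small TP, FP and EP, and a correspondingly small COC.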

3.2 Subjective complexity evaluation indicator

Because the subjective complexity of electromagnetic interference reflects the relationship between the disturbed object and the electromagnetic environment in frequency, time and energy, it actually depicts the response of the disturbed object to a specific signal. To illustrate subjective complexity, we take an Electronic Current Transformer (ECT) based on the Rogowski coil principle as an example. Its main structure is shown in Figure 2.

FIGURE 2. Electronic current transformer lumped parameter equivalent circuit.

In Figure 2, I(t) is the current in the primary-side coil; U0 is the voltage induced in the Rogowski coil; M is the mutual inductance between the primary conductor and the coil; R0 is the total resistance of the coil winding and leads; L0 is the coil inductance; C0 is the equivalent stray capacitance of the coil; Rs is the sampling (load) resistance; and Us is the voltage across the sampling resistor. In this electronic current sensor, R0 = 3.5 Ω, L0 = 5.73 μH, C0 = 200 pF, Rs = 0.5 Ω and M = 0.1326 μH.

Through circuit transformation, the transfer function of the loop can be obtained as Equations (20) and (21).
$H(s)=\frac{U_{0}(s)}{I(s)}=\frac{R_{\mathrm{s}}Ms}{C_{0}L_{0}R_{\mathrm{s}}s^{2}+\left(C_{0}R_{0}R_{\mathrm{s}}+L_{0}\right)s+R_{0}+R_{\mathrm{s}}}$ (20)
$H(s)=\frac{U_{0}(s)}{I(s)}=\frac{6.63\times 10^{-8}s}{5.73\times 10^{-14}s^{2}+5.73\times 10^{-6}s+4}$ (21)

It can be seen from Figure 3 that when considering the stray parameters of the transformer, the frequency of the resonance point is about 29.7 MHz, and the signal near the resonance point is amplified. When the interference signal is at this frequency, the threat to ECT from the interference signal coupled by conduction or radiation is fatal.

FIGURE 3. Electronic current transformer amplitude–frequency and phase–frequency response diagram.

At the same time, the higher the amplitude of the coupled interference signal, the stronger its effect on the disturbed equipment. Assuming that the location of the disturbed equipment and its original protective measures remain unchanged, the amplitude of the disturbance response is positively correlated with that of the electromagnetic interference signal, although the specific relationship must be calculated from electromagnetic theory. Here we refer to the two prevailing views in current research, the square-law and cube-law relationships, for measurement. On this basis, the interference factor (IF) is proposed, rather than an indicator built from amplitude or frequency alone. The expression of IF is given in Equation (22).
$IF=\frac{\max \left[\vert S(t,f)\vert -{S}_{0}\right]}{{\mathrm{e}}^{\vert {f}_{\max [\vert S(t,f)\vert ]}-{f}_{\text{sub}}\vert }\times {R}_{\alpha }}$ (22)
Among them, ${f}_{\max [\vert S(t,f)\vert ]}$ is the frequency point corresponding to the maximum time–frequency distribution intensity of the interference signal, ${f}_{\text{sub}}$ is the characteristic frequency of the subjectively determined sensitive equipment, and ${R}_{\alpha}$ is the Rényi entropy [42], which represents the time–frequency aggregation of the electromagnetic interference signal: the smaller ${R}_{\alpha}$ is, the weaker the signal aggregation and the wider the time–frequency band of the potential effective interference, where ${R}_{\alpha }=\frac{1}{1-\alpha }{\log }_{2}\frac{\int_{{f}_{1}}^{{f}_{2}}\int_{{t}_{1}}^{{t}_{2}}{\vert S(t,f)\vert }^{2\alpha }\,\mathrm{d}t\,\mathrm{d}f}{{\left(\int_{{f}_{1}}^{{f}_{2}}\int_{{t}_{1}}^{{t}_{2}}{\vert S(t,f)\vert }^{2}\,\mathrm{d}t\,\mathrm{d}f\right)}^{\alpha }}$. Therefore, the two evaluation indicators, the square-law and cube-law forms, can be written as follows:
${IF}_{2}={\left[\frac{\max \left[\vert S(t,f)\vert -{S}_{0}\right]}{{\mathrm{e}}^{\vert {f}_{\max [\vert S(t,f)\vert ]}-{f}_{\text{sub}}\vert }\times {R}_{\alpha }}\right]}^{2}$ (23)

${IF}_{3}={\left[\frac{\max \left[\vert S(t,f)\vert -{S}_{0}\right]}{{\mathrm{e}}^{\vert {f}_{\max [\vert S(t,f)\vert ]}-{f}_{\text{sub}}\vert }\times {R}_{\alpha }}\right]}^{3}$ (24)
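Equations (22)–(24) can be transcribed directly once a discrete time–frequency distribution S(t, f) is available. The sketch below is an illustrative implementation under stated assumptions: rows of S index frequency, the frequency axis is expressed in the same units as f_sub, and α = 3 is chosen for the Rényi entropy (the text does not fix α here).

```python
import numpy as np

def renyi_entropy(S, dt, df, alpha=3.0):
    """Renyi entropy R_alpha of a time-frequency distribution S(t, f),
    discretising the double integrals in the definition above."""
    P2 = np.abs(S) ** 2
    num = np.sum(P2 ** alpha) * dt * df          # integral of |S|^(2*alpha)
    den = (np.sum(P2) * dt * df) ** alpha        # (integral of |S|^2)^alpha
    return np.log2(num / den) / (1.0 - alpha)

def interference_factor(S, freqs, f_sub, S0, dt, df, alpha=3.0, power=1):
    """Interference factor of Equation (22); power = 2 or 3 yields the
    square-law (23) and cube-law (24) variants. Rows of S are assumed
    to index frequency, with freqs in the same units as f_sub."""
    mag = np.abs(S)
    f_row, _ = np.unravel_index(np.argmax(mag), mag.shape)
    f_max = freqs[f_row]                         # frequency of peak intensity
    r_alpha = renyi_entropy(S, dt, df, alpha)
    base = (mag.max() - S0) / (np.exp(abs(f_max - f_sub)) * r_alpha)
    return base ** power
```

In practice the exponential penalty `exp(|f_max - f_sub|)` only behaves sensibly if frequencies are normalised (e.g. in MHz), which matches the fsub values of 10 and 15 MHz quoted in Section 5.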

3.3 Multitarget complexity evaluation model

The traditional evaluation criteria for electromagnetic interference signal complexity, based on objective complexity, are shown in Table 2.

TABLE 2. Traditional standard for electromagnetic environment evaluation.

Complexity grade | Level | TP (%) | FP (%) | EP (%)
General | 1 | 0–10 | 0–10 | 0–10
General | 2 | 10–20 | 10–20 | 10–20
General | 3 | 20–30 | 20–30 | 20–30
Light | 4 | 30–40 | 30–40 | 30–40
Light | 5 | 40–50 | 40–50 | 40–50
Medium | 6 | 50–60 | 50–60 | 50–60
Medium | 7 | 60–70 | 60–70 | 60–70
Heavy | 8 | 70–80 | 70–80 | 70–80
Heavy | 9 | 80–90 | 80–90 | 80–90
Heavy | 10 | 90–100 | 90–100 | 90–100

The traditional method for evaluating the objective complexity of electromagnetic interference uses separate indicators to classify the complexity levels of the electromagnetic environment, applying a ten-level model that qualitatively divides time occupancy, frequency occupancy, airspace occupancy, background noise and other indicators into four complexity classes. As shown in Table 2, this traditional method quantifies each evaluation indicator individually but neglects the overlap or intersection of different indicators, so it cannot evaluate the electromagnetic environment objectively and correctly. For example, when an interference signal has TP = 5 and FP = 40, its complexity level cannot be determined. Such signals arise in transient processes caused by switch opening and closing, with a large wave-head amplitude (1.5–2.5 times the per-unit value) and rich frequency content (0–100 MHz), but extremely fast attenuation (usually complete within 1–10 μs).

In addition, despite their reasonable grading, traditional evaluation indicators mainly focus on processing and transforming the signal itself and lack specific physical significance.

Therefore, the EMI signal complexity evaluation indicator constructed in this paper is based on the interference factor IF of Section 3.2: we let CMCS = IF. This approach not only overcomes the inability to evaluate cases where multiple indicators cross, but also carries physical significance that traditional evaluation methods lack, since it connects directly with the characteristics of the interference signal.

In practical applications, sensitive equipment is subjected not to a single electrical quantity but to the combined action of electromagnetic fields and conducted voltages and currents.

However, determining which specific electrical quantity plays the leading role in disturbing the equipment has long been a difficult problem, and it is also a limitation of the prevailing expert-experience method. Moreover, the same disturbed device may be affected by multiple interference sources within one system. The previous practice of focusing only on the total effect of multiple sources, while ignoring the contribution of each part, blurs the definition of the effect, makes quantitative analysis difficult and gives little guidance for subsequent anti-interference and prevention measures.

Therefore, assuming that the object to be evaluated is mainly affected by K kinds of interference signals (or, equivalently, that K interference sources act on the specific object), an objective complexity evaluation model under the superposition of multiple interferences is constructed.
${C}_{\text{MCS}}=\sqrt{\frac{{C}_{\text{MCS}1}^{2}+{C}_{\text{MCS}2}^{2}+\cdots +{C}_{\text{MCS}K}^{2}}{K}}$ (25)
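Equation (25) is a root-mean-square combination of the per-source complexities, which a short helper makes explicit (the function name is ours, for illustration):

```python
import math

def combined_cmcs(per_source):
    """RMS combination of per-source complexities C_MCS,k, Equation (25)."""
    k = len(per_source)
    return math.sqrt(sum(c * c for c in per_source) / k)

# e.g. two sources of complexity 0.3 and 0.4 combine to about 0.354
```

The RMS form means one dominant source largely sets the combined complexity, while several weak sources raise it only modestly.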

3.4 Grading standards

Following the model construction in Section 3.3, the indicators under this evaluation method are proposed as shown in Table 3.

TABLE 3. Proposed evaluation standard in this article.

Complexity grade | Level | CMCS
General | 1 | 0 ≤ CMCS ≤ 0.1
General | 2 | 0.1 < CMCS ≤ 0.2
General | 3 | 0.2 < CMCS ≤ 0.3
Light | 4 | 0.3 < CMCS ≤ 0.4
Light | 5 | 0.4 < CMCS ≤ 0.5
Medium | 6 | 0.5 < CMCS ≤ 0.6
Medium | 7 | 0.6 < CMCS ≤ 0.7
Heavy | 8 | 0.7 < CMCS ≤ 0.8
Heavy | 9 | 0.8 < CMCS ≤ 0.9
Heavy | 10 | 0.9 < CMCS ≤ 1
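The grading rule of Table 3 reduces to a simple lookup, sketched here for clarity (a direct transcription of the table's half-open intervals):

```python
import math

# Grade bands of Table 3 (level -> complexity grade)
GRADE_BY_LEVEL = {1: "General", 2: "General", 3: "General",
                  4: "Light", 5: "Light",
                  6: "Medium", 7: "Medium",
                  8: "Heavy", 9: "Heavy", 10: "Heavy"}

def grade_cmcs(c):
    """Map a C_MCS value in [0, 1] to its (level, grade) per Table 3."""
    if not 0.0 <= c <= 1.0:
        raise ValueError("C_MCS must lie in [0, 1]")
    level = max(1, math.ceil(c * 10))   # level k covers (0.1*(k-1), 0.1*k]
    return level, GRADE_BY_LEVEL[level]
```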

Figure 4 shows the grading of the evaluation model proposed in this paper.

FIGURE 4. Complexity grades proposed in this article.


The interference effect of equipment in complex electromagnetic environments also depends heavily on experimental data, but existing methods have two drawbacks: (1) it is difficult to effectively utilise massive experimental data and to search directly for valuable data; (2) traditional modelling research easily overlooks important variables, and correlation studies generally yield only correlations between effects and the environment, without exploring causal relationships. Therefore, it is necessary to construct an evaluation model based on machine learning.

4.1 GRU network

The GRU is a deep-learning neural network based on the RNN. It performs well on temporal data because it can learn distant temporal information, and the 'gate' parts of its structure enhance the learning capability of the network [43]. Its structure is shown in Figure 5; information in the GRU propagates along the arrows, and the network equations can be derived from this structure.

FIGURE 5. Structure of the gated recurrent unit.

Like other neural networks, the Gated Recurrent Unit can be trained with the back-propagation algorithm, which iteratively updates the network parameters. When the GRU is used for forecasting, the learning criterion of the network is the mean square error (MSE).
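For concreteness, one common form of the GRU update can be sketched as follows. This is a generic convention, not the paper's exact formulation; gate sign conventions vary slightly between references, and Figure 5 may use a variant.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU time step in a common convention."""
    z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_cand              # new hidden state
```

The update gate z interpolates between keeping the old state and adopting the candidate, which is what lets the GRU retain distant temporal information.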

4.2 GCC-GRU network

The learning criterion currently in wide use for neural networks, including the GRU, is the second-order mean square error (MSE), whose superiority relies heavily on the assumption of Gaussianity. For data with non-Gaussian behaviour, the traditional MSE criterion no longer gives the neural network stable learning performance. Therefore, in this section, GCC is adopted as the learning criterion of the GRU network, yielding a GCC-GRU network applicable to non-Gaussian, non-linear data. The learning criterion of the GCC-GRU network can be expressed as Equation (26).
$\max \,J={\gamma }_{\alpha ,\lambda }\frac{1}{N}\sum\limits _{i=1}^{N}{\mathrm{e}}^{-\lambda \Vert {y}_{i}-{x}_{i}{\Vert }^{\alpha }}$ (26)
where N denotes the number of training samples; yi is the predicted value; xi is the actual value; ${\gamma }_{\alpha ,\lambda }$ is the normalisation constant; $\lambda > 0$ is the kernel parameter; and $\alpha > 0$ is the shape parameter. The expression attains its maximum when the predicted and actual values are equal.

Because of the exponential term on the right-hand side of (26), the GCC of the GCC-GRU network approaches zero whenever a large difference between the predicted and actual values is encountered during training. The GCC-GRU network is therefore, in theory, better suited than the traditional GRU network to non-Gaussian data contaminated by noise and outliers.
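A minimal numpy sketch of the criterion in Equation (26), ignoring the constant γ_{α,λ} (which does not affect the optimum); in training, one would typically minimise the negative of this quantity:

```python
import numpy as np

def gcc(y_pred, y_true, lam=1.0, alpha=2.0):
    """Generalised correntropy criterion of Equation (26), up to the
    normalisation constant gamma_{alpha,lambda}; equals 1 when every
    prediction matches its target."""
    err = np.abs(np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float))
    return float(np.mean(np.exp(-lam * err ** alpha)))
```

Because exp(-λ‖e‖^α) saturates towards zero for large errors, outliers contribute almost nothing to the criterion (and hence to its gradient), which is the robustness argument made above.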

4.3 Hybrid evaluation model

Based on Sections 4.1 and 4.2, GCC is introduced into the GRU network as the error function, replacing the traditional MSE criterion. GCC is expected to be advantageous for non-Gaussian signals and outliers, and on this basis an adaptive EMI complexity grading model with high fitness and freely adjustable weights and evaluation indicators is built. The flow chart is shown in Figure 6.

FIGURE 6. Framework of the evaluation model.

Owing to dispersion in the actual system structure and in the equipment manufacturing process, the evaluation results are bound to fluctuate slightly across different test data. Therefore, the subjective complexity COC and the CMCS proposed in this paper can be mixed according to a certain weight. At the same time, when measurement conditions prevent the four-dimensional data of current, voltage, electric field and magnetic field from being acquired simultaneously, the CMCS can be applied flexibly to a lower-dimensional evaluation.


To verify the correctness of the evaluation method proposed in this paper, two test schemes are constructed to vary the frequency and energy of the interference signals, and the proposed evaluation model is used to conduct a full evaluation of electromagnetic interference complexity.

5.1 Experimental test

First, model interference test circuits are built under MV and HV conditions to obtain measured data. The frequency sweep range of the interference signal is 0–100 MHz, the magnetic field amplitude reaches 500 A/m, and the electric field amplitude reaches 15 kV/m.

5.1.1 MV interference test

The MV test circuit was designed as shown in Figure 7.

FIGURE 7. Schematic diagram of the medium voltage test.

As shown in Figure 7, switch Sc controls the generation of power interference and switch St is the switch under test. The power supply voltage is 0–50 kV; C (3300 pF) is the equivalent capacitance on the load side; the load resistance is 1 Ω; there is no inductance; and no lightning arresters are installed on either side of the test switch. On the test-switch side, a Rogowski coil and a high-voltage probe measure I and U, and a B-dot magnetic field probe measures the magnetic field signal (3 dB corner frequency: 106.26 MHz; calculated theoretical antenna sensitivity: 15.79 × 10−10 H·m). The three measured signals are transmitted to the oscilloscope through optical fibre for recording. The power supply voltage during the tests is 2.5, 5 and 7.5 kV; limited by the withstand voltage of the test switch, 20 tests were carried out, with fsub = 10 MHz and S0 = 0.05.

The typical waveform in the MV test is shown in Figure 8: the transient current on the primary side of the sensor.

FIGURE 8. Current waveform on the primary side of the sensor (medium voltage).

5.1.2 High voltage test

The HV test circuit was designed as shown in Figure 9.

As shown in Figure 9, the high-voltage test is carried out on a 330 kV Gas Insulated Switch test platform. The rated power supply voltage is 330 kV/50 Hz and the busbar length is 19.8 m. Voltage and current are measured with electronic sensors, the electric field with a capacitive voltage-divider sensor, and the magnetic field again with small loop antennas; the four measured signals are transmitted to the oscilloscope through optical fibres for recording. The load is an adjustable capacitance ranging from 1000 to 6000 pF in steps of 1000 pF, with two tests per step, for a total of 12 tests; fsub = 15 MHz and S0 = 0.1.

FIGURE 9. Schematic diagram of the high voltage test.

The typical waveform in the HV test is shown in Figure 10: the transient current on the secondary side of the sensor.

FIGURE 10. Current waveform on the primary side of the sensor (high voltage).

5.2 Comparison and analysis

The test data are pre-processed and appropriately expanded to form a training set (90%) and a test set (10%), which are imported into the GCC-GRU model for learning and training. The validation set consists of the measured data. All simulations were performed on a desktop computer with an Intel(R) i5-10400F CPU and 16 GB of RAM.

Firstly, the traditional EMI complexity indicator, the COC indicator and the CMCS indicator proposed in this paper are used to test the two groups of data. The results are shown in Table 4, and the performance comparison of the three methods on the MV data is shown in Figure 11.

TABLE 4. Comparison of recognition accuracy under three different methods.

Accuracy (%) | MV test | MV validation | HV test | HV validation
Trad | 0.95 | 0 | 1.05 | 8.33
COC | 93.9 | 90.5 | 94.8 | 91.5
CMCS (IF2) | 98.9 | 91.4 | 96 | 93.5
CMCS (IF3) | 99.4 | 95 | 95.2 | 92.8
FIGURE 11. Performance comparison of three different methods (medium voltage data).

According to the identification results in Figure 11, the identification accuracy of the traditional indicator is extremely low, because traditional EMI complexity indicators cannot handle crossing indicators. At the same time, whether IF2 or IF3 is used as the complexity evaluation indicator constructed in this paper, the identification accuracy is superior to that of the COC indicator.

To compare the two forms of the proposed IF, the data are tested with IF in the forms of Equations (23) and (24); the results are shown in Table 4. When facing the same identification samples, IF3 achieves higher identification accuracy than IF2, which to a certain extent suggests that the complexity of the electromagnetic interference environment may correlate more strongly with the third power of the interference signal.

The mixed indicator of COC and CMCS is then used for identification, with mixing ratio h ∈ [0, 1]; the results obtained under different weights are shown in Table 5.
${C}_{\text{oM}}=h{C}_{\text{OC}}+(1-h){C}_{\text{MCS}}$ (28)
TABLE 5. Recognition accuracy (%) of HV and MV data under different h.

Data | IF | Set | h=0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
HV | IF2 | Test | 96 | 96 | 96 | 95.9 | 95.9 | 96 | 96 | 96.2 | 97 | 99.1 | 94.8
HV | IF2 | Validation | 91.5 | 95.5 | 94.5 | 93.5 | 93.5 | 93.5 | 93.5 | 93.5 | 93.5 | 93.5 | 93.5
HV | IF3 | Test | 94.8 | 97.9 | 97.9 | 98 | 98 | 98 | 98 | 98 | 98 | 96 | 95.2
HV | IF3 | Validation | 92.8 | 94.5 | 96 | 96.4 | 95 | 94.2 | 94.1 | 94.3 | 94.3 | 93 | 91.5
MV | IF2 | Test | 98.9 | 99 | 98.9 | 99.1 | 98.1 | 97.9 | 95.8 | 95.3 | 93.1 | 93.1 | 93.9
MV | IF2 | Validation | 90.5 | 93.5 | 94.5 | 95.5 | 94.5 | 96 | 95 | 90.8 | 90 | 90.7 | 91.4
MV | IF3 | Test | 99.4 | 99 | 99.4 | 99.3 | 96.6 | 98.8 | 96.6 | 97.8 | 97.2 | 95.9 | 93.9
MV | IF3 | Validation | 90.5 | 96.5 | 96 | 97 | 92.4 | 94.5 | 91.9 | 95 | 95 | 94.6 | 95
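The weighted mixing of Equation (28) and the sweep over h used to generate Table 5 can be sketched as follows (the indicator values here are made up for illustration):

```python
def mixed_indicator(c_oc, c_mcs, h):
    """Mixed complexity C_oM of Equation (28); h in [0, 1] weights the
    subjective component C_OC against C_MCS."""
    if not 0.0 <= h <= 1.0:
        raise ValueError("h must lie in [0, 1]")
    return h * c_oc + (1.0 - h) * c_mcs

# Sweep h in steps of 0.1, as in Table 5 (illustrative indicator values)
for i in range(11):
    h = i / 10.0
    print(f"h={h:.1f}  C_oM={mixed_indicator(0.6, 0.4, h):.3f}")
```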

According to the recognition results in Table 5 and Figure 12: (1) the recognition accuracy curves of the test set and the validation set closely track each other when recognising electromagnetic interference signal samples; (2) the high-accuracy interval of the validation set appears at low COC and high CMCS mixing weights, reflecting the importance and superiority of the CMCS evaluation indicator proposed in this paper; (3) the highest validation-set accuracy appears on the curve with IF3 as the criterion, which can inform further study of the internal correlation of electromagnetic signal complexity.

FIGURE 12. Recognition accuracy curves under different h: (a) high voltage data; (b) medium voltage data.


Given that electromagnetic interference signals in existing power equipment are becoming more and more complex while evaluation indicators are lacking, this paper proposes an electromagnetic interference complexity evaluation method based on GCC combined with VMD-GST and on GCC combined with GRU. The proposed GCC-VMD-GST decomposition method can extract six electromagnetic signal features simultaneously. Based on time–frequency signal theory, the electromagnetic signal complexity evaluation factor IF is created, and with this factor and the GCC-GRU network an expandable multidimensional EMI signal complexity evaluation model CMCS is constructed. Compared with traditional evaluation indicators, this model solves the problem that crossing indicators cannot be evaluated; compared with objective complexity indicators, it offers more precise and reasonable grading, a stronger correlation between features and higher recognition accuracy, providing a new idea for the comprehensive evaluation of EMI signal complexity. Test platforms were built to measure and identify electromagnetic interference signals with different characteristics under MV and HV; the overall evaluation model accurately reflects the complexity of electromagnetic interference, fully verifying the evaluation indicators and the correctness of the evaluation model proposed in this paper.


The authors thank Professor Yin Baiqiang of Hefei University of Technology for his useful discussions on this article. This work is supported by the National Natural Science Foundation of China (Grant Nos. 51877174 and U1866201) and the Doctoral Dissertation Innovation Fund of Xi'an University of Technology.


    The authors declare no potential conflict of interests.


    The data that support the findings of this study are available from the corresponding author upon reasonable request.