SAR image synthesis based on conditional generative adversarial networks

Abstract: In recent years, synthetic aperture radar (SAR) has played an increasingly important role in the military and civil fields. Since a SAR image reflects the scattering characteristics of the target, it is of great significance to achieve multi-angle fusion of the target. However, real SAR image datasets suffer from missing observation angles. Through electromagnetic simulation, SAR images covering 0–360° can be obtained, but their similarity to real images is low. Here, the authors combine electromagnetic simulation with conditional generative adversarial networks (cGANs). The image obtained by electromagnetic simulation is taken as the input of the cGANs, and the generator then produces photorealistic SAR images containing the label information. Thereby, the authors' method complements the missing angles in the real SAR image dataset. Finally, they qualitatively and quantitatively evaluate the synthetic images generated by their model to verify the quality of the dataset.


Introduction
Synthetic aperture radar (SAR) enables full-time, all-weather, high-resolution imaging. In recent years, with its continued development, SAR has played an increasingly important role in the military and civil fields. Different from an optical image, a SAR image reflects the scattering characteristics of the target, and the imaging results of the same target at different angles differ greatly [1]. Therefore, it is of great significance to achieve multi-angle fusion of the target. In traditional processing, multi-angle fusion can overcome the influence of unfavourable factors such as occlusion, top-bottom inversion, and shadow in a single-angle SAR observation, and achieve better target reconstruction [2]. In the field of deep learning, multi-angle image sequences included in the training set provide more information and improve detection accuracy.
However, the real SAR image acquisition process is limited by the radar observation angle, so images at some target angles are missing. The cost of acquiring SAR images through electromagnetic simulation is low, and SAR images at all angles can be obtained by simulation. However, considering that the target model contains errors and that the computation of the target's RCS involves approximations, there is a certain deviation between SAR images obtained by this method and real SAR images.
In recent years, in the field of deep learning, Goodfellow proposed the generative adversarial network (GAN) [3], a deep generative model that produces data similar to the original data by fitting the data distribution. A GAN is composed of a generator and a discriminator. Through adversarial training of the two, the images produced by the final generator can successfully deceive the discriminator. However, the original GAN training process is unstable. To increase stability, deep convolutional GANs (DCGANs) [4] were proposed, introducing convolutional networks and batch normalisation.
GANs have relatively few applications in SAR image generation. In [5], more realistic SAR images are generated based on DCGANs. The generated images are of higher quality and have a higher degree of similarity to real SAR images. However, since this method is unsupervised, the resulting images lack label information.
To address the lack of label information in data generated by unsupervised GANs, semi-supervised and supervised GAN approaches have been studied in recent years. In particular, the image-to-image translation network proposed by Isola et al. [6] in November 2016 can generate high-quality images by combining a U-Net generator [7] with a PatchGAN discriminator [8]. Based on this, [9] extends the conditional GANs (cGANs) [10] application from RGB images to multi-spectral images and achieves denoising of radar images.
To address the problems of unsupervised learning in SAR image generation, we propose to integrate electromagnetic simulation with cGANs and supplement the missing angle information in real SAR image data through supervised learning. The SAR image obtained by electromagnetic simulation contains part of the information of the real SAR image (including the angle information and partial scattering characteristics). We therefore use it as the input to the cGANs to improve the quality of the generated images, and train the cGANs to learn the mapping relationship between electromagnetically simulated images and real SAR images.

Methodology
In order to generate SAR image data that are labelled and closer to real images, we combine electromagnetic simulation with a GAN and divide the SAR image generation process into two steps (the flow chart of the entire process is shown in Fig. 1). Stage 1: the real target is modelled and the radar cross-section (RCS) of the target is solved using an electromagnetic simulation method; inversion imaging is then performed to obtain the electromagnetically simulated SAR image. Stage 2: the SAR image obtained by electromagnetic simulation in the first step is taken as the input and the real SAR image as the output to form a training set; the input-output mapping relationship is then obtained by training the cGANs.

Stage 1: acquire electromagnetically simulated SAR images
The purpose of the first step is to obtain an electromagnetically simulated SAR image to serve as the input to the second step. First, a simulation model is established based on the real target and the simulation parameters are designed. Then, the target's RCS is calculated using electromagnetic simulation software. The method we use to calculate the target's RCS is the shooting and bouncing ray method, which is extremely efficient for computing the RCS of complex targets. We can then image the target from the calculated RCS. Suppose that the radar is fixed at a certain position and the observation target rotates around a centre. The x–y coordinate system is fixed at the target rotation centre and the u–v coordinate system is fixed at the centre of the observation radar. The imaging geometry is shown in Fig. 2. When the radar observes the target at observation angle $\theta$, the distance from any point $(x, y)$ on the target surface to the radar can be expressed as

$$R(x, y; \theta) = \sqrt{(R_0 + x\cos\theta + y\sin\theta)^2 + (y\cos\theta - x\sin\theta)^2} \quad (1)$$

where $R_0$ is the distance from the radar to the rotation centre. In far-field conditions, (1) can be approximated as

$$R(x, y; \theta) \simeq R_0 + x\cos\theta + y\sin\theta \quad (2)$$

Assuming that the transmitted signal is a chirp signal, the signal reflected from the target surface back to the radar receiver yields, after mixing (and dropping the constant phase term), the baseband signal

$$S(k, \theta) = \iint f(x, y)\, \mathrm{e}^{-\mathrm{j}2k(x\cos\theta + y\sin\theta)}\, \mathrm{d}x\, \mathrm{d}y = F(2k\cos\theta,\, 2k\sin\theta) \quad (3)$$

Here $f(u\cos\theta - v\sin\theta,\, u\sin\theta + v\cos\theta)$ is the target scattering coefficient $f(x, y)$ expressed in the rotated u–v coordinates, and integrating it over $v$ gives the projection of $f(x, y)$ at angle $\theta$. Correspondingly, according to the projection-slice theorem, $S(k, \theta)$ is a slice of $F(k_x, k_y)$ in the frequency domain. Therefore, we can obtain $F(k_x, k_y)$ in the $k$–$\theta$ domain from the calculated RCS and then recover the target scattering function.
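The projection-slice relation described above suggests a simple imaging recipe: treat each angular RCS sweep as a radial slice of the 2-D spectrum, resample the polar samples onto a Cartesian k_x–k_y grid, and apply a 2-D inverse FFT. The following is a minimal Python sketch of that polar-format step; the function name, gridding choices, and linear interpolation are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import griddata

def polar_format_image(S, k, theta, grid_size=64):
    """Reconstruct a turntable-SAR image from RCS samples S(k, theta).

    S     : 2-D array, rows indexed by wavenumber k, columns by angle theta
    k     : 1-D array of wavenumbers
    theta : 1-D array of observation angles (rad)

    By the projection-slice theorem each column of S is a radial slice of
    the 2-D spectrum F(kx, ky); we resample the polar samples onto a
    Cartesian grid and take a 2-D inverse FFT.
    """
    kk, tt = np.meshgrid(k, theta, indexing="ij")
    kx, ky = kk * np.cos(tt), kk * np.sin(tt)       # polar spectral support
    lim = k.max()
    gx, gy = np.meshgrid(np.linspace(-lim, lim, grid_size),
                         np.linspace(-lim, lim, grid_size))
    # Interpolate the polar samples onto the Cartesian grid (zeros outside)
    F = griddata((kx.ravel(), ky.ravel()), S.ravel(),
                 (gx, gy), method="linear", fill_value=0.0)
    # 2-D inverse FFT gives the complex image of the scattering function
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(F)))
```

As a sanity check, a unit-amplitude response at all (k, θ), corresponding to a point scatterer at the rotation centre, should focus to a peak at the image centre.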

Stage 2: synthesise realistic SAR images based on cGANs
The purpose of the second step is to obtain the mapping relationship between the electromagnetically simulated SAR image and the real SAR image. Our training model is based on the image-to-image model; the network is essentially a cGAN. Similar to the objective function of the traditional GAN, the objective function of cGANs can be expressed as

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))] \quad (4)$$

At the same time, in order to make the generated image more closely approximate the real image, an L1 cost term is added to the objective function:

$$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\|y - G(x, z)\|_1\big] \quad (5)$$

The final objective function is

$$G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{cGAN}(G, D) + \lambda \mathcal{L}_{L1}(G) \quad (6)$$

In the above equation, $\lambda$ is the weight of the generator's L1 cost function; the influence of the L1 term on our model is controlled by modifying this weight. The structure of the generator is the U-Net model. Synthetic SAR images are generated by encoding and decoding the electromagnetically simulated SAR images. The authenticity of real and synthetic SAR images is determined by the discriminator, whose structure is PatchGAN. By alternately training the generator and the discriminator, the synthetic SAR images finally become photorealistic. Considering that our training dataset is grayscale and the image region of interest is the target area, the model structure in [6] is modified in terms of network depth and number of channels. The detailed architecture of our network is shown in Fig. 3.
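The two cost terms described above (the adversarial term and the λ-weighted L1 term) can be sketched numerically. This is an illustrative NumPy sketch, not the training code used in the paper; the function names and the default λ = 100 (the pix2pix default) are assumptions:

```python
import numpy as np

def cgan_generator_loss(d_fake, fake_img, real_img, lam=100.0):
    """Generator-side objective: non-saturating adversarial term plus
    lambda-weighted L1 reconstruction term.

    d_fake   : discriminator outputs in (0, 1) for generated samples
    fake_img : generator output G(x, z)
    real_img : ground-truth image y
    """
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))        # reward fooling the discriminator
    l1 = np.mean(np.abs(real_img - fake_img))   # stay close to the real target
    return adv + lam * l1

def cgan_discriminator_loss(d_real, d_fake):
    """Discriminator objective: classify real pairs as 1, fake pairs as 0."""
    eps = 1e-12
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
```

When the generator exactly reproduces the target, the L1 term vanishes and only the adversarial term remains, which is why λ trades off sharpness (adversarial) against fidelity (L1).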
Specific parameters of our network are shown in Table 1. The convolution and batch-normalisation layers are represented by C and B, respectively. The Leaky ReLU layers in the encoder and discriminator are represented by LR, and the ReLU layers used in the decoder of the generator are represented by R. The letter T in the decoder indicates the tanh activation function, and the letter S in the discriminator indicates the sigmoid activation function. The numbers in parentheses are the number and stride of the filters in the convolutional layer, respectively.

In this article, we selected the data of the T72 tank for our experiments. The real SAR images come from the public MSTAR dataset, and we select the data acquired at an incident angle of 45°. Owing to missing azimuth angles, the real SAR data comprise a total of 300 images at different angles. In the electromagnetic simulation process, we use the CST (Computer Simulation Technology) software to first calculate the target RCS. The simulation model is a 1:1 model of a real T72 tank, and the simulation parameters are consistent with MSTAR. SAR images with azimuth angles of 0–359° were obtained by electromagnetic simulation. The real SAR images at the various angles are paired with the electromagnetically simulated SAR images and then divided into training and testing sets. The training set is used to train our model to obtain the mapping relationship between the electromagnetically simulated SAR images and the real SAR images of the T72 tank. The data in the testing set are used to verify the accuracy of the model. At the same time, the real dataset can be completed by passing the electromagnetically simulated images at the missing angles through our trained network.
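The pairing and splitting bookkeeping described above can be sketched as follows. The integer-degree angle lists, function name, and split ratio here are hypothetical illustrations (the paper does not specify which 300 azimuths are present or the exact split):

```python
import random

def make_pairs(real_angles, sim_angles, test_ratio=0.2, seed=0):
    """Pair real and simulated SAR images by azimuth angle (integer degrees).

    Angles present in both sets become supervised (input, target) pairs,
    split into training and testing subsets; simulated angles with no real
    counterpart are kept aside so the trained generator can later fill
    those gaps in the real dataset.
    """
    shared = sorted(set(real_angles) & set(sim_angles))
    missing = sorted(set(sim_angles) - set(real_angles))
    rng = random.Random(seed)          # deterministic shuffle for the split
    rng.shuffle(shared)
    n_test = int(len(shared) * test_ratio)
    return {"train": sorted(shared[n_test:]),
            "test": sorted(shared[:n_test]),
            "to_complete": missing}

# e.g. 300 available real azimuths vs. a full 0-359 deg simulated sweep
splits = make_pairs(real_angles=range(0, 300), sim_angles=range(0, 360))
```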

Qualitative assessment
After training our network for 2000 epochs, we tested it on the test set. The electromagnetically simulated SAR images are taken as the input, and the results generated by the generator are shown in Fig. 4. As can be seen from the figure, the images generated by our model have a high degree of similarity to the real SAR images at the corresponding angles.
The electromagnetically simulated images at the remaining angles are used to compensate for the images at the missing angles in the real dataset, and the generated results are shown in Fig. 5. Based on visual comparison with the real images, the completed SAR images are photorealistic.

Quantitative assessment
3.2.1 Inception score: For the quantitative evaluation of datasets, we used the Inception Score, a method for quantifying generated datasets proposed by Salimans et al. in 2016 [11]. To measure both the quality and the diversity of a generated dataset, the metric is defined as

$$IS = \exp\Big( \mathbb{E}_{x} \big[ \mathrm{KL}\big( p(y \mid x) \,\|\, p(y) \big) \big] \Big) \quad (7)$$

where $y$ denotes the label of the target, $x$ denotes the SAR image and KL is the KL divergence. The datasets to be tested are passed through the Inception model, and the results obtained are shown in Table 2. Among them, the electromagnetically simulated SAR image dataset has the highest score. The reason is that, compared with real images, the simulated images contain no noise interference, so the Inception model's predictions concentrate more sharply on a single class. Comparing the real dataset with the synthetic dataset, the real dataset scores higher, but the two scores are close.
This result reflects, to some extent, that the synthetic data obtained through our model approximate the real dataset. However, the fact that the electromagnetically simulated dataset scores highest also shows that this evaluation standard has limitations.
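Given the class posteriors p(y|x) produced by the Inception model, the score defined above reduces to a few lines of NumPy. This sketch (function name our own) assumes the posteriors have already been computed, one row per image:

```python
import numpy as np

def inception_score(p_yx):
    """Inception Score from class posteriors p(y|x) (rows: images).

    IS = exp( E_x[ KL( p(y|x) || p(y) ) ] ), where p(y) is the marginal
    label distribution over the dataset. Higher scores indicate
    predictions that are both confident (sharp p(y|x)) and diverse
    (broad p(y)).
    """
    p_yx = np.asarray(p_yx, dtype=float)
    p_y = p_yx.mean(axis=0, keepdims=True)   # marginal label distribution
    kl = np.sum(p_yx * (np.log(p_yx + 1e-12) - np.log(p_y + 1e-12)), axis=1)
    return float(np.exp(kl.mean()))
```

For example, two images confidently assigned to two different classes give the maximum score of 2 for a 2-class model, while two images assigned to the same class give the minimum score of 1, illustrating why noise-free simulated images (very sharp posteriors) can score highest.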

3.2.2 Image similarity:
The Inception Score is a quantitative evaluation method from the deep-learning domain. To also evaluate the generated images in terms of image-domain similarity, we compute the normalised correlation coefficient

$$\rho = \frac{\sum_{m}\sum_{n} \big(s_1(m,n) - \bar{s}_1\big)\big(s_2(m,n) - \bar{s}_2\big)}{\sqrt{\Big(\sum_{m}\sum_{n} \big(s_1(m,n) - \bar{s}_1\big)^2\Big)\Big(\sum_{m}\sum_{n} \big(s_2(m,n) - \bar{s}_2\big)^2\Big)}} \quad (8)$$

where $s_1(m, n)$ and $s_2(m, n)$ are the two images being compared. We calculate the correlation coefficient between the electromagnetically simulated images and the real images, and also between the synthetic images generated by our model and the real images. Meanwhile, considering that our region of interest is primarily the target area, the assessment region was restricted to the target area to reduce the influence of the background on the results. The results are shown in Table 3. It can be seen that the similarity of the dataset generated by our method is higher than that of the dataset obtained directly through electromagnetic simulation, with the correlation coefficient increased by 20%.
In addition to the gain in correlation coefficient, the diversity of the dataset generated by our method is higher than that of a DCGAN-generated dataset.
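The correlation coefficient used above amounts to a Pearson correlation over the pixels of the selected region. A minimal NumPy sketch (function name assumed):

```python
import numpy as np

def corr2(s1, s2):
    """Normalised 2-D correlation coefficient between two images:
    Pearson correlation computed over the pixels of the chosen region."""
    s1 = np.asarray(s1, dtype=float) - np.mean(s1)   # remove mean intensity
    s2 = np.asarray(s2, dtype=float) - np.mean(s2)
    return float(np.sum(s1 * s2) /
                 np.sqrt(np.sum(s1 ** 2) * np.sum(s2 ** 2)))
```

Because the mean is removed and the result is normalised, the coefficient is invariant to overall brightness and contrast, so it measures structural agreement in the target region rather than absolute intensity.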

Conclusion
In this paper, we have studied SAR image generation based on cGANs.
Combining the advantages of electromagnetic simulation and GANs, we obtain the mapping relationship between electromagnetically simulated SAR images and real SAR images by training our model, and thereby complete the real SAR image data. The model uses the SAR image generated by electromagnetic simulation as the input of the cGANs, which greatly improves the training speed. At the same time, since the electromagnetically simulated images and the real SAR images exist in pairs at all angles in the training set, the training process implicitly includes the angle information, which guarantees that the generated images are labelled. Although the similarity between the dataset we generated and the real dataset is high, we still need to further improve the quality of the generated images. Moreover, our evaluation used only a small number of target categories.
In the future, we will experiment with multiple types of targets, study the generation of higher-quality SAR images, and investigate applications of the synthetic images, such as improving the accuracy of image classification.

Acknowledgments
This work is supported by the Shanghai space science and Technology Innovation Fund Project (No.SAST2017040) and the