Design and analysis of an optical camera communication system for underwater applications

The growing interest in underwater exploration has stimulated considerable effort towards advancing the enabling technologies of underwater wireless communication (UWC) systems. This study presents the design and implementation of a short-range unidirectional UWC system based on optical camera communication (OCC) principles. OCC systems use light emitting diodes (LEDs) at the transmitter and a camera as the receiver. Overall system design considerations, synchronisation and frame selection, and a novel spatial position-based image processing algorithm to decode data from frames are discussed. The optimisation techniques used in the image processing algorithm are also elaborated. The importance of a few key features of an OCC system, namely camera quality, transmission distance and the threshold value used for image binarisation, is demonstrated through numerical results from simulations using synthetically generated images. Further, the bit error rate (BER) performance of the system is evaluated using an experimental proof of concept setup in clear water, in muddy water and under turbulence conditions. An overall BER performance in the order of 10^−3 within a communication distance of 1 m is observed, showing that the proposed system design and algorithms can successfully realise a UWC link for short-range applications.


Background and motivation
Underwater wireless communication (UWC) systems are attracting growing interest among the research community due to their wide applicability in underwater observation and subsea monitoring systems [1,2]. In the future, UWC technologies are predicted to be at the forefront of realising applications such as monitoring biological and ecological changes in the ocean, pollution monitoring in environmental systems, new resource discovery, and controlling and maintaining oil production facilities and harbours using unmanned underwater vehicles, submarines, ships, buoys and divers.
Most present UWC technologies are based on acoustic waves [3,4]. Even though these longitudinal waves travel well through water, there are practical trade-offs between source strength, frequency and propagation distance, which lead to high transmission losses and time-varying multi-path propagation. In addition, the speed of acoustic waves in the ocean is ∼1500 m/s; thus, long-range acoustic UWC systems are bottlenecked by high latency, which poses a problem for real-time response, synchronisation and multiple-access protocols.
Visible Light Communication (VLC) can be considered a possible alternative for realising a UWC system. VLC is a data communication variant which uses the portion of the electromagnetic spectrum that the human eye is capable of seeing [5]. At the transmitter, the light intensity of the LEDs is modulated depending on the data to be transmitted. At the receiver, traditional VLC systems use photo detectors, and the received power is evaluated via photon counting to decode the received data [1,2,5-8]. Considerable research effort has been concentrated on realising terrestrial VLC systems, and even the upcoming 5G technology has acknowledged its importance [9].
In any environment, the power of propagating light pulses attenuates with distance due to absorption and scattering, in accordance with the Beer-Lambert law [6]. However, due to the presence of solids and dissolved materials in water, these absorption and scattering artefacts are considerably higher in an aquatic medium than in a terrestrial one [1]. High absorption leads to lower detected power at the receiver in traditional underwater VLC systems, while high scattering leads to high adjacent channel interference in a multiple-LED transmitter setup. Thus, despite its promise, extending terrestrial VLC work directly to UWC applications and implementing an underwater VLC system with photo detectors as the receiver in the traditional manner is highly challenging. VLC for UWC therefore remains an open problem for researchers to explore.
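The exponential attenuation described by the Beer-Lambert law can be sketched numerically as follows. This is a minimal illustration; the extinction coefficient values used below are assumed for demonstration and are not measurements from this work.

```python
import math

def received_intensity(i0, c, d):
    """Beer-Lambert law: intensity remaining after propagating a
    distance d (m) through a medium with extinction coefficient c (1/m)."""
    return i0 * math.exp(-c * d)

# Illustrative (assumed) extinction coefficients: clear ocean water is
# often quoted near c ~ 0.15 1/m, while turbid harbour water can
# exceed c ~ 2 1/m.
clear = received_intensity(1.0, 0.15, 1.0)   # most power survives over 1 m
turbid = received_intensity(1.0, 2.0, 1.0)   # only a small fraction survives
print(clear, turbid)
```

The large gap between the two results is why the muddy-water experiments later in the paper show degraded BER at the same distances.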
Optical camera communication (OCC) is a branch of VLC which also operates in the visible light spectrum [5,10]. OCC is a pragmatic variant of VLC which uses a camera or image sensor as the receiver instead of photo detectors [5,11-13]. Data to be transmitted are encoded onto LEDs kept in the field of view (FOV) of the camera. The camera lens gathers light from the LEDs and focuses it onto the optical sensor to illuminate pixels in the captured image (or frame of the captured video). Those images are then processed to decode the original data.
When a camera is used as the receiver in VLC (i.e. OCC), it is comparatively easy to identify the presence or absence of a light source by analysing the images, without any complicated hardware. It is also possible to logically separate different light sources using image processing techniques by exploiting the spatial dimension, leading the way to multiple input-multiple output (MIMO) systems. However, in spite of the advantages a camera receiver possesses in VLC, traditional VLC has so far excelled in terms of research and terrestrial applications, mainly due to two disadvantages of OCC: (i) the transmission speed of an OCC system is inherently limited by the camera frame rate, which limits the achievable data rate, and (ii) OCC is mostly suitable for medium and short-range applications [14]. However, in recent years, imaging technologies and high-speed cameras have been advancing steadily, helping OCC gain ground in the VLC domain as a competitor to traditional VLC.
OCC systems seem more relevant for UWC purposes than for terrestrial communication. This is because most traditional wireless communication technologies present practical implementation difficulties underwater. Further, even though OCC is mostly suitable only for medium and short-range applications, in most UWC applications it is only necessary to transmit data from a submerged diver or launcher to a ship or boat, which falls within short to medium-range transmission. These factors have led OCC to be considered a promising candidate for realising a UWC system. Motivated by this, in this work we design and implement an OCC system for UWC purposes.

Related work
A summary of the existing state-of-the-art OCC literature can be found in Fig. 1. Most current OCC works have targeted terrestrial applications [12,15-21,39]. Surveys on OCC highlighting its challenges and applications are presented in [10,14]. Different aspects of smartphone camera aided OCC systems are analysed in [18,21,22]. In [12] the authors propose OCC principles to realise vehicle-to-vehicle (V2V) communication. An indoor environment monitoring system aided by OCC is proposed in [16]. Moreover, OCC for smart cities is proposed in [28], and an intelligent transport system using OCC is proposed in [29].
In [6,17,30-32], synchronisation issues relating to OCC are discussed, and in [30] specifically, an undersampling-based OCC mechanism is presented. An asynchronous OCC scheme for infrastructure-to-vehicle communication is demonstrated in [17]. In recent studies, the authors of [36] have proposed VLC/OCC hybrid optical wireless systems for indoor applications and demonstrated the feasibility of their proposed architecture through experimental results. Also, non-line-of-sight OCC in a heterogeneous reflective background is studied in [38]. It has to be noted that the data rates under discussion in almost all the aforementioned OCC literature are considerably low, as briefed in Fig. 1. Further, different interference cancellation techniques to improve the performance of VLC systems are provided in [40,41].
Much effort has been focused on increasing the data rate and communication distance of OCC by maximally utilising the spatial, frequency, intensity or colour dimension [11,22-27]. In [24-26], MIMO OCC systems and their applications are presented. In [11], a novel spatial modulation technique which can be applied even in a MIMO OCC system is presented. Further, principles and implementation techniques of a 2D-OFDM system for OCC are presented in [37]. Although a great amount of research effort has been devoted to terrestrial OCC, the applicability of OCC for underwater applications has rarely been considered in the literature. Even though the authors of [1,2,22,24,33] briefly discuss VLC for UWC, no considerably influential work has been done as of now on OCC for underwater applications, except for the recent studies in [24]. Further, none of these studies presents a practically implementable, complete OCC system for UWC applications. Identifying the above limitations in the existing state-of-the-art literature, in this work we study an OCC system for UWC purposes.

Our contributions
This paper presents the design and analysis of a short-range unidirectional underwater OCC system. We concentrate on the study of a transmission link in which data are transmitted from a fixed submerged body to a floating receiver through the line of sight (LOS) [24]. Initially, the system design constraints of underwater OCC are presented. A geometrical arrangement of the LEDs in the transmitter which can facilitate image processing at the receiver is proposed, and the desired receiver camera specifications are then discussed. Thereafter, a synchronisation scheme suitable for OCC, based on oversampling for frame selection, is discussed.
In an OCC system, a collection of frames is received as the output of the camera receiver. By processing those frames individually, the data corresponding to each frame are decoded. Hence, the performance of an OCC system mainly depends on the receiver (camera) performance and the applied image processing algorithm. We therefore present in detail a novel spatial position-based image processing algorithm for decoding frames, which can be used in the proposed MIMO OCC system. The importance of a few features of an OCC system, namely camera quality, transmission distance and the threshold value used for image binarisation, is demonstrated through numerical results from simulations using synthetically generated images. Moreover, techniques which can be used to optimise the decoding algorithm for better bit error rate (BER) performance are also presented.
In the latter part, we present the proof of concept test bed implemented to validate and assess the BER performance of the proposed OCC system. Finally, the performance of the proposed algorithm is evaluated in the prepared controlled experimental setup in clear water, in muddy water and when the floating receiver is subject to controlled oscillation due to surface turbulence, for varying transmission distances, and the results are presented.
The main contributions of this work can be summarised as follows:
(i) We present the overall framework of a MIMO OCC-based underwater wireless link, and its design and implementation techniques are discussed in detail. To the authors' best knowledge, this is the first paper which addresses the applicability of OCC for UWC purposes under adverse conditions such as turbulence and muddy water. A transmitter LED geometrical arrangement which can facilitate image processing at the receiver is proposed, and the desired receiver camera specifications are then discussed. An undersampling-based frame selection and synchronisation scheme which can be used effectively in an OCC system is also presented.
(ii) The image processing algorithm applied for decoding frames greatly impacts the performance of an OCC system. Thus, we propose a novel spatial position-based image processing algorithm for decoding frames, which can be used in the proposed MIMO OCC system. In addition, the optimisation techniques and modifications carried out during key steps of the presented image processing algorithm are elaborated.
(iii) Simulations were carried out to demonstrate the importance of a few key features of an OCC system, namely camera quality, transmission distance and the threshold value used for image binarisation. For this purpose, possible captured images at the camera receiver were synthetically generated, and these images were sent through the spatial position-based image processing algorithm to evaluate the BER performance.
(iv) Finally, we present the results obtained from the proof of concept hardware test bed implemented to validate and assess the BER performance of the proposed OCC system. Using the experimental data, the error performance was evaluated in clear water, in muddy water and when the floating receiver is subject to controlled oscillation due to surface turbulence, at different transmission distances.

Organisation
The remaining part of this paper is organised as follows. Section 2 gives an overview of the OCC-based UWC system. The proposed synchronisation and frame selection scheme is presented in Section 3. Image processing techniques used for decoding, and the used performance enhancement techniques of the decoding algorithms are discussed in Section 4. The simulations which were carried out to highlight the significance of a few features in an OCC system are presented in Section 5. Thereafter, the experimental proof of concept setup is presented in Section 6. The results obtained from this experimental setup and a discussion is presented in Section 7. Finally, the conclusion is drawn in Section 8.

Overview
The architecture of a unidirectional underwater OCC system is shown in Fig. 2. The transmitter is made of multiple LEDs and the receiver is a camera. At the transmitter, the data bits to be transmitted are converted into symbols of size N. Each bit in the symbol is then mapped onto a specific LED in the LED array to form a specific LED pattern (i.e. the first bit of the symbol to the first LED, the second bit to the second LED, and so on). The ON and OFF operations of all N LEDs are made to occur at the same time. When the transmitter and the receiver are aligned and the receiver camera is focused towards the transmitter LEDs, the light beams reach the camera through the water medium. At the camera receiver, a video stream is captured. If the symbol rate is made less than or equal to the frame rate of the camera, every transmitted LED pattern will be available in at least one frame of the captured video stream. By selecting those frames and analysing them one at a time, the data bits corresponding to each symbol can be decoded. By doing this continuously on consecutive frames, the bit stream is reconstructed.
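The bit-to-LED mapping described above can be sketched as follows. This is a minimal illustration under the paper's scheme (bit k of a symbol drives data LED k); the function names and the `LEDk` labels are hypothetical.

```python
def bits_to_symbols(bits, n):
    """Split the bit stream into symbols of size n, zero-padding the tail."""
    pad = (-len(bits)) % n
    bits = bits + [0] * pad
    return [bits[i:i + n] for i in range(0, len(bits), n)]

def symbol_to_led_pattern(symbol):
    """Map bit k of the symbol to data LED k (True = ON, False = OFF)."""
    return {f"LED{k + 1}": bool(b) for k, b in enumerate(symbol)}

# 9 bits, symbols of size N = 4 -> three symbols (last one zero-padded)
symbols = bits_to_symbols([1, 0, 1, 1, 0, 1, 0, 0, 1], n=4)
pattern = symbol_to_led_pattern(symbols[0])   # LED1 and LED3, LED4 ON
```

At the receiver, decoding inverts this mapping: each decoded LED state contributes one bit of the recovered symbol.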
In our study, we specifically investigate an underwater OCC scenario in which data are transmitted from a fixed submerged body to a floating receiver through LOS [24], with the transmitter and the receiver aligned (i.e. the transmitter is within the FOV of the camera). In most UWC applications, it is necessary to transmit data from a submerged diver or launcher to a ship or boat, as in Fig. 3. Thus the scenario we study is of high practical relevance.

Transmitter
The achievable data rate of OCC systems is constrained by the frame rate of the camera. Hence, to enhance the data rate, the spatial dimension has to be exploited by having multiple LEDs in the setup. The geometric arrangement of the transmitter LEDs is very important for a MIMO OCC system, as adjacent channel interference and alignment problems can be mitigated to a reasonable extent with a proper LED arrangement.
We propose the use of a multiple-LED transmitter in which the LEDs are arranged as shown in Fig. 4. The four corner LEDs, {L1, L2, L3, L4}, which are always kept ON throughout data transmission, are used for reference purposes, while the other N LEDs are used for data transmission (i.e. the total number of transmitter LEDs is N_T = N + 4).
By incorporating different colour LEDs in the transmitter design, the LED colour information could also be utilised to increase the data rate. However, if different colour LEDs are used, out-of-focus blur will be introduced in the captured frames due to chromatic distortion. Chromatic distortion is an effect resulting from dispersion, in which a lens fails to focus all colours to the same convergence point. Hence, we propose the use of LEDs of one specific colour only.
When light propagates from LEDs over medium to long distances, the impact of adjacent channel interference can increase due to the spread of light pulses, which is modelled by the Lambertian emitter model [42]. Hence, LEDs with low spreading angles have to be used in the transmitter design. The camera exposure can occur at any time within the symbol duration. Thus, to ensure the transmitted light intensity is the same throughout the symbol period, and as negative voltages cannot be represented using LEDs, the incoming bit stream is modulated onto the LEDs using on-off keying (OOK).

Receiver
The performance of an OCC system mainly depends on the receiver (camera) performance and the applied image processing algorithm. The data rate of the OCC system is also inherently limited by the frame rate of the camera. Further, for efficient data decoding, the exposure time of the camera should be significantly smaller than the transmitter symbol period. Thus, a camera with high resolution, a high frame rate and a small, fixed exposure time is desired as the receiver.
A typical camera is made up of an imaging lens, an image sensor and readout circuits. The light coming towards the imaging lens is projected onto the image sensor. The multiple photo detectors which make up the image sensor detect the incident photon radiation. The signal detected at an individual photo detector can be viewed as the signal corresponding to an individual pixel in an image. Each activated pixel generates a voltage proportional to the number of photons that impinge on it, forming an image.
Smart device cameras and off-the-shelf professional cameras typically operate at 30-60 frames per second (fps), which ultimately limits the achievable data rate of OCC systems. However, high-speed cameras are becoming popular and are applied in scientific research, military testing and industry [29,31]. High-speed cameras capture more information at high frequency than conventional video cameras. However, specific image processing techniques might be necessary when these cameras are used.
In an OCC system, a collection of frames is received as output at the receiver. Hence, by selecting the proper frames and then analysing them, the data are decoded. Going forward, Section 3 investigates synchronisation and frame selection, and Section 4 presents how the captured frames are processed to decode the transmitted data.

Synchronisation and frame selection
Synchronisation and frame selection are essential tasks in an OCC system. Improper selection of the frames which correspond to a transmitted symbol can lead to degraded bit error performance. In order not to miss any transmitted data, the receiver must capture at least one unique frame corresponding to every transmitted symbol. As the frame rate of the camera cannot be changed due to camera hardware limitations, the symbol rate has to be adapted with respect to the camera frame rate to capture unique frames corresponding to every transmitted symbol.
Consider the following three possible cases (T_S is the symbol duration, T_f is the camera sampling interval, t_sw is the switching time of symbols and t_e is the exposure time of the camera).
i. Case 1: T_S < T_f (Fig. 5a): Under this condition, the camera will fail to capture every transmitted symbol in its frames, resulting in some symbol data being missed entirely. Therefore, to capture at least one unique frame corresponding to every transmitted symbol, T_S has to satisfy T_S ≥ T_f.
ii. Case 2: T_S ≥ T_f (Figs. 5b and c): Under this condition, there will always be one unique frame corresponding to every transmitted symbol. However, the captured frames can be erroneous or error-free depending on the camera sampling process. If the camera's image sensor is exposed to light during the switching state of symbols (Fig. 5c), erroneous frame capture will occur. In contrast, if the camera's image sensor is exposed during the steady state of the symbols (Fig. 5b), frames free of errors due to the camera sampling process will be captured. Therefore, to capture all transmitted data without errors introduced by the camera sampling process, T_S has to be selected such that T_S > T_f [32].
iii. Case 3: T_S = L × T_f (Figs. 5d and e): If T_S is made an integer multiple L of T_f, even under the worst-case scenario (i.e. a camera exposure during the switching state of a symbol), there will be at least one error-free frame corresponding to every transmitted symbol. However, the achievable bit rate of the system decreases with increasing L. Thus we propose the use of T_S as given by (1) [29]:

T_S = L T_f, where L is an integer and L ≥ 2. (1)
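The symbol-duration selection rule of Case 3, and its impact on the raw data rate, can be sketched as follows. The helper names are hypothetical, and L = 2 is used as the smallest multiple satisfying T_S > T_f.

```python
def symbol_duration(frame_rate_fps, L=2):
    """T_S = L * T_f: the symbol duration as an integer multiple L of the
    camera sampling interval T_f, guaranteeing at least one error-free
    frame per symbol (L must be an integer >= 2)."""
    assert isinstance(L, int) and L >= 2
    t_f = 1.0 / frame_rate_fps
    return L * t_f

def raw_bit_rate(frame_rate_fps, n_data_leds, L=2):
    """Each symbol carries one bit per data LED, so the raw rate is
    N / T_S bits per second."""
    return n_data_leds / symbol_duration(frame_rate_fps, L)

# A 30 fps camera with 8 data LEDs and L = 2:
# T_S = 2/30 s ~ 66.7 ms, giving a raw rate of 120 bit/s.
print(symbol_duration(30), raw_bit_rate(30, 8))
```

The trade-off noted in Case 3 is visible directly: doubling L doubles T_S and halves the raw bit rate.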
Further, the first frame which appears immediately after the end of every symbol switching period is selected for image processing and bit data decoding (i.e. the first frame appearing immediately after the end of every symbol switching period will always be free of errors introduced by the camera sampling process).
However, to select the first frame which appears immediately after the end of every symbol switching period, the receiver needs to know the exact time at which symbol switching happens at the transmitter. Thus, to obtain this information, at the start of a data transmission session, all transmitter LEDs are switched ON and OFF for two cycles.
If a camera sample is taken when all LEDs are ON, the received power P_i, measured in terms of the accumulated pixel intensity values of the frame as in (2), will be high; if the sample is taken when all LEDs are OFF, the received power will be low (see Fig. 6a). However, if the sample is taken during the transient state from ON to OFF or OFF to ON, only a moderate received power will be detected, as in Figs. 6b and c:

P_i = Σ_{m=1}^{M} Σ_{n=1}^{N} F_i^BW(m, n), (2)

where F_i^BW(m, n) is the (m, n)th pixel intensity value of the binarised ith frame and M, N represent the image dimensions. Thus, by computing the received power variation using (2) over all the frames which appear until synchronisation and then identifying its gradient pattern, the error-free frames are identified. Thereafter, by inspecting the time stamps of these frames, the exact time at which the first symbol switching occurred (T_initial) is identified. This process is illustrated in Algorithm 1 (see Fig. 7).
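The idea behind this synchronisation step can be sketched as follows: compute the received power of each binarised frame via (2) and classify it as fully-ON, fully-OFF or transient. This is only an illustrative sketch of the classification, not Algorithm 1 itself, and the threshold fractions are assumed values.

```python
import numpy as np

def received_power(frame_bw):
    """Eq. (2): sum of the binarised frame's pixel intensities."""
    return int(frame_bw.sum())

def classify_frames(powers, hi_frac=0.75, lo_frac=0.25):
    """Label each synchronisation frame as ON, OFF or TRANSIENT from its
    received power relative to the observed maximum. The fractions are
    illustrative thresholds, not values from the paper."""
    p_max = max(powers)
    labels = []
    for p in powers:
        if p >= hi_frac * p_max:
            labels.append("ON")
        elif p <= lo_frac * p_max:
            labels.append("OFF")
        else:
            labels.append("TRANSIENT")
    return labels

# Powers seen across one ON/OFF synchronisation cycle (made-up numbers)
powers = [980, 1000, 400, 20, 0, 550, 990]
print(classify_frames(powers))
```

Once the TRANSIENT frames are located, their time stamps bracket the symbol switching instants, from which T_initial can be read off.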
Thereafter, by selecting the first frame which appears after the time stamp t = T_initial + r × T_S, where r = 1, 2, 3, …, all the frames which appear at the end of every symbol switching period (i.e. the error-free frames corresponding to all transmitted symbols) are identified.
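The frame selection rule above can be sketched as follows, assuming the receiver has a sorted list of frame time stamps. The small tolerance `eps` is an implementation detail added here to guard against floating-point comparison issues, not something specified in the paper.

```python
def select_symbol_frames(timestamps, t_initial, t_s, n_symbols, eps=1e-9):
    """For r = 1..n_symbols, pick the index of the first frame whose
    timestamp is at or after t_initial + r * t_s
    (timestamps assumed sorted in ascending order)."""
    selected = []
    idx = 0
    for r in range(1, n_symbols + 1):
        target = t_initial + r * t_s
        while idx < len(timestamps) and timestamps[idx] < target - eps:
            idx += 1
        if idx < len(timestamps):
            selected.append(idx)
    return selected

# 30 fps timestamps, T_initial = 0, T_S = 2 * T_f: every second frame
# starting from frame 2 is an error-free symbol frame.
ts = [k / 30 for k in range(12)]
print(select_symbol_frames(ts, 0.0, 2 / 30, 4))   # [2, 4, 6, 8]
```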

Image processing for decoding
In this section, we will describe the LED position-based image processing algorithm that is used to process a single frame in an OCC system.
The power of propagating light pulses reduces with distance due to absorption and scattering, as predicted by the Beer-Lambert law [6]. Compared to terrestrial propagation, due to the presence of solids and dissolved materials in water, the absorption and scattering artefacts in an aquatic medium are considerably higher. Further, the underwater environment is light limited, because light penetration into water is severely restricted by high back reflection at the water surface and high absorption within the water medium. Hence, capturing and processing images underwater introduces a completely new set of challenges.
For an image processing algorithm to be applied efficiently in real-time decoding for an OCC system, the analysis of one frame should finish before the next frame arrives. Hence, even though various algorithms exist to recover images with minimal quality degradation [35], computationally inexpensive algorithms had to be used.
Light coming from a single switched-ON LED towards the camera will be represented by an approximately circular blob at a specific location in the captured image. Hence, by identifying the presence of a blob at that specific location, the LED's ON or OFF state can be estimated.
When multiple LEDs are switched ON at the transmitter, the corresponding image at the receiver will contain several blobs rather than just one. These blobs will be spatially separated in the image, with or without overlaps, depending on factors such as the LEDs' geometrical arrangement, which LEDs are switched ON, the transmission distance, the image resolution, etc. Hence, with a multiple-LED transmitter, to decide which blob corresponds to which transmitter LED, and thereby the ON or OFF state of each LED individually, information on the spatial positions at which the blobs corresponding to each LED will appear when switched ON is necessary. To this end, in our transmitter design, the four reference LEDs L1, L2, L3, L4 (as shown in Fig. 4), which are always kept ON, are utilised.
As all four reference LEDs are kept ON during data transmission, the blobs which correspond to those LEDs will always be present in any captured data image at the receiver. Thus, the developed algorithm first identifies the spatial positions (pixel indices) of the blobs corresponding to these four reference LEDs. Then, having prior knowledge of the positioning of the remaining LEDs with respect to the reference LEDs from the transmitter geometry, the spatial positions at which the blobs corresponding to the data LEDs will appear (if switched ON) are deduced. Thereafter, by counting the illuminated pixels around those spatial positions, the ON or OFF state of each LED is identified individually. Hence, the proposed LED position-based algorithm can be divided into three main steps: i. Task 1: identify the spatial positions of the blobs which correspond to the reference LEDs. ii. Task 2: deduce the spatial positions at which the blobs corresponding to the data LEDs will appear, if they are switched ON. iii. Task 3: count the illuminated pixels around those spatial positions to identify the ON or OFF state of each LED.
Next, we will discuss the image processing algorithm in detail in Section 4.1 and some of the important steps and modifications are further elaborated in Section 4.2.

Image processing algorithm
The intensity values of the pixels illuminated by an LED (i.e. the pixels of a blob) will be at or near saturation, as the light is directed straight towards the camera, while pixels unaffected by the LED light will show very low intensity values. Hence, by binarising the image using a proper threshold value, the blobs which appear due to LED light can easily be separated from the rest of the pixels. Thus, when a data frame F_r (i.e. a frame corresponding to the rth symbol) is received, the image is first converted into a grey scale image F_r^G and then binarised using a threshold value thr_d to obtain the black and white (BW) image F_r^BW as in the following equation:

F_r^BW(m, n) = 1 if F_r^G(m, n) ≥ thr_d, and 0 otherwise, (3)

where F_r^BW(m, n) is the intensity value of the (m, n)th pixel of the binarised frame and (M × N) are the dimensions of the captured image. In spite of having access to different colour spectra in a given image, only the grey scale image is used for binarisation. This is because (i) all transmitter LEDs are of the same colour, so no colour-specific information is stored in the images, and (ii) the processing of an image has to be completed within a very short duration for real-time implementation of the OCC system, and colour-specific image processing can result in longer processing times.
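The grey scale conversion and thresholding step can be sketched as follows. The Rec.601 luma weights used here are a standard choice assumed for illustration; the paper does not specify a particular grey conversion.

```python
import numpy as np

def to_grey(frame_rgb):
    """Luma-style grey conversion using standard Rec.601 weights
    (an assumed choice; any grey conversion would do here)."""
    w = np.array([0.299, 0.587, 0.114])
    return frame_rgb @ w

def binarise(frame_grey, thr_d):
    """Eq. (3): pixel -> 1 if its grey intensity is >= thr_d, else 0."""
    return (frame_grey >= thr_d).astype(np.uint8)

# A tiny synthetic frame with one saturated "blob" pixel
frame = np.zeros((4, 4, 3))
frame[1, 1] = [255, 255, 255]
bw = binarise(to_grey(frame), thr_d=128)
print(bw.sum())   # 1: only the blob pixel survives thresholding
```

The choice of thr_d matters: too low and background noise survives as spurious blobs, too high and dim blobs at longer distances vanish, which is exactly the sensitivity the simulations in Section 5 examine.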
After binarisation, all blobs in the image are labelled using a pixel labelling and flood filling algorithm. The labelled blobs include those corresponding to both the reference LEDs and the data LEDs. Then, the set of centroids C_r = {c_{r,1}, c_{r,2}, c_{r,3}, …} of these blobs is identified. Thereafter, the convex hull Conv(C_r) of the set of centroids C_r is computed. The convex hull is the smallest polygon which encloses all the points in a given set. The set of vertices V_r of the convex hull Conv(C_r) is then identified. Here V_r = vertices[Conv(C_r)] = {v_{r,j}; j = 1, 2, 3, 4}, and V_r ⊆ C_r.
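The blob labelling and centroid extraction step can be sketched with a minimal flood fill, as below. A production implementation would typically use a library routine (e.g. a connected-components function) instead; this sketch only illustrates the idea.

```python
from collections import deque

def label_blobs(bw):
    """4-connected flood-fill labelling of a binary image (list of lists
    of 0/1). Returns the centroid (row, col) of each blob, in scan order."""
    rows, cols = len(bw), len(bw[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if bw[r][c] and not seen[r][c]:
                # Breadth-first flood fill collecting this blob's pixels
                q = deque([(r, c)])
                seen[r][c] = True
                pts = []
                while q:
                    y, x = q.popleft()
                    pts.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and bw[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                cy = sum(p[0] for p in pts) / len(pts)
                cx = sum(p[1] for p in pts) / len(pts)
                centroids.append((cy, cx))
    return centroids

bw = [[1, 1, 0, 0],
      [1, 1, 0, 0],
      [0, 0, 0, 1],
      [0, 0, 0, 1]]
print(label_blobs(bw))   # two blobs: (0.5, 0.5) and (2.5, 3.0)
```

The resulting centroid set C_r is what the convex hull step operates on to recover the four corner vertices.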
In our transmitter setup, as the four reference LEDs are always kept ON, the blobs which correspond to those reference LEDs will always be present in any captured data image at the receiver. Also, as these reference LEDs are at the corners of the transmitter LED configuration, once the vertices of the convex hull of the detected centroids are identified, those vertices should correspond to the reference LEDs. Based on this argument, the positions v_{r,j} corresponding to the reference LEDs are identified (Task 1).
Thereafter, the positions at which the blobs corresponding to the data LEDs will be detected in the image if they are illuminated (referred to as LED position signatures p_{r,i}, where i = 1, 2, …, N and N is the total number of data LEDs in the transmitter) are identified using the known transmitter geometry and v_{r,j}. The x and y coordinates of the ith LED position signature are computed as weighted combinations of the vertex coordinates:

p_{r,i}^x = Σ_{j=1}^{4} w_i(j) v_{r,j}^x, p_{r,i}^y = Σ_{j=1}^{4} w_i(j) v_{r,j}^y, (4), (5)

where r and j correspond to the frame index and the vertex index, respectively, and i corresponds to the data LED index (or its LED position index). Here, w_i(j) is the coefficient given to the jth vertex in calculating the ith LED position signature's x and y coordinate values; it is derived from D, the vector distance between two reference LEDs, and from d_{i,j_a1,j_b} and d_{i,j_a2,j_b}, the distances from the ith LED to the opposite sides (with respect to the jth vertex v_{r,j}) of the square which has the vertices as its edges, as shown in Fig. 8.

After the identification of p_{r,i} (i.e. Task 2), the remaining task is to identify whether the corresponding LED is switched ON or OFF (Task 3). This can easily be found by counting the number of illuminated pixels around every p_{r,i} inside a square of size w × w:

n_{r,i} = Σ_{m = p_{r,i}^x − w̄}^{p_{r,i}^x + w̄} Σ_{n = p_{r,i}^y − w̄}^{p_{r,i}^y + w̄} F_r^BW(m, n), (6)

where F_r^BW(m, n) is the (m, n)th pixel intensity value of the binarised rth frame and w̄ = (w − 1)/2. Finally, by comparing n_{r,i} with a predefined value n̄, the ON or OFF state of the ith LED is determined (i.e. if n_{r,i} > n̄ the LED is determined as ON, and if n_{r,i} < n̄ the LED is determined as OFF). Here

n̄ = γ₁ w², (7)

where γ₁ is a value with 0 < γ₁ < 1. The size w is selected as in (8), such that adjacent squares of size w × w do not overlap:

w ≤ d, (8)

where d is the distance between the centroids corresponding to two adjacent LEDs. These image processing steps for decoding, applied to a random frame captured in clear water at a distance of 70 cm, are illustrated in Fig. 9. (The corresponding experimental setup will be explained in Section 6.)
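Task 3, the window count of (6) and the threshold test against (7), can be sketched as follows. The value γ₁ = 0.3 is an illustrative assumption; the paper only requires 0 < γ₁ < 1.

```python
import numpy as np

def count_illuminated(frame_bw, p, w):
    """Eq. (6): count illuminated pixels in the w x w square centred on
    the position signature p = (row, col); w must be odd."""
    half = (w - 1) // 2           # w-bar in the paper's notation
    r, c = p
    window = frame_bw[max(r - half, 0):r + half + 1,
                      max(c - half, 0):c + half + 1]
    return int(window.sum())

def led_state(frame_bw, p, w, gamma1=0.3):
    """ON if the count exceeds n-bar = gamma1 * w**2 (eq. (7));
    gamma1 = 0.3 is an assumed illustrative value."""
    return count_illuminated(frame_bw, p, w) > gamma1 * w * w

# Synthetic binarised frame: a 3x3 blob centred at (3, 3), nothing else
frame = np.zeros((9, 9), dtype=np.uint8)
frame[2:5, 2:5] = 1
print(led_state(frame, (3, 3), w=5))   # True  (blob present -> LED ON)
print(led_state(frame, (7, 7), w=5))   # False (empty window -> LED OFF)
```

Because each decision looks only at a small fixed window, the per-LED cost is constant, which helps the real-time constraint of finishing one frame before the next arrives.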

Vertex identification:
In the proposed spatial position-based algorithm, in order to identify the LED position signatures, it is essential to find the vertex set v_{r,j} with as little error as possible.
From the known transmitter configuration, theoretically, the vertices v_{r,j} should form a perfect square. However, due to several factors, the obtained vertices might not. If the blobs corresponding to the reference LEDs are of irregular shape, the identified centroids of those blobs might not represent the exact LED positions. Also, if the blob of a reference LED overlaps with the blob of its adjacent data LED, the two overlapping blobs will be identified and labelled as a single blob. Hence, the centroid of this merged blob will be taken as the centroid of a reference LED, which is incorrect. Moreover, even if no error is introduced into the vertex calculation by the above reasons, the vertices might still not form a perfect square due to perspective distortion, resulting either from misalignment or from continuous movement of the transmitter due to surface waves. Thus, before attempting to deduce the LED position signatures p_{r,i} from the obtained vertices, it is essential to eliminate any possible errors in the calculated vertices. To this end, the calculated vertices of a given frame are compared with the vertices of the previous frame, and based on how well the vertices are correlated, the calculated vertices of the current frame are adjusted.
If the transmitter and the receiver are static, the vertices calculated from two consecutive frames should be almost identical. Even if the transmitter is subject to movement due to waves, the shift of the vertices from one frame to the next will be minimal within the 1/T_f time duration. Hence, the algorithm continuously checks whether the vertices of the current frame have drifted by a large amount from those of the previous frame; if so, the current frame's vertices are adjusted until they are within the acceptable drift from the previous frame. (The allowable upper limit of this drift is defined through Δ, with Δ = f(w): Δ = w/4 for a static transmitter-receiver system and Δ = w/2 for a dynamic system.) This process is explained in Algorithm 2 (see Fig. 10). Thereafter, the updated vertices are used in calculating the LED position signatures p_{r,i}.
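The vertex-consistency check can be sketched as below; the fallback rule (revert a drifted vertex to its previous-frame value) and all names are our assumptions about Algorithm 2, not its published listing.

```python
# Sketch of the vertex drift check: a vertex of the current frame that
# has moved more than Delta from its position in the previous frame is
# replaced by the previous frame's vertex.

def adjust_vertices(curr, prev, delta):
    """curr, prev: lists of (x, y) vertices v_{r,j} for the current and
    previous frames; delta: allowed drift (w/4 static, w/2 dynamic)."""
    adjusted = []
    for (xc, yc), (xp, yp) in zip(curr, prev):
        drift = ((xc - xp) ** 2 + (yc - yp) ** 2) ** 0.5
        # Keep the new vertex only when its drift is within Delta;
        # otherwise fall back to the previous frame's vertex.
        adjusted.append((xc, yc) if drift <= delta else (xp, yp))
    return adjusted
```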

Training phase:
Light coming from a single LED towards the camera will be represented by a blob in the captured image. The size and the shape of this blob are affected by many factors such as the transmission distance, camera quality, relative camera position with respect to the LED, water turbidity and out-of-focus blur. So when multiple LEDs are switched ON in a symbol, the blobs corresponding to each LED will have different shapes and sizes, as in Fig. 9. Some blobs might fill the whole w × w square used in decoding, while others might not.
Thus, rather than deciding the ON or OFF state of every LED based on the same predefined value n̄ in (7), values unique to each LED position, referred to as n̄_i, are identified. In order to identify the n̄_i values for i = 1, 2, …, N, N LED patterns are transmitted after the synchronisation LED patterns and before the transmission of any data patterns, and the corresponding frames F_r̄, where r̄ = 1, 2, …, N (referred to as training frames), are analysed. In each of these patterns, in addition to the four corner reference LEDs, only one data LED is kept ON (in the r̄th pattern, only the r̄th LED is ON).
These frames are then passed through the decoding algorithm to identify the n_{r̄,i} values. As only the r̄th LED is ON in frame F_r̄, n_{r̄,i} will be almost zero for r̄ ≠ i, while n_{r̄,r̄} represents the maximum number of pixels illuminated around the ith LED position p_{r,i} when the ith LED is switched ON. Hence, n̄_i is defined as a function of n_{r̄,r̄}, as

n̄_i = γ_1 × n_{r̄,r̄}   (9)

with r̄ = i. Thereafter, the obtained n̄_i values are used to determine the ON or OFF state of each LED at the decoding stage.
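The training step can be condensed into a few lines; the count-matrix layout (entry [r][i] holding n_{r̄,i}) is an illustrative assumption about how the measured counts are stored.

```python
# Sketch of the training phase in (9): in training frame r_bar only the
# r_bar-th data LED is ON, so the count measured at its own position is
# the maximum attainable, and n_i is a fraction gamma_1 of it.

def train_thresholds(count_matrix, gamma_1):
    """count_matrix: N x N list where entry [r][i] is the illuminated
    pixel count n_{r_bar, i} around LED position i in training frame r.
    Returns per-LED decision thresholds n_i = gamma_1 * n_{i, i}."""
    return [gamma_1 * count_matrix[i][i] for i in range(len(count_matrix))]
```

Per-LED thresholds compensate for the blob-size differences across the array noted above, which a single global n̄ cannot.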

Soft thresholding scheme:
When the proposed spatial position-based image processing algorithm is used to process the captured frames, the bit error performance of the OCC system is affected by the numerical values taken by (i) the threshold value used to binarise the images, thr_d, as in (3); (ii) the size of the square inside which the number of illuminated pixels is counted, w (adjusted through γ_2 as in (8)); and (iii) the number of pixels which should be illuminated, as a percentage of the total pixels inside the w × w square around a particular LED position signature p_{r,i}, for an LED to be inferred as ON, n̄ (adjusted through γ_1 as in (7) and (9)). To optimise the BER, correct numerical values have to be selected for thr_d, γ_1 and γ_2. These take different optimal values for different transmission distances and water conditions. Thus, rather than relying on manual adjustment of these values for BER optimisation, suitable threshold values thr_d are identified via a one-dimensional search over the training frames using the proposed iterative and modified adaptive thresholding scheme, while predefined values are used for γ_1 and γ_2.
Thus, an iterative and modified quick adaptive thresholding scheme, which optimises the BER by attempting to eliminate inter-channel interference, is proposed and explained in Algorithm 3 (see Fig. 11).
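In the spirit of Algorithm 3, the one-dimensional search over thr_d can be sketched as below. The decode-and-count-errors interface is an assumption standing in for the authors' iterative scheme; only the search structure is illustrated.

```python
# Sketch of the 1-D threshold search: try a grid of candidate thr_d
# values, decode the known training patterns with each, and keep the
# threshold that produces the fewest decoding errors.

def search_threshold(candidates, error_count):
    """candidates: iterable of candidate thr_d values.
    error_count: callable mapping a threshold to the number of decoding
    errors it produces on the training frames (assumed available).
    Returns the candidate with the minimum error count."""
    best_thr, best_err = None, None
    for thr in candidates:
        err = error_count(thr)
        if best_err is None or err < best_err:
            best_thr, best_err = thr, err
    return best_thr
```

Because the training patterns are known a priori, the error count is directly measurable at the receiver with no extra feedback channel.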

Simulations
As explained in Sections 2 and 4, the performance of the OCC system depends on many factors, such as the number of transmitter LEDs, transmitter LED geometry, channel characteristics, water turbidity, transmission distance, camera specifications and image processing techniques. Thus, in this section, the importance of a few of these features in an OCC system is demonstrated through Monte Carlo-based simulations.
At first, binary images (i.e. BW images) which emulate possible captured images in an underwater OCC system are synthetically generated. Different features in an OCC system are embedded into these images through the parameters used in generating these images and processing these images. Then, the generated images are sent through the LED position-based image processing algorithm which is explained in Section 4.1 to assess the error performance of the system.

Simulation image generation
We assume that the transmitter and the receiver are perfectly aligned and the transmitter and the receiver are kept fixed and free from any movements.
The light coming from a single LED towards the camera will be represented by an approximately circular blob in the captured image [30]. The size of this blob is affected mainly by the transmission distance and the threshold value used for image binarisation. Irregularity in the blob shape, deviating from a perfect circle, arises from noise added to the captured image. In [34], the authors' model distinguishes three types of distortion in the VLC channel affecting the quality of the received message: background noise, directional noise and path loss. However, in OCC systems, the noise added by the camera image capturing process is more significant than the aforementioned three noise sources. One singular property of camera-generated noise is that a particular noise level does not have a uniform effect on the transmitted signal [34].
Thus, taking into account the characteristics explained above, in the synthetically generated images, a blob which appears due to an LED illumination is represented by a circular blob with radius r(θ). By making the blob radius a function of the angle θ as in (10), randomly shaped blobs are obtained.
where r̄ is the mean blob radius and ψ is Brownian noise, characterised by its probability density function p(ψ), which is Gaussian with variance σ_ψ², and its power spectral density S_ψ(ω), which falls off as 1/ω². The mean blob radius specified in (10) can be related to the threshold value thr_d stated in (3): with a decreasing thr_d value, the number of illuminated pixels corresponding to a switched-ON LED increases, and thus thr_d ∝ 1/r̄.
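The blob model of (10) can be sketched by integrating white Gaussian increments over θ, which yields Brownian noise with the stated 1/ω² spectral decay; the step count, seeding and increment scaling are our illustrative choices.

```python
# Sketch of the synthetic blob model r(theta) = r_bar + psi(theta),
# where psi is Brownian noise generated as the running sum of white
# Gaussian increments.
import math
import random

def blob_radius(r_bar, sigma_psi, steps=360, seed=0):
    """Return r(theta) sampled at `steps` angles in [0, 2*pi)."""
    rng = random.Random(seed)
    radii, psi = [], 0.0
    for _ in range(steps):
        # Brownian noise: integrate white Gaussian increments whose
        # scale is tied to the target variance sigma_psi^2.
        psi += rng.gauss(0.0, sigma_psi / math.sqrt(steps))
        radii.append(r_bar + psi)
    return radii
```

With σ_ψ = 0 the blob degenerates to a perfect circle of radius r̄; increasing σ_ψ² emulates a lower-quality camera, as used in Simulations I and II.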

Simulation I
The impact of the camera quality and of using a proper threshold value thr_d while processing images in an OCC system is highlighted through this simulation.
Possible images captured at a specific distance were synthetically generated with the specifications shown in Table 1, and the corresponding numerical results are plotted in Fig. 12. By observing a single curve in the figure, it can be seen that the error performance worsens with decreasing camera quality (i.e. with increasing σ_ψ²). Also, by comparing the multiple curves, it can be seen that the error performance of the OCC system worsens with a decreasing threshold value thr_d (i.e. with increasing blob radius r̄).

Simulation II
The impact of the transmission distance and the camera quality in an OCC system is highlighted through this simulation.
When the light coming from a single LED is captured by a camera at different distances, the size of the blob representing the LED decreases with increasing distance (i.e. the blob radius r̄ decreases). Also, with increasing transmission distance, the distance d between the centroids of two blobs corresponding to adjacent LEDs decreases. By incorporating these phenomena, possible images captured at different distances were synthetically generated with the specifications shown in Table 2, and the corresponding numerical results are plotted in Fig. 13.
By observing a single curve in the figure, it can be seen that the error performance improves with decreasing transmission distance (i.e. with increasing d). Also, by comparing the multiple curves, it can be seen that the error performance worsens with decreasing camera quality (i.e. with increasing σ_ψ²).

Experimental proof of concept setup
A proof of concept test bed was developed to assess the BER performance of the proposed OCC system. The off-line experimental setup consisted of a transmitter system, a receiver system and the water channel. The underwater channel was emulated within a glass tank (dimensions 1.3 m × 0.4 m × 0.2 m, glass thickness 10 mm) filled with different water samples. As the bottom surface of the tank was made of glass, a submerged transmitter was emulated by placing the transmitter setup right beneath the bottom surface of the tank, as in Fig. 14. The camera was kept floating on the water surface inside a glass case. The transmission distance of the UWC system was changed by varying the water level inside the tank. In our experimental study, the minimum and maximum transmission distances were restricted to 0.5 and 1 m, respectively.

Transmitter
A total of 29 oval-top, commercially available red LEDs (25 data LEDs, i.e. N = 25, plus the four reference LEDs) were used to construct the 21 cm × 21 cm transmitter LED array. The separation between two adjacent LEDs was 3 cm, and the LEDs used in the setup had spreading angles of ∼40°. Symbol generation from the incoming bit stream, and mapping of the bits in each symbol to the different LEDs, were carried out using an Arduino microcontroller.
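The bit-to-LED mapping described above can be sketched as below; the framing (one bit per data LED per symbol, trailing partial symbols dropped) is our assumption about the Arduino-side logic, not its actual source.

```python
# Sketch of symbol generation: chunk the incoming bit stream into
# symbols of N = 25 bits, one bit per data LED (1 = ON, 0 = OFF).
# The four corner reference LEDs stay ON in every symbol.

def bits_to_symbols(bits, n_leds=25):
    """Return a list of symbols, each a list of n_leds ON/OFF states."""
    symbols = []
    for start in range(0, len(bits), n_leds):
        chunk = bits[start:start + n_leds]
        if len(chunk) == n_leds:     # drop a trailing partial symbol
            symbols.append(list(chunk))
    return symbols
```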

Receiver
An Apple iPhone 6 camera was used as the receiver camera. The camera was kept floating on the water surface inside a glass case. The operated camera specifications are shown in Table 3. The frame selection scheme, processing of frames to decode symbol data and reconstruction of the bit stream from consecutive symbols were carried out offline on the MATLAB® platform.
Here, based on the camera specifications, the camera frame rate (i.e. camera pulse frequency) is 60 fps, so the camera sampling interval is 0.016667 s. Also, as mentioned in Section 3, since the frame rate of the camera cannot be changed due to hardware limitations, the symbol rate at the transmitter has to be set with respect to the camera frame rate so that a unique frame is captured for every transmitted symbol. Thus, the symbol duration at the transmitter is decided according to (1) and the receiver camera frame rate. As such, the symbol duration used during the experiments is T_S = 0.033334 s (2 × 0.016667 s).
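The timing relation above amounts to a few lines of arithmetic (variable names are ours):

```python
# Symbol timing for the 60 fps receiver camera: the symbol duration is
# set to twice the camera sampling interval so that every transmitted
# symbol is guaranteed at least one unique frame.

frame_rate = 60                 # fps, from Table 3
t_frame = 1 / frame_rate        # camera sampling interval, ~0.016667 s
t_symbol = 2 * t_frame          # symbol duration T_S, ~0.033334 s
symbol_rate = 1 / t_symbol      # 30 symbols per second
```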

Results and discussion
In this section, the experimental results obtained by processing the images from the experimental setup explained in Section 6 are presented. The BER performance was evaluated by transmitting a known bit stream and decoding it at the receiver. For real-time decoding in an OCC system, the upper bound of the image processing execution time should be less than the symbol duration. When the code explained in Section 4 was executed on an Intel Core i5-4210U processor with 4 GB RAM running at 1.70 GHz, on the images captured using the experimental setup of Section 6, the average execution time to process one frame was 0.0053 s, which is within the acceptable limit for implementing the said system online in our case.
The use of the forward error correction (FEC) limit paradigm is one of the prevalent practices when designing optical communication systems [43-45]. The rationale for this approach is that a certain BER without coding, referred to as the pre-FEC BER, can be reduced to the desired post-FEC BER by previously verified FEC implementations. The desired post-FEC BER values are in the order of 10^−12 to 10^−15. As per the optical communication literature, the expected pre-FEC BER for an optical communication system to obtain the desired post-FEC BER values, meeting higher-layer quality of service requirements, is 3.8 × 10^−3 [43-45].
The optical communication system of this paper differs from traditional optical communication systems due to its use of a camera at the receiver. As such, the FEC limit of our OCC system might deviate slightly from that of traditional optical communication systems, as the FEC limit depends on many factors. A detailed analysis would be needed to identify this value, which is out of the scope of the current work. However, for comparison, the FEC limit of optical communication systems is drawn in the BER curves of Figs. 15-17.

Transmission in clear and muddy conditions
The observed BER performance, when the medium was clear water (Turbidity level 0 − 10 NTU) and muddy water which had a turbidity of 260 NTU to 290 NTU are shown in Figs. 15 and 16.
When the transmission distance increases, the BER performance worsens. This effect is observed because, with increasing transmission distance, the groups of pixels illuminated by different LEDs overlap with one another, giving rise to adjacent channel interference.
Also, the BER performance worsens in muddy water compared to clear water. This is because an underwater photograph captures not only the scene of interest but also the water column [33]. With increasing turbidity, the presence of solids and dissolved materials, which promote absorption and scattering in the aquatic medium, increases. Thus, at high turbidity, the captured images might be below the expected quality. However, the BER values are still within the accepted limits.

Transmission in turbulence condition
The results in Figs. 15 and 16 were obtained when the transmitter and the receiver were kept fixed. In an underwater communication environment, achieving these constraints might be practically difficult, especially if the receiver is on a floating body, as is the case in our studied scenario. If surface waves exist, the receiver platform can oscillate, resulting in perspective distortions and misalignments. However, in the proposed algorithm, as each frame is processed independently of the others, decoding can be carried out efficiently even under oscillation, as long as the transmitter LEDs are within the FOV of the camera.
In the proposed hardware setup, surface waves with an amplitude of 3 cm (±2 cm) were generated in clear water and the BER performance was evaluated. The observed results are shown in Fig. 17. As can be observed, even when the transmitter was subject to controlled oscillations, the BER performance remained within acceptable limits for some distances.

Achievable data rate:
The transmission speed of an OCC system is inherently limited by the camera frame rate, which limits the achievable data rate. The achievable bit rate of the implemented system presented in this paper is given by

Bit Rate = (1/2) × N × fr   (13)

where N and fr are the number of transmitter data LEDs and the camera frame rate, respectively.
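As a worked example of (13), the experimental parameters (N = 25 data LEDs, a 60 fps camera) give a raw bit rate of 750 bit/s:

```python
# Achievable bit rate of the implemented system, per (13): each symbol
# carries N bits and spans two camera frames, hence the factor 1/2.

def bit_rate(n_leds, frame_rate):
    """Bit Rate = (1/2) * N * fr, in bits per second."""
    return 0.5 * n_leds * frame_rate
```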

Signalling overheads:
In this work, we analyse a unidirectional communication link. As per the proposed architecture, before transmitting the required data through the link, synchronisation and channel estimation overheads are transmitted to facilitate the processing of frames at the receiver. The synchronisation overheads consist of predefined synchronisation LED patterns which facilitate synchronisation between the transmitter and receiver clocks (explained in Section 3). The channel estimation overheads (i.e. training overheads) comprise N predefined LED patterns which are transmitted after the synchronisation LED patterns to identify the n̄_i values for all i = 1, 2, …, N, which are key parameters when processing data images using the proposed LED position-based image processing algorithm (explained in Section 4.2.2). Also, we assume that the receiver has complete knowledge of the transmitter LED configuration and that the transmitter and the receiver are aligned. If the channel is static, the calculated n̄_i values can remain the same throughout the data transmission process. However, if the channel varies due to oscillations, it might become necessary to transmit the channel estimation overhead frames more frequently, increasing the amount of overhead in the communication link.

Comparison with previous work:
As mentioned in the Introduction, although a great amount of research effort has been devoted to terrestrial OCC, the applicability of MIMO OCC to underwater applications has rarely been studied in the literature. On the other hand, even though the authors of [1,2,22,24,33] discussed VLC for UWC, no major work has been done on OCC for underwater applications, except a recent study reported in [24]. Further, none of these studies presents a practically implementable complete system. Therefore, to highlight the advantages of the presented underwater OCC system, in this section, the presented OCC system is compared with terrestrial OCC systems and traditional underwater VLC systems.
(a) Simulation results: In our work, simulation results are presented in Section 5, and only some straightforward schemes are selected as benchmarks. In [15], a novel space-time recursive least-squares adaptive algorithm is proposed for OCC image processing.
There, the effects of out-of-focus blur and motion blur are analysed. A BER performance of the order of 10^−2 is obtained using the presented synthetic simulation experiments. However, this BER performance is obtained after processing a single image recursively, which might result in higher image processing time, causing problems for real-time OCC system implementation.
On the other hand, with respect to studies on underwater VLC systems, the authors of [6] present a long-distance underwater VLC system with single-photon avalanche diodes, and BERs in the order of 10^−1 to 10^−6 are observed under simulation conditions in pure sea water.
(b) Experimental hardware results: As can be observed from Figs. 15-17, the bit error performance obtained from the experimental setup is in the order of 10^−3. However, this BER performance can be improved further, using the same image processing algorithm, if a camera with better resolution is used.
The authors of [37] have presented the principles and implementation techniques of a 2D-OFDM system for terrestrial OCC applications. They use a screen-to-camera communication system, and the proposed system is validated through experimental results. The presented BER of the 2D-OFDM OCC system is in the order of 10^−2, which is higher than the BER values obtained through our experimental setup. However, the image processing techniques and cameras used in [37] are different from ours. Also, a VLC/OCC hybrid optical wireless system for indoor applications is presented in [36], and BER performance in the order of 10^−6 to 10^−2 is obtained for distances of 3 to 5 m. However, the complexity of the system proposed in [36] is considerably high due to its VLC/OCC hybrid implementation. Also, in [20], an MIMO OCC system is presented, and through the presented experimental setup, a BER of the order of 10^−6 is obtained using OFDM and the application of FEC.
On the other hand, the authors of [46] have demonstrated an underwater VLC system which uses spatial diversity. The BER performance of this system is shown to be in the order of 10^−4 to 10^−2 for distances of 0.4 to 1 m. As for the effect of turbulence on VLC/OCC-based communication schemes, the authors of [47] have characterised the performance of vertical underwater VLC links in the presence of turbulence. However, they limited their studies to simulations.

Conclusion
In this paper, we present a complete analysis of an underwater OCC system which can be applied to a scenario where data is transmitted from a fixed submerged body to a floating receiver through the LOS. Initially, an analysis of the system design constraints of the underwater OCC system, a novel synchronisation and frame selection scheme, and a novel spatial position-based image processing algorithm for decoding are presented. Also, the optimisation techniques and modifications carried out during some of the key steps of the presented image processing algorithm are elaborated. Then, the importance of a few key features of an OCC system, namely camera quality, transmission distance and the threshold value used for image binarisation, is demonstrated through numerical results using simulations on synthetically generated images. Finally, the techniques proposed in this paper have been validated using an experimental proof of concept setup in clear water, in muddy water and when the floating receiver is subject to controlled oscillation due to surface turbulence, for varying transmission distances. Based on the obtained low BER performance, it can be concluded that the proposed system can be used to successfully realise a short-range UWC system, recording an overall BER performance of the order of 10^−3 within a communication distance of 1 m.
The achievable data rate of current underwater LED-to-camera communication techniques remains rather low compared to other UWC systems. Although such a low data rate might not be sufficient for a mainstream underwater communication system, the proposed underwater OCC system can be utilised to realise UWC tasks which prefer error performance over data rate. In future, in order to improve the robustness against a wide range of underwater disturbances and turbulent conditions, we plan to analyse and extend the underwater OCC system design so that the floating platform has a control mechanism to adjust its position and orientation.