Guest Editorial: Internet of things and intelligent devices and services


Introduction
The Internet of Things (IoT) refers to uniquely identifiable objects (things) and their virtual representations in the Internet and web applications. The concept was initially applied to Radio-Frequency Identification (RFID) tags to mark the Electronic Product Code (Auto-ID Lab). The IoT concept was later extended to refer to a world in which physical objects are seamlessly integrated into the information network and can become active participants in business processes. The recent fusion of IoT with intelligent devices and services has provoked the emergence of the Internet of Intelligent Things (IoIT) and Robot as a Service (https://en.wikipedia.org/wiki/Robot_as_a_service), which makes IoT a part of other emerging technologies, including artificial intelligence, robotics, big data processing, service-oriented computing, cloud computing, fog computing, and edge computing. Gartner, Inc. forecast in 2015 that 6.4 billion connected things would be in use worldwide in 2016, up 30% from 2015, and that the number would reach 20.8 billion by 2020, with 5.5 million new things getting connected every day in 2016 (https://www.gartner.com/newsroom/id/3165317).
IoT devices connect to the Internet and to each other through general-purpose Internet protocols, such as HTTP, TCP, and IP, as well as specially developed protocols, such as the industrial internet protocol and industrial control system protocols. The data generated by IoT devices are typically poly-structured: structured, semi-structured, and unstructured. The data are represented as web data in forms such as HTML, JSON, XML, and URI, and can be further transformed and organized into other formats, such as key-value pairs and ontology triples, for efficient processing by different applications. The data are typically processed in service-oriented and web-based computing environments; large volumes can be handled with cloud computing, big data, and machine learning techniques. To this end, the IoT and its data are fully integrated into the web and the virtual world, so that all the technologies and applications developed there can be applied to process IoT data and to control the physical world connected to the IoT on the other side.
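The transformation from web formats into key-value pairs and ontology triples can be sketched as follows. This is only an illustration; the device name and sensor fields are invented, not taken from any system described above:

```python
import json

# Hypothetical IoT sensor reading, as it might arrive over HTTP in JSON form.
reading = json.loads(
    '{"device": "sensor-42", "temperature": 21.5, "humidity": 0.63}'
)

# Flatten into key-value pairs keyed by device id, the shape used by
# key-value stores.
kv_pairs = {
    f'{reading["device"]}/{k}': v for k, v in reading.items() if k != "device"
}

# Re-express the same reading as subject-predicate-object triples, the
# shape used by ontology/RDF stores.
triples = [
    (reading["device"], k, v) for k, v in reading.items() if k != "device"
]
```

The same reading thus serves applications that expect flat lookup tables as well as those that reason over semantic triples.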
In this Special Issue, six research papers are selected to cover a number of key domains in IoT and intelligent devices. The topics include the development of an Internet of Simulation that provides an infrastructure in which an array of technologies can be accommodated; image processing and fusion; natural language processing; sensor data preprocessing as a phase of crowd-sensing applications; solving security issues in healthcare systems through blockchain; and IoT application development through visual programming.
In the first paper, the evolution of the Internet of Things (IoT) and smart cities is introduced, and a new trend, the Internet of Simulation (IoS), is presented, which utilises the technologies of cloud, edge, and fog computing and HPC for the design and analysis of complex cyber-physical systems using simulation. These technologies have been applied in the domains of big data and deep learning, but they are not adequate to cope with the scale and complexity of emerging connected, smart, and autonomous systems. The paper explores the existing state of the art in automating, augmenting, and integrating systems across the domains of smart cities, autonomous vehicles, energy efficiency, smart manufacturing in Industry 4.0, and health care. The study then examines the existing computational infrastructure and how it can be used to support IoT and smart city applications. A detailed review is presented of advances in approaches providing and supporting intelligence as a service. Finally, some of the remaining challenges are discussed, including the explosion of data streams; issues of safety and security; and others related to big data, models of reality, augmentation of systems, and computation.
The second paper in this special issue studies multi-focus image fusion and integration through machine learning. Sparse representation has been widely applied in this domain in recent years. As a key step, the construction of an informative dictionary directly decides the performance of sparsity-based image fusion. To obtain sufficient bases for dictionary learning, different geometric information of the source images is extracted and analysed. The classified image bases are used to build corresponding sub-dictionaries by principal component analysis, and all the sub-dictionaries are merged into one informative dictionary. On the basis of the constructed dictionary, image patches are converted into corresponding sparse coefficients representing the source images by the compressive sampling matched pursuit algorithm. Finally, the obtained sparse coefficients are fused by the Max-L1 fusion rule and inverted to produce the fused image. Multiple comparative experiments confirm the feasibility and effectiveness of the proposed multi-focus image fusion solution.
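A minimal sketch of the Max-L1 fusion step may help: for each patch, the coefficient vector with the larger L1 norm (assumed to come from the better-focused source) is kept. The toy coefficient matrices below, with one column per patch, are invented for illustration and do not reproduce the paper's full pipeline:

```python
import numpy as np

def max_l1_fuse(coeffs_a, coeffs_b):
    """Max-L1 fusion of two sets of sparse patch coefficients:
    for each patch (column), keep the coefficient vector whose
    L1 norm is larger."""
    l1_a = np.abs(coeffs_a).sum(axis=0)   # L1 norm per patch, source A
    l1_b = np.abs(coeffs_b).sum(axis=0)   # L1 norm per patch, source B
    mask = l1_a >= l1_b                   # True where source A "wins"
    return np.where(mask, coeffs_a, coeffs_b)

# Toy example: 3 dictionary atoms x 2 patches from two source images.
A = np.array([[0.9, 0.0], [0.0, 0.1], [0.0, 0.0]])
B = np.array([[0.2, 0.0], [0.0, 0.8], [0.1, 0.0]])
fused = max_l1_fuse(A, B)   # patch 0 taken from A, patch 1 from B
```

In the full method these fused coefficients would then be multiplied by the learned dictionary to reconstruct the fused image.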
The third paper in this special issue deals with automated hashtag recommendation through natural language processing. Hashtags of microblogs can provide valuable information for many natural language processing tasks, and recommending reliable hashtags automatically has attracted considerable attention. Existing studies assume that all training corpora crawled from social networks are labelled correctly; however, a study based on a large sample of statistics from real social media shows that 8.9% of microblogs with hashtags are marked with wrong labels. The notable influence of such noisy data on the classifier has previously been ignored. Meanwhile, data recency also plays an important role in microblog hashtags, and this information is likewise unused in existing studies. For example, some temporal hashtags, such as World Cup, ignite at a particular time, but after that time window the number of people talking about them sharply decreases. To address these two shortcomings, the paper proposes a Long Short-Term Memory based model that uses temporal enhanced selective sentence-level attention to reduce the influence of wrongly labelled microblogs on the classifier. Experimental results on a dataset of 1.7 million microblogs collected from Sina Weibo demonstrate that the proposed method achieves significantly better performance than the state-of-the-art methods.
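The core intuition that recency should modulate attention can be illustrated with a simple sketch. This is not the paper's model: its temporal enhanced selective attention is learned jointly with the LSTM, whereas here a fixed, hypothetical exponential decay rate simply down-weights older posts:

```python
import numpy as np

def temporal_attention(scores, ages_days, decay=0.1):
    """Illustrative recency-weighted attention: each candidate's raw
    attention score is combined with an exponential decay of its age,
    then the weights are normalised to sum to 1. The `decay` rate is
    a hypothetical hyper-parameter, not taken from the paper."""
    scores = np.asarray(scores, dtype=float)
    ages = np.asarray(ages_days, dtype=float)
    w = np.exp(scores) * np.exp(-decay * ages)
    return w / w.sum()

# Two microblogs with equal raw scores: the more recent one (1 day old)
# receives a larger attention weight than the older one (30 days old).
weights = temporal_attention([1.0, 1.0], [1, 30])
```

The sketch only shows why a temporal signal is useful for hashtags such as World Cup, whose relevance collapses after the event window.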
The fourth paper in this special issue is on crowd-sensing applications based on the generic sensors installed in mobile devices. Existing studies do not effectively solve the problem of sensor data processing and optimization at the acquisition end of the crowd-sensing process. Aiming at the high overhead incurred by the algorithm and the redundant records produced in numerical sensor data acquisition, the paper presents an improved sliding average method. The approach uses a dynamic window and improves processing time to achieve data compression, reducing the time complexity of the average method. Aiming at the problem that the algorithm is too time-consuming when denoising high-pixel images, an improved extremum median filtering method is proposed. Detecting the extremum and median value in the filter window is optimized through local sorting, and the gradient change of the filter window is used to accelerate the sorting process of the sliding window, which boosts the speed of the whole algorithm while still ensuring the effect of image denoising. A transmission strategy for optimization is also proposed, in which only the demarcation points of each group of data, and the data points that differ substantially from those demarcation points, are recorded. This reduces the storage pressure and the amount of data transmitted by the mobile terminal, and it improves the efficiency of data transmission. The experimental results show that the proposed methods have higher speed and lower cost, and thus run better in crowd-sensing environments.
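The demarcation-point transmission strategy can be sketched as a simple deadband filter. The threshold parameter and the grouping logic below are assumptions made for illustration, not the paper's exact algorithm:

```python
def compress_readings(values, threshold=1.0):
    """Sketch of the demarcation-point idea: keep the first reading of
    a group as its demarcation point, then record only readings that
    differ from that point by more than `threshold`; each recorded
    reading starts a new group. Readings inside the band are dropped
    as redundant. `threshold` is a hypothetical parameter."""
    if not values:
        return []
    kept = [(0, values[0])]       # (index, value) of each recorded point
    anchor = values[0]            # current demarcation point
    for i, v in enumerate(values[1:], start=1):
        if abs(v - anchor) > threshold:
            kept.append((i, v))
            anchor = v            # this point demarcates the next group
    return kept

# Six temperature readings compress to three transmitted points.
sample = compress_readings([20.0, 20.3, 20.1, 22.0, 22.4, 19.0])
```

Dropping the in-band readings is what reduces both the mobile terminal's storage pressure and the volume of transmitted data.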

CAAI Transactions on Intelligence Technology
The fifth paper in this special issue is on the security and privacy issues in blockchain-based healthcare applications. The blockchains used in healthcare are called health blockchains. In general, blockchain blocks are open and the transactions among them are public, so any private data involved in these transactions will be leaked. Because healthcare systems involve a great deal of private data, additional security mechanisms must be built to protect these data in a health blockchain. Furthermore, because the core of such security mechanisms is key management, appropriate key management schemes should be designed before blockchains can be used in healthcare systems. In this paper, according to the features of the health blockchain, a body sensor network is used to design a lightweight backup and efficient recovery scheme for the keys of the health blockchain. The analyses show that the scheme has high security and performance, and that it can effectively protect private messages in the health blockchain and thus promote its applications.
The last paper in this special issue presents an education environment for developing IoT and robotics applications called VIPLE (Visual IoT/Robotics Programming Language Environment). VIPLE supports a variety of physical and simulated IoT devices and robots based on an open architecture. Grounded in computational thinking, VIPLE seamlessly integrates the engineering design process, workflow, fundamental programming concepts, control flow, parallel computing, event-driven programming, and service-oriented computing into a wide range of curricula, such as introduction to computing, introduction to engineering, service-oriented computing, and software integration. Because visual programming is intuitive, VIPLE can be used in courses teaching a first programming language, while its advanced features, such as event-driven programming, parallel programming, and service-oriented computing, allow it to be used in senior-level classes as well. VIPLE has been actively used at Arizona State University in several courses, including Introduction to Engineering and Software Integration and Engineering, as well as in several other universities worldwide.