Deep Learning in a Virtual Environment for Self-Driving Cars
Deep Learning Network
Deep learning and neural networks have been applied successfully to a range of control tasks (Kato et al. 2017, p.149). Deep learning (DL) is a branch of machine learning and, more broadly, of artificial intelligence that uses artificial neural networks (ANNs) to learn continuously from data. The approach is loosely inspired by the functions of the human brain and is realised through both software and hardware. DL perceives its surroundings through raw sensory inputs such as images and sounds. A well-developed deep neural network (DNN) built on convolutional layers (CNN) can interpret not only the objects it observes in real time but also the context in which they appear. For instance, a self-driving car could not only detect a person crossing the road but also assess whether that person is using a mobile phone and failing to watch the oncoming car. Similarly, as shown in Figure 1, a visual model based on artificial neural networks such as a CNN can evaluate the context of an image rather than merely detecting it; for example, it can judge whether the person is smiling (Bojarski et al. 2017, p.9).
Figure 1: A DNN using artificial neural networks such as a CNN to identify a smiling face in an image (Bojarski et al. 2017, p.9)
Convolutional Neural Network
A convolutional neural network (CNN) is a class of artificial neural network that has been applied effectively to the analysis of visual imagery. CNNs are multilayer architectures designed to require minimal pre-processing, and they are widely used in image and video recognition as well as facial recognition (Kato et al. 2017, p.150). Their accuracy in image and face recognition approaches that of humans, and they can also be applied to the analysis of video.
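To make the multilayer structure concrete, the following minimal Keras sketch stacks convolutional and pooling layers ahead of a small classifier head. The input size (64×64 RGB) and the ten output classes are illustrative assumptions rather than values taken from the cited studies.

# Minimal sketch of a small CNN image classifier in Keras; the input shape
# and class count are illustrative placeholders, not values from the cited work.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),  # e.g. ten object categories
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()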
Bechtel, McEllhiney and Yun (2017, p.3) argued that deep neural networks are an essential workload for self-driving cars. For instance, the Tesla Model S used a dedicated chip, the Mobileye EyeQ, which runs a deep neural network (DNN) for real-time, vision-based obstacle detection and avoidance. DNNs can therefore serve as controllers of self-driving cars, and it is anticipated that more artificial-intelligence-based DNNs will be applied in future self-driving vehicles (Bechtel, McEllhiney and Yun 2017, p.2).
The research by Tian, Pei, Jana and Ray (2017, p.3) showed that a deep-learning system can learn to drive from road images paired with the steering angles produced by a human driver. The main use of DL networks in the automotive domain is complex computer vision and perception (Tian, Pei, Jana and Ray 2017, p.3). Kato et al. (2017, p.149) argued that the key difference between DL and conventional machine-learning networks lies in the degree to which the network can learn on its own. A deep neural network (DNN) comprises numerous hidden layers that learn new features, surpassing what human coding alone can capture (Kato et al. 2017, p.147). For this reason, DL is more powerful and robust for complex computing tasks such as object recognition.
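To illustrate the image-to-steering mapping described above, the sketch below regresses a single steering angle from a camera frame. It is not the architecture used in the cited papers; the image size, layer widths, and the randomly generated placeholder data are assumptions for demonstration only.

# Hypothetical sketch: a CNN regressing a steering angle from a camera image.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(24, (5, 5), strides=2, activation='relu',
                  input_shape=(66, 200, 3)),   # assumed input image size
    layers.Conv2D(36, (5, 5), strides=2, activation='relu'),
    layers.Conv2D(48, (5, 5), strides=2, activation='relu'),
    layers.Flatten(),
    layers.Dense(100, activation='relu'),
    layers.Dense(1),                            # predicted steering angle
])
model.compile(optimizer='adam', loss='mse')

# Random arrays stand in for logged camera frames and human steering angles.
images = np.random.rand(8, 66, 200, 3).astype('float32')
angles = np.random.rand(8, 1).astype('float32')
model.fit(images, angles, epochs=1, batch_size=4)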
The study by Shalev-Shwartz, Shammah and Shashua (2016, p.4) highlighted the improvement that a convolutional neural network (CNN) brings to DL (Bechtel, McEllhiney and Yun 2017, p.4). Because the input is the entire image, feature extraction is embedded in the network itself, which helps it to sense a pedestrian crossing the highway. DL networks can therefore perceive other road users. In simpler situations, such as steering the car along fixed routes in a town mapped at high resolution, less complex learning algorithms are adequate; in complicated scenarios such as numerous changing routes or unknown destinations, however, a DL network is the more appropriate alternative (Shalev-Shwartz, Shammah and Shashua 2016, p.5).
Bojarski et al. (2017, p.1) examined the ways in which DL can be applied in autonomous cars, identifying two main approaches along with their benefits and shortcomings. The first, semantic abstraction, breaks the self-driving problem into multiple components, each handled by an algorithm dedicated to a single part of the task (Bojarski et al. 2017, p.3). For instance, one component could concentrate on pedestrian detection while another recognises the lane markings and a third identifies objects outside the lane. These mechanisms are then integrated into a principal network that makes the driving decisions, as sketched below. Alternatively, a single network can be created that detects and categorises different classes of object, or even performs semantic segmentation (Shalev-Shwartz, Shammah and Shashua 2016, p.5).
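The semantic-abstraction idea can be sketched as follows. The component functions and the decision threshold are hypothetical stubs introduced purely to show how dedicated modules might feed a single decision-making step; they are not taken from the cited work.

# Hedged sketch of semantic abstraction: dedicated perception components
# feed a simple decision rule. All names and values are hypothetical.

def detect_pedestrians(frame):
    """Stub for a component dedicated to pedestrian detection."""
    return []          # placeholder: no pedestrians detected

def detect_lane_offset(frame):
    """Stub for a component dedicated to lane-marking recognition."""
    return 0.0         # placeholder: vehicle centred in lane

def plan_manoeuvre(frame):
    """Integrate the component outputs into a single driving decision."""
    if detect_pedestrians(frame):
        return "brake"
    if abs(detect_lane_offset(frame)) > 0.5:   # assumed drift threshold
        return "steer_correct"
    return "keep_lane"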
Advantages and Disadvantages
The study by Tian, Pei, Jana and Ray (2017, p.4) highlighted that the advantage of semantic abstraction is its low tolerance for errors: it can isolate errors more quickly and handle unpredictable circumstances better. Its shortcoming is that the system requires complicated programming and a great deal of preparatory work (Tian, Pei, Jana and Ray 2017, p.6).
The second, "disruptive" approach is an end-to-end learning network, in which the car learns to drive itself without hand-engineered assistance. It nevertheless relies on a large amount of data prepared by humans: according to Bechtel, McEllhiney and Yun (2017), the strategy needs extensive training data, and the car must be capable of being tuned and trained properly. The scholars also noted that this network is quite promising for future intelligent cars (Bechtel, McEllhiney and Yun 2017, p.6).
Challenges and Shortcomings of Deep Learning Networks
Firstly, Kato et al. (2017, p.150) argued that because DL demands such a huge amount of processing power, a powerful on-board "brain" is required to handle the data volumes and computing requirements. The challenge is to manufacture a less expensive GPU that operates within the energy-consumption and heat-management constraints of a market-ready car (Kato et al. 2017, p.151).
Bojarski et al. (2016, p.4) contended that an end-to-end learning network demands a large amount of training data so that it can make predictions in as many driving situations as possible and meet minimum safety requirements. The data must also be sufficiently diverse to be useful. A lack of adequate training data therefore poses a challenge to the use of DL networks (Bojarski et al. 2016, p.6).
DL networks for autonomous cars also face the challenge of safety. Based on the findings of Schmidhuber (2015, p.89), deep neural networks are limited because they are quite unstable under adversarial perturbation. Similarly, the safety of the cars cannot be guaranteed because security verification and assurance methods for DL networks remain inadequately studied (Schmidhuber 2015, p.91). For this reason, more studies should be conducted before conclusions can be drawn about the safety of self-driving cars (Bojarski et al. 2016, p.4).
Deep Learning Frameworks
Some of the most common DL frameworks include CNTK, Caffe, Theano, and TensorFlow. In addition, Keras is a high-level DL library capable of running on top of CNTK, Theano, or TensorFlow (Tian, Pei, Jana and Ray 2017, p.5).
Keras Framework
Seminal work by Shalev-Shwartz, Shammah and Shashua (2016, p.2) illustrated that Keras offers a simple, modular API for building and training neural networks without exposing most of the complexity under the hood, which makes for an easier DL experience. Using Keras requires TensorFlow or Theano as a backend library, together with other libraries for data handling and visualisation. Keras models are built from layers, which serve as the elements of a neural network: each layer processes its input data and generates a different output depending on its kind, and these outputs are then consumed by the layers connected to it (Shalev-Shwartz, Shammah and Shashua 2016, p.7). Key layer types include dense layers, activation layers, and dropout layers. Kato et al. (2017, p.152) noted that dense layers connect input and output nodes, activation layers apply activation functions such as tanh and ReLU, and dropout layers are used for regularisation during training (Kato et al. 2017, p.152); a short sketch using these layer types is given below.
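The following minimal Keras sketch shows these three layer types in a small model; the layer sizes and the single tanh-bounded output are illustrative assumptions, not a model from the cited studies.

# Dense, Activation, and Dropout layers in a small Keras model (illustrative).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(128, input_shape=(64,)),   # dense layer: fully connected nodes
    layers.Activation('relu'),              # activation layer (tanh is also common)
    layers.Dropout(0.5),                    # dropout layer for regularisation
    layers.Dense(1, activation='tanh'),     # e.g. a bounded control output
])
model.compile(optimizer='adam', loss='mse')
model.summary()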
How Keras Works alongside TensorFlow Backend
Keras provides a high-level structure for building DL models; it does not itself implement low-level operations such as convolutions and tensor products. Instead, Keras relies on a specialised, well-optimised tensor-manipulation library that serves as its backend engine (Shalev-Shwartz, Shammah and Shashua 2016, p.10). Rather than choosing a single tensor library and tying the Keras implementation to it, Keras handles the problem in a modular way, so that several kinds of backend engine can be plugged into Keras smoothly (Bechtel, McEllhiney and Yun 2017, p.7). Here, the TensorFlow backend provides open-source tensor manipulation. When the user runs Keras, the backend field in the configuration file can be switched to TensorFlow, and Keras will then use that configuration, as illustrated below. TensorFlow also provides extensive documentation and tutorials intended to help beginners understand the conceptual elements of neural networks and learn the TensorFlow system (Bechtel, McEllhiney and Yun 2017, p.6).
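As an illustration, multi-backend Keras reads its backend from a small configuration file; the sketch below shows a typical layout and how the active backend can be checked from Python. The exact file contents may vary between Keras versions.

# With multi-backend Keras, the backend is selected in ~/.keras/keras.json,
# typically along these lines (the layout may differ between versions):
#
#   {
#       "image_data_format": "channels_last",
#       "backend": "tensorflow"
#   }
#
# The active backend can then be confirmed from Python:
from keras import backend as K
print(K.backend())   # prints "tensorflow" when the configuration above is used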
Keras also supports limited sub-graph computation, in a process referred to as model parallelisation, which allows for distributed training, as sketched below. The researchers noted that Keras supports TensorFlow, meaning that it offers both Theano and TensorFlow backends; TensorFlow, however, is rather slow compared with Torch and Theano (Bechtel, McEllhiney and Yun 2017, p.7).
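As a hedged illustration of distributed training with a Keras model on the TensorFlow backend, the sketch below uses TensorFlow's MirroredStrategy. Note that this is data-parallel replication across local devices rather than the model parallelism mentioned above, and the model itself is a placeholder.

# Data-parallel distributed training of a placeholder Keras model with
# TensorFlow's MirroredStrategy (illustrative; not model parallelism).
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()    # replicate across local GPUs/CPU
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
# model.fit(...) then splits each batch across the available replicas.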
Benefits of Keras over other Frameworks
Keras is a powerful neural-network API written in Python. It is designed to enable fast experimentation and is considered advantageous because it permits quick and easy prototyping. Keras supports both recurrent and convolutional networks. It is also user-friendly, since it places user experience at the centre of its design (Bojarski et al. 2017, p.9): it follows best practices for reducing cognitive load, offers simple and consistent APIs, minimises the number of user actions required for common use cases, and provides clear, actionable feedback after a user error (Kato et al. 2017, p.152).
The Keras framework is also easily extensible, since its existing components offer ample reusability. It allows full expressiveness and works with Python: models are described in Python code, which makes them simple to extend and debug. TensorFlow serves as the underlying framework when Keras networks are used (Bojarski et al. 2017, p.9).
References
Bechtel, M.G., McEllhiney, E. and Yun, H., 2017. DeepPicar: A Low-cost Deep Neural Network-based Autonomous Car. arXiv preprint arXiv:1712.08644.
Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., Zhang, J. and Zhang, X., 2016. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
Bojarski, M., Yeres, P., Choromanska, A., Choromanski, K., Firner, B., Jackel, L. and Muller, U., 2017. Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv preprint arXiv:1704.07911.
Kato, N., Fadlullah, Z.M., Mao, B., Tang, F., Akashi, O., Inoue, T. and Mizutani, K., 2017. The deep learning vision for heterogeneous network traffic control: proposal, challenges, and future perspective. IEEE Wireless Communications, 24(3), pp.146-153.
Schmidhuber, J., 2015. Deep learning in neural networks: An overview. Neural Networks, 61, pp.85-117.
Shalev-Shwartz, S., Shammah, S. and Shashua, A., 2016. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295.
Tian, Y., Pei, K., Jana, S. and Ray, B., 2017. DeepTest: Automated testing of deep-neural-network-driven autonomous cars. arXiv preprint arXiv:1708.08559.