AI Approaches in Cyber Security and Intrusion Detection Systems Using LSTM Networks

Introduction

Data security has recently risen to the top of the internet's priority list. An intrusion occurs when an intruder gains unlawful access to, or use of, information on a network. An attack, a hacking attempt, packet sniffing, or a disruption of data are all considered intrusions. Attacks are designed to breach the security of systems or networks in order to extort money, obtain sensitive information, or pursue other unethical goals (Kaloudi & Li, 2020). An intrusion may also introduce malicious code into a computer's software, data, or logic, producing a variety of problems, including the ability to leak or steal an organisation's sensitive data and make it available to cybercriminals.

Data hacking, denial of service, viruses, phishing, and theft are only a few of the acts that fall under cyber-attacks (Procopiou & Komninos, 2019). Around the world, the number of cyber attackers and unlawful acts is growing, and cyber security defenders face an increasing number of threats from them. This growth is anticipated to have a major and widespread influence on everyday life, necessitating the deployment of security measures. An intrusion detection system (IDS) can assist in these efforts. Intrusion detection is carried out by collecting data packets, analysing them, and recognising any undesired, suspicious, or malicious activity in the traffic so that the administrator can be alerted (A. Y. Khan et al., 2020).

Deep learning approaches provide automated feature extraction. They allow more complex models to be built by providing a more accurate representation of the data. Based on current research in the domain of intrusion detection, the recurrent neural network (RNN) has become one of the most widely used deep learning algorithms for classification and other analyses of data sequences (Truong, Diep, & Zelinka, 2020). RNNs are also a reasonable method for improving anomaly detection in a network system and have achieved excellent results in sequence learning. This paper discusses the use of the BiDLSTM RNN model, which is based on bidirectional Long Short-Term Memory, for network intrusion detection.

Artificial intelligence in intrusion detection

Artificial intelligence (AI) is a field that aims to emulate the human brain's data processing and pattern building so that machines can make better judgements. Deep learning is a subcategory of machine learning that uses a layered form of artificial neural network (ANN) and provides more powerful modelling capabilities. Deep learning, also known as deep neural learning (DNL), consists of a series of interconnected layers. A deep learning model learns from its input data and transforms it through a number of levels of abstraction (Dasgupta, 2019). Deep neural networks (DNNs), deep belief networks (DBNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs) are examples of deep learning architectures.

These methodologies provide commendable results that are equivalent to or even better than human reasoning, making them applicable to a wide range of problems in a number of disciplines of research, including intrusion detection. Cyber security is the development of defence strategies that safeguard computing resources, networks, programmes, and data against unauthorised access, modification, or destruction (Lyamin, Kleyko, Delooz, & Vinel, 2018). As a result of rapid improvements in information and communication technologies, new cyber security threats are rapidly appearing and evolving. To speed up and scale their operations, cybercriminals are utilising new and sophisticated strategies. As a result, more adaptive, flexible, and powerful cyber defence systems that can detect a wide range of threats in real time are needed (Kamoun, Iqbal, Esseghir, & Baker, 2020).

Artificial intelligence (AI) technologies have grown in popularity in recent years, and they continue to play a significant role in detecting and preventing cyber threats. While AI was first proposed in the 1950s, it has expanded at a breakneck pace in recent years and now affects many aspects of society and industry. AI has applications in gaming, natural language processing, health care, manufacturing, education, and other fields (Nguyen, Chbeir, Exposito, Aniorté, & Trawiński, 2019). This development is affecting cyber security as well, with AI being used both to attack and to defend in cyberspace. On the offensive side, cybercriminals may employ AI to increase the sophistication and scale of their operations. On the defensive side, AI is being used to make defence systems more durable, flexible, and efficient, including the ability to adapt to environmental changes and thereby reduce damage (Abuodeh, Adkins, Setayeshfar, Doshi, & Lee, 2021).

The major uses of AI in the field of cyber security

Artificial intelligence is now being used to strengthen defensive capabilities. Thanks to its superior automation and data processing capabilities, AI can analyse massive volumes of data with efficiency, accuracy, and speed. Even if the patterns of attacks change, an AI system may leverage what it has learnt about prior threats to predict future attacks (R. Zhang et al., 2018). Artificial intelligence provides clear advantages in the following areas when it comes to cyber security:

AI may discover new and advanced attack variations: traditional technology relies on known attackers and known attack signatures, leaving blind spots when it comes to detecting unusual occurrences in new attacks. The drawbacks of these traditional defence tools are now being addressed with intelligent technologies. Privileged actions on an intranet, for example, may be monitored, and any significant change in privileged access operations might suggest a possible internal threat (Golczynski & Emanuello, 2021).

If the detection is successful, the system reinforces the validity of these observations and becomes more sensitive to similar patterns in the future. With more data and examples, the system can learn and adapt faster and more accurately, allowing it to detect anomalous behaviour more quickly and precisely. This is particularly critical as cyber-attacks become more sophisticated and hackers devise new and innovative approaches (Alhayani, Jasim Mohammed, Zeghaiton Chaloob, & Saleh Ahmed, 2021).

AI can manage vast volumes of data: AI can help improve network security by creating self-contained security systems that detect and respond to breaches. The number of security warnings received on a regular basis can be overwhelming for security professionals. Automatically identifying and responding to attacks reduces the strain on network security experts and can help detect threats more effectively than earlier strategies (Chan et al., 2019).

Any deviation from the norm may then be recognised, enabling the detection of attacks. AI approaches appear to be a popular area of cyberspace security research. To counteract threats, AI approaches such as neural networks, computational intelligence, intelligent agents, data mining, artificial immune systems, pattern recognition, machine learning, heuristics, and deep learning are being used. Of all of these, machine learning and deep learning have lately gained the most attention and have had the most success in mitigating cyber-threats (Abayomi-Alli, Misra, Abayomi-Alli, & Odusami, 2019).

Anomaly detection with LSTM networks

Anomalies can arise in a wide range of domains, prompting substantial research in a variety of fields, including networks, the Internet of Things, medicine, and manufacturing processes. The common ground across all disciplines is the definition of an anomaly as a deviation from the rule, or an irregularity that is not considered part of normal system behaviour; anomalies are thus described as abnormalities, deviants, or outliers. Anomalous dynamics are usually unknown and appear without warning, resulting in instabilities and, in turn, increased inefficiencies and system flaws (Yao, Wang, Liu, Chen, & Sheng, 2021).

All of the application areas explored share a taxonomy of anomalies: an anomaly may be classified according to its focus point, measurability, (non-)linearity, and temporal behaviour. Depending on the application, several focus points are feasible; an anomaly can, for example, directly influence the system's dynamics or interfere at the level of individual sensors and actuators (Smys, Basar, & Wang, 2020). Anomalies can be measured directly or indirectly through some form of state estimation, and they can have linear or nonlinear characteristics. In the literature surveyed, the anomalous system dynamics recognised and characterised using LSTM networks are seldom time-invariant, owing to the properties of the LSTM cell. Stationary and non-stationary time-variant dynamics, or short-term and long-term behaviour, are the two forms of time-variant dynamics (Pingle, Patil, Bhatkar, & Patil, 2020).

Mohanty and Vyas (2018) explain that the topology of a feed-forward neural network provides the cornerstone of deep learning models. The input layer, one or more hidden layers, and the output layer are the most common components, with the hidden layers used to infer features. At the input layer, a feature vector representing the object to be classified is presented as input, and the output layer generates the class vector associated with that input vector (Devi & Mohankumar, 2019). The LSTM approach lowers the cost function and completes the learning process by using backpropagation to adjust the weight values. The network is first given an input vector and initial weights, the error is determined by comparing the network's output with the desired output, and the error is then reduced by back-propagating it to update the weights (Abdulqadder, Zhou, Zou, Aziz, & Akber, 2020).
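
The training loop described above can be sketched in a few lines. The following is a minimal illustration, assuming a generic labelled dataset of flow sequences and a small single-layer LSTM; it is not the exact architecture of the cited studies, and the data shapes and layer sizes are assumptions.

```python
# Minimal sketch of the process described above: input layer, recurrent
# hidden layer, output layer, and backpropagation of the error between the
# predicted and desired outputs. Shapes and dataset are assumptions.
import numpy as np
import tensorflow as tf

# Assumed: 1000 flows, each a sequence of 10 time steps with 41 features,
# labelled 0 (normal) or 1 (attack). Replace with a real dataset (e.g. NSL-KDD).
X = np.random.rand(1000, 10, 41).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 41)),           # input layer: one feature vector per time step
    tf.keras.layers.LSTM(64),                        # hidden layer inferring temporal features
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer: class probability
])

# Compiling with a loss and an optimiser gives the cost function and the
# backpropagation(-through-time) weight updates described in the text.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```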

Convolutional Neural Networks

Tabassum et al. (2019) suggest that, because network data is collected in real time, a CNN-based layer may be used to distinguish anomalies from regular data. The strategy consists of learning, directly from the data, the changes that anomalous traffic introduces. The main issue is that normal and abnormal flows are not very distinct, so the CNN has to be pushed to find anomalies. When standard CNNs are used for detection, the features learnt are those that describe the content of the flow, which is mostly normal, meaning the classifier ends up modelling the content of the training data rather than the deviations from it. The approach was therefore designed to suppress the content and learn anomalous traces adaptively. This is done with the mean convolutional layer (CNNMCL), a novel convolutional layer designed for intrusion detection tasks. The resulting prediction errors are then used as low-level abnormal/normal features, from which more complex anomaly detection features are developed.
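
For orientation, the sketch below shows a plain 1D convolutional classifier over per-flow feature sequences. It is a generic illustration of CNN-based flow classification, not the CNNMCL layer described above; the input shape and layer sizes are assumptions.

```python
# Minimal sketch of a 1D CNN classifier over flow feature sequences.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100, 1)),                         # assumed: 100 per-flow measurements
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),  # learns local patterns in the flow
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),                # normal vs. anomalous flow
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```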

Powell (2020) similarly notes that the proposed layer learns prediction-error filters end to end in order to capture these effects. The feature maps are then linked to prediction-error fields, which serve as low-level anomalous traces. Thus, while CNNs can improve recognition by learning content characteristics, they are not yet ideal for detecting anomalies: the expected content of an input flow is easy for a CNN to find, whereas anomalies show only slight deviations from typical data, so a dedicated mechanism is needed to identify these discrepancies. IDS researchers are therefore investigating whether CNNs can learn to recognise both anomalous patterns and conventional content characteristics, but the accuracy and efficiency of anomaly detection still fall short of users' expectations.

Naïve Bayes-based approach

Jabbar et al. (2017) proposed a new approach for detecting network intrusions and used Naïve Bayes (NB) to compare it against other existing methods. The NB algorithm assumes independent attributes and is particularly sensitive to the selection of many features, which interferes with its performance or accuracy; in reality, the features are often correlated.

Mehri et al. (2018) likewise proposed a method based on NB, which is widely used in network intrusion detection systems because it can improve the efficiency and accuracy of detection while keeping false positive rates low. However, NB assumes independent attributes and is very sensitive to a large number of features, causing performance issues or low accuracy, so further research is needed to improve it. The accuracy of this approach is 90.51 percent, with a false alarm rate of 0.14 percent. For improved accuracy, the approach should be tested on different datasets in the future.
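
As a rough illustration of this family of methods, the sketch below trains a Gaussian Naïve Bayes classifier on tabular flow features and reports accuracy and false-alarm rate. The synthetic data and feature count are assumptions, not the setup of the cited papers.

```python
# Minimal sketch of a Naive Bayes intrusion classifier on tabular flow features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Assumed: numeric flow features and binary labels (0 = normal, 1 = attack).
X = np.random.rand(2000, 20)
y = np.random.randint(0, 2, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

clf = GaussianNB().fit(X_tr, y_tr)   # assumes conditionally independent features
pred = clf.predict(X_te)

# False-alarm rate = false positives / all actual normals
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy:", accuracy_score(y_te, pred), "false alarm rate:", fp / (fp + tn))
```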

SVM-based approach

Xu et al. (2018) discuss that supervised techniques use a fully labelled dataset to train the SVM. Although this approach is the most accurate at identifying intrusions, it is difficult, if not impossible, to obtain a fully labelled dataset in real security applications. To increase the accuracy and efficacy of the IDS, some systems use metaheuristic techniques to tune the SVM parameters on training data from benchmark datasets. To handle large datasets and increase IDS performance, several SVM-based systems combine clustering techniques with the SVM. In addition, to increase detection rates, some IDS systems integrate the SVM with other classifiers such as artificial neural networks, decision trees, and Naïve Bayes. Many IDSs use an SVM to classify computer network traffic as normal or abnormal (Alqahtani, Gazzan, & Sheldon, 2020). The limitation, however, is that classification performance can still be improved.
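
The sketch below shows a basic SVM traffic classifier. The scaler/RBF-kernel pipeline and the synthetic data are assumptions, not the exact configuration of Xu et al. (2018) or Alqahtani et al. (2020).

```python
# Minimal sketch of an SVM-based normal/abnormal traffic classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Assumed numeric flow features with binary labels (0 = normal, 1 = abnormal).
X = np.random.rand(2000, 20)
y = np.random.randint(0, 2, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Standardising features before an RBF-kernel SVM is a common default choice.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm.fit(X_tr, y_tr)
print(classification_report(y_te, svm.predict(X_te)))
```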

LSTM-based approach

Researchers have employed LSTMs in network security to address some of the most critical security challenges, such as intrusion detection. The bidirectional LSTM improves a model's classification performance by working in tandem with regular LSTMs (M. Khan, Karim, & Kim, 2019). Two LSTMs are trained on the input data: the first on the original sequences and the second on reversed copies of them. As a result, the network has more context available and converges faster. The fundamental premise of the BiDLSTM is to replicate the first recurrent layer of the network, feed the original input data to the first layer, and feed a reversed replica of the input data to the duplicated layer. The LSTM cells used in both directions also address the vanishing gradient problem of ordinary RNNs (Cui, Long, Min, Liu, & Li, 2018).
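
A bidirectional LSTM classifier of this kind can be expressed compactly, as in the sketch below; the input shape and layer sizes are assumptions rather than the configuration of the cited works.

```python
# Minimal sketch of a bidirectional LSTM (BiDLSTM) intrusion classifier.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 41)),                    # assumed sequence length / feature count
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # one LSTM reads forward, a copy reads the reversed sequence
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),           # binary normal/attack output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```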

LSTM networks are well suited to detecting contextual anomalies because of their capacity to learn temporal relationships and capture them in a low-dimensional state representation. These dependencies can involve both stationary and non-stationary dynamics, as well as short-term and long-term dependence. LSTM networks are therefore particularly well adapted to modelling multivariate time series and time-variant systems (Lindemann, Muller, Vietz, Jazdi, & Weyrich, 2021). As a consequence, anomaly detection can be based on the disparity between actual system outputs and the outputs predicted by the network. The ability to detect abnormalities using LSTM-based approaches has been demonstrated, for example, in the study by Malhotra, Vig, Shroff, and Agarwal (2015).

That study uses a stacked LSTM architecture to identify abnormalities in time series data. Unlike robust or denoising LSTM autoencoders, it does not employ dimensionally reduced features as inputs; detection is carried out by evaluating the deviation of the predictions from the expected outputs using a variance analysis. In Ding, Morozov, Vock, Weyrich, and Janschek (2020), a deep LSTM network is used as a predictor of typical bus communication behaviour in vehicles, and significant deviations are recognised using a dynamic threshold to detect anomalous communication caused by cyber-attacks. Ergen and Kozat (2020) present a compound structure in which the LSTM network predicts typical system dynamics and a support vector machine acts as a classifier for anomalies, providing an adaptive and self-learning detection mechanism. As a consequence, temporal anomalies in multivariate data can be detected in a semi-supervised or unsupervised manner. A technique for identifying collective anomalies using LSTM networks has also been described (Bontemps, Cao, McDermott, & Le-Khac, 2016).
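
The common pattern behind these works is prediction-error-based detection: an LSTM is trained on normal behaviour to predict the next value, and points whose prediction error exceeds a threshold are flagged. The sketch below illustrates this idea on a synthetic univariate series; the data, network size, and the simple mean-plus-three-standard-deviations rule are assumptions, not the exact models of the cited papers.

```python
# Minimal sketch of prediction-error-based anomaly detection with an LSTM.
import numpy as np
import tensorflow as tf

# Assumed univariate series; in practice this would be network or sensor data.
series = np.sin(np.linspace(0, 100, 2000)).astype("float32")
window = 30
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]                                   # next value after each window

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),                         # one-step-ahead prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)   # trained on normal behaviour only

errors = np.abs(model.predict(X, verbose=0).ravel() - y)
threshold = errors.mean() + 3 * errors.std()          # static threshold; cited works use dynamic variants
anomalies = np.where(errors > threshold)[0]
print(f"{len(anomalies)} points flagged as anomalous")
```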

The novelty lies in analysing several one-step-ahead prediction errors together rather than evaluating each time step independently. By modelling stationary and non-stationary temporal dependencies in advance, LSTM networks increase detection accuracy and are therefore able to identify temporal anomaly structures effectively. In their study, M.-C. Lee, Lin, and Gan (2020) proposed a real-time detection technique using two LSTM networks: one models short-term features and can identify single forthcoming abnormal data items within the time series, while the other regulates detection using long-term thresholds (Zenati, Foo, Lecouat, Manek, & Chandrasekhar, 2018).
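
A dynamic threshold of the kind used in several of these works can be sketched as an exponentially weighted moving average (EWMA) over the prediction errors, as below. The smoothing factor and multiplier are assumptions, not the exact rule from the cited papers.

```python
# Minimal sketch of an EWMA-based dynamic threshold over prediction errors.
import numpy as np

def ewma_anomalies(errors, alpha=0.1, k=3.0):
    """Flag points whose error exceeds an exponentially weighted moving
    estimate of the error mean plus k times its moving deviation."""
    mean, var = errors[0], 0.0
    flags = []
    for e in errors:
        flags.append(e > mean + k * np.sqrt(var))    # compare against the current adaptive threshold
        diff = e - mean
        mean += alpha * diff                         # EWMA update of the error mean
        var = (1 - alpha) * (var + alpha * diff**2)  # EWMA update of the error variance
    return np.array(flags)

# Example with a synthetic error trace (e.g. the `errors` array from the sketch above).
errors = np.abs(np.random.normal(0, 0.1, 500)); errors[400] = 2.0
print(ewma_anomalies(errors).nonzero()[0])
```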

The approaches discussed above are summarised in the table below.

| Source | Data type | Features extracted | Anomaly type(s) | Architecture | Scenario | Performance | Metrics |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bontemps et al. (2016) | Univariate time series | No | Collective | LSTM | Intrusion detection for PC networks | No comparison with other methods carried out | Average relative error, danger coefficient |
| Lindemann et al. (2021) | Multivariate time series | Yes | Outlier, collective, contextual | Observer-based LSTM AE | Discrete manufacturing | No comparison with other methods carried out | Reconstruction error |
| Ding et al. (2020) | Multivariate time series | No | Contextual | LSTM + EWMA | Industrial robotic manipulators | No comparison with other methods carried out | Precision, recall |
| Malhotra et al. (2015) | Multivariate time series | Yes | Contextual | Stacked LSTM | Several, such as power demand | Higher than RNN | Precision, recall, F1-score |
| Lee et al. (2020) | Univariate time series | No | Outlier, collective | Dual LSTM | Multiple-domain streaming data | Higher than competitors, real-time capable | Precision, recall, F1-score |

Summary

Denial of service (DoS), disclosure, manipulation, impersonation, and repudiation are just a few of the harmful actions perpetrated by attackers; these categories can be grouped under the umbrella term intrusion. Intrusion detection systems (IDS) were developed in response to the rise of more complex intrusions and incursions (B. Lee, Amaresh, Green, & Engels, 2018). Deep learning applications for developing security solutions in IoT environments are still in their early stages, but they hold a lot of potential. Because the Internet of Things handles personal data and commercial information, strong security measures are necessary, and because IoT generates a vast quantity of heterogeneous data, machine learning and deep learning technologies can be used to provide them. The Internet of Things (IoT) is a revolution, not a progression (J. Zhang et al., 2020).

Security problems are growing as the Internet of Things develops. The IoT will only be valuable to society once it is secure, and artificial intelligence can help provide that security. The method discussed here uses both binary and multiclass classification for detection; there are four basic attack categories, all of which are found in the NSL-KDD dataset. The need for highly precise attack detection can be satisfied by a deep learning intrusion detection system (Idrissi, Boukabous, Azizi, Moussaoui, & El Fadili, 2021). Recurrent neural networks (RNNs) have a large capacity for recognising complex patterns in sequential data and generating similar patterns. By maintaining a constant error flow, the LSTM RNN can cope with cyber-attacks: it can learn over long periods of time and associate causes and their consequences even when they are far apart. This enables it to preserve details of attacks learnt during the training phase and make detection decisions based on that knowledge (Mirza & Cosan, 2018).

Unlike computer memory, the LSTM's gates are analogue rather than digital. Finally, while the applications of RNNs have been extensively studied and recognised in the machine learning literature, their capacity to aid intrusion detection deserves further attention (Kasongo & Sun, 2021). The proposed study paves the way for a more in-depth look at these cyber security capabilities. The LSTM models proved to be the best because the embedding strategy could capture categorical information, which is critical for attack detection.
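
To make the embedding strategy concrete, the sketch below maps the three categorical NSL-KDD fields (protocol_type, service, flag) to learned embeddings, concatenates them with the numeric features, and feeds the result to an LSTM layer. The vocabulary sizes, layer widths, and single-record sequence formulation are assumptions, not the exact model evaluated in the cited work.

```python
# Minimal sketch of embedding categorical NSL-KDD features before an LSTM.
import tensorflow as tf

protocol_in = tf.keras.Input(shape=(1,), name="protocol_type")  # e.g. tcp/udp/icmp, integer-encoded
service_in  = tf.keras.Input(shape=(1,), name="service")        # network service, integer-encoded
flag_in     = tf.keras.Input(shape=(1,), name="flag")           # connection status flag, integer-encoded
numeric_in  = tf.keras.Input(shape=(38,), name="numeric")       # remaining numeric features

emb = tf.keras.layers.Concatenate()([
    tf.keras.layers.Flatten()(tf.keras.layers.Embedding(4, 2)(protocol_in)),
    tf.keras.layers.Flatten()(tf.keras.layers.Embedding(80, 8)(service_in)),
    tf.keras.layers.Flatten()(tf.keras.layers.Embedding(12, 4)(flag_in)),
    numeric_in,
])
# Treat the fused record as a length-1 sequence so an LSTM layer can be applied.
x = tf.keras.layers.Reshape((1, -1))(emb)
x = tf.keras.layers.LSTM(64)(x)
out = tf.keras.layers.Dense(5, activation="softmax")(x)  # normal + 4 NSL-KDD attack categories

model = tf.keras.Model([protocol_in, service_in, flag_in, numeric_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```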

References

  1. Abayomi-Alli, O., Misra, S., Abayomi-Alli, A., & Odusami, M. (2019). A review of soft techniques for SMS spam classification: Methods, approaches and applications. Engineering Applications of Artificial Intelligence, 86, 197–212. https://doi.org/10.1016/j.engappai.2019.08.024
  2. Abdulqadder, I. H., Zhou, S., Zou, D., Aziz, I. T., & Akber, S. M. A. (2020). Multi-layered intrusion detection and prevention in the SDN/NFV enabled cloud of 5G networks using AI-based defense mechanisms. Computer Networks, 179, 107364. https://doi.org/10.1016/j.comnet.2020.107364
  3. Abuodeh, M., Adkins, C., Setayeshfar, O., Doshi, P., & Lee, K. H. (2021). A Novel AI-based Methodology for Identifying Cyber Attacks in Honey Pots. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15224–15231. Retrieved from https://www.aaai.org/AAAI21Papers/IAAI-98.AbuodehM.pdf
  4. Alhayani, B., Jasim Mohammed, H., Zeghaiton Chaloob, I., & Saleh Ahmed, J. (2021). Effectiveness of artificial intelligence techniques against cyber security risks apply of IT industry. Materials Today: Proceedings. https://doi.org/10.1016/j.matpr.2021.02.531
  5. Alqahtani, A., Gazzan, M., & Sheldon, F. T. (2020). A proposed Crypto-Ransomware Early Detection(CRED) Model using an Integrated Deep Learning and Vector Space Model Approach. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), 0275–0279. IEEE. https://doi.org/10.1109/CCWC47524.2020.9031182
  6. Bontemps, L., Cao, V. L., McDermott, J., & Le-Khac, N.-A. (2016). Collective Anomaly Detection Based on Long Short-Term Memory Recurrent Neural Networks. https://doi.org/10.1007/978-3-319-48057-2_9
  7. Chan, L., Morgan, I., Simon, H., Alshabanat, F., Ober, D., Gentry, J., … Cao, R. (2019). Survey of AI in Cybersecurity for Information Technology Management. 2019 IEEE Technology & Engineering Management Conference (TEMSCON), 1–8. IEEE. https://doi.org/10.1109/TEMSCON.2019.8813605
  8. Cui, J., Long, J., Min, E., Liu, Q., & Li, Q. (2018). Comparative Study of CNN and RNN for Deep Learning Based Intrusion Detection System. https://doi.org/10.1007/978-3-030-00018-9_15
  9. Dasgupta, D. (2019). AI vs. AI: Viewpoints. Technical Report (No. CS-19-001), The University of Memphis.
  10. Devi, R. S., & Mohankumar, M. (2019). Digital Forensics and Artificial Intelligence for Cyber Security. 13.
  11. Ding, S., Morozov, A., Vock, S., Weyrich, M., & Janschek, K. (2020). Model-Based Error Detection for Industrial Automation Systems Using LSTM Networks. https://doi.org/10.1007/978-3-030-58920-2_14
  12. Ergen, T., & Kozat, S. S. (2020). Unsupervised Anomaly Detection With LSTM Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 31(8), 3127–3141. https://doi.org/10.1109/TNNLS.2019.2935975
  13. Golczynski, A., & Emanuello, J. A. (2021). End-To-End Anomaly Detection for Identifying Malicious Cyber Behavior through NLP-Based Log Embeddings. ArXiv Preprint ArXiv:2108.12276.
  14. Idrissi, I., Boukabous, M., Azizi, M., Moussaoui, O., & El Fadili, H. (2021). Toward a deep learning-based intrusion detection system for IoT against botnet attacks. IAES International Journal of Artificial Intelligence (IJ-AI), 10(1), 110. https://doi.org/10.11591/ijai.v10.i1.pp110-120
  15. Jabbar, M. A., Aluvalu, R., & Reddy S, S. S. (2017). RFAODE: A Novel Ensemble Intrusion Detection System. Procedia Computer Science, 115, 226–234. https://doi.org/10.1016/j.procs.2017.09.129
  16. Kaloudi, N., & Li, J. (2020). The AI-Based Cyber Threat Landscape. ACM Computing Surveys, 53(1), 1–34. https://doi.org/10.1145/3372823
  17. Kamoun, F., Iqbal, F., Esseghir, M. A., & Baker, T. (2020). AI and machine learning: A mixed blessing for cybersecurity. 2020 International Symposium on Networks, Computers and Communications (ISNCC), 1–7. IEEE. https://doi.org/10.1109/ISNCC49221.2020.9297323
  18. Kasongo, S. M., & Sun, Y. (2021). A Deep Gated Recurrent Unit based model for wireless intrusion detection system. ICT Express, 7(1), 81–87. https://doi.org/10.1016/j.icte.2020.03.002
  19. Khan, A. Y., Latif, R., Latif, S., Tahir, S., Batool, G., & Saba, T. (2020). Malicious Insider Attack Detection in IoTs Using Data Analytics. IEEE Access, 8, 11743–11753. https://doi.org/10.1109/ACCESS.2019.2959047
  20. Khan, M., Karim, M., & Kim, Y. (2019). A Scalable and Hybrid Intrusion Detection System Based on the Convolutional-LSTM Network. Symmetry, 11(4), 583. https://doi.org/10.3390/sym11040583
  21. Lee, B., Amaresh, S., Green, C., & Engels, D. (2018). Comparative study of deep learning models for network intrusion detection. SMU Data Science Review, 1(1), 8.
  22. Lee, M.-C., Lin, J.-C., & Gan, E. G. (2020). ReRe: A Lightweight Real-Time Ready-to-Go Anomaly Detection Approach for Time Series. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), 322–327. IEEE. https://doi.org/10.1109/COMPSAC48688.2020.0-226
  23. Lindemann, B., Muller, T., Vietz, H., Jazdi, N., & Weyrich, M. (2021). A survey on long short-term memory networks for time series prediction. Procedia CIRP, 99, 650–655. Retrieved from https://www.sciencedirect.com/science/article/pii/S2212827121003796
  24. Lyamin, N., Kleyko, D., Delooz, Q., & Vinel, A. (2018). AI-Based Malicious Network Traffic Detection in VANETs. IEEE Network, 32(6), 15–21. https://doi.org/10.1109/MNET.2018.1800074
  25. Malhotra, P., Vig, L., Shroff, G., & Agarwal, P. (2015). Long Short Term Memory Networks for Anomaly Detection in Time Series. Proceedings of the European Symposium on Artificial Neural Networks (ESANN), 89–94.
  26. Mehri, V. A., Ilie, D., & Tutschku, K. (2018). Privacy and DRM Requirements for Collaborative Development of AI Applications. Proceedings of the 13th International Conference on Availability, Reliability and Security, 1–8. New York, NY, USA: ACM. https://doi.org/10.1145/3230833.3233268
  27. Mirza, A. H., & Cosan, S. (2018). Computer network intrusion detection using sequential LSTM Neural Networks autoencoders. 2018 26th Signal Processing and Communications Applications Conference (SIU), 1–4. IEEE. https://doi.org/10.1109/SIU.2018.8404689
  28. Mohanty, S., & Vyas, S. (2018). Cybersecurity and AI. In How to Compete in the Age of Artificial Intelligence (pp. 143–153). Berkeley, CA: Apress. https://doi.org/10.1007/978-1-4842-3808-0_6
  29. Nguyen, N. T., Chbeir, R., Exposito, E., Aniorté, P., & Trawiński, B. (Eds.). (2019). Computational Collective Intelligence. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-28374-2
  30. Pingle, Y., Patil, S., Bhatkar, S. N., & Patil, S. (2020). Detection of Malicious Content using AI.
  31. Powell, B. A. (2020). Detecting malicious logins as graph anomalies. Journal of Information Security and Applications, 54, 102557. https://doi.org/10.1016/j.jisa.2020.102557
  32. Procopiou, A., Komninos, N., & Douligeris, C. (2019). ForChaos: Real Time Application DDoS Detection Using Forecasting and Chaos Theory in Smart Home IoT Network. Wireless Communications and Mobile Computing, 2019, 8469410.
  33. Smys, S., Basar, A., & Wang, H. (2020). Hybrid Intrusion Detection System for Internet of Things (IoT). Journal of ISMAC, 2(4), 190–199. https://doi.org/10.36548/jismac.2020.4.002
  34. Tabassum, A., Erbad, A., & Guizani, M. (2019). A Survey on Recent Approaches in Intrusion Detection System in IoTs. 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), 1190–1197. IEEE. https://doi.org/10.1109/IWCMC.2019.8766455
  35. Truong, T. C., Diep, Q. B., & Zelinka, I. (2020). Artificial Intelligence in the Cyber Domain: Offense and Defense. Symmetry, 12(3), 410. https://doi.org/10.3390/sym12030410
  36. Xu, C., Shen, J., Du, X., & Zhang, F. (2018). An Intrusion Detection System Using a Deep Neural Network With Gated Recurrent Units. IEEE Access, 6, 48697–48707. https://doi.org/10.1109/ACCESS.2018.2867564
  37. Yao, R., Wang, N., Liu, Z., Chen, P., & Sheng, X. (2021). Intrusion Detection System in the Advanced Metering Infrastructure: A Cross-Layer Feature-Fusion CNN-LSTM-Based Approach. Sensors, 21(2), 626. https://doi.org/10.3390/s21020626
  38. Zenati, H., Foo, C. S., Lecouat, B., Manek, G., & Chandrasekhar, V. R. (2018). Efficient GAN-Based Anomaly Detection. ArXiv Preprint ArXiv:1802.06222. Retrieved from https://arxiv.org/abs/1802.06222
  39. Zhang, J., Ling, Y., Fu, X., Yang, X., Xiong, G., & Zhang, R. (2020). Model of the intrusion detection system based on the integration of spatial-temporal features. Computers & Security, 89, 101681. https://doi.org/10.1016/j.cose.2019.101681
  40. Zhang, R., Chen, X., Lu, J., Wen, S., Nepal, S., & Xiang, Y. (2018). Using AI to hack IA: A new stealthy spyware against voice assistance functions in smart phones. ArXiv Preprint ArXiv:1805.06187.
