Intrusion detection systems (IDSs) are essential to network security because they identify and help neutralise potential threats. Recent advances in adversarial machine learning, however, suggest that IDSs built on machine learning models are vulnerable to attacks that exploit adversarial examples: inputs deliberately perturbed so that a model misclassifies them, for instance labelling malicious traffic as benign. This research investigates the vulnerability of IDSs to adversarial examples, examines the likely consequences of such attacks for network security, and proposes defensive strategies to strengthen the resilience of IDSs against these threats.
Keywords: IDS, network, detection, classification.
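To make the threat concrete, the sketch below illustrates one standard way adversarial examples are crafted, the fast gradient sign method (FGSM): each input feature is nudged in the direction that increases the model's loss. The logistic-regression "detector", its weights, and the feature vector are all hypothetical stand-ins for a trained ML-based IDS, chosen only to keep the example self-contained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic-regression detector: weights w, bias b.
w = np.array([2.0, -1.5, 0.5])
b = -0.25

# A flow feature vector the detector correctly labels "attack" (y = 1).
x = np.array([0.9, 0.1, 0.4])
y = 1.0

def predict(x):
    # Probability that the input is an attack.
    return sigmoid(w @ x + b)

# For logistic regression with binary cross-entropy loss, the gradient
# of the loss with respect to the input x is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM: shift every feature by eps in the direction that raises the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # clean sample: classified as an attack (> 0.5)
print(predict(x_adv))  # perturbed sample: now classified as benign (< 0.5)
```

With these illustrative numbers, the perturbation flips the detector's decision from "attack" to "benign", which is exactly the false-negative failure mode the abstract describes; real attacks use the same principle against far larger models.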