Volume 2, Issue 1




CNN and MLP neural network ensembles for packet classification and adversary defense

Bruce Hartpence and Andres Kwasinski
GCCIS i-School at the Rochester Institute of Technology, Rochester, NY 14623, USA
Department of Computer Engineering, KGCOE at the Rochester Institute of Technology, Rochester, New York, NY 14623, USA

Abstract

Machine learning techniques such as artificial neural networks are seeing increased use in the examination of communication network research questions. Central to many of these research questions is the need to classify packets and improve visibility. Multi-Layer Perceptron (MLP) neural networks and Convolutional Neural Networks (CNNs) have been used to successfully identify individual packets. However, some datasets create instability in neural network models. Machine learning can also be subject to data injection and misclassification problems. In addition, extremely high classification accuracy is required when addressing complex communication network challenges. Neural network ensembles can minimize or even eliminate some of these problems by comparing results from multiple models. After ensemble tuning, training time can be reduced and a viable, effective architecture obtained. Because of their effectiveness, ensembles can also be used to defend against data poisoning attacks that attempt to create classification errors. In this work, ensemble tuning and several voting strategies are explored that consistently achieve classification accuracy above 99%. In addition, ensembles are shown to be effective against these attacks, maintaining accuracy above 98%.
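The abstract refers to several voting strategies for combining ensemble member outputs but does not specify them. As a minimal illustration of one common strategy, hard majority voting across per-model class predictions, consider the following sketch. The function names, class labels, and three-model setup are hypothetical and chosen for illustration; the paper's actual voting schemes and models may differ.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class labels for one packet by majority vote.

    predictions: list of class labels, one per ensemble member,
    e.g. ["tcp", "tcp", "udp"]. Ties break toward the label that
    first reached the winning count.
    """
    label, _ = Counter(predictions).most_common(1)[0]
    return label

def ensemble_classify(per_model_outputs):
    """Vote across models for a batch of packets.

    per_model_outputs: one prediction list per model, all of equal
    length (one label per packet). Returns one voted label per packet.
    """
    return [majority_vote(votes) for votes in zip(*per_model_outputs)]

# Three hypothetical classifiers disagreeing on the third packet;
# the majority (two of three) decides its label.
mlp_a = ["tcp", "udp", "icmp"]
mlp_b = ["tcp", "udp", "tcp"]
cnn   = ["tcp", "udp", "tcp"]
print(ensemble_classify([mlp_a, mlp_b, cnn]))  # ['tcp', 'udp', 'tcp']
```

A single poisoned or unstable model that flips one packet's label is outvoted by the remaining members, which is the intuition behind using ensembles as a defense against data poisoning.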

Keywords: Convolutional Neural Network (CNN), classification, ensemble, Multi-Layer Perceptron (MLP), adversary


Publication history

Received: 03 September 2020
Revised: 12 October 2020
Accepted: 08 December 2020
Published: 12 May 2021
Issue date: March 2021

Copyright

© ITU and TUP 2021

Rights and permissions

This work is available under the CC BY-NC-ND 3.0 IGO license: https://creativecommons.org/licenses/by-nc-nd/3.0/igo/.
