IEEE Computational Intelligence Society

IEEE CIS Task Force on

Evolutionary Deep Learning and Applications



Deep learning draws increasing attention from both academia and industry, owing to the ability of its deep architectures to learn meaningful representations of input data that significantly improve the performance of associated machine learning tasks. Existing deep learning approaches include deep neural networks, the deep convex net, PCA-Net, the deep Fisher net, deep forest and deep belief networks. A key issue shared by these approaches is that meaningful representations can be learned only when the hyper-parameters are properly specified beforehand and the general parameters are accurately learned during the training process. Because deep learning is an emerging topic, little research has been dedicated to automatically setting the hyper-parameters and accurately finding globally optimal general parameters. The problems in this regard can be formulated as optimization problems, including discrete optimization, constrained optimization, large-scale global optimization and multi-objective optimization, where evolutionary computation methods can play a crucial role. For example, in deep neural network-based deep learning algorithms, the types of building blocks are enumerable (a discrete problem), the coefficients of the regularization terms are confined to a particular range (a constrained problem), and the number of weights is large while their values are required to be globally optimal (a large-scale global optimization problem). In addition, a higher-performance deep learning algorithm typically requires more computational resources, so the pursuit of performance and the saving of computational resources are conflicting objectives (a multi-objective problem).



Evolutionary computation approaches, particularly genetic algorithms, particle swarm optimization and genetic programming, have shown superiority in addressing real-world discrete, constrained, large-scale and multi-objective optimization problems, largely due to their powerful abilities to search for global optima, handle non-convex and non-differentiable problems, find a set of non-dominated solutions in a single run, and operate without requiring rich domain knowledge. However, most existing evolutionary computation methods currently work only on algorithms with relatively shallow architectures, and cannot provide satisfactory results for deep architectures.
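As a purely illustrative sketch (not part of the task force materials), the formulation above can be made concrete with a minimal genetic algorithm that evolves two hyper-parameters of a hypothetical network, a continuous learning rate and a discrete layer count. The `validation_loss` function here is an assumed toy surrogate; in practice it would train and evaluate a deep network, which is what makes each fitness evaluation expensive.

```python
import random

random.seed(0)

# Hypothetical surrogate for validation loss. In a real setting this
# would train a network with the given hyper-parameters and return its
# validation error; here the optimum is placed at lr=0.01, 6 layers.
def validation_loss(lr, n_layers):
    return (lr - 0.01) ** 2 * 1e4 + (n_layers - 6) ** 2 * 0.1

def random_individual():
    return (10 ** random.uniform(-4, -1),   # learning rate (continuous gene)
            random.randint(1, 12))          # number of layers (discrete gene)

def mutate(ind):
    lr, n = ind
    lr *= 10 ** random.gauss(0, 0.1)        # perturb learning rate on log scale
    n = max(1, min(12, n + random.choice([-1, 0, 1])))  # step layer count
    return (lr, n)

def crossover(a, b):
    # Uniform crossover: child takes each gene from either parent.
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

def evolve(pop_size=20, generations=30):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: validation_loss(*ind))
        parents = pop[:pop_size // 2]       # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=lambda ind: validation_loss(*ind))

best = evolve()
print("best hyper-parameters:", best)
```

The same skeleton extends to the other formulations the paragraph mentions: clamping a gene to a range handles the constrained case, and keeping a set of non-dominated individuals instead of a single best handles the multi-objective trade-off between performance and computational cost.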

The objectives of this task force are:


Anticipated Interests

Topics of interest include but are not limited to:



Current Events:



Chair

Yanan Sun

Victoria University of Wellington, New Zealand

Vice Chairs

Andy Song

Royal Melbourne Institute of Technology (RMIT), Australia

Colin G. Johnson

University of Kent, UK

Bing Xue

Victoria University of Wellington, New Zealand



Members

Harith Al-Sahaf, Victoria University of Wellington, New Zealand

Aaron Chen, Victoria University of Wellington, New Zealand

Ran Cheng, University of Birmingham, UK

Liang Feng, Chongqing University, China

Min Jiang, Xiamen University, China

Yifeng Li, National Research Council, Canada

Amiram Moshaiov, Tel-Aviv University, Israel

Kourosh Neshatian, University of Canterbury, New Zealand

Xi Peng, Sichuan University, China

Yiming Peng, Victoria University of Wellington, New Zealand

Rongbin Qi, East China University of Science and Technology, China

Hong Qu, University of Electronic Science and Technology of China, China

Nasser R. Sabar, La Trobe University, Australia

Kay Chen Tan, National University of Singapore, Singapore

Gary Yen, Oklahoma State University, US

Mengjie Zhang, Victoria University of Wellington, New Zealand