Advancing deep learning with evolutionary deep intelligence

Deep learning can do what conventional machine learning cannot, and it keeps proving this by tackling problems that classical methods struggle with. Its capabilities, however, are constrained by a heavy reliance on high-performance computing resources.

Training and running deep learning models typically demands supercomputer clusters and GPU arrays capable of handling the enormous computational load. Deep neural networks also require machine learning experts to design their complex architectures and fine-tune them for efficiency.

Researchers have raised red flags about this trend: improving accuracy seems to demand ever deeper and wider networks. That is a pressing concern, because practitioners who lack the energy and processing resources to run such architectures are effectively shut out of this powerful technology.

The University of Waterloo’s Vision and Image Processing Lab has developed techniques to address this problem. The team is investigating neural networks that evolve over time to become both powerful and efficient, a novel route to deep intelligence that remains practical to deploy.

Evolutionary Deep Intelligence as a Concept

Evolutionary deep intelligence refers to deep neural networks that evolve over generations to become smarter and more efficient. The essence of each deep neural network, a kind of “DNA,” is encoded computationally, and simulated environmental factors are applied to encourage computational and energy efficiency.

Through a process akin to natural selection, each network regularly spawns new “offspring” networks that improve on their predecessors, as the sketch below illustrates.
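As a rough illustration of the mechanism, here is a minimal sketch, assuming a toy representation in which a network’s “DNA” is the probability that each synapse survives into the next generation; the function name and the env_factor penalty are illustrative assumptions, not the lab’s actual implementation:

```python
import numpy as np

def synthesize_offspring(parent_weights, env_factor=0.7, rng=None):
    """Toy evolutionary synthesis: stronger parent synapses are more
    likely to persist, and a simulated environmental factor that rewards
    efficiency lowers every synapse's survival probability."""
    rng = rng or np.random.default_rng()
    # Encode the parent's "DNA" as synapse-survival probabilities.
    magnitude = np.abs(parent_weights)
    probs = magnitude / (magnitude.max() + 1e-12)
    # env_factor < 1 penalizes synapse count, pushing each generation
    # toward sparser, cheaper offspring networks.
    survives = rng.random(parent_weights.shape) < probs * env_factor
    return parent_weights * survives

rng = np.random.default_rng(0)
net = rng.normal(size=(256, 256))  # first-generation weight matrix
for gen in range(1, 5):
    net = synthesize_offspring(net, rng=rng)
    print(f"generation {gen}: {np.count_nonzero(net)} synapses remain")
```

In the actual work, each offspring generation is also trained and evaluated, so only architectures that preserve accuracy while shedding synapses are carried forward.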

Researchers at the University of Waterloo tested the approach on the MSRA-B and HKU-IS datasets. The synthesized “offspring” deep neural networks attained state-of-the-art F-beta scores with far leaner designs: by the fourth generation, the networks had roughly 48 times fewer synapses than the first.
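For context, the F-beta score blends precision and recall into a single number. A small helper, assuming the beta-squared = 0.3 weighting commonly used in salient-object-detection benchmarks (an assumption, not a figure from the article):

```python
def f_beta(precision, recall, beta_sq=0.3):
    # F-beta combines precision and recall; beta_sq = beta^2 = 0.3 is a
    # common convention in salient-object detection (assumed here).
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```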

The approach was tested further on the MNIST dataset. By the seventh generation, the evolved networks achieved 99 percent accuracy with about 40 times fewer synapses. By the thirteenth generation of “offspring,” accuracy still stood at roughly 98 percent with up to 125 times fewer synapses than the first-generation networks.

The University of Waterloo’s work on evolutionary deep intelligence has garnered several prizes and honors, including a Best Paper Award at the NIPS Workshop on Efficient Methods for Deep Neural Networks and a Best Paper Award at the Conference on Computational Vision and Intelligence Systems; MIT Technology Review also named it one of the most intriguing and thought-provoking publications on arXiv.
