Artificial intelligence (AI) relies heavily on artificial neural networks, which are loosely modeled on the brain. A recent study suggests, however, that highly complex network architectures may not be necessary: simpler designs, closer to how our own brains are organized, could support efficient learning just as well.
Deep learning architectures stack many layers, like a tall building, and excel at complicated tasks. The human brain, by contrast, has a relatively shallow architecture, yet it excels at intricate classification tasks despite its slower and noisier processes.
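To make the contrast concrete, here is a minimal sketch (not from the study; all layer sizes are hypothetical) comparing the parameter counts of a deep, narrow fully connected network with a shallow, wide one of roughly the same size:

```python
# Illustrative sketch only: "deep" vs. "wide shallow" here refers to the
# shape of a fully connected network, not to the architectures in the paper.

def mlp_params(layer_sizes):
    """Total weights + biases of a fully connected network
    whose layer widths are given in order (input -> ... -> output)."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

deep_narrow = [784] + [128] * 10 + [10]  # many layers, few units per layer
wide_shallow = [784, 315, 10]            # one hidden layer, many units

# Roughly equal parameter budgets (~250k each), very different depth.
print(mlp_params(deep_narrow))
print(mlp_params(wide_shallow))
```

The point of the comparison is that depth and width are two different ways of spending the same parameter budget; the study argues the brain takes the wide, shallow route.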
Scientists from Bar-Ilan University in Israel investigated how the brain learns with such shallow structures. In a recent article in Physica A, they report that these simpler architectures can match the performance of the much deeper ones used in deep learning systems.
Professor Ido Kanter, who leads the research at Bar-Ilan University, explained that the brain is not like a tall building with many floors; it is more like a wide building with only a few.
Ronit Gross, a key contributor to the study, noted that wider and higher (deeper) architectures represent two complementary ways of organizing computation. The brain's shallow mechanism, despite having few layers, performs classification remarkably well, contradicting the common assumption that adding more layers always improves performance.
However, the adoption of wide shallow architectures faces a technological hurdle. Current GPU technology excels at accelerating deep architectures but falls short at implementing the wide shallow structures that mimic the brain's dynamics. The finding suggests that GPU designs may need to change in order to efficiently support, and further explore, brain-inspired shallow learning in AI.