by Astha Oriel

B-AIM PICK SELECTS - Unraveling Deep Learning Algorithms with Limited Data


The study proposes an alternative repurposing technique for turning the weaknesses of deep neural networks into strengths.

Deep learning is an area of artificial intelligence that data scientists have researched heavily in the past few years. Experts are increasingly keen to apply the technology in sectors where people still perform mundane tasks. Because it draws on big data gathered from various sources, finds patterns in that collected data, and learns to perform a task without supervision, it is data hungry, which becomes a major challenge when data is scarce. Beyond this hunger for data, deep learning has two other significant drawbacks: opacity and shallowness. As data sifts through the many layers between the input and output nodes, tracing individual data points across those layers becomes tricky, and repurposing a pre-existing deep model for a new task becomes difficult. That is why it becomes imperative to trick deep learning models into learning new tasks, so that the challenge of scarce data can be countered.

Understanding Deep Learning

Much like the human brain, deep learning uses neural networks to process and understand large amounts of data without supervision. Over the years it has even been feared that the spread of deep learning and machine learning could leave much of the human workforce on the verge of unemployment. That is why it is important to understand how deep learning actually functions.

Deep learning is driven by algorithms running over a layered neural network, much like an imitation of the human brain. As in the brain, this artificial network has a set of input nodes, or units, that take in the raw data and propel it toward the output nodes, where the category of that raw data is decoded.

These deep neural networks sit at the core of deep learning algorithms. Several hidden layers between the input and output units amplify their ability to classify complicated data. However, as mentioned earlier, training them requires a plethora of data: the larger the dataset, the better the performance, while a smaller dataset produces a noticeable drop in performance that affects the quality of the output. Another observed drawback is that, with so many layers present, individual data points can easily be missed.
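To make that layered structure concrete, here is a minimal sketch of a small feed-forward network in Python. NumPy, the layer sizes, and the three output categories are illustrative assumptions; the article does not tie its description to any particular framework or task.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate one input vector through every layer in turn."""
    activation = x
    for W, b in zip(weights[:-1], biases[:-1]):
        activation = relu(activation @ W + b)       # hidden layers
    logits = activation @ weights[-1] + biases[-1]  # output units
    return logits                                   # scores per category

# Toy network: 8 input units, two hidden layers, 3 output categories.
rng = np.random.default_rng(0)
sizes = [8, 16, 16, 3]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

sample = rng.normal(size=8)                # one piece of "raw data"
print(forward(sample, weights, biases))    # scores for the 3 categories
```

Each call to forward simply pushes one piece of raw data from the input units through the hidden layers to the output units, which is the propagation described above.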

Earlier, experts relied on pre-training and fine-tuning for teaching models new tasks. Pre-training repeatedly trains the neural network on a task beforehand, so that the model can apply what it has learned when it later performs the task. Fine-tuning, on the other hand, requires a dataset of its own to adjust a pre-trained CNN to the new task, as sketched below.
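The sketch below shows what fine-tuning a pre-trained CNN can look like in practice. PyTorch, torchvision's ResNet-18, the five target classes, and the placeholder batch are illustrative assumptions, not details from the study: the pre-trained layers are frozen and only a freshly added output layer is trained on the small dataset for the new task.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pre-trained on ImageNet (ResNet-18 chosen only for illustration).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so the small dataset only updates the new head.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the new task (say, 5 classes).
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a (hypothetical) small batch from the new task.
images = torch.randn(4, 3, 224, 224)   # placeholder inputs
labels = torch.tensor([0, 2, 1, 4])    # placeholder labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Even in this reduced form, the new output layer still needs enough labeled examples from the target task to train reliably, which is exactly the limitation raised next.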

Because both of these approaches require a large amount of data, they do not solve the underlying challenge.

That is why experts have devised techniques that maintain the performance of deep learning, and the quality of its output, without relying on huge amounts of data.

Transfer Learning Without Knowing

A math formula, once learned, can be applied to solve many different problems. That intuition, if not entirely then to a large extent, captures transfer learning: knowledge acquired while solving one problem is reused to tackle a different but related problem.

At the ICML conference, IBM Research scientists introduced “Black-box Adversarial Reprogramming” (BAR), an alternative repurposing technique that turns a weakness of deep neural networks into a strength. The study proposes an approach in which a deep learning model can perform new tasks with scarce data and constrained resources. It shows that a black-box model can be reprogrammed based solely on its input-output responses, without access to its internal workings. The approach is said to be especially useful in medical settings, where staff grapple with gathering and maintaining large amounts of data for diseases that are still under research.

BAR does not require the internal details of a deep learning model in order to change its behavior. The researchers used zeroth-order optimization (ZOO), a machine learning technique with which a model's behavior can be changed, or a new task learned, without needing those details. They observed that BAR outperformed the traditional state-of-the-art methods, showed striking practicality and effectiveness on online image classification APIs, and was more affordable than the traditional models.
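As a rough illustration of how reprogramming can work when only input-output queries are available, the sketch below pairs a stand-in "black-box" model with a zeroth-order (finite-difference) gradient estimate. The query function, the loss, and every size and name here are illustrative assumptions; this is a simplified sketch of the idea, not the paper's exact BAR procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
_hidden_W = rng.normal(scale=0.3, size=(16, 3))   # unknown to the caller

def query(x):
    """Stand-in for a remote prediction API: probabilities out, no gradients."""
    logits = np.tanh(x) @ _hidden_W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def loss(program, data, label):
    """Cross-entropy of the model's output for one reprogrammed input."""
    probs = query(data + program)          # embed the task data in the program
    return -np.log(probs[label] + 1e-12)

def zoo_gradient(program, data, label, num_dirs=20, mu=0.01):
    """Estimate the gradient from input-output queries alone,
    using two-sided finite differences along random directions."""
    grad = np.zeros_like(program)
    for _ in range(num_dirs):
        u = rng.normal(size=program.shape)
        delta = loss(program + mu * u, data, label) - loss(program - mu * u, data, label)
        grad += (delta / (2 * mu)) * u
    return grad / num_dirs

# Learn a reprogramming "program" for one toy example using only queries.
program = np.zeros(16)
data, label = rng.normal(size=16), 1
for step in range(50):
    program -= 0.1 * zoo_gradient(program, data, label)
print("loss after reprogramming:", loss(program, data, label))
```

The key point is that zoo_gradient touches the model only through query, the same constraint a commercial prediction API imposes, yet that is still enough to steer the model's behavior toward new data.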

The study states, “Our results provide a new perspective and an effective approach for transfer learning without knowing or modifying the pre-trained model.”
