A cost-sensitive adversarial data augmentation framework (CSADA) for making overparameterized deep learning models cost-sensitive.


Most machine learning methods assume that every misclassification a model makes is equally severe. This is often not the case, especially in imbalanced classification problems: misclassifying an example from the minority (positive) class is usually worse than misclassifying an example from the majority (negative) class. Fraud detection, medical diagnosis, and spam filtering are familiar real-world examples. In each scenario, a false negative (missing the condition) is more harmful or more expensive than a false positive.
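One common way to encode this asymmetry is a cost matrix, where entry C[i][j] is the cost of predicting class j when the true class is i. The snippet below is a minimal illustration with made-up costs, not values taken from the paper.

```python
import numpy as np

# Hypothetical 2-class cost matrix for a medical screening task.
# Rows = true class, columns = predicted class.
# A false negative (sick predicted as healthy) is 10x as costly as a false positive.
cost_matrix = np.array([
    [0.0,  1.0],   # true class 0 (healthy): correct = 0, false positive = 1
    [10.0, 0.0],   # true class 1 (sick):    false negative = 10, correct = 0
])

y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 0, 1, 1])  # one false negative, one false positive
total_cost = cost_matrix[y_true, y_pred].sum()
print(total_cost)  # 11.0
```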

Although deep neural networks (DNNs) have achieved impressive performance, their overparameterization poses a significant challenge for cost-sensitive classification. The problem stems from the ability of DNNs to fit, and often perfectly interpolate, their training data. If the model classifies every training example correctly, re-weighting the costs of critical errors has no effect on training, simply because there are no training misclassifications left to penalize. This phenomenon motivated a research team from the University of Michigan to rethink cost-sensitive classification for DNNs and to highlight the need for cost-sensitive learning that goes beyond the training examples.

https://arxiv.org/pdf/2208.11739.pdf

The research team proposes using targeted adversarial examples as data augmentation, so that the trained model makes more conservative decisions about costly class pairs. Unlike most prior work on this task, the method proposed in the paper, the Cost-Sensitive Adversarial Data Augmentation (CSADA) framework, intervenes in the training phase and remains effective even when the model interpolates the training data. In addition, it can be applied to most DNN architectures and even to models beyond neural networks. The proposed adversarial augmentation scheme is not meant to replicate natural data. Instead, it generates targeted adversarial examples that push on the decision boundaries. These targeted adversarial examples are produced with a variant of the multi-step gradient ascent technique. By producing samples close to the decision boundary between the relevant labels, the overall goal is to expose critical errors during training. The authors present a new cost-aware bi-level optimization formulation whose objective consists of two terms. The first term is the standard empirical risk; the second is a penalty term that punishes misclassification of the augmented samples in proportion to their associated costs. Minimizing this objective during training makes the model more robust to critical errors.
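A schematic version of such a cost-aware bi-level objective is sketched below. The notation (cost weights c_ij, penalty weight λ, perturbation budget ε) is chosen here for illustration and may differ from the paper's exact formulation.

```latex
\min_{\theta}\;
\underbrace{\frac{1}{n}\sum_{k=1}^{n}\ell\big(f_{\theta}(x_k),\,y_k\big)}_{\text{standard empirical risk}}
\;+\;
\lambda \sum_{(i \to j)} c_{ij}\,\ell\big(f_{\theta}(\tilde{x}_{i \to j}),\, i\big),
\qquad
\tilde{x}_{i \to j} \in \arg\max_{\|\delta\| \le \epsilon} \, p_{\theta}\big(j \mid x_i + \delta\big)
```

In words: the lower-level problem generates, for each costly pair (i → j), an adversarial example near the boundary between classes i and j by maximizing the model's probability of the wrong target class j; the upper-level penalty then asks the model to still classify that example as i, weighted by the cost c_ij.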

A proof of concept is provided to illustrate the importance of the idea. Three classes are drawn from independent two-dimensional Gaussian distributions, and only one specific misclassification is assigned a cost. Although training without data augmentation produced decision boundaries that perfectly separated the three classes on the training samples, several critical errors occurred at inference time. Targeted adversarial data augmentation corrected this problem by learning decision boundaries that are more robust to these errors.
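A minimal sketch of this kind of toy setup is shown below; the class means, scales, and the costly pair are illustrative placeholders, not the exact values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three 2-D Gaussian classes (illustrative means/scales).
means = np.array([[0.0, 0.0], [3.0, 0.0], [1.5, 2.5]])
n_per_class = 200
X = np.vstack([rng.normal(loc=m, scale=0.8, size=(n_per_class, 2)) for m in means])
y = np.repeat(np.arange(3), n_per_class)

# Only one misclassification direction is costly:
# predicting class 1 when the true class is 0 (illustrative choice).
cost_matrix = np.zeros((3, 3))
cost_matrix[0, 1] = 5.0

def total_cost(y_true, y_pred):
    """Sum the cost of every prediction according to the cost matrix."""
    return cost_matrix[y_true, y_pred].sum()
```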

An experimental study was performed on three datasets, MNIST, CIFAR-10, and a pharmaceutical dataset (PMI), to evaluate CSADA. The proposed approach achieved comparable overall accuracy while reducing the total cost and the number of critical errors in all experiments on the three datasets.

In summary, the paper investigates the cost-sensitive classification problem in applications where different classification errors carry different costs. To address it, the authors introduce a cost-sensitive data augmentation technique that creates targeted adversarial examples, used during training to push the decision boundaries toward reducing critical errors. In addition, they propose a cost-aware bi-level optimization framework that penalizes the loss on the generated adversarial examples. Finally, a gradient-based multi-step ascent procedure is provided to solve the optimization problem.
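The sketch below illustrates, in PyTorch, how one training step of this kind could look: a targeted multi-step gradient-ascent attack produces adversarial examples for costly class pairs, and their loss is added to the standard loss with cost-dependent weights. Function names, the per-example target-selection rule, and all hyperparameters here are placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, x, target, eps=0.1, step=0.02, n_steps=10):
    """Multi-step gradient ascent that pushes x toward the target class
    (a generic targeted PGD-style attack, used here only as an illustration)."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Minimizing the cross-entropy to the target class moves x toward that class.
        x_adv = x_adv.detach() - step * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay within the perturbation budget
    return x_adv.detach()

def cost_sensitive_step(model, optimizer, x, y, cost_matrix, lam=1.0):
    """One training step: standard loss + cost-weighted loss on targeted adversarial examples."""
    model.train()
    clean_loss = F.cross_entropy(model(x), y)

    # Hypothetical target choice: attack each example toward its most costly wrong class.
    costs = cost_matrix[y]                      # (batch, n_classes) cost row per true label
    target = costs.argmax(dim=1)
    weight = costs.gather(1, target.unsqueeze(1)).squeeze(1)

    x_adv = targeted_attack(model, x, target)
    # Penalize misclassifying the adversarial examples, weighted by the pair's cost.
    adv_loss = (weight * F.cross_entropy(model(x_adv), y, reduction="none")).mean()

    loss = clean_loss + lam * adv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the spirit of CSADA, only class pairs with nonzero cost would contribute adversarial examples; the "most costly wrong class" rule above is just one simple way to pick attack targets for a sketch like this.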

This article is written as a research summary by Marktechpost staff based on the research paper 'Rethinking Cost-sensitive Classification in Deep Learning via Adversarial Data Augmentation'. All credit for this research goes to the researchers of this project. Check out the paper.



Mahmoud is a PhD researcher in machine learning. He also holds a Bachelor's degree in physical sciences and a Master's degree in communication systems and networks. His current research interests include computer vision, stock market prediction, and deep learning. He has authored several scholarly articles on person re-identification and on the robustness and stability of deep networks.


