
AI/Paper Reviews (19)
[Paper Review] Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge
Paper link: https://proceedings.neurips.cc/paper/2020/hash/a1d4c20b182ad7137ab3606f0e3fc8a4-Abstract.html
[Paper Review] Semantically Coherent Out-of-Distribution Detection
Paper link: https://arxiv.org/abs/2108.11941
Current out-of-distribution (OOD) detection benchmarks are commonly built by defining one dataset as in-distribution (ID) and all others as OOD. However, these benchmarks unfortunately introduce some unwanted and impractical goals, e.g., to perfectly disti..
This paper addresses existing Out-of-Distribution Detecti..
[Paper Review] An Effective Baseline for Robustness to Distributional Shift
Paper link: https://arxiv.org/abs/2105.07107
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems. While simple to state, this has been a particularly challeng..
Among OOD detection methods, this paper uses a pre..
[Paper Review] Background Data Resampling for Outlier-Aware Classification
Paper link: https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Background_Data_Resampling_for_Outlier-Aware_Classification_CVPR_2020_paper.html
Yi Li, Nuno Vasconcelos; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 13218-13227
The problem of learning an image classifier that allows detection of out-of-distrib..
[Paper Review] Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy
Paper link: https://arxiv.org/abs/1908.04951
Since deep learning models have been implemented in many commercial applications, it is important to detect out-of-distribution (OOD) inputs correctly to maintain the performance of the models, ensure the quality of the collected data, and prevent the appl..
Using only ID data, this paper..
[Paper Review] A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Paper link: https://arxiv.org/abs/1807.03888
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications. However, deep neural networks wi..