
AI (25)
[Survey] OOD (Out-of-Distribution) Detection: Sorting Out the Concepts. While recently working on OOD detection, I wrote this post to clearly organize the definitions and usages that are easily confused between this task and various similar tasks. The link below is the survey paper I consulted for this summary. https://arxiv.org/abs/2110.11334 Generalized Out-of-Distribution Detection: A Survey. Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems. For instance, in autonomous driving, we would like the driving system to issue an a..
[Paper Review] Enhancing The Reliability of Out-Of-Distribution Image Detection In Neural Networks. Paper link: https://arxiv.org/abs/1706.02690 Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperatu.. This paper..
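The preview cuts off right where the abstract mentions temperature scaling. As a reading aid, here is a minimal sketch of how an ODIN-style score could be computed in PyTorch: temperature-scaled softmax plus a small input perturbation. The function name, the classifier `model`, and the hyperparameter values are illustrative assumptions, not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Sketch of an ODIN-style OOD score: temperature-scaled softmax
    plus a small input perturbation that raises the confidence of
    in-distribution inputs more than that of OOD inputs."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    # Perturb the input against the gradient of the predicted-class NLL.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_perturbed = (x - epsilon * x.grad.sign()).detach()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / temperature, dim=1)
    # Higher maximum softmax probability -> more likely in-distribution.
    return probs.max(dim=1).values
```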
[Paper Review] A Baseline for Detecting Misclassified and Out-Of-Distribution Examples in Neural Networks. Paper link: https://arxiv.org/abs/1610.02136 A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax..
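Since the excerpt stops mid-sentence, a short sketch of the maximum softmax probability (MSP) baseline the abstract describes may help. `model` stands for any trained PyTorch classifier, and the thresholding shown in the comment is an assumed usage, not part of the paper's code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Maximum softmax probability (MSP) baseline: score each input by
    the largest softmax probability of an ordinary trained classifier.
    Correctly classified / in-distribution inputs tend to score higher."""
    probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values

# Detection then reduces to thresholding the score, for example:
# is_in_distribution = msp_score(model, batch) > threshold
```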
[Paper Review] Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty. Paper links: https://arxiv.org/abs/1906.12340 https://ojs.aaai.org/index.php/AAAI/article/view/5966 Self-Supervised Learning for Generalizable Out-of-Distribution Detection | Proceedings of the AAAI Conference on Artifici.. I recently became interested in OOD (out-of-distribution) detection and came across this paper while surveying related work. It is a NeurIPS 2019 paper that looks at self-supervised learning not from the perspective of accuracy but from the perspective of model robustness..
[Paper Review] A Simple Framework for Contrastive Learning of Visual Representations. Paper link: https://arxiv.org/abs/2002.05709 A Simple Framework for Contrastive Learning of Visual Representations. This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to under.. Like MoCo, this paper is a self..
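For context, the contrastive objective SimCLR trains with can be sketched roughly as an NT-Xent loss over two augmented views of the same batch. The function below is an illustrative approximation; the name and the temperature value are chosen for the example rather than taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Rough NT-Xent (normalized temperature-scaled cross entropy) loss
    for two augmented views of the same N images.
    z1, z2: (N, D) outputs of the projection head."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                         # (2N, 2N)
    # Mask out self-similarity so an example is never its own positive.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```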
[Paper Review] Momentum Contrast for Unsupervised Visual Representation Learning. Paper link: https://arxiv.org/abs/1911.05722 Momentum Contrast for Unsupervised Visual Representation Learning. We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large a.. This paper is from Facebook AI Research..
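The abstract names two mechanisms, a queue-based dictionary and a moving-averaged (momentum) key encoder. The snippet below is a rough sketch of both; the function names and the momentum value are chosen for illustration and are not taken from the authors' code.

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """MoCo-style momentum update: the key encoder is an exponential
    moving average of the query encoder's weights."""
    for q_param, k_param in zip(query_encoder.parameters(),
                                key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)

@torch.no_grad()
def dequeue_and_enqueue(queue, keys):
    """Keep the dictionary as a FIFO queue of encoded keys: drop the
    oldest batch and append the newest one.
    queue: (K, D) tensor of past keys, keys: (B, D) new keys, B <= K."""
    return torch.cat([queue[keys.size(0):], keys], dim=0)
```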