Title: A Simple Stochastic Regularization Technique for Avoiding Overfitting in Low Resource Image Classification
Authors: Ji, Ya Tu; Wang, Bai Lun; Ren, Qing Dao Er Ji; Shi, Bao; Wu, Nier; E.Lu, Min; Liu, Na; Zhuang, Xu Fei; Xu, Xuan Xuan; Wang, Li; Dai, Ling Jie; Yao, Miao Miao; Li, Xiao Mei
Editors: Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Date issued: 2023-10-09
Year: 2023
ISBN: 978-3-03868-234-9
DOI: https://doi.org/10.2312/pg.20231281
URI: https://diglib.eg.org:443/handle/10.2312/pg20231281
Pages: 111-112 (2 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies -> Image processing; Neural networks; Image representations

Abstract: Dropout-type techniques, which effectively regulate the co-adaptation and prediction ability of neural network units, are widely used in model parameter optimization to reduce overfitting. However, low-resource image classification suffers from severe overfitting, and data sparsity weakens or even eliminates the effectiveness of most regularization methods. Inspired by the value iteration strategy, this paper proposes a Drop-type method based on Metcalfe's law, named Metcalfe-Drop. The experimental results indicate that using the Metcalfe-Drop technique to determine parameter sharing is more effective than randomly deactivating neurons with a fixed probability. Our code is available at https://gitee.com/giteetu/metcalfe-drop.git.
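For context, the baseline the abstract contrasts with ("randomly controlling neurons according to a certain probability") is standard inverted dropout. A minimal NumPy sketch of that baseline follows; it is not the authors' Metcalfe-Drop implementation (see the linked Gitee repository for that), and the function name and parameters are illustrative:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout (the baseline, not Metcalfe-Drop): zero each unit
    with probability p during training, scaling the survivors by 1/(1-p)
    so the expected activation matches inference-time behavior."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1-p
    return x * mask / (1.0 - p)

# Usage: activations of a hidden layer
h = np.ones((4, 8))
h_train = dropout(h, p=0.5)                  # roughly half the units zeroed, rest scaled to 2.0
h_eval = dropout(h, p=0.5, training=False)   # identity at inference
```

Metcalfe-Drop, per the abstract, replaces this fixed-probability random masking with a parameter-sharing criterion derived from Metcalfe's law and a value iteration strategy.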