[1] 邓丁朋, 周亚建, 池俊辉, et al. A survey of short text classification techniques[J]. Software, 2020, 41(2): 141-144. (in Chinese)
[2] MARON M E. Automatic indexing: An experimental inquiry[J]. Journal of the ACM, 1961, 8(3): 404-417.
[3] 李静梅, 孙丽华, 张巧荣, et al. A naive Bayes classifier for text processing[J]. Journal of Harbin Engineering University, 2003(1): 71-74. (in Chinese)
[4] 余芳. WebCAT: A web text classification system based on the naive Bayes approach[J]. Computer Engineering and Applications, 2004(13): 195-197. (in Chinese)
[5] COVER T, HART P. Nearest neighbor pattern classification[J]. IEEE Transactions on Information Theory, 1967, 13(1): 21-27.
[6] 庞剑锋. Research and implementation of a self-feedback text classification system based on the vector space model[D]. Beijing: Graduate School of the Chinese Academy of Sciences (Institute of Computing Technology), 2001. (in Chinese)
[7] 湛燕. K-nearest neighbor, K-means and their application in text classification[D]. Baoding, Hebei: Hebei University, 2003. (in Chinese)
[8] JOACHIMS T. Text categorization with support vector machines: Learning with many relevant features[A]. Machine learning: ECML-98[C]. Berlin, Heidelberg: Springer, 1998. 137-142. https://doi.org/10.1007/BFb0026683.
[9] LIU L, ÖZSU M T. Term frequency by inverse document frequency[M/OL]. Encyclopedia of database systems. Boston, MA: Springer, 2009. https://doi.org/10.1007/978-0-387-39940-9_3784.
[10] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[J/OL]. arXiv preprint arXiv:1301.3781, 2013.
[11] PENNINGTON J, SOCHER R, MANNING C D. GloVe: Global vectors for word representation[A]. Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)[C]. Doha, Qatar: Association for Computational Linguistics, 2014. 1532-1543. https://nlp.stanford.edu/pubs/glove.pdf.
[12] KIM Y. Convolutional neural networks for sentence classification[A]. Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)[C]. Doha, Qatar: Association for Computational Linguistics, 2014. 1746-1751.
[13] IYYER M, MANJUNATHA V, BOYD-GRABER J, et al. Deep unordered composition rivals syntactic methods for text classification[A]. Proceedings of the 53rd annual meeting of the Association for Computational Linguistics and the 7th international joint conference on natural language processing[C]. Beijing, China: Association for Computational Linguistics, 2015. 1681-1691.
[14] TAI K S, SOCHER R, MANNING C D. Improved semantic representations from tree-structured long short-term memory networks[A]. Proceedings of the 53rd annual meeting of the Association for Computational Linguistics and the 7th international joint conference on natural language processing[C]. Beijing, China: Association for Computational Linguistics, 2015. 1556-1566.
[15] JOULIN A, GRAVE E, BOJANOWSKI P, et al. Bag of tricks for efficient text classification[A]. Proceedings of the 15th conference of the European chapter of the Association for Computational Linguistics[C]. Valencia, Spain: Association for Computational Linguistics, 2017. 427-431.
[16] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[A]. Advances in neural information processing systems 30 (NIPS 2017)[C]. Long Beach, CA, USA: Curran Associates, 2017. 5998-6008.
[17] 刘川. Research on few-shot text classification models and algorithms[D]. Chengdu: University of Electronic Science and Technology of China, 2017. (in Chinese)
[18] SONG G, YE Y M, DU X L, et al. Short text classification: A survey[J]. Journal of Multimedia, 2014, 9(5): 635-643.
[19] HAN X, ZHU H, YU P F, et al. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation[A]. Proceedings of the 2018 conference on empirical methods in natural language processing[C]. Brussels, Belgium: Association for Computational Linguistics, 2018. 4803-4809.
[20] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[A]. Advances in neural information processing systems 29 (NIPS 2016)[C]. Barcelona, Spain: Curran Associates, 2016. 3630-3638.
[21] WANG Y X, GIRSHICK R, HEBERT M, et al. Low-shot learning from imaginary data[A]. Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)[C]. Salt Lake City, UT, USA: IEEE, 2018. 7278-7286.
[22] LU J, GONG P H, YE J P, et al. Learning from very few samples: A survey[J/OL]. arXiv preprint arXiv:2009.02653, 2020. https://doi.org/10.48550/arXiv.2009.02653.
[23] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[A]. Proceedings of the 2018 conference of the North American chapter of the Association for Computational Linguistics: Human language technologies[C]. New Orleans, Louisiana: Association for Computational Linguistics, 2018. 2227-2237.
[24] HOWARD J, RUDER S. Universal language model fine-tuning for text classification[A]. Proceedings of the 56th annual meeting of the Association for Computational Linguistics[C]. Melbourne, Australia: Association for Computational Linguistics, 2018. 328-339.
[25] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[A]. Proceedings of the 2019 conference of the North American chapter of the Association for Computational Linguistics: Human language technologies, volume 1[C]. Minneapolis, Minnesota: Association for Computational Linguistics, 2019. 4171-4186.
[26] YANG Z, DAI Z, YANG Y, et al. XLNet: Generalized autoregressive pretraining for language understanding[A]. Advances in neural information processing systems 32 (NeurIPS 2019)[C]. Vancouver, Canada: Curran Associates, 2019. 5754-5764.