{"year":"2020","title":"18 Evaluation of Greek Word Embeddings","authors":["S Outsios, C Karatsalos, K Skianis, M Vazirgiannis"],"snippet":"… wiki. The last model has been trained on Common Crawl and Wikipedia data using FastText based on CBOW model with position-weights (Grave et al., 2018), mentioned as cc+wiki. Category gr_def gr_neg1 0 cc.el.300 …","url":["http://www.eleto.gr/download/Conferences/12th%20Conference/Papers-and-speakers/12th_18-02-20_OutsiosStamatis-KaratsalosChristos-SkianisK-VazirgiannisMichalis_Paper1_V04.pdf"]} {"year":"2020","title":"\" Thy algorithm shalt not bear false witness\": An Evaluation of Multiclass Debiasing Methods on Word Embeddings","authors":["T Schlender, G Spanakis - arXiv preprint arXiv:2010.16228, 2020"],"snippet":"… However, surprisingly, the WEAT score measured in ConceptNet is the worst of all three. The GloVe embeddings seem to carry the most bias concerning the RNSB and MAC metrics, which is intuitive when considering the common crawl data it was trained on …","url":["https://arxiv.org/pdf/2010.16228"]} {"year":"2020","title":"A Benchmark of Rule-Based and Neural Coreference Resolution in Dutch Novels and News","authors":["C Poot, A van Cranenburgh - arXiv preprint arXiv:2011.01615, 2020"],"snippet":"… 5 Evaluation Before presenting our main benchmark results, we discuss the issue of coreference evaluation metrics. 6We use Fasttext common crawl embeddings, https://fasttext.cc/docs/en/crawl-vectors.html Page 6 …","url":["https://arxiv.org/pdf/2011.01615"]} {"year":"2020","title":"A Better Use of Audio-Visual Cues: Dense Video Captioning with Bi-modal Transformer","authors":["V Iashin, E Rahtu - arXiv preprint arXiv:2005.08271, 2020"],"snippet":"Page 1. IASHIN, RAHTU: A BETTER USE OF AUDIO-VISUAL CUES 1 A Better Use of Audio-Visual Cues: Dense Video Captioning with Bi-modal Transformer Vladimir Iashin vladimir.iashin@tuni.fi Esa Rahtu esa.rahtu@tuni.fi …","url":["https://arxiv.org/pdf/2005.08271"]} {"year":"2020","title":"A brief tour to the NLP Sesame Street","authors":["E Montoya"],"snippet":"… In addition to the strategy to verify fake news this research provided of a large corpus of news articles from Common Crawl named RealNews, as Grover needed a large corpus of news with metadata which was not available or …","url":["https://chatbotslife.com/a-brief-tour-to-the-nlp-sesame-street-7bba02d75ae3"]} {"year":"2020","title":"A Call for More Rigor in Unsupervised Cross-lingual Learning","authors":["M Artetxe, S Ruder, D Yogatama, G Labaka, E Agirre - arXiv preprint arXiv …, 2020"],"snippet":"… However, as of November 2019, Wikipedia exists in only 307 languages3 of which nearly half have less than 10,000 articles. While one could hope to overcome this by taking the entire web as a corpus, as …","url":["https://arxiv.org/pdf/2004.14958"]} {"year":"2020","title":"A Character-Level BiGRU-Attention for Phishing Classification","authors":["L Yuan, Z Zeng, Y Lu, X Ou, T Feng - International Conference on Information and …, 2019"],"snippet":"… In addition, Common Crawl that stored a great deal of websites is an open website for crawler learners. There are 800,000 websites provided as legitimate websites data … Phish urls. Legal urls. Data sources. Phish Tank. Common Crawl …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41579-2_43"]} {"year":"2020","title":"A Comprehensive Survey of Grammar Error Correction","authors":["Y Wang, Y Wang, J Liu, Z Liu - arXiv preprint arXiv:2005.06600, 2020"],"snippet":"… Common Crawl. 
The Common Crawl corpus [10] is a repository of web crawl data which is open to everyone. It completes crawls monthly since 2011. • EVP … [28] Word-Level L1 Yes None Error Selection Wikipedia, 2014 Common Crawl …","url":["https://arxiv.org/pdf/2005.06600"]} {"year":"2020","title":"A Comprehensive Survey on Word Representation Models: From Classical to State-Of-The-Art Word Representation Language Models","authors":["U Naseem, I Razzak, SK Khan, M Prasad - arXiv preprint arXiv:2010.15036, 2020"],"snippet":"Page 1. A Comprehensive Survey on Word Representation Models: From Classical to State-Of-The-Art Word Representation Language Models USMAN NASEEM∗, School of Computer Science, The University of Sydney, Australia …","url":["https://arxiv.org/pdf/2010.15036"]} {"year":"2020","title":"A Cross-lingual Natural Language Processing Framework for Infodemic Management","authors":["R Pal, R Pandey, V Gautam, K Bhagat, T Sethi - arXiv preprint arXiv:2010.16357, 2020"],"snippet":"… The algorithm effectively minimizes this function to learn meaningful vector representations. The version of Glove used for experimentation is the publicly available common Crawl (840B tokens, 2.2M vocab, cased, 300 dimension vectors) …","url":["https://arxiv.org/pdf/2010.16357"]} {"year":"2020","title":"A Deep Learning Approach to Interest Analysis","authors":["T Meer - 2020"],"snippet":"Page 1. A Deep Learning Approach to Interest Analysis Thomas van der Meer A thesis submitted for the degree of Master of Business Informatics Department of Information and Computing Sciences Utrecht University The …","url":["https://dspace.library.uu.nl/bitstream/handle/1874/398939/scriptie_eindversie_tvdm.pdf?sequence=1"]} {"year":"2020","title":"A Deep Learning-Based Approach for Identifying the Medicinal Uses of Plant-Derived Natural Compounds. Front. Pharmacol. 11: 584875. doi: 10.3389/fphar …","authors":["S Yoo, HC Yang, S Lee, J Shin, S Min, E Lee, M Song… - Frontiers in Pharmacology …, 2020"],"snippet":"… alphaisothiocyanatotoluene.” In this study, we used the pre-trained fastText model with Wikipedia and Common Crawl (Grave et al., 2018). The model additionally learned from the DrugBank indication and PubMed literature …","url":["https://pdfs.semanticscholar.org/2105/4ac827a06c594f54e1ffe9c865fcbb994980.pdf"]} {"year":"2020","title":"A deep search method to survey data portals in the whole web: toward a machine learning classification model","authors":["AS Correa, A Melo Jr, FSC da Silva - Government Information Quarterly, 2020"],"snippet":"… Later, the same authors (AS Correa & da Silva, 2019) took advantage of the URL index of the Common Crawl project (an open repository of web crawl data) to survey potential data portals by searching the URL text strings …","url":["https://www.sciencedirect.com/science/article/pii/S0740624X20302896"]} {"year":"2020","title":"A Deep-Learning-Based Blocking Technique for Entity Linkage","authors":["F Azzalini, M Renzi, L Tanca - International Conference on Database Systems for …, 2020"],"snippet":"… attribute value \\(t[A_{k}]\\) is transformed into a real-valued vector \\(\\mathbf{v} (w)\\). 
The fastText model we use is crawl-300d-2M-subword [3] where each word is represented as a 300-dimensional vector and the …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59410-7_37"]} {"year":"2020","title":"A Focused Study to Compare Arabic Pre-training Models on Newswire IE Tasks","authors":["W Lan, Y Chen, W Xu, A Ritter - arXiv preprint arXiv:2004.14519, 2020"],"snippet":"… three times; (2) add the Arabic shuffled Os- car data (Ortiz Suárez et al., 2019), a large-scale multilingual dataset obtained by language identification and filtering of the Common Crawl corpus … XLM-Rbase CommonCrawl 55.6B …","url":["https://arxiv.org/pdf/2004.14519"]} {"year":"2020","title":"A Framework for Word Embedding Based Automatic Text Summarization and Evaluation","authors":["TT Hailu, J Yu, TG Fantaye - Information, 2020"],"snippet":"Text summarization is a process of producing a concise version of text (summary) from one or more information sources. If the generated summary preserves meaning of the original text, it will help the users to make fast and …","url":["https://www.mdpi.com/2078-2489/11/2/78/pdf"]} {"year":"2020","title":"A German Language Voice Recognition System using DeepSpeech","authors":["J Xu, K Matta, S Islam, A Nürnberger"],"snippet":"… 1, pp. 517–520, 1992. [14] Christopher Cieri, David Miller and Kevin Walker, “The Fisher Corpus: a Resource for the Next Generations of Speech-to-Text,” LREC, 2004. [15] CommomCrawl, “English language model,” http://commoncrawl.org/, 2020 …","url":["https://www.researchgate.net/profile/Kaveen_Matta_Kumaresh/publication/342657372_German_Voice_Recognition_System_using_DeepSpeech/links/5efee6e3a6fdcc4ca447681a/German-Voice-Recognition-System-using-DeepSpeech.pdf"]} {"year":"2020","title":"A Gradient Boosting-Seq2Seq System for Latin POS Tagging and Lemmatization","authors":["GGA Celano - LREC 2020 Workshop Language Resources and …"],"snippet":"… prefixes, infixes, or suffixes to be weighted. Some models for Latin, such as the one based on texts from Common Crawl and Wikipedia, have already been computed and are freely available. 7 However, since the data released …","url":["https://www.academia.edu/download/63734156/LT4HALAbook20200624-19244-er3k3d.pdf#page=126"]} {"year":"2020","title":"A graph based framework for structured prediction tasks in sanskrit","authors":["A Krishna, A Gupta, P Goyal, B Santra, P Satuluri - Computational Linguistics, 2020"],"snippet":"Page 1. A Graph Based Framework for Structured Prediction Tasks in Sanskrit Amrith Krishna* University of Cambridge Bishal Santra Indian Institute of Technology Kharagpur Ashim Gupta† University of Utah Pavankumar Satuluri Chinmaya Vishwavidyapeeth …","url":["https://www.mitpressjournals.org/doi/pdf/10.1162/coli_a_00390"]} {"year":"2020","title":"A Graph-Theoretic Approach for the Detection of Phishing Webpages","authors":["CL Tan, KL Chiew, KSC Yong, J Abdullah, Y Sebastian - Computers & Security, 2020"],"snippet":"","url":["https://www.sciencedirect.com/science/article/pii/S016740482030078X"]} {"year":"2020","title":"A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention","authors":["MM Trusca, D Wassenberg, F Frasincar, R Dekker - arXiv preprint arXiv:2004.08673, 2020","R Dekker - Web Engineering: 20th International Conference, ICWE …"],"snippet":"… The last two conditions for f are necessary to prevent overweighting of either rare or frequent co-occurrences. In this paper, we choose to use 300-dimension GloVe word embeddings trained on the Common Crawl (42 billion words)[14]. Word2vec …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=XpnqDwAAQBAJ&oi=fnd&pg=PA365&dq=commoncrawl&ots=-hereKCQai&sig=PCXnLnE9TjyMtqs05nolBDRi69g","https://arxiv.org/pdf/2004.08673"]} {"year":"2020","title":"A Large Scale Study on Health Information Retrieval for Laypersons","authors":["Z Liu - 2020"],"snippet":"… 3.1 Description of Document Collection The consumer-oriented health search task uses a dataset called clefehealth2018 corpus, which was created by acquiring web pages from various health do- mains(websites) using the CommonCrawl platform1 …","url":["https://cs.anu.edu.au/courses/CSPROJECTS/20S1/reports/u6022937_report.pdf"]} {"year":"2020","title":"A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal","authors":["D Gholipour Ghalandari, C Hokamp, J Glover, G Ifrim - arXiv, 2020","DG Ghalandari, C Hokamp, NT Pham, J Glover, G Ifrim - arXiv preprint arXiv …, 2020"],"snippet":"… We also automatically extend these source articles by looking for related articles in the Common Crawl archive … Table 1: Example event summary and linked source ar- ticles from the Wikipedia Current Events Portal, and …","url":["https://arxiv.org/pdf/2005.10070","https://ui.adsabs.harvard.edu/abs/2020arXiv200510070G/abstract"]} {"year":"2020","title":"A Large-Scale Semi-Supervised Dataset for Offensive Language Identification","authors":["S Rosenthal, P Atanasova, G Karadzhov, M Zampieri… - arXiv preprint arXiv …, 2020"],"snippet":"… The first layer of the LSTM model is an embedding layer, which we initialize with a concatenation of the GloVe 300-dimensional (Pennington et al., 2014) and FastText's Common Crawl 300dimensional embeddings (Grave et al., 2018). The Page 5 …","url":["https://arxiv.org/pdf/2004.14454"]} {"year":"2020","title":"A Longitudinal Analysis of Job Skills for Entry-Level Data Analysts","authors":["T Dong, J Triche - Journal of Information Systems Education, 2020"],"snippet":"… Therefore, we used the Common Crawl dataset to address this problem (http:// commoncrawl.org/).
Common Crawl is a non-profit organization that builds and maintains an open repository of web crawl data that is, in essence, a copy of the Internet …","url":["http://jise.org/Volume31/n4/JISEv31n4p312.pdf"]} {"year":"2020","title":"A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages","authors":["P Ortiz Suárez, L Romary, B Sagot - arXiv, 2020","PO Suárez, L Romary, B Sagot - arXiv preprint arXiv:2006.06202, 2020"],"snippet":"… al., 2019), a freely available2 multilingual dataset obtained by performing language classification, filtering and cleaning of the whole Common Crawl corpus.3 … 1 https://commoncrawl.org 2 https://traces1.inria.fr/oscar/ 3Snapshot …","url":["https://arxiv.org/pdf/2006.06202","https://ui.adsabs.harvard.edu/abs/2020arXiv200606202O/abstract"]} {"year":"2020","title":"A Multilingual Evaluation for Online Hate Speech Detection","authors":["M Corazza, S Menini, E Cabrio, S Tonelli, S Villata - ACM Transactions on Internet …, 2020"],"snippet":"… In particular, we use the Italian and German embeddings trained on Common Crawl and Wikipedia [33] with size 300 … English Fasttext Crawl embeddings: English embeddings trained by Fasttext9 on Common Crawl with an embedding size of 300 …","url":["https://dl.acm.org/doi/abs/10.1145/3377323"]} {"year":"2020","title":"A Neural-based model to Predict the Future Natural Gas Market Price through Open-domain Event Extraction","authors":["MT Chau, D Esteves, J Lehmann"],"snippet":"… Strong baseline We feed the price and sentence embedding of filtered news using spaCy small English (Context tensor trained on [39], 300-d embedding vector) and large English model (trained on both [39] and Common Crawl …","url":["http://ceur-ws.org/Vol-2611/paper2.pdf"]} {"year":"2020","title":"A NOVEL APPROACH FOR NAMED ENTITY RECOGNITION ON HINDI LANGUAGE USING RESIDUAL BILSTM NETWORK","authors":["R Shelke, D Thakore"],"snippet":"… It provides word embeddings for Hindi (and 157 other languages) and is based on the CBOW (Continuous Bag-of-Words) model. The CBOW model learns by predicting the current word based on its context, and it was trained …","url":["http://www.academia.edu/download/63216061/120200506-26612-102sbv8.pdf"]} {"year":"2020","title":"A novel approach to sentiment analysis in Persian using discourse and external semantic information","authors":["R Dehkharghani, H Emami - arXiv preprint arXiv:2007.09495, 2020"],"snippet":"Page 1. * Corresponding Author A novel approach to sentiment analysis in Persian using discourse and external semantic information *Rahim Dehkharghani, Faculty of Engineering, University of Bonab, Bonab, Iran rdehkharghani …","url":["https://arxiv.org/pdf/2007.09495"]} {"year":"2020","title":"A Novel BGCapsule Network for Text Classification","authors":["AK Gangwar, V Ravi - arXiv preprint arXiv:2007.04302, 2020"],"snippet":"… GloVe. We used GloVe [21] pretrained model. The GloVe model trained on 2.2 million vocabularies, 840 billion tokens of web data from Common Crawl. This Glove embedding projected each word to a 300-dimensional vector …","url":["https://arxiv.org/pdf/2007.04302"]} {"year":"2020","title":"A novel reasoning mechanism for multi-label text classification","authors":["R Wang, R Ridley, W Qu, X Dai - Information Processing & Management"],"snippet":"","url":["https://www.sciencedirect.com/science/article/pii/S0306457320309341"]} {"year":"2020","title":"A Performance Comparison among Different Amounts of Context on Deep Learning Based Intent Classification Models","authors":["M Jung, J Kim, JY Jang, H Jung, S Shin - 2020 International Conference on …, 2020"],"snippet":"… We also employ word embeddings trained on Common Crawl of fastText [13], a library for efficient text classification and representation learning. We apply a bidirectional LSTM (Bi-LSTM) network [6] to build an LSTM based intent classification model …","url":["https://ieeexplore.ieee.org/abstract/document/9289467/"]} {"year":"2020","title":"A Practical Approach for Taking Down Avalanche Botnets Under Real-World Constraints","authors":["D Preuveneers, A Duda, W Joosen, M Korczynski"],"snippet":"Page 1. A Practical Approach for Taking Down Avalanche Botnets Under Real-World Constraints Victor Le Pochat∗, Tim Van hamme∗, Sourena Maroofi§, Tom Van Goethem∗, Davy Preuveneers∗, Andrzej Duda§, Wouter Joosen …","url":["https://lirias.kuleuven.be/retrieve/567093/"]} {"year":"2020","title":"A Practical Guide to Hybrid Natural Language Processing: Combining Neural Models and Knowledge Graphs for NLP","authors":["JM Gomez-Perez, R Denaux, A Garcia-Silva - 2020"],"snippet":"Page 1. Jose Manuel Gomez-Perez Ronald Andres Garcia-Silva Denaux A to Practical Hybrid Natural Guide Language Processing Combining Neural Models and Knowledge Graphs for NLP Page 2. A Practical Guide …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=Ou_rDwAAQBAJ&oi=fnd&pg=PR7&dq=commoncrawl&ots=7ExXbVHzPG&sig=WOLotv9GbQ2RA9QHICwuff_hHVM"]} {"year":"2020","title":"A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks","authors":["AS Lin, S Rao, A Celikyilmaz, E Nouri, C Brockett… - arXiv preprint arXiv …, 2020"],"snippet":"… We extract text recipes from Common Crawl,2 one of the largest web sources of text … CommonCrawl text-text recipe pairs We randomly choose 200 text-text recipes pairs (spanning 5 dishes) from the test … Table 3: Results for …","url":["https://arxiv.org/pdf/2005.09606"]} {"year":"2020","title":"A Revised Generative Evaluation of Visual Dialogue","authors":["D Massiceti, V Kulharia, PK Dokania, N Siddharth… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. A Revised Generative Evaluation of Visual Dialogue Daniela Massiceti Viveka Kulharia Puneet K. Dokania N. Siddharth Philip HS Torr University of Oxford {daniela, viveka, puneet, nsid, phst} @robots.ox.ac.uk Abstract …","url":["https://arxiv.org/pdf/2004.09272"]} {"year":"2020","title":"A Study on Transformer-based Machine Comprehension with Curriculum Learning","authors":["MQ BUI - 2020"],"snippet":"Page 1. Japan Advanced Institute of Science and Technology JAIST Repository https://dspace.jaist.ac.jp/ Title A Study on Transformer-based Machine Comprehension with Curriculum Learning Author(s) BUI, MINH …","url":["https://dspace.jaist.ac.jp/dspace/bitstream/10119/16864/5/paper.pdf"]} {"year":"2020","title":"A Survey of Document Grounded Dialogue Systems (DGDS)","authors":["L Ma, WN Zhang, M Li, T Liu - arXiv preprint arXiv:2004.13818, 2020"],"snippet":"Page 1.
A Survey of Document Grounded Dialogue Systems (DGDS) LONGXUAN MA, Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China WEI-NAN ZHANG, Research Center …","url":["https://arxiv.org/pdf/2004.13818"]} {"year":"2020","title":"A Survey on Contextual Embeddings","authors":["Q Liu, MJ Kusner, P Blunsom - arXiv preprint arXiv:2003.07278, 2020"],"snippet":"… T5 introduces a new pre-training dataset, Colossal Clean Crawled Corpus by cleaning the web pages from Common Crawl … by training a Transformerbased masked language model on one hundred languages, using more …","url":["https://arxiv.org/pdf/2003.07278"]} {"year":"2020","title":"A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation","authors":["R Vázquez, A Raganato, M Creutz, J Tiedemann - Computational Linguistics, 2020"],"snippet":"Page 1. Computational Linguistics Just Accepted MS. https://doi.org/10.1162/ COLI_a_00377 © Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license A Systematic Study of Inner-Attention-Based …","url":["https://www.mitpressjournals.org/doi/pdf/10.1162/COLI_a_00377"]} {"year":"2020","title":"A Text Augmentation Approach using Similarity Measures based on Neural Sentence Embeddings for Emotion Classification on Microblogs","authors":["YK Shyang, JLS Yan - 2020 IEEE 2nd International Conference on Artificial …, 2020"],"snippet":"… vectors. We used two InferSent models available, which were InferSent trained using GloVe on Common Crawl 840B (InferSent-GloVe) and InferSent trained using fastText on Common Crawl 600B (InferSentfastText). Four …","url":["https://ieeexplore.ieee.org/abstract/document/9257826/"]} {"year":"2020","title":"A Transformer-based Audio Captioning Model with Keyword Estimation","authors":["Y Koizumi, R Masumura, K Nishida, M Yasuda, S Saito - arXiv preprint arXiv …, 2020"],"snippet":"… sions different. We use the bottleneck feature of VGGish [11] (Dx = 128) for audio embedding, and fastText [18] trained on the Common Crawl corpus (Dw = 300) for caption-word and keyword embedding, respectively. Since the …","url":["https://arxiv.org/pdf/2007.00222"]} {"year":"2020","title":"A web analytics approach to map the influence and reach of CCAFS","authors":["B Carneiro, G Resce, Y Ma, G Pacillo, P Läderach - 2020"],"snippet":"Page 1. A web analytics approach to map the influence and reach of CCAFS Working Paper No. 326 CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) Page 2. A web analytics …","url":["https://cgspace.cgiar.org/bitstream/handle/10568/110588/Working%20Paper-326.pdf?sequence=1&isAllowed=y"]} {"year":"2020","title":"A WRITTEN TEST FOR ARTIFICIAL GENERAL INTELLIGENCE","authors":["M Casteluccio - Strategic Finance, 2020"],"snippet":"… If you input a few words, GPT-3 will write a completed thought or sentence. The model was trained on data from Common Crawl, a nonprofit that builds and maintains an open repository of web crawl data accessible to the public for free (common crawl.org) …","url":["http://search.proquest.com/openview/e8f242b31761f187b35a3bb15ab0e724/1?pq-origsite=gscholar&cbl=48426"]} {"year":"2020","title":"Accenture at CheckThat! 2020: If you say so: Post-hoc fact-checking of claims using transformer-based models","authors":["E Williams, P Rodrigues, V Novak - arXiv preprint arXiv:2009.02431, 2020","V Novak"],"snippet":"… BPE) instead of WordPiece. 
[23] The base-roberta model was pre-trained on 160GB of text extracted from BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories (a subset of CommonCrawl Data) [14]. At the time …","url":["http://ceur-ws.org/Vol-2696/paper_226.pdf","https://arxiv.org/pdf/2009.02431"]} {"year":"2020","title":"Accurate and Fast URL Phishing Detector: A Convolutional Neural Network Approach","authors":["W Wei, Q Ke, J Nowak, M Korytkowski, R Scherer… - Computer Networks, 2020"],"snippet":"… The database downloaded during the article writing contained 10,604 records. To obtain legitimate websites, the second part of the training dataset was downloaded from the Common Crawl Foundation (http://commoncrawl.org/) …","url":["https://www.sciencedirect.com/science/article/pii/S1389128620301109"]} {"year":"2020","title":"Active Learning for Spreadsheet Cell Classification","authors":["J Gonsior, J Rehak, M Thiele, E Koci, M Günther…"],"snippet":"… Another recent corpus is Fuse [4], which comprises 249, 376 unique spreadsheets, extracted from Common Crawl2. Each spreadsheet is accompanied by a JSON file, which includes NLP token extraction and metrics …","url":["https://wwwdb.inf.tu-dresden.de/wp-content/uploads/SEAData2.pdf"]} {"year":"2020","title":"Adaptive GloVe and FastText Model for Hindi Word Embeddings","authors":["V Gaikwad, Y Haribhakta - Proceedings of the 7th ACM IKDD CoDS and 25th …, 2020"],"snippet":"… developed using original GloVe model, FastText model (FastTextHin), Adaptive FastText model (AFM) (trained on Hindi monolingual corpus [11] and FastText embeddings published on the website [25] (FastTextWeb) …","url":["https://dl.acm.org/doi/abs/10.1145/3371158.3371179"]} {"year":"2020","title":"ADD: Academic Disciplines Detector Based on Wikipedia","authors":["A Gjorgjevikj, K Mishev, D Trajanov - IEEE Access, 2020"],"snippet":"… For representation of the textual data into fixed-length vector form using pre-trained text encoding models, in addition to the model files, word vectors trained with GloVe [38] on Common Crawl (840B tokens)6 and with FastText …","url":["https://ieeexplore.ieee.org/iel7/6287639/8948470/08948031.pdf"]} {"year":"2020","title":"Advanced Semantics for Commonsense Knowledge Extraction","authors":["TP Nguyen, S Razniewski, G Weikum - arXiv preprint arXiv:2011.00905, 2020"],"snippet":"Page 1. Advanced Semantics for Commonsense Knowledge Extraction Tuan-Phong Nguyen Max Planck Institute for Informatics tuanphong@mpi-inf.mpg.de Simon Razniewski Max Planck Institute for Informatics srazniew@mpi-inf.mpg.de …","url":["https://arxiv.org/pdf/2011.00905"]} {"year":"2020","title":"Advanced Web Crawlers","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… A good starting point is using common crawl's fork of Apache Nutch 1.x–based crawler. The codebase is open sourced and available on the GitHub repo (https://github.com/commoncrawl), and it's not only well …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_8"]} {"year":"2020","title":"Advancements in Deep Learning Theory and Applications: Perspective in 2020 and beyond","authors":["MN Saadat, M Shuaib - Advances and Applications in Deep Learning, 2020"],"snippet":"The aim of this chapter is to introduce newcomers to deep learning, deep learning platforms, algorithms, applications, and open-source datasets. 
This chapter will give you a broad overview of the term deep learning, in context …","url":["https://www.intechopen.com/books/advances-and-applications-in-deep-learning/advancements-in-deep-learning-theory-and-applications-perspective-in-2020-and-beyond"]} {"year":"2020","title":"Advancing Neural Language Modeling in Automatic Speech Recognition","authors":["K Irie - 2020"],"snippet":"Page 1. Advancing Neural Language Modeling in Automatic Speech Recognition Von der Fakultät für Mathematik, Informatik und Naturwissenschaften der RWTH Aachen University zur Erlangung des akademischen Grades …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1142/IrieKazuki--AdvancingNeuralLanguageModelinginAutomaticSpeechRecognition--2020.pdf"]} {"year":"2020","title":"Adversarial Self-Supervised Data-Free Distillation for Text Classification","authors":["X Ma, Y Shen, G Fang, C Chen, C Jia, W Lu - arXiv preprint arXiv:2010.04883, 2020"],"snippet":"… DeepFace (Taigman et al., 2014) is trained on user images under confidential policies for protecting users. Further, some datasets, like Common Crawl dataset used in GPT3 (Brown et al., 2020), contain nearly a trillion words and are difficult to transmit and store …","url":["https://arxiv.org/pdf/2010.04883"]} {"year":"2020","title":"Adversarial Training for Large Neural Language Models","authors":["X Liu, H Cheng, P He, W Chen, Y Wang, H Poon, J Gao - arXiv preprint arXiv …, 2020"],"snippet":"… For continual pre-training of RoBERTa, we use Wikipedia (13GB), OPENWEBTEXT (public Reddit content (Gokaslan and Cohen); 38GB), STORIES (a subset of CommonCrawl (Trinh and Le, 2018); 31GB). 2https://dumps.wikimedia.org/enwiki/ Page 5 …","url":["https://arxiv.org/pdf/2004.08994"]} {"year":"2020","title":"Affective Conditioning on Hierarchical Attention Networks applied to Depression Detection from Transcribed Clinical Interviews","authors":["D Xezonaki, G Paraskevopoulos, A Potamianos…"],"snippet":"… Next, for both corpora we tokenize the speaker turns by splitting them into words. We use 300D GloVe [31] pretrained word embeddings, trained on the Common Crawl corpus, to extract word representations. Implementation …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3166/attachments/895/934/Thu-3-1-3.pdf"]} {"year":"2020","title":"Affective Conditioning on Hierarchical Networks applied to Depression Detection from Transcribed Clinical Interviews","authors":["D Xezonaki, G Paraskevopoulos, A Potamianos… - arXiv preprint arXiv …, 2020"],"snippet":"… Next, for both corpora we tokenize the speaker turns by splitting them into words. We use 300D GloVe [31] pretrained word embeddings, trained on the Common Crawl corpus, to extract word representations. Implementation …","url":["https://arxiv.org/pdf/2006.08336"]} {"year":"2020","title":"AgglutiFiT: Efficient Low-Resource Agglutinative Language Model Fine-Tuning","authors":["Z Li, X Li, J Sheng, W Slamu - IEEE Access, 2020"],"snippet":"… test set. For crosslingual pre-training language models, we use the XLM − R model loaded from the torch.Hub that It is trained on 2.5TB of CommonCrawl data, in 17 languages and uses a large vocabulary size of 95K. 
XLM …","url":["https://ieeexplore.ieee.org/iel7/6287639/8948470/09164940.pdf"]} {"year":"2020","title":"AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages","authors":["A Kunchukuttan, D Kakwani, S Golla, A Bhattacharyya… - arXiv preprint arXiv …, 2020"],"snippet":"… FastText also provides embeddings trained on Wikipedia + CommonCrawl corpus … 1. We augmented our crawls with some data from other sources: Leipzig corpus (Goldhahn et al., 2012) (Tamil and Bengali), WMT NewsCrawl …","url":["https://arxiv.org/pdf/2005.00085"]} {"year":"2020","title":"Algorithmic Bias: On the Implicit Biases of Social Technology","authors":["G Johnson - 2020"],"snippet":"… This study found that parsing software trained on a dataset called “the common crawl”—an assemblage of 840 billion words collected by crawling the internet—resulted in the program producing “human-like semantic …","url":["http://philsci-archive.pitt.edu/17169/1/Algorithmic%20Bias.pdf"]} {"year":"2020","title":"ALOD2Vec Matcher Results for OAEI 2020","authors":["H Paulheim"],"snippet":"… like DBpedia [8] – but instead on the whole Web: The dataset consists of hypernymy relations extracted from the Common Crawl3, a … 3 see http://commoncrawl.org/ 4 see http://webisa.webdatacommons.org/concept …","url":["http://disi.unitn.it/~pavel/om2020/papers/oaei20_paper2.pdf"]} {"year":"2020","title":"An AI-Based System for Formative and Summative Assessment in Data Science Courses","authors":["P Vittorini, S Menini, S Tonelli - International Journal of Artificial Intelligence in …, 2020"],"snippet":"Massive open online courses (MOOCs) provide hundreds of students with teaching materials, assessment tools, and collaborative instruments. The assessment a.","url":["https://link.springer.com/article/10.1007/s40593-020-00230-2"]} {"year":"2020","title":"An Analysis of Dataset Overlap on Winograd-Style Tasks","authors":["A Emami, A Trischler, K Suleman, JCK Cheung - arXiv preprint arXiv:2011.04767, 2020"],"snippet":"… This is a form of data contamination. One of the earliest works that trained a language model on Common Crawl data identified and removed a training documents that overlapped with one of their evaluation datasets (Trinh and Le, 2018) …","url":["https://arxiv.org/pdf/2011.04767"]} {"year":"2020","title":"An Approach to NMT Re-Ranking Using Sequence-Labeling for Grammatical Error Correction","authors":["B Wang, K Hirota, C Liu, Y Dai, Z Jia"],"snippet":"Page 1. NMT Re-Ranking Using Sequence-Labeling for GEC Paper: An Approach to NMT Re-Ranking Using Sequence-Labeling for Grammatical Error Correction Bo Wang, Kaoru Hirota, Chang Liu, Yaping Dai † , and Zhiyang Jia …","url":["https://www.jstage.jst.go.jp/article/jaciii/24/4/24_557/_pdf"]} {"year":"2020","title":"An Effective Phishing Detection Model Based on Character Level Convolutional Neural Network from URL","authors":["A Aljofey, Q Jiang, Q Qu, M Huang, JP Niyigena - Electronics, 2020"],"snippet":"Phishing is the easiest way to use cybercrime with the aim of enticing people to give accurate information such as account IDs, bank details, and passwords. 
This type of cyberattack is usually triggered by emails, instant messages, or …","url":["https://www.mdpi.com/2079-9292/9/9/1514/pdf"]} {"year":"2020","title":"An Empirical Investigation of Performances of Different Word Embedding Algorithms in Comment Clustering","authors":["E Dorani, N Duru, T Yıldız - 2019 Innovations in Intelligent Systems and …"],"snippet":"… In this study, we used the Common Crawl (1.9 million words)1 and (2.0 million word)2 pre-trained word vectors for Glove and FastText, respectively … Such that we used the Common Crawl (1.9 million words) and (2.0 million word) …","url":["https://ieeexplore.ieee.org/abstract/document/8946379/"]} {"year":"2020","title":"An Empirical Investigation Towards Efficient Multi-Domain Language Model Pre-training","authors":["K Arumae, Q Sun, P Bhatia - arXiv preprint arXiv:2010.00784, 2020"],"snippet":"… We processed publicly available bio-medical and non-bio-medical corpora for pre-training our models. For non-bio-medical data, we use BookCorpus and English Wikipedia data, CommonCrawl Stories (Trinh and Le, 2018) …","url":["https://arxiv.org/pdf/2010.00784"]} {"year":"2020","title":"An Empirical Study of Pre-trained Transformers for Arabic Information Extraction","authors":["W Lan, Y Chen, W Xu, A Ritter - Proceedings of the 2020 Conference on Empirical …, 2020"],"snippet":"… XLM-Rlarge CommonCrawl 295B/55.6B/2.9B SentencePiece 250k/80k/ 14k yes large 550M … and the Gigaword portion three times; (2) adding the Arabic section of the Oscar corpus (Ortiz Suárez et al., 2019), a large-scale …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.382.pdf"]} {"year":"2020","title":"An Empirical Study of Transformer-Based Neural Language Model Adaptation","authors":["K Li, Z Liu, T He, H Huang, F Peng, D Povey… - ICASSP 2020-2020 IEEE …, 2020"],"snippet":"… We denote the merged corpus as “Source1” in all tables. The second corpus is an English subset (10%) of the Common Crawl News (CCNews) [27], a news corpus contains articles published worldwide between September 2016 and February 2019 …","url":["https://ieeexplore.ieee.org/abstract/document/9053399/"]} {"year":"2020","title":"An Empirical Study on Explainable Prediction of Text Complexity: Preliminaries for Text Simplification","authors":["C Garbacea, M Guo, S Carton, Q Mei - arXiv preprint arXiv:2007.15823, 2020"],"snippet":"… In our experiments we use the 12 layer XLNeT base pre-trained model on 2https://docs.fast.ai/text.html 4 Page 5. the English Wikipedia and the Books corpus (similar to BERT), and additionally also on Giga5 ClueWeb …","url":["https://arxiv.org/pdf/2007.15823"]} {"year":"2020","title":"An Empirical Study on Release Engineering of Artificial Intelligence Products","authors":["M Xiu - arXiv preprint arXiv:2012.01403, 2020"],"snippet":"… SPIRAL CelebA HQ 10 (Config: Network Structure) hub.Module Text Embedding Albert Wikipedia, BooksCorpus, Stories, CommonCrawl, Giga5, Clue Web 4 (Config: Model Size) (Data Pre-processing: Cased or Not) hub.Module …","url":["https://arxiv.org/pdf/2012.01403"]} {"year":"2020","title":"An English–Swahili parallel corpus and its use for neural machine translation in the news domain","authors":["F Sánchez-Martınez, VM Sánchez-Cartagena…"],"snippet":"… 9https://commoncrawl.github.io/ cc-crawl-statistics/plots/languages 10https://github.com/ CLD2Owners/cld2 300 Page 3. of crawling and, from the remaining 3 232, only 908 ended up containing data in both languages. 
Document alignment …","url":["https://www.dlsi.ua.es/~fsanchez/pub/pdf/sanchez-martinez20b.pdf"]} {"year":"2020","title":"An Enhanced Sentiment Analysis Framework Based on Pre-Trained Word Embedding","authors":["EH Mohamed, MES Moussa, MH Haggag - International Journal of Computational …, 2020"],"snippet":"","url":["https://www.worldscientific.com/doi/abs/10.1142/S1469026820500315"]} {"year":"2020","title":"An Evaluation Benchmark for Testing the Word Sense Disambiguation Capabilities of Machine Translation Systems","authors":["A Raganato, Y Scherrer, J Tiedemann - … of The 12th Language Resources and …, 2020"],"snippet":"… Page 2. 3669 CS–EN DE–EN FI–EN FR–EN RU–EN Books GlobalVoices Europarl JW300 News-Comm. Tatoeba TED Talks EU Bookshop MultiUN Common Crawl Table 1: Corpora used to extract the MuCoW test suites. The …","url":["https://www.aclweb.org/anthology/2020.lrec-1.452.pdf"]} {"year":"2020","title":"An Evaluation Model for Auto-generated Cognitive Scripts","authors":["AM ELMougi, YMK Omar, R Hodhod"],"snippet":"… Fig. 6. The Cinema Linear Cognitive Script Converted into Text. B. Computing the GloVe Similarity Ratio Threshold The proposed model uses GloVe vectors of 300 dimensions that are created by training Common Crawl (840B tokens …","url":["https://pdfs.semanticscholar.org/14fd/085addcbcebe7e531198d52f041c4e86a3d9.pdf"]} {"year":"2020","title":"An Evaluation of Recent Neural Sequence Tagging Models in Turkish Named Entity Recognition","authors":["G Aras, D Makaroglu, S Demir, A Cakir - arXiv preprint arXiv:2005.07692, 2020"],"snippet":"Page 1. An Evaluation of Recent Neural Sequence Tagging Models in Turkish Named Entity Recognition Gizem Arasa,, Didem Makaroglua,b, Seniz Demirc and Altan Cakirb aDemiroren Teknoloji AS, Istanbul, Turkey bDepartment …","url":["https://arxiv.org/pdf/2005.07692"]} {"year":"2020","title":"An Exploration in L2 Word Embedding Alignment","authors":["P Liao"],"snippet":"… It is also run for Wikipedia Chinese fastText embeddings to Common Crawl English fastText embeddings to test the method on a large corpus of a different domain. All fastText embeddings are pretrained and downloaded from their website16 …","url":["https://pdfs.semanticscholar.org/513e/16939e369cf09be34ac4c983d20be53a94b1.pdf"]} {"year":"2020","title":"An Exploratory Approach to the Corpus Filtering Shared Task WMT20","authors":["A Kejriwal, P Koehn"],"snippet":"… We make the number of lines taken by the wikipedia data be between 40-50% of the number of lines taken by the CommonCrawl data … entropy as defined in Equation 5. We find that the scores we got were ex- tremely …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.108.pdf"]} {"year":"2020","title":"An Interactive Network for End-to-End Review Helpfulness Modeling","authors":["J Du, L Zheng, J He, J Rong, H Wang, Y Zhang - Data Science and Engineering, 2020"],"snippet":"Review helpfulness prediction aims to prioritize online reviews by quality.
Existing methods largely combine review texts and star ratings for helpfulness.","url":["https://link.springer.com/article/10.1007/s41019-020-00133-1"]} {"year":"2020","title":"An Iterative Knowledge Transfer NMT System for WMT20 News Translation Task","authors":["J Kim, S Park, S Kim, Y Choi - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… Kyoto Free Translation Task 0.44M TED Talks 0.24M Monolingual Data (En) Europarl v10 2.29M News Commentary v15 0.6M News Crawl 23.35M News Discussions 63.51M Monolingual Data (Ja) News Crawl …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.11.pdf"]} {"year":"2020","title":"An Open-Domain Web Search Engine for Answering Comparative Questions","authors":["T Abye, T Sager, AJ Triebel"],"snippet":"… pp. 91–96 (2017) 3. Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl. In: Azzopardi, L., Hanbury, A., Pasi, G., Piwowarski, B. (eds.) Advances in Information Retrieval …","url":["http://ceur-ws.org/Vol-2696/paper_130.pdf"]} {"year":"2020","title":"Analogical frames by constraint satisfaction","authors":["L De Vine - 2020"],"snippet":"Page 1. Analogical Frames by Constraint Satisfaction A THESIS SUBMITTED IN FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY Lance De Vine BMaths …","url":["https://eprints.qut.edu.au/198036/1/Lance_De%20Vine_Thesis.pdf"]} {"year":"2020","title":"Analysis of the communication of smart city features through social media","authors":["JP Fontanilles"],"snippet":"Page 1. Jordi Pascual Fontanilles Analysis of the communication of smart city features through social media MASTER'S THESIS Supervised by Dr. Antonio Moreno Ribas Computer Security Engineering and Artificial …","url":["https://deim.urv.cat/~itaka/itaka2/PDF/acabats/MEMORIA_TFM_JordiPascual.pdf"]} {"year":"2020","title":"Analyzing Sustainability Reports Using Natural Language Processing","authors":["A Luccioni, E Bailor, N Duchene - arXiv preprint arXiv:2011.08073, 2020"],"snippet":"… In fact, research in financial NLP has found that using general-purpose NLP models trained on corpora such as Wikipedia and the Common Crawl fail to capture domainspecific terms and concepts which are critical for a coherent …","url":["https://arxiv.org/pdf/2011.08073"]} {"year":"2020","title":"Analyzing the Effect of Community Norms on Gender Bias","authors":["NR Raut - 2020"],"snippet":"… We use word2vec to train embeddings on the comment data for each of the subreddit. For fasttext, we use 2 million word vectors trained with subword information on Common Crawl …","url":["http://search.proquest.com/openview/18f1238b848a27a836459d849f5795c8/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2020","title":"ANDI@ CONCRETEXT: Predicting concreteness in context for English and Italian using distributional models and behavioural norms","authors":["A Rotaru - Proceedings of the 7th evaluation campaign of Natural …, 2020"],"snippet":"… Skip-gram (Google News – 100B) 21 21 21 21 GloVe (Common Crawl – 840B) 21 21 21 21 ConceptNet NumberBatch (ConceptNet + Skip-gram + GloVe) … abs(V(w) - V(c)) Behavioural norms (frequency, etc.) 20 20 20 20 …","url":["http://ceur-ws.org/Vol-2765/paper100.pdf"]} {"year":"2020","title":"Announcing CzEng 2.0 Parallel Corpus with over 2 Gigawords","authors":["T Kocmi, M Popel, O Bojar - arXiv preprint arXiv:2007.03006, 2020"],"snippet":"… format. 
New parallel data come from Europarl (v10), News commentary, Wikititles, Commoncrawl, Paracrawl2, WikiMatrix (Schwenketal., 2019), and Tilde MODEL Corpus (EESC, EMA, Rapid; Rozis and Skadinš, 2017). We …","url":["https://arxiv.org/pdf/2007.03006"]} {"year":"2020","title":"Anomaly detection with Generative Adversarial Networks and text patches","authors":["A Drozdyuk, N Eke"],"snippet":"… We used the model containing two million word vectors trained on Common Crawl … The depressive data was similarly cleaned. After cleaning the data it was converted to word vectors using the FastText model trained on Common Crawl and Wikipedia [12] …","url":["https://norberte.github.io/assets/pdf/GAN%20Project%20Report.pdf"]} {"year":"2020","title":"Another factor to consider is the user's perspective. Everyone uses web search. Using a search engine to look things up on the web is the most popular activity on the …","authors":["D Lewandowski"],"snippet":"… Indexes that offer complete access do, however, exist. At the top of this list is Common Crawl,NOTE 12 a nonprofit project that aims to provide a web index for anyone who's interested … Common Crawl represents an important development …","url":["https://pandoc.networkcultures.org/epub/SotQreader/ch010.xhtml"]} {"year":"2020","title":"Answering Comparative Questions with Arguments","authors":["A Bondarenko, A Panchenko, M Beloucif, C Biemann… - Datenbank-Spektrum, 2020"],"snippet":"… analyzing Wikidata and DBpedia as additional sources of (structured) information besides the retrieval of sentences/documents from the Common Crawl … Ruppert E, Faralli S, Ponzetto SP, Biemann C (2018) Building a …","url":["https://link.springer.com/article/10.1007/s13222-020-00346-8"]} {"year":"2020","title":"Answering Event-Related Questions over Long-term News Article Archives","authors":["J Wang, A Jatowt, M Färber, M Yoshikawa"],"snippet":"… We can see that the actual time scope (January, 1988) of the first question is reflected relatively well by its distribution of relevant documents as generally 4 We use Glove [23] embeddings trained on the Common Crawl dataset with 300 dimensions. Page 6 …","url":["http://www.aifb.kit.edu/images/1/19/QA_ECIR2020.pdf"]} {"year":"2020","title":"Application of Machine Learning Techniques for Text Generation","authors":["S Martí Román - 2020"],"snippet":"Page 1. 
Escola Tècnica Superior d'Enginyeria Informàtica Universitat Politècnica de València Application of Machine Learning Techniques for Text Generation DEGREE FINAL WORK Degree in Computer Engineering Author: Salvador Martí Román …","url":["https://riunet.upv.es/bitstream/handle/10251/149583/Mart%C3%AD%20-%20Uso%20de%20t%C3%A9cnicas%20de%20aprendizaje%20autom%C3%A1tico%20para%20la%20generaci%C3%B3n%20de%20texto.pdf?sequence=1"]} {"year":"2020","title":"Application of Machine Learning to Classify News Headlines","authors":["P Guttula, RM Aburas, S Srijan"],"snippet":""} {"year":"2020","title":"AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization","authors":["S Kulkarni, S Chammas, W Zhu, F Sha, E Ie - arXiv preprint arXiv:2010.12694, 2020"],"snippet":"… is a nontrivial task in itself and there are several con1https://commoncrawl org … paragraphs, we use a pre-processed and cleaned version of the Common Crawl corpus (Raffel et al … We illustrate our approach us- ing Google's Natural …","url":["https://arxiv.org/pdf/2010.12694"]} {"year":"2020","title":"AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings","authors":["A Lauscher, R Takieddin, SP Ponzetto, G Glavaš - arXiv preprint arXiv:2011.01575, 2020"],"snippet":"… For FT, we investigate two models, one trained on the portions of Wikipedia and CommonCrawl corpora written in Modern Standard Arabic (MS) and the other on portions written in Egyptian Arabic.9 We evaluate the four variants …","url":["https://arxiv.org/pdf/2011.01575"]} {"year":"2020","title":"ArchiMeDe@ DANKMEMES: A New Model Architecture for Meme Detection","authors":["J Setpal, G Sarti"],"snippet":"… We fine-tune representations over the available meme textual data and use them as components of our end-to-end system. 1umberto-commoncrawl-cased-v1 in the HuggingFace's model hub (Wolf et al., 2019) Page 4. 2.3 Visual input …","url":["http://ceur-ws.org/Vol-2765/paper138.pdf"]} {"year":"2020","title":"Are All Good Word Vector Spaces Isomorphic?","authors":["I Vulić, S Ruder, A Søgaard - arXiv preprint arXiv:2004.04070, 2020"],"snippet":"… 3Recent initiatives replace training on Wikipedia with training on larger CommonCrawl data (Grave et al., 2018; Conneau et al., 2020), but the large differences in corpora sizes between high-resource and low-resource languages are not removed …","url":["https://arxiv.org/pdf/2004.04070"]} {"year":"2020","title":"Are All Languages Created Equal in Multilingual BERT?","authors":["S Wu, M Dredze - arXiv preprint arXiv:2005.09093, 2020"],"snippet":"… (2019) train a multilingual masked language model (Devlin et al., 2019) on 2.5TB of CommonCrawl filtered data covering 100 languages and show it outperforms a Wikipedia-based model on low resource languages (Urdu …","url":["https://arxiv.org/pdf/2005.09093"]} {"year":"2020","title":"Argumentative relation classification for argumentative dialogue systems","authors":["C Schindler - 2020"],"snippet":"… With this setup, args outperformed ArgumenText in the category related. 3.2.2. 
ArgumenText The search engine ArgumenText [35] uses the English part of CommonCrawl2 to retrieve relevant documents … 2http://commoncrawl …","url":["https://oparu.uni-ulm.de/xmlui/bitstream/handle/123456789/33973/BScThesis_SchindlerC.pdf?sequence=3&isAllowed=y"]} {"year":"2020","title":"Argumentative Topology: Finding Loop (holes) in Logic","authors":["S Tymochko, Z New, L Bynum, E Purvine, T Doster… - arXiv preprint arXiv …, 2020"],"snippet":"… Word embeddings are performed using two pretrained models: Word2Vec trained on the Google News dataset [12] and GloVe trained on Common Crawl [13]. We compute the persistence diagrams of the topological …","url":["https://arxiv.org/pdf/2011.08952"]} {"year":"2020","title":"ArgumenText: Argument Classification and Clustering in a Generalized Search Scenario","authors":["J Daxenberger, B Schiller, C Stahlhut, E Kaiser… - Datenbank-Spektrum, 2020"],"snippet":"… Full size image. For the public version of the ArgumenText search engine, we indexed more than 400 million English and German web pages from the CommonCrawl project and segmented all documents into sentences [21] …","url":["https://link.springer.com/article/10.1007/s13222-020-00347-7"]} {"year":"2020","title":"Artificial Intelligence in mental health and the biases of language based models","authors":["I Straw, C Callison-Burch - PloS one, 2020"],"snippet":"… 52]. As described by Pennington et al. GloVe embeddings were trained on text copora from Wikipedia data, Gigaword and web data from Common Crawl which built a vocabulary of 400,000 frequent words [57]. Word2Vec was …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0240376"]} {"year":"2020","title":"ASAPPpy: a Python Framework for Portuguese STS?","authors":["J Santos, A Alves, HG Oliveira"],"snippet":"… 20 Page 8. From those, CBOW Word2vec and GloVe, both with 300-dimensioned vectors, were selected; (ii) fastText.cc embeddings [9], which provide word vectors for 157 languages, trained on Common Crawl and Wikipedia using fastText …","url":["http://ceur-ws.org/Vol-2583/2_ASAPPpy.pdf"]} {"year":"2020","title":"Ascent of Pre-trained State-of-the-Art Language Models","authors":["K Nagda, A Mukherjee, M Shah, P Mulchandani… - Advanced Computing …, 2020"],"snippet":"… 6.2 Dataset. Similar to BERT, XLNet was pre-trained using English Wikipedia dataset (13 GB of plain text), as well as CommonCrawl, Giga5 and ClueWeb 2012-B datasets [11]. The large variant of the model has a sequence and …","url":["https://link.springer.com/chapter/10.1007/978-981-15-3242-9_26"]} {"year":"2020","title":"Aspect-Controlled Neural Argument Generation","authors":["B Schiller, J Daxenberger, I Gurevych - Training"],"snippet":"… Consequently, the following preprocessing steps ultimately target retrieval and classification of sentences. To evaluate different data sources, we use a dump from Common-Crawl2 (CC) and Reddit comments3 …","url":["https://public.ukp.informatik.tu-darmstadt.de/UKP_Webpage/publications/2020/2020_PP_BES_aspect_controlled_argument_generation_v0.2.pdf"]} {"year":"2020","title":"Assessing Demographic Bias in Named Entity Recognition","authors":["S Mishra, S He, L Belli"],"snippet":"… level confidence via the Constrained ForwardBackward algorithm [5]. 
Different versions of this model were trained on CoNLL 03 NER benchmark dataset [27] by utilizing varying embedding methods: (a) GloVe uses GloVe 840B …","url":["https://kg-bias.github.io/NER_Bias_KG_Bias.pdf"]} {"year":"2020","title":"Assessing Suitable Word Embedding Model for Malay Language through Intrinsic Evaluation","authors":["YT Phua, KH Yew, OM Foong, MYW Teow - 2020 International Conference on …, 2020"],"snippet":"… This mode was trained on Common Crawl and Wikipedia [26], and pre-trained word vectors for 294 languages were trained on Wikipedia [22]. The results of the evaluation were 0.477 for Pearson correlation coefficient and 0.51 …","url":["https://ieeexplore.ieee.org/abstract/document/9247707/"]} {"year":"2020","title":"ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations","authors":["F Alva-Manchego, L Martin, A Bordes, C Scarton… - arXiv preprint arXiv …, 2020"],"snippet":"… stopwords). We use the 50k most frequent words of the FastText word embeddings vo- cabulary (Bojanowski et al., 2016). This vo- cabulary was originally sorted with frequencies of words in the Common Crawl. This score …","url":["https://arxiv.org/pdf/2005.00481"]} {"year":"2020","title":"Attention-based hierarchical recurrent neural networks for MOOC forum posts analysis","authors":["N Capuano, S Caballé, J Conesa, A Greco - Journal of Ambient Intelligence and …, 2020"],"snippet":"… WikiNER corpora (Nivre et al. 2016) for Italian as well as the OntoNotes (Pradhan and Ramshaw 2017) and the Common Crawl Footnote 2 corpora for English. Text categorization model. Independently from the specific document …","url":["https://link.springer.com/article/10.1007/s12652-020-02747-9"]} {"year":"2020","title":"Attention-based Model for Evaluating the Complexity of Sentences in English Language","authors":["D Schicchi, G Pilato, GL Bosco - 2020 IEEE 20th Mediterranean Electrotechnical …, 2020"],"snippet":"… The output of the attention layer (context-vector) is given as input to a dense layer, which gives the probability that the sentence belongs to either hard-to-understand or easy-to- 1www.wikipedia.org 2www.commoncrawl …","url":["https://ieeexplore.ieee.org/abstract/document/9140531/"]} {"year":"2020","title":"Attribute Sentiment Scoring with Online Text Reviews: Accounting for Language Structure and Missing Attributes","authors":["I Chakraborty, M Kim, K Sudhir - 2020"],"snippet":"… 10These embeddings have been trained on different corpus like Wikipedia dumps, Gigaword news dataset and web data from Common Crawl and have more than 5 billion unique tokens. Page 18. 17 lenging text and image classification problems (Wang et al …","url":["https://www.ishitachakra.com/JobMarketPaper_IshitaYale.pdf"]} {"year":"2020","title":"Augmenting cross-domain knowledge bases using web tables","authors":["Y Oulabi - 2020"],"snippet":"Page 1. Augmenting Cross-Domain Knowledge Bases Using Web Tables Inauguraldissertation zur Erlangung des akademischen Grades eines Doktors der Naturwissenschaften der Universität Mannheim vorgelegt von Yaser …","url":["https://madoc.bib.uni-mannheim.de/55962/1/Oulabi2020_PhD_thesis.pdf"]} {"year":"2020","title":"Author2Vec: A Framework for Generating User Embedding","authors":["X Wu, W Lin, Z Wang, E Rastorgueva - arXiv preprint arXiv:2003.11627, 2020"],"snippet":"… the user. 
We used the Facebook FastText (https:// fasttext.cc/) pre-trained Word2Vec model: crawl-300d-2M, which is a model with 2 million word vectors trained on Common Crawl (600B to- kens). The baseline implementations …","url":["https://arxiv.org/pdf/2003.11627"]} {"year":"2020","title":"Autoencoding Improves Pre-trained Word Embeddings","authors":["M Kaneko, D Bollegala - arXiv preprint arXiv:2010.13094, 2020"],"snippet":"… 3M words learnt from the Google News corpus), GloVe2 (300-dimensional word embeddings for ca. 2.1M words learnt from the Common Crawl), and fastText3 (300dimensional embeddings for ca. 2M words learnt from the Common Crawl) …","url":["https://arxiv.org/pdf/2010.13094"]} {"year":"2020","title":"Automated coding of implicit motives: A machine‑learning approach","authors":["JS Pang, H Ring"],"snippet":"… experiments we decided to use Facebook's FastText subword embeddings of 300 dimensions trained on Common Crawl (600 billion tokens).5 This is the set of pre-trained vectors that we used to derive word features from …","url":["https://link.springer.com/content/pdf/10.1007/s11031-020-09832-8.pdf"]} {"year":"2020","title":"Automated Short Answer Grading: A Simple Solution for a Difficult Task","authors":["S Menini, S Tonelli, G De Gasperis, P Vittorini"],"snippet":"… combining vectors representing both words and subwords. To generate these embeddings we start from the pre-computed Italian language model3 trained on Common Crawl and Wikipedia. The latter, in particular, is suitable …","url":["https://pdfs.semanticscholar.org/9ff2/6a502dd1b3e0c136af4bc2ca9af9b901fce4.pdf"]} {"year":"2020","title":"Automatic Detection of Machine Generated Text: A Critical Survey","authors":["G Jawahar, M Abdul-Mageed, LVS Lakshmanan - arXiv preprint arXiv:2011.01314, 2020"],"snippet":"… GPT-3 (Brown et al., 2020) fragments from CommonCrawl (570GB / 175B) three previous news articles and title of a proposed article body of the proposed article top-p fake news Table 1: Summary of the characteristics of TGMs that can act as threat models …","url":["https://arxiv.org/pdf/2011.01314"]} {"year":"2020","title":"Automatic language identification of short texts","authors":["A Avenberg - 2020"],"snippet":"Page 1. UPTEC F 20043 Examensarbete 30 hp September 2020 Automatic language identification of short texts Anna Avenberg Page 2. Teknisknaturvetenskaplig fakultet UTH-enheten Besöksadress: Ångströmlaboratoriet …","url":["https://www.diva-portal.org/smash/get/diva2:1473718/FULLTEXT01.pdf"]} {"year":"2020","title":"Automatic Metaphor Interpretation Using Word Embeddings","authors":["K Bar, N Dershowitz, L Dankin - arXiv preprint arXiv:2010.02665, 2020"],"snippet":"… relatively large corpus. Specifically, we use DepCC,1 a dependency-parsed “web-scale corpus” based on CommonCrawl.2 There are 365 million documents in the corpus, comprising about 252B tokens. Among other preprocessing …","url":["https://arxiv.org/pdf/2010.02665"]} {"year":"2020","title":"Automatic Poetry Generation from Prosaic Text","authors":["T Van de Cruys - Proceedings of the 58th Annual Meeting of the …, 2020"],"snippet":"Page 1. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2471–2480 July 5 - 10, 2020. 
© 2020 Association for Computational Linguistics … Automatic Poetry Generation from Prosaic Text …","url":["https://www.aclweb.org/anthology/2020.acl-main.223.pdf"]} {"year":"2020","title":"Automatic Short Answer Grading using Text-to-Text Transfer Transformer Model","authors":["S Haller - 2020"],"snippet":"Page 1. Faculty of Electrical Engineering, Mathematics & Computer Science Automatic Short Answer Grading using Text-to-Text Transfer Transformer Model Stefan Haller M.Sc. Thesis in Business Information …","url":["http://essay.utwente.nl/83879/7/Haller_MA_EEMCS.pdf"]} {"year":"2020","title":"Automatic Speech Recognition for ILSE-Interviews: Longitudinal Conversational Speech Recordings covering Aging and Cognitive Decline","authors":["A Abulimiti, J Weiner, T Schultz"],"snippet":"… In addition, we selected text data similar to ILSE from large amounts of Common Crawl Data1 based on Term Frequency–Inverse Document Frequency (tf-idf) for the training of Recurrent Neural Network (RNN) based …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3209/attachments/742/780/Thu-SS-1-6-9.pdf"]} {"year":"2020","title":"Automatic text segmentation based on relevant context","authors":["S Kim, WW Chang, N Lipka, F Dernoncourt, CY Park - US Patent App. 16/368,334, 2020"],"snippet":"US20200311207A1 - Automatic text segmentation based on relevant context …","url":["https://patents.google.com/patent/US20200311207A1/en"]} {"year":"2020","title":"Automating Just-In-Time Comment Updating","authors":["Z Liu, X Xia, M Yan, S Li","Z Liu, X Xia, M Yan, S Li - Automated Software Engineering, to appear, 2020"],"snippet":"Page 1. Automating Just-In-Time Comment Updating Zhongxin Liu∗† Zhejiang University China liu_zx@zju.edu.cn Xin Xia‡ Monash University Australia xin.xia@monash.edu Meng Yan Chongqing University China mengy@cqu.edu.cn …","url":["https://pdfs.semanticscholar.org/0b03/b1526e88c8780214697bbe9163b95a59cdbd.pdf","https://xin-xia.github.io/publication/ase202.pdf"]} {"year":"2020","title":"Automating Text Naturalness Evaluation of NLG Systems","authors":["E Çano, O Bojar - arXiv preprint arXiv:2006.13268, 2020"],"snippet":"… The model is trained on RealNews, a large news collection they derived from Common Crawl1 dumps … They are typically assessed by 1 https://commoncrawl.org/ 2 http://www.statmt.org …","url":["https://arxiv.org/pdf/2006.13268"]} {"year":"2020","title":"BanFakeNews: A Dataset for Detecting Fake News in Bangla","authors":["MZ Hossain, MA Rahman, MS Islam, S Kar - arXiv preprint arXiv:2004.08789, 2020"],"snippet":"… words in it. We experiment with the Bangla 300 dimensional word vectors pre-trained7 with Fasttext (Grave et al., 2018) on Wikipedia8 and Common Crawl9, where we have a coverage of 55.21%. Additionally, we experiment …","url":["https://arxiv.org/pdf/2004.08789"]} {"year":"2020","title":"Bangla Text Classification using Transformers","authors":["T Alam, A Khan, F Alam - arXiv preprint arXiv:2011.04446, 2020"],"snippet":"… NSP task is removed and only MLM loss is used for pretraining. XLM-RoBERTa [18] is the multilingual variant of RoBERTa trained with a multilingual MLM.
It is trained on one hundred languages, with more than two terabytes of filtered Common Crawl data …","url":["https://arxiv.org/pdf/2011.04446"]} {"year":"2020","title":"Bankruptcy Map: A System for Searching and Analyzing US Bankruptcy Cases at Scale","authors":["E Choi, G Brassil, K Keller, J Ouyang, K Wang"],"snippet":"… layers [14]. The network was pre-trained on the large OntoNotes dataset, with GloVe vectors used for feature creation trained on Common Crawl data [3, 16]. The model recognized named entities and their types. We collected …","url":["https://cpb-us-w2.wpmucdn.com/express.northeastern.edu/dist/d/53/files/2020/02/CJ_2020_paper_57.pdf"]} {"year":"2020","title":"BARThez: a Skilled Pretrained French Sequence-to-Sequence Model","authors":["MK Eddine, AJP Tixier, M Vazirgiannis - arXiv preprint arXiv:2010.12321, 2020"],"snippet":"… Other than that, BARThez corpus is similar to FlauBERT's. It primarily consists in the French part of CommonCrawl, NewsCrawl, Wikipedia and other smaller corpora that are listed in Table 1. To clean the corpus from noisy examples …","url":["https://arxiv.org/pdf/2010.12321"]} {"year":"2020","title":"Benchmarking Neural and Statistical Machine Translation on Low-Resource African Languages","authors":["K Duh, P McNamee, M Post, B Thompson"],"snippet":"… The columns CommonCrawl and Wikipedia indicate the amount of monolingual data on the web, which can be viewed as an indicator of the upper limit of how much web-crawled data we may be able to obtain. CommonCrawl …","url":["https://pdfs.semanticscholar.org/3bde/97a22dab1147b0f3209805315bbff9b82674.pdf"]} {"year":"2020","title":"BERT-based Ensembles for Modeling Disclosure and Support in Conversational Social Media Text","authors":["K Pant, T Dadu, R Mamidi - 2020"],"snippet":"… Gokaslan, A., Cohen, V.: Openwebtext corpus. http://Skylion007.github.io/OpenWebTextCorpus (2019) 5. Nagel, S.: Cc-news (2016), http://web.archive.org/save/http://commoncrawl.org/2016/10/newsdataset-available/ 6. Rajendran …","url":["http://web2py.iiit.ac.in/research_centres/publications/download/inproceedings.pdf.83b74a0e278a6d03.424552542d626173656420456e73656d626c657320666f72204d6f64656c696e6720446973636c6f73757265202620537570706f727420696e20436f6e766572736174696f6e616c20536f6369616c204d6564696120546578742e706466.pdf"]} {"year":"2020","title":"BERT-Based Simplification of Japanese Sentence-Ending Predicates in Descriptive Text","authors":["T Kato, R Miyata, S Sato - Proceedings of the 13th International Conference on …, 2020"],"snippet":"… For any parts having more than one word, the average of the embedding is used. To obtain the embedding vectors, we used existing Japanese pre-trained word vectors that were trained on Common Crawl and Wikipedia using fastText.7 …","url":["https://www.aclweb.org/anthology/2020.inlg-1.31.pdf"]} {"year":"2020","title":"BERTweet: A pre-trained language model for English Tweets","authors":["DQ Nguyen, T Vu, AT Nguyen - arXiv preprint arXiv:2005.10200, 2020"],"snippet":"… The pre-trained RoBERTa is a strong language model for English, learned from 160GB of texts covering books, Wikipedia, CommonCrawl news, CommonCrawl stories, and web text contents. XLM-R is a cross-lingual variant …","url":["https://arxiv.org/pdf/2005.10200"]} {"year":"2020","title":"Better Web Corpora For Corpus Linguistics And NLP","authors":["V Suchomel"],"snippet":"Page 1. Masaryk University Faculty of Informatics Better Web Corpora For Corpus Linguistics And NLP Doctoral Thesis Vít Suchomel Brno, Spring 2020 Page 2.
Masaryk University Faculty of Informatics Better Web Corpora …","url":["https://is.muni.cz/th/u4rmz/Better_Web_Corpora_For_Corpus_Linguistics_And_NLP.pdf"]} {"year":"2020","title":"Beyond English-Centric Multilingual Machine Translation","authors":["A Fan, S Bhosale, H Schwenk, Z Ma, A El-Kishky… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Beyond English-Centric Multilingual Machine Translation Angela Fan∗, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary …","url":["https://arxiv.org/pdf/2010.11125"]} {"year":"2020","title":"Beyond Instructional Videos: Probing for More Diverse Visual-Textual Grounding on YouTube","authors":["J Hessel, Z Zhu, B Pang, R Soricut - arXiv preprint arXiv:2004.14338, 2020"],"snippet":"… corresponding visual content. However, in contrast to the highly diverse corpora utilized for text-based pretraining (Wikipedia, Common Crawl, etc.), pretraining for web videos so far has been limited to instructional videos. This domain …","url":["https://arxiv.org/pdf/2004.14338"]} {"year":"2020","title":"Biases as Values: Evaluating Algorithms in Context","authors":["M Díaz - 2020"],"snippet":"Page 1. NORTHWESTERN UNIVERSITY Biases as Values: Evaluating Algorithms in Context A DISSERTATION SUBMITTED TO THE GRADUATE SCHOOL IN PARTIAL FULFILLMENT OF THE REQUIREMENTS for the degree DOCTOR OF PHILOSOPHY …","url":["http://search.proquest.com/openview/83eed19485a394e067ee5a9b03d84ef2/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2020","title":"Biomedical Information Extraction Pipelines for Public Health in the Age of Deep Learning","authors":["AM Ranganatha - 2019"],"snippet":"Page 1. Biomedical Information Extraction Pipelines for Public Health in the Age of Deep Learning by Arjun Magge Ranganatha A Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy …","url":["http://search.proquest.com/openview/bf6cbc4695dc3135c8e78ff548e670f8/1?pq-origsite=gscholar&cbl=2026366&diss=y"]} {"year":"2020","title":"Blocking Techniques for Entity Linkage: A Semantics-Based Approach","authors":["F Azzalini, S Jin, M Renzi, L Tanca - Data Science and Engineering, 2020"],"snippet":"… each attribute value t[A_k] is transformed into a real-valued vector v(w). The fastText model we use is crawl-300d-2M-subword [23] where each word is represented as a 300-dimensional vector and the …","url":["https://link.springer.com/article/10.1007/s41019-020-00146-w"]} {"year":"2020","title":"Bottom-Up Modeling of Permissions to Reuse Residual Clinical Biospecimens and Health Data","authors":["E Umberfield - 2020"],"snippet":"Page 1.
Bottom-Up Modelling of Permissions to Reuse Residual Clinical Biospecimens and Health Data by Elizabeth Umberfield A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor …","url":["https://deepblue.lib.umich.edu/bitstream/handle/2027.42/162937/eliewolf_1.pdf?sequence=1"]} {"year":"2020","title":"BREXIT: Psychometric Profiling the Political Salubrious through Machine Learning","authors":["J Usher, P Dondio"],"snippet":"… We used the English multi-task Convolutional Neural Network trained on OntoNotes, with GloVe vectors trained on Common Crawl, which assigns word vectors, context-specific token vectors, POS tags, dependency parse …","url":["http://wims2020.sigappfr.org/wp-content/uploads/2020/06/WIMS'20/p178-Usher.pdf"]} {"year":"2020","title":"BREXIT: Psychometric Profiling the Political Salubrious through Machine Learning: Predicting personality traits of Boris Johnson through Twitter political text","authors":["J Usher, P Dondio - Proceedings of the 10th International Conference on …, 2020"],"snippet":"… We used the English multi-task Convolutional Neural Network trained on OntoNotes, with GloVe vectors trained on Common Crawl, which assigns word vectors, context-specific token vectors, POS tags, dependency parse …","url":["https://dl.acm.org/doi/abs/10.1145/3405962.3405981"]} {"year":"2020","title":"Building a user-generated content north-african arabizi treebank: Tackling hell","authors":["D Seddah, F Essaidi, A Fethi, M Futeral, B Muller… - Proceedings of the 58th …, 2020"],"snippet":"… used data-driven language identification models to extract NArabizi samples among the whole collection of the Common-Crawl-based OSCAR … one based on search query-based web-crawling and the other from a cleaned version …","url":["https://www.aclweb.org/anthology/2020.acl-main.107.pdf"]} {"year":"2020","title":"Building a Wide Reach Corpus for Secure Parser Development","authors":["T Allison, W Burke, V Constantinou, E Goh…"],"snippet":"… [17] CGR Lavanya Pamulaparty and MS Rao, “A novel approach for avoiding overload in the web crawling.” Odisha, India: High Performance Computing and Applications (ICHPCA), 2014. [18] “Common Crawl,” https://commoncrawl.org …","url":["http://spw20.langsec.org/papers/corpus_LangSec2020.pdf"]} {"year":"2020","title":"Building LARO: Language Agnostic Sentence Embeddings from finetuned RoBERTa⋆","authors":["AS Salvado"],"snippet":"… XLM-RoBERTa is the successor of RoBERTa and a large multi-lingual language model. It is a Transformer-based model, technically a Transformer encoder, and was trained on 2.5TB of filtered CommonCrawl data in 100 …","url":["https://users.informatik.haw-hamburg.de/~ubicomp/projekte/master2020-proj/soblechero.pdf"]} {"year":"2020","title":"Building Web Corpora for Minority Languages","authors":["H Jauhiainen, T Jauhiainen, K Lindén - Proceedings of the 12th Web as Corpus …, 2020"],"snippet":"… Common Crawl Foundation3 regularly crawls the Internet and offers the texts it finds for free download. Smith et al … Kanerva et al. (2014) used the morphological analyser OMorFi4 to find Finnish sentences in the Common Crawl corpus …","url":["https://www.aclweb.org/anthology/2020.wac-1.4.pdf"]} {"year":"2020","title":"Caliskan Et Al-authors-full","authors":["A Caliskan, JJ Bryson, A Narayanan"],"snippet":"… We use the largest of the four corpora provided—the “Common Crawl” corpus obtained from a large-scale crawl of the web, containing 840 billion tokens (roughly, words).
Tokens in this corpus are case-sensitive, resulting in 2.2 million different ones …","url":["https://www.studeersnel.nl/nl/document/technische-universiteit-delft/machine-learning/werkstukessay/caliskan-et-al-authors-full/9896508/view"]} {"year":"2020","title":"CALM: Continuous Adaptive Learning for Language Modeling","authors":["K Arumae, P Bhatia - arXiv preprint arXiv:2004.03794, 2020"],"snippet":"… We processed publicly available biomedical and non-biomedical corpora for pre-training our models. For non-biomedical data, we use BookCorpus and English Wikipedia data, CommonCrawl Stories (Trinh and Le, 2018), and OpenWebText (Gokaslan and Cohen) …","url":["https://arxiv.org/pdf/2004.03794"]} {"year":"2020","title":"Can Embeddings Adequately Represent Medical Terminology? New Large-Scale Medical Term Similarity Datasets Have the Answer!","authors":["C Schulz, D Juric - arXiv preprint arXiv:2003.11082, 2020"],"snippet":"… 3) Non-medical: As a comparison, we also include - the GloVe word embedding (Pennington, Socher, and Manning 2014); - 2 Fasttext embeddings trained on Wikipedia and Common Crawl (plus its model (M)) (Mikolov et al. 2018) …","url":["https://arxiv.org/pdf/2003.11082"]} {"year":"2020","title":"Can Emojis Convey Human Emotions? A Study to Understand the Association between Emojis and Emotions","authors":["AAM Shoeb, G de Melo - Proceedings of the 2020 Conference on Empirical …, 2020"],"snippet":"… Here, sim(v1,v2) denotes the cosine similarity between two vectors. We first consider the widely used 300-dimensional GloVe (Pennington et al., 2014) models pretrained on CommonCrawl 840B and Twitter, as these contain emojis …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.720.pdf"]} {"year":"2020","title":"Can I Take Your Subdomain? Exploring Related-Domain Attacks in the Modern Web","authors":["M Squarcina, M Tempesta, L Veronese, S Calzavara… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Can I Take Your Subdomain? Exploring Related-Domain Attacks in the Modern Web Marco Squarcina1, Mauro Tempesta1, Lorenzo Veronese1, Stefano Calzavara2, Matteo Maffei1 1TU Wien, 2Università Ca' Foscari Venezia Abstract …","url":["https://arxiv.org/pdf/2012.01946"]} {"year":"2020","title":"Can Knowledge Rich Sentences Help Language Models to Solve Common Sense Reasoning Problems?","authors":["A Prakash - 2019"],"snippet":"… 20 Page 33. Figure 3. RoBERTa network architecture 2. CommonCrawl News was used, which contained 63 million news articles between September 2016 and February 2019 3. OPENWEBTEXT (Gokaslan and Cohen 2019) which is a text corpus containing …","url":["http://search.proquest.com/openview/08e0d3a7c85dcbbd4875abd0d3c48e17/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2020","title":"Capturing Word Order in Averaging Based Sentence Embeddings","authors":["JH Lee, JC Collados, LE Anke, S Schockaert"],"snippet":"… two million words. These word vectors were trained on Common Crawl with 600 billion tokens [19]. The sentences that we used for training, validation and testing were obtained from an English Wikipedia dump. All sentences …","url":["http://josecamachocollados.com/papers/ECAI2020_Capturing_Word_Order_in_Averaging_Based_Sentence_Embeddings.pdf"]} {"year":"2020","title":"Caspar: Extracting and Synthesizing User Stories of Problems from App Reviews","authors":["H Guo, MP Singh"],"snippet":"Page 1.
Caspar: Extracting and Synthesizing User Stories of Problems from App Reviews Hui Guo Secure Computing Institute North Carolina State University Raleigh, North Carolina hguo5@ncsu.edu Munindar P. Singh Secure …","url":["https://hguo5.github.io/Caspar/docs/Caspar_ICSE_20.pdf"]} {"year":"2020","title":"CatchPhish: A URL and Anti-Phishing Research Platform","authors":["S Waddell"],"snippet":"Page 1. CatchPhish: A URL and Anti-Phishing Research Platform Stephen Waddell MInf Project (Part 2) Report Master of Informatics School of Informatics University of Edinburgh 2020 Page 2. Page 3. 3 Abstract In this work, I …","url":["https://groups.inf.ed.ac.uk/tulips/projects/19-20/waddell-2020.pdf"]} {"year":"2020","title":"CauseNet: Towards a Causality Graph Extracted from the Web","authors":["S Heindorf, Y Scholten, H Wachsmuth, ACN Ngomo…"],"snippet":"… Web To extract causal relations from the web at scale, we analyze the ClueWeb12 web crawl, which comprises about 733,019,372 English web pages crawled between February and May 2012.3 We chose this crawl over …","url":["https://webis.de/downloads/publications/papers/potthast_2020a.pdf"]} {"year":"2020","title":"CC-News-En: A Large English News Corpus","authors":["J Mackenzie, R Benham, M Petri, JR Trippas…"],"snippet":"… Temporal Growth. The Common Crawl foundation are constantly adding new documents to CC-News … 10DMOZ is now superseded by Curlie: https://www.curlie.org 11https://github.com/commoncrawl/news-crawl/issues/8 Page 4 …","url":["https://www.johannetrippas.com/papers/mackenzie2020ccnews.pdf"]} {"year":"2020","title":"CCAligned: A Massive Collection of Cross-Lingual Web-Document Pairs","authors":["A El-Kishky, V Chaudhary, F Guzmán, P Koehn - Proc. of EMNLP, 2020"],"snippet":"… we mined over 392 million aligned documents (100M with English and 292M without English) across 68 Common Crawl snapshots. We assess the efficacy of this rule-based alignment in the next section. We select a small subset …","url":["https://www.researchgate.net/profile/Ahmed_El-Kishky/publication/337273813_A_Massive_Collection_of_Cross-Lingual_Web-Document_Pairs/links/5f992509458515b7cfa40eb4/A-Massive-Collection-of-Cross-Lingual-Web-Document-Pairs.pdf"]} {"year":"2020","title":"Chart-based Zero-shot Constituency Parsing on Multiple Languages","authors":["T Kim, B Li, S Lee"],"snippet":"… (2019)), the XLM model trained with masked language modeling on 100 languages (XLM, Conneau and Lample (2019)), and the XLM-R and XLM-R-large models that are trained with the filtered CommonCrawl data (Wenzek et al. 2019) by Conneau et al. (2019) …","url":["https://openreview.net/pdf?id=JY-3BheD5LB"]} {"year":"2020","title":"ChemTables: A Dataset for Semantic Classification of Tables in Chemical Patents","authors":["Z Zhai, C Druckenbrodt, C Thorne, SA Akhondi… - 2020"],"snippet":"… This model and the baselines it compares to are evaluated on a web table dataset, built by extracting tables from top 500 web pages which contain the highest numbers of tables in a subset of the April 2016 Common Crawl corpus [20] …","url":["https://www.researchsquare.com/article/rs-127219/latest.pdf"]} {"year":"2020","title":"Circles are like Ellipses, or Ellipses are like Circles? Measuring the Degree of Asymmetry of Static and Contextual Embeddings and the Implications to Representation …","authors":["W Zhang, M Campbell, Y Yu, S Kumaravel - arXiv preprint arXiv:2012.01631, 2020"],"snippet":"Page 1. Circles are like Ellipses, or Ellipses are like Circles?
Measuring the Degree of Asymmetry of Static and Contextual Word Embeddings and the Implications to Representation Learning Wei Zhang 1, Murray Campbell 1 …","url":["https://arxiv.org/pdf/2012.01631"]} {"year":"2020","title":"CiTIUS at the TREC 2020 Health Misinformation Track","authors":["M Fernández-Pichel, DE Losada, JC Pichel…"],"snippet":"… 2 DOCUMENTS AND TOPICS In the TREC 2020 Health Misinformation Track, a news corpus from January 2020 to April 2020 was provided. The documents were obtained from CommonCrawl News, which contains news articles from all over the world …","url":["http://persoal.citius.usc.es/jcpichel/docs/2020_TREC_MFernandezPichel.pdf"]} {"year":"2020","title":"Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network","authors":["M Karim, BR Chakravarthi, M Arcan, JP McCrae… - arXiv preprint arXiv …, 2020"],"snippet":"… The fourth one called fastText [17], which is trained on common crawl and Wikipedia using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. Eventually …","url":["https://arxiv.org/pdf/2004.07807"]} {"year":"2020","title":"Classification of cancer pathology reports with Deep Learning methods","authors":["S Martina"],"snippet":"Page 1. PHD PROGRAM IN SMART COMPUTING DIPARTIMENTO DI INGEGNERIA DELL'INFORMAZIONE (DINFO) Classification of cancer pathology reports with Deep Learning methods Stefano Martina Dissertation presented …","url":["https://flore.unifi.it/bitstream/2158/1187936/1/thesis.pdf"]} {"year":"2020","title":"Classification of cancer pathology reports: a large-scale comparative study","authors":["S Martina, L Ventura, P Frasconi - IEEE Journal of Biomedical and Health Informatics, 2020"],"snippet":"… GloVe) [20]). It is a common practice to use pre-compiled libraries of word vectors trained on several billion tokens extracted from various sources such as Wikipedia, the English Gigaword 5, Common Crawl, or Twitter. These …","url":["https://arxiv.org/pdf/2006.16370"]} {"year":"2020","title":"Classification of Cyberbullying Text in Arabic","authors":["BA Rachid, H Azza, HHB Ghezala - 2020 International Joint Conference on Neural …, 2020"],"snippet":"… The second set of pre-trained embeddings is the one provided in [24], in which word vectors were trained on online encyclopedia Wikipedia and the Common Crawl corpus using an extension of the fastText model (Fasttext embeddings) …","url":["https://ieeexplore.ieee.org/abstract/document/9206643/"]} {"year":"2020","title":"Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection","authors":["E Raff, W Fleshman, R Zak, HS Anderson, B Filar… - arXiv preprint arXiv …, 2020"],"snippet":"… This better demonstrates the gap between current deep learning and domain knowledge based approaches for classifying malware. We also use the Common Crawl to collect 676,843 benign PDF …","url":["https://arxiv.org/pdf/2012.09390"]} {"year":"2020","title":"CLEF eHealth Evaluation Lab 2020","authors":["M Krallinger"],"snippet":"… This collection consists of over 5 million medical webpages from selected domains acquired from the CommonCrawl [7].
Given the positive feedback received for this document collection, it will be used again in the 2020 CHS task …","url":["https://link.springer.com/content/pdf/10.1007/978-3-030-45442-5_76.pdf"]} {"year":"2020","title":"Clinical XLNet: Modeling Sequential Clinical Notes and Predicting Prolonged Mechanical Ventilation","authors":["K Huang, A Singh, S Chen, ET Moseley, C Deng… - arXiv preprint arXiv …, 2019"],"snippet":"… Pretraining Clinical XLNet. The text representation generated from large pre-training models depends on the corpus it is pre-trained on. XLNet is pre-trained on common language corpora such as BookCorpus, Wikipedia, Common Crawl, etc. …","url":["https://arxiv.org/pdf/1912.11975"]} {"year":"2020","title":"CLUE: A Chinese Language Understanding Evaluation Benchmark","authors":["L Xu, X Zhang, L Li, H Hu, C Cao, W Liu, J Li, Y Li… - arXiv preprint arXiv …, 2020"],"snippet":"… CLUECorpus2020 (Xu et al., 2020) It contains 100 GB Chinese raw corpus, which is retrieved from Common Crawl … CLUEOSCAR6 OSCAR is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus …","url":["https://arxiv.org/pdf/2004.05986"]} {"year":"2020","title":"CLUECorpus2020: A Large-scale Chinese Corpus for Pre-training Language Model","authors":["L Xu, X Zhang, Q Dong - arXiv preprint arXiv:2003.01355, 2020"],"snippet":"… Common Crawl is an organization that crawls the web and freely provides its archives and datasets to the public. Common Crawl usually crawls internet web content once a month. Common Crawl's web archives consist of petabytes of data collected since 2011 …","url":["https://arxiv.org/pdf/2003.01355"]} {"year":"2020","title":"Clustering Approach to Topic Modeling in Users Dialogue","authors":["E Feldina, O Makhnytkina - Proceedings of SAI Intelligent Systems Conference, 2020"],"snippet":"… FastText using FastText weights based on the pre-trained CBOW model with a word window size of five trained on Common Crawl and Wikipedia, vector size 300. Table 2. Results of the implementation of the clustering …","url":["https://link.springer.com/chapter/10.1007/978-3-030-55187-2_44"]} {"year":"2020","title":"CO-EVOLUTION OF CULTURE AND MEANING REVEALED THROUGH LARGE-SCALE SEMANTIC ALIGNMENT","authors":["B THOMPSON, S ROBERTS, G LUPYAN"],"snippet":"… We also replicated on word embeddings derived from the OpenSubtitles database (Lison & Tiedemann, 2016) and a combination of Wikipedia and the Common Crawl dataset (Grave, Bojanowski, Gupta, Joulin, & Mikolov, 2018) …","url":["https://brussels.evolang.org/proceedings/papers/EvoLang13_paper_62.pdf"]} {"year":"2020","title":"COD3S: Diverse Generation with Discrete Semantic Signatures","authors":["N Weir, J Sedoc, B Van Durme - arXiv preprint arXiv:2010.02882, 2020"],"snippet":"… The model is trained on the co-released corpus CausalBank, which comprises causal statements harvested from English Common Crawl (Buck et al., 2014) … 2014.
N-gram counts and language models from the common crawl …","url":["https://arxiv.org/pdf/2010.02882"]} {"year":"2020","title":"Combination of Neural Machine Translation Systems at WMT20","authors":["B Marie, R Rubino, A Fujita - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… As English monolingual data, we used all the provided data, but sampled only 200M lines from the “Common Crawl” corpora, except the “News Discussions” and “Wiki Dumps” corpora … corpora but also sampled only …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.23.pdf"]} {"year":"2020","title":"Combinatorial feature embedding based on CNN and LSTM for biomedical named entity recognition","authors":["M Cho, C Park, J Ha, S Park - Journal of Biomedical Informatics, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S1532046420300083"]} {"year":"2020","title":"Combining Character and Word Embeddings for Affect in Arabic Informal Social Media Microblogs","authors":["AI Alharbi, M Lee - International Conference on Applications of Natural …, 2020"],"snippet":"… Here, the researchers derived the training data from three separate sources: Wikipedia, Twitter and Common Crawl webpages crawl data; they employed two word-level models to learn word representations for general NLP tasks …","url":["https://link.springer.com/chapter/10.1007/978-3-030-51310-8_20"]} {"year":"2020","title":"Combining Different Parsers and Datasets for CAPITEL UD Parsing","authors":["F Sánchez-León - Proceedings of the Iberian Languages Evaluation …, 2020"],"snippet":"… Besides, we have used fastText word embeddings trained on Common Crawl and Wikipedia corpora.5 Table 1 shows results on development set of a model built with the training material using different word embeddings.6 Increasing …","url":["http://ceur-ws.org/Vol-2664/capitel_paper1.pdf"]} {"year":"2020","title":"Combining Visual and Textual Features for Semantic Segmentation of Historical Newspapers","authors":["R Barman, M Ehrmann, S Clematide, SA Oliveira… - arXiv preprint arXiv …, 2020"],"snippet":"… First, four pre-trained embeddings of the Flair library14 are used with their default implementation settings, as follows: - fastText-fr, ie the French fastText embeddings of size 300 pre-trained on Common Crawl and Wikipedia; …","url":["https://arxiv.org/pdf/2002.06144"]} {"year":"2020","title":"Commonsense Aesthetics","authors":["AK Roek - 2020"],"snippet":"Page 1. COMMONSENSE AESTHETICS Aaron Kurosu Roek A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN CANDIDACY FOR THE DEGREE OF DOCTOR OF …","url":["http://search.proquest.com/openview/ec2e04cd24fc776ff0cb09566fcf7621/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2020","title":"Commonsense Learning: An Indispensable Path towards Human-centric Multimedia","authors":["B Huang, S Tang, G Shen, G Li, X Wang, W Zhu - … of the 1st International Workshop on …, 2020"],"snippet":"… BERT uses Bookcorpus [84] and English Wikipedia with a size of about 13GB, while XLNet uses more than 100GB of corpus. T5 uses Colossal Clean Crawled Corpus captured from the Common Crawl website with a size of 750GB …","url":["https://dl.acm.org/doi/abs/10.1145/3422852.3423484"]} {"year":"2020","title":"Communication-Efficient String Sorting","authors":["T Bingmann, P Sanders, M Schimek - arXiv preprint arXiv:2001.08516, 2020"],"snippet":"Page 1.
arXiv:2001.08516v1 [cs.DC] 23 Jan 2020 Communication-Efficient String Sorting Timo Bingmann, Peter Sanders, Matthias Schimek Karlsruhe Institute of Technology, Karlsruhe, Germany {bingmann,sanders}@kit.edu, matthias schimek@gmx.de …","url":["https://arxiv.org/pdf/2001.08516"]} {"year":"2020","title":"Comparative Analysis of Deep Learning Models for Myanmar Text Classification","authors":["MS Phyu, KT Nwet - Asian Conference on Intelligent Information and …, 2020"],"snippet":"… Grave et al. [3] published pre-trained word vectors for two hundred forty-six languages trained on common crawl and Wikipedia. They proposed bag-of-character n-grams based on skip-gram that could capture sub-word information to enrich word vectors …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41964-6_7"]} {"year":"2020","title":"Comparative Analysis of Machine Learning Algorithms for Computer-Assisted Reporting Based on Fully Automated Cross-Lingual RadLex® Mappings","authors":["ME Maros, CG Cho, AG Junge, B Kämpgen, V Saase… - 2020"],"snippet":"… However, pre-trained word vector models for 157 languages, which were pre-trained on Common Crawl and Wikipedia by the fastText package authors are available for direct download (https://fasttext.cc/docs/en/crawl-vectors.html) [58] …","url":["https://www.preprints.org/manuscript/202004.0354/download/final_file"]} {"year":"2020","title":"COMPARATIVE ANALYSIS OF SUBDOMAIN ENUMERATION TOOLS AND STATIC CODE ANALYSIS","authors":["GJ Kathrine, RT Baby, V Ebenzer"],"snippet":"… Certificates= censys,certspotter, Google CT APIs: AlienVault, BinaryEdge, BufferOver, CIRCL, CommonCrawl, DNSDB, GitHub, HackerTarget, NetworksDB, PassiveTotal, Pastebin.. Web Archives: ArchiveIt, ArchiveToday, Arquivo, Wayback and others …","url":["https://www.researchgate.net/profile/Ronnie_Joseph2/publication/342501456_COMPARATIVE_ANALYSIS_OF_SUBDOMAIN_ENUMERATION_TOOLS_AND_STATIC_CODE_ANALYSIS/links/5ef76e2d299bf18816eae517/COMPARATIVE-ANALYSIS-OF-SUBDOMAIN-ENUMERATION-TOOLS-AND-STATIC-CODE-ANALYSIS.pdf"]} {"year":"2020","title":"Comparative Analysis of Word Embeddings for Capturing Word Similarities","authors":["M Toshevska, F Stojanovska, J Kalajdjieski - arXiv preprint arXiv:2005.03812, 2020"],"snippet":"… architectures [23]. In our experiments, we have used pre-trained models both trained with subword information on Wikipedia 2017 (16B tokens) and trained with subword information on Common Crawl (600B tokens)4. 
2 https …","url":["https://arxiv.org/pdf/2005.03812"]} {"year":"2020","title":"Comparing Different Methods for Named Entity Recognition in Portuguese Neurology Text","authors":["F Lopes, C Teixeira, HG Oliveira - Journal of Medical Systems, 2020"],"snippet":"… In order to check which was preferable, two different WE models were used: A pre-trained general Portuguese FastText model, Footnote 3 based on billions of tokens from Wikipedia and Common Crawl [41], with a 5-character window (general language) …","url":["https://link.springer.com/article/10.1007/s10916-020-1542-8"]} {"year":"2020","title":"Comparing High Dimensional Word Embeddings Trained on Medical Text to Bag-of-Words for Predicting Medical Codes","authors":["V Yogarajan, H Gouk, T Smith, M Mayo, B Pfahringer - Asian Conference on …, 2020"],"snippet":"… Our embeddings are trained to the exact same specifications as the Wikipedia and common crawl fastText models in [10] … For 300-dimensional embeddings, W300 are word embeddings that are trained by fastText on Wikipedia and other common crawl text …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41964-6_9"]} {"year":"2020","title":"Comparing Neural Network Parsers for a Less-resourced and Morphologically-rich Language: Amharic Dependency Parser","authors":["BE Seyoum, Y Miyao, BY Mekonnen - Proceedings of the first workshop on …, 2020"],"snippet":"… For this purpose, we used the trained model for Amharic using fasttext7. The data for training the model is from Wikipedia and Common Crawl8. The models were trained using continuous bag of words (CBOW) …","url":["https://www.aclweb.org/anthology/2020.rail-1.5.pdf"]} {"year":"2020","title":"Comparing pre-trained language models for Spanish hate speech detection","authors":["FM Plaza-del-Arco, MD Molina-González… - Expert Systems with …, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S095741742030868X"]} {"year":"2020","title":"Comparing Probabilistic, Distributional and Transformer-Based Models on Logical Metonymy Interpretation","authors":["G Rambelli, E Chersoni, A Lenci, P Blache, CR Huang - … of the 1st Conference of the …, 2020"],"snippet":"… in random order. XLNet's training corpora were the same as BERT plus Giga5, ClueWeb 2012-B and Common Crawl, for a total of 32.89B subword pieces. Also in this case, we used the large pre-trained model. GPT-2 (Radford …","url":["https://www.aclweb.org/anthology/2020.aacl-main.26.pdf"]} {"year":"2020","title":"Comparing supervised learning algorithms for Spatial Nominal Entity recognition","authors":["A Medad, M Gaio, L Moncla, S Mustière, YL Nir - AGILE: GIScience Series, 2020"],"snippet":"… We have made the hypothesis that as FastText is a model of pretrained vectors (300 dimensions) on Wikipedia and Common Crawl, it provides a generic representation of words … CC BY 4.0 License. Page 12.
Common Crawl and Wikipedia using the CBOW method …","url":["https://agile-giss.copernicus.org/articles/1/15/2020/agile-giss-1-15-2020.pdf"]} {"year":"2020","title":"Comparison between machine learning and human learning from examples generated with machine teaching","authors":["GE Jaimovitch López - 2020"],"snippet":"… For instance, progress in areas like NLP (Natural Language Processing) has led to the development of outstanding deep neural networks such as GPT-3, a task-agnostic model trained using huge data repositories like …","url":["https://riunet.upv.es/bitstream/handle/10251/152771/Jaimovitch%20-%20Comparaci%C3%B3n%20entre%20el%20aprendizaje%20de%20machine%20learning%20y%20humanos%20desde%20ejemplos%20genera....pdf?sequence=1"]} {"year":"2020","title":"Comparison of Named Entity Recognition Tools Applied to News Articles","authors":["S Vychegzhanin, E Kotelnikov - 2019 Ivannikov Ispras Open Conference (ISPRAS), 2019"],"snippet":"… spaCy Python MIT Bloom embeddings and a residual convolutional neural network en_core_web_sm OntoNotes en_core_web_md OntoNotes, Common Crawl en_core_web_lg OntoNotes, Common Crawl xx_ent_wiki_sm WikiNER ru2 …","url":["https://ieeexplore.ieee.org/abstract/document/8991165/"]} {"year":"2020","title":"Comprehensive Stereotype Content Dictionaries Using a Semi‐Automated Method","authors":["G Nicolas, X Bai, ST Fiske - European Journal of Social Psychology"],"snippet":"… word embeddings used here are Word2Vec's model pretrained on Google News (Mikolov, Chen, Corrado, & Dean, 2013) and GloVe's model pretrained on the Common Crawl (Pennington, Socher, & Manning, 2014; presented in Supplement) …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/ejsp.2724"]} {"year":"2020","title":"Computational Approaches for Identifying Sensational Soft News","authors":["V Indurthi - 2020"],"snippet":"Page 1. Computational Approaches for Identifying Sensational Soft News Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science and Engineering by Research by Vijayasaradhi Indurthi 201450803 …","url":["http://web2py.iiit.ac.in/research_centres/publications/download/mastersthesis.pdf.a54f3e371a9ec80e.46696e616c5f5468657369735f4d535f42795f52657365617263685f56696a617961736172616468695f496e6475727468692e706466.pdf"]} {"year":"2020","title":"Computational explorations of semantic cognition","authors":["AS Rotaru - 2020"],"snippet":"Page 1. 1 UNIVERSITY COLLEGE LONDON (UCL) Computational explorations of semantic cognition PHD THESIS Armand Stefan Rotaru Supervisors: Primary: Prof. Gabriella Vigliocco Secondary: Prof. Lewis Griffin Page 2. 2 …","url":["https://discovery.ucl.ac.uk/id/eprint/10106344/13/Rotaru_10106344_thesis.pdf"]} {"year":"2020","title":"Computational Mechanisms of Language Understanding and Use in the Brain and Behaviour","authors":["I Kajic - 2020"],"snippet":"Page 1.
Computational Mechanisms of Language Understanding and Use in the Brain and Behaviour by Ivana Kajić A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Computer Science …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/16439/Kajic_Ivana.pdf?sequence=1"]} {"year":"2020","title":"Connections and selections: Comparing multivariate predictions and parameter associations from latent variable models of picture naming","authors":["GM Walker, J Fridriksson, G Hickok - Cognitive Neuropsychology, 2020"],"snippet":"Connectionist simulation models and processing tree mathematical models of picture naming have complementary advantages and disadvantages. These model types were compared in terms of their predicti...","url":["https://www.tandfonline.com/doi/abs/10.1080/02643294.2020.1837092"]} {"year":"2020","title":"Constraining the Transformer NMT Model with Heuristic Grid Beam Search","authors":["G Xie, A Way, J Du, L Wang"],"snippet":"… the training corpus consists of 4.4 Million segments from Europarl (Koehn, 2005) and CommonCrawl (Smith et al., 2013); … Smith, JR, Saint-Amand, H., Plamada, M., Koehn, P., Callison-Burch, C., and Lopez, A. (2013). Dirt cheap …","url":["http://www.computing.dcu.ie/~away/PUBS/2020/TransformerGridSearch.pdf"]} {"year":"2020","title":"Contemporary Polish Language Model (Version 2) Using Big Data and Sub-Word Approach","authors":["K Wołk"],"snippet":"… In this paper, we present a set of 6-gram language models based on a big-data training of the contemporary Polish language, using the Common Crawl corpus (a compilation of over 3.25 billion webpages) and other resources …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3915/attachments/957/996/Thu-3-8-7.pdf"]} {"year":"2020","title":"Context-aware Feature Generation for Zero-shot Semantic Segmentation","authors":["Z Gu, S Zhou, L Niu, Z Zhao, L Zhang - arXiv preprint arXiv:2008.06893, 2020"],"snippet":"… in the supplementary. Following SPNet [43], we concatenate two different types of word embeddings (d = 600, 300 for each), ie, word2vec [30] trained on Google News and fastText [15] trained on Common Crawl. The word …","url":["https://arxiv.org/pdf/2008.06893"]} {"year":"2020","title":"Contextual Question Answering with Improved Embedding Models","authors":["G He"],"snippet":"… In the GloVe word embedding based BiDAF++ model, we utilize pretrained GloVe (Pennington et al., 2014) embeddings with 300 output dimensions (840B.300d). These embeddings have been pretrained on a common crawl of 840 billion tokens …","url":["https://georgehe.me/coqa.pdf"]} {"year":"2020","title":"Contextualized Embeddings in Named-Entity Recognition: An Empirical Study on Generalization","authors":["B Taillé, V Guigue, P Gallinari - arXiv preprint arXiv:2001.08053, 2020"],"snippet":"… 3 Word Representations Word embeddings map each word to a single vector which results in a lexical representation. We take GloVe 840B embeddings [13] trained on Common Crawl as the pretrained word embeddings baseline …","url":["https://arxiv.org/pdf/2001.08053"]} {"year":"2020","title":"Contextualized Emotion Recognition in Conversation as Sequence Tagging","authors":["Y Wang, J Zhang, J Ma, S Wang, J Xiao - Proceedings of the 21th Annual Meeting of …, 2020"],"snippet":"… GloVe vectors trained on Common Crawl 840B with 300 dimensions are used as fixed word embeddings.
We use a 12-layers 4-heads Transformer encoder of which the inner-layer dimensionality is 2048 and the hidden size is 100 …","url":["https://www.aclweb.org/anthology/2020.sigdial-1.23.pdf"]} {"year":"2020","title":"Controllable Text Generation","authors":["S Prabhumoye - 2020"],"snippet":"Page 1. CARNEGIE MELLON UNIVERSITY Controllable Text Generation Should machines reflect the way humans interact in society? Thesis Proposal by Shrimai Prabhumoye Thesis proposal submitted in partial fulfillment for the degree of Doctor of Philosophy Thesis committee …","url":["https://www.cs.cmu.edu/~sprabhum/docs/proposal.pdf"]} {"year":"2020","title":"ConvBERT: Improving BERT with Span-based Dynamic Convolution","authors":["Z Jiang, W Yu, D Zhou, Y Chen, J Feng, S Yan - arXiv preprint arXiv:2008.02496, 2020"],"snippet":"Page 1. ConvBERT: Improving BERT with Span-based Dynamic Convolution Zihang Jiang1∗, Weihao Yu1∗, Daquan Zhou1, Yunpeng Chen2, Jiashi Feng1, Shuicheng Yan2 1National University of Singapore, 2Yitu Technology …","url":["https://res.arxiv.org/pdf/2008.02496"]} {"year":"2020","title":"Correcting the Autocorrect: Context-Aware Typographical Error Correction via Training Data Augmentation","authors":["K Shah, G de Melo - arXiv preprint arXiv:2005.01158, 2020"],"snippet":"… from 2We do not rely on embeddings trained on CommonCrawl, as Web data contains substantially more misspelling forms. 3Specifically, those with a character length three standard deviations above or below mean. Hence …","url":["https://arxiv.org/pdf/2005.01158"]} {"year":"2020","title":"Crawling the German Health Web: Exploratory Study and Graph Analysis","authors":["R Zowalla, T Wetter, D Pfeifer - Journal of Medical Internet Research, 2020"],"snippet":"Journal of Medical Internet Research - International Scientific Journal for Medical Research, Information and Communication on the Internet.","url":["https://www.jmir.org/2020/7/e17853/"]} {"year":"2020","title":"Creating semantic representations","authors":["FÅ Nielsen, LK Hansen - Statistical Semantics, 2020"],"snippet":"… Evaluations in 2017 with fastText trained for 3 days on either the very large Common Crawl data set or a combination of the English Wikipedia and news datasets set a new state-of-the-art on 88.5% for the accuracy on a …","url":["https://link.springer.com/chapter/10.1007/978-3-030-37250-7_2"]} {"year":"2020","title":"Creation of a database based on artificial intelligence in order to understand the role played by biofilms on outbreaks","authors":["APP de Melo - 2020"],"snippet":"Page 1. FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO Creation of a database based on artificial intelligence in order to understand the role played by biofilms on outbreaks Ana Patrícia Pinheiro de Melo Integrated Master in Bioengineering …","url":["https://repositorio-aberto.up.pt/bitstream/10216/130013/2/428683.pdf"]} {"year":"2020","title":"Creative Natural Language Generation: Humor and Beyond","authors":["NT Hossain - 2020"],"snippet":"Page 1. Creative Natural Language Generation: Humor and Beyond by Nabil Tarique Hossain Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy Supervised by Professor Henry Kautz and Dr …","url":["https://urresearch.rochester.edu/fileDownloadForInstitutionalItem.action?itemId=36525&itemFileId=189977"]} {"year":"2020","title":"Credibility Assessment of User Generated health information of the Bengali language in microblogging sites employing NLP techniques","authors":["A Benazir, S Sharmin"],"snippet":"… word.
We use the pre-trained word vectors of Fasttext 5 over Word2Vec 6 and GloVe 7 since it has the best multilingual word vectors amongst the three, supporting 157 languages, trained on Common Crawl and Wikipedia. We …","url":["https://www.researchgate.net/profile/Afsara_Benazir2/publication/347524581_Credibility_Assessment_of_User_Generated_health_information_of_the_Bengali_language_in_microblogging_sites_employing_NLP_techniques/links/5fe0f9eaa6fdccdcb8ef603e/Credibility-Assessment-of-User-Generated-health-information-of-the-Bengali-language-in-microblogging-sites-employing-NLP-techniques.pdf"]} {"year":"2020","title":"Cross-Cultural Polarity and Emotion Detection Using Sentiment Analysis and Deep Learning--a Case Study on COVID-19","authors":["AS Imran, SM Doudpota, Z Kastrati, R Bhatra - arXiv preprint arXiv:2008.10031, 2020"],"snippet":"… corpora, a 2010 Wikipedia dump with 1 billion tokens; a 2014 Wikipedia dump with 1.6 billion tokens; Gigaword 5 which has 4.3 billion tokens; the combination Gigaword5 + Wikipedia2014, which has 6 billion tokens; and …","url":["https://arxiv.org/pdf/2008.10031"]} {"year":"2020","title":"Cross-lingual Inductive Transfer to Detect Offensive Language","authors":["K Pant, T Dadu - arXiv preprint arXiv:2007.03771, 2020"],"snippet":"… We further finetune XLM-R, pretrained on 2.5 TB Common Crawl corpus spanning 100 languages … XLM-R is a transformer-based cross-lingual model pretrained using a multilingual masked language model objective on 2.5 …","url":["https://arxiv.org/pdf/2007.03771"]} {"year":"2020","title":"Cross-Lingual Information Retrieval in the Medical Domain","authors":["S Saleh - 2020"],"snippet":"Page 1. DOCTORAL THESIS Shadi Saleh Cross-lingual Information Retrieval in the Medical Domain Institute of Formal and Applied Linguistics Supervisor of the doctoral thesis: doc. RNDr. Pavel Pecina, PhD. Study programme …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/123570/140087429.pdf?sequence=1"]} {"year":"2020","title":"Cross-Lingual Relation Extraction with Transformers","authors":["J Ni, T Moon, P Awasthy, R Florian - arXiv preprint arXiv:2010.08652, 2020"],"snippet":"… Conneau et al., 2020). mBERT was pre-trained with Wikipedia text of 104 languages with the largest sizes, and XLM-R was pre-trained with Wikipedia text and CommonCrawl Corpus of 100 languages. Both models use no …","url":["https://arxiv.org/pdf/2010.08652"]} {"year":"2020","title":"Cross-lingual Retrieval for Iterative Self-Supervised Training","authors":["C Tran, Y Tang, X Li, J Gu - arXiv preprint arXiv:2006.09526, 2020"],"snippet":"… 5 Experiment Evaluation We pretrained an mBART model with Common Crawl dataset constrained to the 25 languages as in [19] for which we have evaluation data … We subsample the resulting common crawl data to 100 million sentences in each language …","url":["https://arxiv.org/pdf/2006.09526"]} {"year":"2020","title":"Cross-lingual Transfer Learning for Semantic Role Labeling in Russian","authors":["I Alimova, E Tutubalina, A Kirillovich"],"snippet":"… The model is also based on Transformer architecture (Vaswani et al., 2017).
We applied the XLM-R Masked Language Model, which is pretrained on 2.5 TB of CommonCrawl data, in 100 languages, with 8 heads, 6 layers, 1024 hidden units per layer …","url":["https://www.researchgate.net/profile/Alexander_Kirillovich/publication/342734555_Cross-lingual_Transfer_Learning_for_Semantic_Role_Labeling_in_Russian/links/5f0410d0299bf1881607dae8/Cross-lingual-Transfer-Learning-for-Semantic-Role-Labeling-in-Russian.pdf"]} {"year":"2020","title":"Cross-Lingual Word Embeddings for Turkic Languages","authors":["E Kuriyozov, Y Doval, C Gómez-Rodríguez - arXiv preprint arXiv:2005.08340, 2020"],"snippet":"… Cross-lingual embeddings used for both experiments were trained under the following conditions: • Monolingual word embeddings were obtained from available pre-trained word vectors (Grave et al., 2018) trained on …","url":["https://arxiv.org/pdf/2005.08340"]} {"year":"2020","title":"Cross-Modal Transfer Learning for Multilingual Speech-to-Text Translation","authors":["C Tran, C Wang, Y Tang, Y Tang, J Pino, X Li - arXiv preprint arXiv:2010.12829, 2020"],"snippet":"… This model is pretrained using two types of noise in g — random span masking and order permutation — as described in [3]. We re-use the finetuned mBART50 models from [13] which are pretrained on …","url":["https://arxiv.org/pdf/2010.12829"]} {"year":"2020","title":"CS-NLP team at SemEval-2020 Task 4: Evaluation of State-of-the-art NLP Deep Learning Architectures on Commonsense Reasoning Task","authors":["S Saeedi, A Panahi, S Saeedi, AC Fong - arXiv preprint arXiv:2006.01205, 2020"],"snippet":"… The architecture of RoBERTa-large is comprised of 24 layers, 1024-hidden dimension, 16 self-attention heads, 355M parameters and pretrained on book corpus plus English Wikipedia, English CommonCrawl News, and WebText corpus …","url":["https://arxiv.org/pdf/2006.01205"]} {"year":"2020","title":"Culprit Analytics from Detective Novels","authors":["A Motwani - 2020"],"snippet":"Page 1. Culprit Analytics from Detective Novels Thesis submitted in partial fulfillment of the requirements for the degree of Masters of Science in Computer Science and Engineering by Research by Aditya Motwani …","url":["http://web2py.iiit.ac.in/research_centres/publications/download/mastersthesis.pdf.8981260fae86b1a4.4d535f5468657369735f46696e616c202833292e706466.pdf"]} {"year":"2020","title":"Cultural Cartography with Word Embeddings","authors":["DS Stoltz, MA Taylor - arXiv preprint arXiv:2007.04508, 2020"],"snippet":"… fastText embeddings are trained on Wikipedia data dumps and the 25 billion web pages of the Common Crawl), and thus are not trained on the researcher's own corpus. Corpus-trained embeddings, by contrast, are word vectors trained exclusively on the …","url":["https://arxiv.org/pdf/2007.04508"]} {"year":"2020","title":"Cultural Differences in Bias?
Origin and Gender Bias in Pre-Trained German and French Word Embeddings","authors":["M Kurpicz-Briki"],"snippet":"… The validation experiments in English were executed on the same pre-trained word embeddings as in the original experiments (Caliskan et al., 2017): • GloVe pre-trained word embeddings using the “Common …","url":["http://ceur-ws.org/Vol-2624/paper6.pdf"]} {"year":"2020","title":"Cultural influences on word meanings revealed through large-scale semantic alignment","authors":["B Thompson, SG Roberts, G Lupyan - Nature Human Behaviour, 2020"],"snippet":"If the structure of language vocabularies mirrors the structure of natural divisions that are universally perceived, then the meanings of words in different languages should closely align. By contrast, if shared word meanings are …","url":["https://www.nature.com/articles/s41562-020-0924-8"]} {"year":"2020","title":"Current Limitations of Language Models: What You Need is Retrieval","authors":["A Komatsuzaki - arXiv preprint arXiv:2009.06857, 2020"],"snippet":"… Most naturally available samples as well as the reasonable output of most tasks have rather limited length, though others (eg books) do not. For example, the average sample length of WebText is only about 1000 tokens …","url":["https://arxiv.org/pdf/2009.06857"]} {"year":"2020","title":"Curriculum Pre-training for End-to-End Speech Translation","authors":["C Wang, Y Wu, S Liu, M Zhou, Z Yang - arXiv preprint arXiv:2004.10093, 2020"],"snippet":"… (2017) 6LibriSpeech En-Fr, IWSLT En-De and Fisher-CallHome Es-En 7https://wit3.fbk.eu/mt.php?release=2017-01-trnted 8Europarl v7, Common Crawl, News Commentary v13 and Rapid corpus of EU press releases. using …","url":["https://arxiv.org/pdf/2004.10093"]} {"year":"2020","title":"CX DB8: A queryable extractive summarizer and semantic search engine","authors":["A Roush - arXiv preprint arXiv:2012.03942, 2020"],"snippet":"… unsupervised models. Since unsupervised models are usually trained on massive corpuses, like Wikipedia or Common Crawl (Pennington et al., 2014), they do not overfit as much to any particular topic or domain. Furthermore …","url":["https://arxiv.org/pdf/2012.03942"]} {"year":"2020","title":"Dávid Márk Nemeskey Natural Language Processing Methods for Language Modeling","authors":["CV Erzsébet, Z Horváth, A Benczúr, A Kornai"],"snippet":"… 4.2.2 Common Crawl … Chapter 4 details our work of compiling Webcorpus 2.0, a new Hungarian gigaword corpus, from the Common Crawl and the Hungarian Wikipedia. Its main purpose being a …","url":["https://hlt.bme.hu/media/pdf/thesis.pdf"]} {"year":"2020","title":"DAN+: Danish Nested Named Entities and Lexical Normalization","authors":["B Plank, KN Jensen, R van der Goot"],"snippet":"… Twitter data. Bert variants For Danish BERT we use the model trained by Botxo (https://github.com/botxo/nordic_bert), which is pre-trained on Wikipedia, Common Crawl, Danish debate forums and Danish open subtitles. For …","url":["http://www.robvandergoot.com/doc/danP.pdf"]} {"year":"2020","title":"Danish Clinical Event Extraction Developing a clinical event extraction system for electronic health records using deep learning and active learning","authors":["F WONSILD, MG MØLLER - 2020"],"snippet":"Page 1.
Danish Clinical Event Extraction Developing a clinical event extraction system for electronic health records using deep learning and active learning FREDERIK WONSILD MATHIAS GIOVANNI MØLLER Master's thesis …","url":["https://www.derczynski.com/itu/docs/clin-events_frwo_mgmo.pdf"]} {"year":"2020","title":"Data augmentation techniques for the Video Question Answering task","authors":["A Falcon, O Lanz, G Serra - arXiv preprint arXiv:2008.09849, 2020"],"snippet":"… To compute the word embeddings for the question and the answers, we consider GloVe [23], pretrained on the Common Crawl dataset3, which outputs a vector of size E = 300 for 3 The Common Crawl dataset is available …","url":["https://arxiv.org/pdf/2008.09849"]} {"year":"2020","title":"Data selection for unsupervised translation of German–Upper Sorbian","authors":["L Edman, A Toral, G van Noord - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… gz. For German, we use monolingual data from News Crawl and Common Crawl … 19.57 Table 3: BLEU scores of models trained using 5 million sentences from News Crawl and various amounts of sentences from Common Crawl …","url":["https://www.aclweb.org/anthology/2020.wmt-1.130.pdf"]} {"year":"2020","title":"Data-driven Crosslinguistic Modeling of Constituent Ordering Preferences","authors":["ZY Liu - 2020"],"snippet":"Page 1. Data-driven Crosslinguistic Modeling of Constituent Ordering Preferences By Zoey (Ying) Liu Dissertation Submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Linguistics in the Office of Graduate Studies of the …","url":["https://www.researchgate.net/profile/Zoey_Liu2/publication/343836788_Data-driven_Crosslinguistic_Modeling_of_Constituent_Ordering_Preferences/links/5f4408cb92851cd3022569fe/Data-driven-Crosslinguistic-Modeling-of-Constituent-Ordering-Preferences.pdf"]} {"year":"2020","title":"Data-driven models and computational tools for neurolinguistics: a language technology perspective","authors":["E Artemova, A Bakarov, A Artemov, E Burnaev… - arXiv preprint arXiv …, 2020"],"snippet":"… The corpus size can be estimated by the number of tokens8: so, the size of English Wikipedia is 1M tokens, the size of Google news corpus is 1B tokens, and the size of CommonCrawl corpus is 600B tokens. Structured expert-based …","url":["https://arxiv.org/pdf/2003.10540"]} {"year":"2020","title":"Dataless Short Text Classification Based on Biterm Topic Model and Word Embeddings","authors":["Y Yang, H Wang, J Zhu, Y Wu, K Jiang, W Guo, W Shi"],"snippet":"… We set the number of iterations to 50 as our models achieve competitive performance since then. For word embeddings, we employ the widely used GloVe Common Crawl as mentioned before. It contains 840B tokens, 2.2M vocab and 300d vectors …","url":["https://www.ijcai.org/Proceedings/2020/0549.pdf"]} {"year":"2020","title":"Dataset for Automatic Summarization of Russian News","authors":["I Gusev - arXiv preprint arXiv:2006.11063, 2020"],"snippet":"… 3.3 Abstractive methods All of the tested models are based on a sequence-to-sequence framework. Pointer-generator and CopyNet were trained only on our train dataset, and mBART was pretrained on texts of 25 languages extracted from the Common Crawl …","url":["https://arxiv.org/pdf/2006.11063"]} {"year":"2020","title":"Dataset for Automatic Summarization of Russian","authors":["I Gusev - arXiv preprint arXiv:2006.11063, 2020"],"snippet":"… sequence-to-sequence framework.
Pointer-generator and CopyNet were trained only on our training dataset, and mBART was pretrained on texts of 25 languages extracted from the Common Crawl. We performed no additional …","url":["https://www.researchgate.net/profile/Ilya_Gusev2/publication/342352344_Dataset_for_Automatic_Summarization_of_Russian_News/links/5f16164a92851c1eff22059b/Dataset-for-Automatic-Summarization-of-Russian-News.pdf"]} {"year":"2020","title":"Datasets and Performance Metrics for Greek Named Entity Recognition","authors":["N Bartziokas, T Mavropoulos, C Kotropoulos - 11th Hellenic Conference on Artificial …, 2020"],"snippet":"… performance. Thus, most works omit such supplementary features. Established word embeddings, usually pre-trained on large corpora, such as Common Crawl's or Wikipedia's collections, are fine-tuned to a great extent. Google …","url":["https://dl.acm.org/doi/abs/10.1145/3411408.3411437"]} {"year":"2020","title":"DeBERTa: Decoding-enhanced BERT with Disentangled Attention","authors":["P He, X Liu, J Gao, W Chen - arXiv preprint arXiv:2006.03654, 2020"],"snippet":"… the setting of BERT [4], except that we use the BPE vocabulary as [2, 5]. For training data, we use Wikipedia (English Wikipedia dump3; 12GB), BookCorpus [26] (6GB), OPENWEBTEXT (public Reddit content [27]; 38GB) …","url":["https://arxiv.org/pdf/2006.03654"]} {"year":"2020","title":"DECAB-LSTM: Deep Contextualized Attentional Bidirectional LSTM for cancer hallmark classification","authors":["L Jiang, X Sun, F Mercaldo, A Santone - Knowledge-Based Systems, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0950705120306158"]} {"year":"2020","title":"Decoding individual identity from brain activity elicited in imagining common experiences","authors":["AJ Anderson, K McDermott, B Rooks, KL Heffner… - Nature Communications, 2020"],"snippet":"Everyone experiences common events differently. This leads to personal memories that presumably provide neural signatures of individual identity when events are reimagined. We present initial evidence that these signatures …","url":["https://www.nature.com/articles/s41467-020-19630-y"]} {"year":"2020","title":"Deep Exogenous and Endogenous Influence Combination for Social Chatter Intensity Prediction","authors":["S Dutta, S Masud, S Chakrabarti, T Chakraborty - arXiv preprint arXiv:2006.07812, 2020"],"snippet":"… news on discussions. We report on extensive experiments using a two-month-long discussion corpus of Reddit, and a contemporaneous corpus of online news articles from the Common Crawl. ChatterNet shows considerable …","url":["https://arxiv.org/pdf/2006.07812"]} {"year":"2020","title":"Deep Intelligent Contextual Embedding for Twitter Sentiment Analysis","authors":["K Musial-Gabrys, U Naseem - International Conference on Document Analysis and …, 2019"],"snippet":"… and GloVe. In our model we have used pre-trained GloVe embedding of 300 dimensions which are trained on 840 billion token from common crawl because it gives better results as compared to Word2Vec in our case.
GloVe …","url":["https://opus.lib.uts.edu.au/bitstream/10453/137711/1/ICDAR_2019_paper_279.pdf"]} {"year":"2020","title":"Deep Learning Based Multi-Label Text Classification of UNGA Resolutions","authors":["F Sovrano, M Palmirani, F Vitali - arXiv preprint arXiv:2004.03455, 2020"],"snippet":"… The pre-trained models we are going to use are: a GloVe model from Spacy [19] and pre-trained on data from Common Crawl [20], and the Universal Sentence Encoder (USE) model for document embedding coming from …","url":["https://arxiv.org/pdf/2004.03455"]} {"year":"2020","title":"Deep Learning for Twitter Sentiment Analysis: The Effect of Pre-trained Word Embedding","authors":["A Krouska, C Troussas, M Virvou - Machine Learning Paradigms, 2020"],"snippet":"… The model contains 300-dimensional vectors for 3 million words and phrases. Crawl GloVe was trained on a Common Crawl dataset of 42 billion tokens (words), providing a vocabulary of 2 million words with an embedding vector …","url":["https://link.springer.com/chapter/10.1007/978-3-030-49724-8_5"]} {"year":"2020","title":"Deep learning model for end-to-end approximation of COSMIC functional size based on use-case names","authors":["M Ochodek, S Kopczyńska, M Staron - Information and Software Technology, 2020"],"snippet":"… we investigate different pre-trained word embeddings to learn that using the embeddings trained on Wikipedia+Gigaworld (300d), Common Crawl 840B/42B (300d), and Stack Overflow (200d) give the best prediction accuracy. This paper is structured as follows …","url":["https://www.sciencedirect.com/science/article/pii/S0950584920300628"]} {"year":"2020","title":"Deep N-ary Error Correcting Output Codes","authors":["H Zhang, JT Zhou, T Wang, IW Tsang, RSM Goh - arXiv preprint arXiv:2009.10465, 2020"],"snippet":"… For the Bi-LSTMs model of TREC and SST text datasets, we use the 300-dimensional publicly available pre-trained word embeddings as the word-level feature representation, which is trained by fastText4 package …","url":["https://arxiv.org/pdf/2009.10465"]} {"year":"2020","title":"Deep Neural Attention-Based Model for the Evaluation of Italian Sentences Complexity","authors":["D Schicchi, G Pilato, GL Bosco - 2020 IEEE 14th International Conference on …, 2020"],"snippet":"… based algorithms. To the best of our knowledge, the most prominent sentence-based corpus for the Italian language is the PACCSS-IT corpus [19]. It has 1www.wikipedia. org 2www.commoncrawl.org 254 Page 3. been created …","url":["https://ieeexplore.ieee.org/abstract/document/9031472/"]} {"year":"2020","title":"Deep Neural Networks Ensemble with Word Vector Representation Models to Resolve Coreference Resolution in Russian","authors":["A Sboev, R Rybka, A Gryaznov - Advanced Technologies in Robotics and Intelligent …, 2020"],"snippet":"… Among the context-insensitive vectorization models, the following were compared: Word2vec model, 3 trained on corpus of articles from the Russian part of Wikipedia (further referred to as RuWiki) and data from CommonCrawl 4 ; …","url":["https://link.springer.com/chapter/10.1007/978-3-030-33491-8_4"]} {"year":"2020","title":"Deep Neural Networks for Sentiment Analysis in Tweets with Emoticons","authors":["M Narayanaperumal - 2020"],"snippet":"Page 1. 
DEEP NEURAL NETWORKS FOR SENTIMENT ANALYSIS IN TWEETS WITH EMOTICONS by Mutharasu Narayanaperumal A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Systems …","url":["http://search.proquest.com/openview/ce5f7af40a2bea968b30a3ab132f22bb/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2020","title":"Deep Question Answering: A New Teacher For DistilBERT","authors":["F Tamburini, P Cimiano, S Preite"],"snippet":"Page 1. Alma Mater Studiorum · Universit`a di Bologna SCUOLA DI SCIENZE Corso di Laurea Magistrale in Informatica Deep Question Answering: A New Teacher For DistilBERT Relatore: Chiar.mo Prof. Fabio Tamburini Correlatore: Chiar.mo Prof. Philipp Cimiano …","url":["https://amslaurea.unibo.it/20384/1/MasterThesisBologna.pdf"]} {"year":"2020","title":"DeepPhish: Automated Phishing Detection Using Recurrent Neural Network","authors":["M Arivukarasi, A Antonidoss - Advances in Smart System Technologies"],"snippet":"… At that point, we prepared an irregular timberland classifier with 100 choice trees. 6 Conclusions. To assess the methodologies, we utilized a database that included one million real URLs from the common crawl database and …","url":["https://link.springer.com/chapter/10.1007/978-981-15-5029-4_18"]} {"year":"2020","title":"DeepSinger: Singing Voice Synthesis with Data Mined From the Web","authors":["Y Ren, X Tan, T Qin, J Luan, Z Zhao, TY Liu - arXiv preprint arXiv:2007.04590, 2020"],"snippet":"… A variety of tasks collect training data from the Web, such as the large-scale web-crawled text dataset ClueWeb [3] and Common Crawl3 for language modeling [40], LETOR [31] for search ranking [4], and WebVision [22] for image classification …","url":["https://arxiv.org/pdf/2007.04590"]} {"year":"2020","title":"Definition of Phishing Sites Based on the Team Model of Fuzzy Neural Networks","authors":["II Ismagilov, AA Murtazin, DV Kataseva, AS Katasev… - Helix, 2020"],"snippet":"… To obtain a set of data on legitimate sites, two sources were used: Alexa Internet and Common Crawl … Common Crawl is a non-profit organization; it crawls monthly the Internet and makes its archives and datasets available …","url":["https://helixscientific.pub/index.php/home/article/download/237/190"]} {"year":"2020","title":"DeL-haTE: A Deep Learning Tunable Ensemble for Hate Speech Detection","authors":["J Melton, A Bagavathi, S Krishnan - arXiv preprint arXiv:2011.01861, 2020"],"snippet":"… We compare the following five word embedding methods: Word2Vec vectors trained on Google News corpus [17], GloVe vectors trained on CommonCrawl (GLoVe-CC) and Twitter (GLoVe-Twitter) corpora [18], and FastText vectors …","url":["https://arxiv.org/pdf/2011.01861"]} {"year":"2020","title":"Delay Mitigation for Backchannel Prediction in Spoken Dialog System","authors":["AI Adiba, T Homma, D Bertero, T Sumiyoshi… - Conversational Dialogue Systems …"],"snippet":"… individual word. The word embedding is then used as input for our model architecture. We found that our dataset in the fastText model trained with the Common Crawl dataset 2 had the smallest number of unknown words. 
Thus, the …","url":["https://link.springer.com/chapter/10.1007/978-981-15-8395-7_10"]} {"year":"2020","title":"DeLFT and entity-fishing: Tools for CLEF HIPE 2020 Shared Task","authors":["T Kristanti, L Romary - CLEF 2020-Conference and Labs of the Evaluation …, 2020"],"snippet":"… Word Embeddings We use various static word embeddings: Global Vectors for Word Representation (GloVe) [14], English fastText Common Crawl [1,11], and French Wikipedia fastText.5 We also use ELMo [16] contextualized …","url":["https://hal.inria.fr/hal-02974946/document"]} {"year":"2020","title":"Depthwise Separable Convolutional Neural Network for Confidential Information Analysis","authors":["Y Lu, J Jiang, M Yu, C Liu, C Liu, W Huang, Z Lv - International Conference on …, 2020"],"snippet":"… Word2Vec. The Word2VecModified-Wikipedia are trained on Wikipedia through modified Word2vec. The GloVe-Crawl840B are trained on Common Crawl through GloVe. The GloVe-Wikipedia are trained on Wikipedia through GloVe …","url":["https://link.springer.com/chapter/10.1007/978-3-030-55393-7_40"]} {"year":"2020","title":"Design2Struct: Generating website structures from design images using neural networks","authors":["MM Velzel - 2020"],"snippet":"… The second contribution is the release of a large CommonCrawl2 based dataset, filtered and transformed to be used in the field of GUI to structure conversion. The dataset is 1https://github.com/mvelzel/Design2Struct 2https://commoncrawl.org …","url":["http://essay.utwente.nl/81988/1/VELZEL_BA_EEMCS.pdf"]} {"year":"2020","title":"Detecting Alzheimer's Disease by Exploiting Linguistic Information from Nepali Transcript","authors":["S Thapa, S Adhikari, U Naseem, P Singh, G Bharathy… - International Conference on …, 2020"],"snippet":"… For pre-trained Nepali Word2Vec model, the model created by Lamsal [15] is used in the study. Similarly, for the pre-trained fastText embeddings, the pre-trained word vectors trained on Common Crawl and Wikipedia using fastText were used [14] …","url":["https://link.springer.com/chapter/10.1007/978-3-030-63820-7_20"]} {"year":"2020","title":"Detecting and Visualizing Hate Speech in Social Media: A Cyber Watchdog for Surveillance","authors":["S Modha, P Majumder, T Mandl, C Mandalia - Expert Systems with Applications, 2020"],"snippet":"… The word vectors were trained on 600 billion tokens of the Common Crawl corpus (Simonite, 2013). The Common Crawl is a nonprofit organization that crawls the web and freely provides its archives and datasets to the public …","url":["https://www.sciencedirect.com/science/article/pii/S0957417420305492"]} {"year":"2020","title":"Detecting Deceptive Language in Crime Interrogation","authors":["YY Kao, PH Chen, CC Tzeng, ZY Chen, B Shmueli… - International Conference on …, 2020"],"snippet":"… fastText is a lightweight library for text representation. Its pre-trained model, trained on Common Crawl and Wikipedia corpus, has the ability to capture hidden information about a language such as word analogies or semantic …","url":["https://link.springer.com/chapter/10.1007/978-3-030-50341-3_7"]} {"year":"2020","title":"Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases","authors":["W Guo, A Caliskan - arXiv preprint arXiv:2006.03955, 2020"],"snippet":"… intersectional group members. Caliskan et al. [1] have shown that social biases are embedded in linguistic regularities learned by GloVe. 
These embeddings are trained on the word co-occurrence statistics of the Common Crawl corpus …","url":["https://arxiv.org/pdf/2006.03955"]} {"year":"2020","title":"Detecting Entailment in Code-Mixed Hindi-English Conversations","authors":["S Chakravarthy, A Umapathy, AW Black - Proceedings of the Sixth Workshop on …, 2020"],"snippet":"… XLM-RoBERTa (XLM-R) (Conneau et al., 2020) is trained on the CommonCrawl corpus, which includes Romanized Hindi text, making this model the closest one to being pre-trained on Hinglish. 3 Task Definition Khanuja et al …","url":["https://www.aclweb.org/anthology/2020.wnut-1.22.pdf"]} {"year":"2020","title":"Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank","authors":["E Briakou, M Carpuat - arXiv preprint arXiv:2010.03662, 2020"],"snippet":"… (2018) work on subtitles and Common Crawl corpora where sentence alignment errors abound, and Pham et al … divergence English-French parallel sentences drawn from OpenSubtitles and CommonCrawl corpora by prior work (Vyas et al., 2018) …","url":["https://arxiv.org/pdf/2010.03662"]} {"year":"2020","title":"Detecting Hallucinated Content in Conditional Neural Sequence Generation","authors":["C Zhou, J Gu, M Diab, P Guzman, L Zettlemoyer… - arXiv preprint arXiv …, 2020"],"snippet":"DETECTING HALLUCINATED CONTENT IN CONDITIONAL NEURAL SEQUENCE GENERATION Chunting Zhou1∗, Jiatao Gu2, Mona Diab2, Paco Guzman2, Luke Zettlemoyer2, Marjan Ghazvininejad2 Language …","url":["https://arxiv.org/pdf/2011.02593"]} {"year":"2020","title":"Detecting Incivility and Impoliteness in Online Discussions.","authors":["AK Stoll, M Ziegele, O Quiring - Computational Communication Research, 2020"],"snippet":"VOL. 2, NO. 1, 2020 109 Detecting Impoliteness and Incivility in Online Discussions Classification Approaches for German User Comments Anke Stoll, Marc Ziegele, Oliver Quiring CCR 2 (1): 109–134 DOI: 10.5117/CCR2020.1.005.KATH …","url":["https://computationalcommunication.org/ccr/article/download/19/10"]} {"year":"2020","title":"Detecting misogyny in Spanish Tweets. An approach based on linguistics features and word embeddings","authors":["JA García-Díaz, M Cánovas-García… - Future Generation …, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0167739X20301928"]} {"year":"2020","title":"DETECTING OUT-OF-DISTRIBUTION TRANSLATIONS WITH VARIATIONAL TRANSFORMERS","authors":["WAT ZEI"],"snippet":"… The following datasets were used in our experiments: (1) WMT EN ↔ DE: The training set for translation tasks between English (EN) and German (DE) composed of news-commentary-v13 with 284k sentence pairs …","url":["https://openreview.net/pdf/e2667f2c5169fcbdca8e1d0596e67792da06d3a0.pdf"]} {"year":"2020","title":"Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks","authors":["D Emelin, I Titov, R Sennrich - arXiv preprint arXiv:2011.01846, 2020"],"snippet":"
Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks Denis Emelin1, Ivan Titov1, 2, and Rico Sennrich3, 1 1University of Edinburgh, Scotland 2University of Amsterdam …","url":["https://arxiv.org/pdf/2011.01846"]} {"year":"2020","title":"Detecting, Classifying, and Mapping Retail Storefronts Using Street-level Imagery","authors":["S Sharifi Noorian, S Qiu, A Psyllidis, A Bozzon… - Proceedings of the 2020 …, 2020","SS Noorian, S Qiu, A Psyllidis, A Bozzon, GJ Houben"],"snippet":"… a bag of character n-grams. As our evaluation will focus on the Manhattan Borough of New York City, the pre-trained (on Common Crawl and Wikipedia 3) word vectors for English are used. According to the desired language …","url":["https://dl.acm.org/doi/pdf/10.1145/3372278.3390706","https://qiusihang.github.io/files/publications/icmr2020detecting.pdf"]} {"year":"2020","title":"Detection of Emerging Words in Portuguese Tweets","authors":["A Pinto, H Moniz, F Batista - 9th Symposium on Languages, Applications and …, 2020"],"snippet":"… We have used two pre-trained word vectors for Portuguese4, the first one trained on the Common Crawl5, and the second trained on the … www.nilc.icmc. usp.br/nilc/projects/unitex-pb/web/dicionarios.html 4 https://fasttext.cc/docs …","url":["https://drops.dagstuhl.de/opus/volltexte/2020/13016/pdf/OASIcs-SLATE-2020-3.pdf"]} {"year":"2020","title":"Detection of Harassment on Twitter with Deep Learning Techniques","authors":["I Espinoza, F Weiss - Machine Learning and Knowledge Discovery in …, 2020"],"snippet":"… trained embedding model. We use the implementation off Spacy library for Python4 with the pre-trained model called 'en vectors web lg', which has 300 dimensions and it's trained over common crawl texts. With Spacy we have …","url":["https://link.springer.com/content/pdf/10.1007/978-3-030-43887-6_24.pdf"]} {"year":"2020","title":"Determining Event Outcomes: The Case of# fail","authors":["S Murugan, D Chinnappa, E Blanco - Proceedings of the 2020 Conference on …, 2020"],"snippet":"… has variable length). Additionally, the word embeddings (GloVe embeddings pre-trained with CommonCrawl) allow us to leverage a distributional representation of tags, including those not seen during training. The second …","url":["https://www.aclweb.org/anthology/2020.findings-emnlp.359.pdf"]} {"year":"2020","title":"Developing a Twitter bot that can join a discussion using state-of-the-art architectures","authors":["YM Çetinkaya, İH Toroslu, H Davulcu - Social Network Analysis and Mining, 2020"],"snippet":"… requests. Radford et al. (2019) construct an auto-regressive feed-forward model instead of seq2seq-RNN as a language model using Common Crawl as a dataset and generate sentences with predicting next word. Generative …","url":["https://link.springer.com/article/10.1007/s13278-020-00665-4"]} {"year":"2020","title":"Developing an online hate classifier for multiple social media platforms","authors":["J Salminen, M Hopf, SA Chowdhury, S Jung… - Human-centric Computing …, 2020"],"snippet":"The proliferation of social media enables people to express their opinions widely online. 
However, at the same time, this has resulted in the emergence of conflict and hate, making online...","url":["https://link.springer.com/article/10.1186/s13673-019-0205-6"]} {"year":"2020","title":"Development and evaluation of a Polish Automatic Speech Recognition system using the TLK toolkit","authors":["NU Roselló Beneitez - 2020"],"snippet":"… 12 2.8 Trellis representing the decoding step . . . . . 15 3.1 Examples of sentences extracted from the Common Crawl corpus . . . . . 21 4.1 Actions performed to create the final acoustic models . . . . . 24 4.2 Phonetic transcription of a Polish word …","url":["https://riunet.upv.es/bitstream/handle/10251/150495/Rosell%C3%B3%20-%20Desarrollo%20y%20evaluaci%C3%B3n%20de%20un%20sistema%20de%20Reconocimiento%20Autom%C3%A1tico%20del%20Habla%20en%20Polaco%20....pdf?sequence=1"]} {"year":"2020","title":"Development of a Search Engine to Answer Comparative Queries","authors":["J Huck"],"snippet":"… extraction and tuning the retrieval model. Page 6. References 1. Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic chatnoir: Search engine for the clueweb and the common crawl. In: ECIR (2018) 2. Bondarenko, A., Fröbe …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_178.pdf"]} {"year":"2020","title":"Development of Word Embeddings for Uzbek Language","authors":["B Mansurov, A Mansurov - arXiv preprint arXiv:2009.14384, 2020"],"snippet":"… variant of Uzbek. As far as we're aware, only fastText [5] word embeddings exist for the Latin variant. However, fastText was trained on the relatively low quality Uzbek Wikipedia and noisy Common Crawl corpus. In this paper …","url":["https://arxiv.org/pdf/2009.14384"]} {"year":"2020","title":"Dialog Response Generation Using Adversarially Learned Latent Bag-of-Words","authors":["K Khan - 2020"],"snippet":"Page 1. Dialog Response Generation Using Adversarially Learned Latent Bag-of-Words by Kashif Khan A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Mathematics in Computer Science …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/16188/Khan_Kashif.pdf?sequence=3&isAllowed=y"]} {"year":"2020","title":"Dictionary for Computer-Assisted Text Analysis of Cyber Security (TACS)","authors":["A Levordashka, A Joinson, S Jones"],"snippet":"… algorithm [372] implemented via the Python package textacy [7]. We then grouped the terms by semantic similarity, with the help of a vector space model available via the Python library spacy ('en_core_web_lg'), with 685k …","url":["https://nbviewer.jupyter.org/github/anidroid/tacs/blob/master/tacs-soups.pdf"]} {"year":"2020","title":"Dictionary-based Data Augmentation for Cross-Domain Neural Machine Translation","authors":["W Peng, C Huang, T Li, Y Chen, Q Liu - arXiv preprint arXiv:2004.02577, 2020"],"snippet":"… The OOD data used for pre-training for the baseline model are extracted from WMT 144 including Eu- roparl V7, New-commentary V9 and Common Crawl corpora … Train Dataset (OOD) Europarl,News-commentary, Common …","url":["https://arxiv.org/pdf/2004.02577"]} {"year":"2020","title":"Differences Beyond Identity: Perceived Construal Distance and Interparty Animosity in the United States","authors":["A van Loon, A Goldberg, S Srivastava - SocArXiv. July, 2020"],"snippet":"Page 1. Differences Beyond Identity: Perceived Construal Distance and Interparty Animosity in the United States ∗ Austin van Loon Stanford University Amir Goldberg Stanford University Sameer B. 
Srivastava …","url":["https://osf.io/j2f6u/download"]} {"year":"2020","title":"Dilated Convolution Networks for Classification of ICD-9 based Clinical Summaries","authors":["M Morisio, N Kanwal, I Tutor, DG Rizzo - 2020","N Kanwal - 2020"],"snippet":"… This architecture uses multiple dilation layers with a label-specific dot-based attention mechanism. We have extracted the embeddings from Common Crawl Glove (Global Vector). The architecture of the model is designed to calculate attention to words and their context …","url":["https://webthesis.biblio.polito.it/14400/","https://webthesis.biblio.polito.it/14400/1/tesi.pdf"]} {"year":"2020","title":"Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures","authors":["J Launay, I Poli, F Boniface, F Krzakala - arXiv preprint arXiv:2006.12878, 2020"],"snippet":"Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures Julien Launay1,2 Iacopo Poli1 François Boniface1 Florent Krzakala1,2 1LightOn 2École Normale Supérieure {julien, iacopo, francois, florent}@lighton.ai Abstract …","url":["https://arxiv.org/pdf/2006.12878"]} {"year":"2020","title":"Discovering key topics from short, real-world medical inquiries via natural language processing and unsupervised learning","authors":["A Ziletti, C Berns, O Treichel, T Weber, J Liang… - arXiv preprint arXiv …, 2020"],"snippet":"… Table IIA1 presents a qualitative comparison of a standard embedding (en_core_web_lg, trained on the Common Crawl) and a specialized biomedical embedding (scispaCy en_core_sci_lg, trained also on PubMed). Specifically …","url":["https://arxiv.org/pdf/2012.04545"]} {"year":"2020","title":"Discovering Relational Intelligence in Online Social Networks","authors":["L Tan, T Pham, HK Ho, TS Kok - International Conference on Database and Expert …, 2020"],"snippet":"… 100 mil tweets. 283. \\(^\\text {a}\\)https://archive.ics.uci.edu/ml/datasets/bag+of+words. \\(^\\text {b}\\)http://commoncrawl.org/2014/07/april-2014-crawldata-available/. \\(^\\text {c}\\)https://developer.twitter.com/en/docs.html. \\(^\\text …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59003-1_22"]} {"year":"2020","title":"Discovering web services in social web service repositories using deep variational autoencoders","authors":["I Lizarralde, C Mateos, A Zunino, TA Majchrzak… - Information Processing & …, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0306457319310878"]} {"year":"2020","title":"Disentangling semantic composition and semantic association in the left temporal lobe Abbreviated title: Semantic composition versus association","authors":["J Li, S Island, A Dhabi, L Pylkkänen"],"snippet":"… derived using the well-known GloVe word embeddings model (Pennington et al., 2014; freely available at https://nlp.stanford.edu/projects/glove/) trained on Common Crawl (https://commoncrawl.org/), which contains petabytes …","url":["https://www.biorxiv.org/content/10.1101/2020.08.17.254482v2.full.pdf"]} {"year":"2020","title":"Distractor Analysis and Selection for Multiple-Choice Cloze Questions for Second-Language Learners","authors":["L Gao, K Gimpel, A Jensson"],"snippet":"… form pair. For features that require embedding words, we use the 300-dimensional GloVe word embeddings (Pennington et al., 2014) pretrained on the 42 billion token Common Crawl corpus.
The GloVe embeddings …","url":["https://ttic.uchicago.edu/~kgimpel/papers/gao+etal.bea2020.pdf"]} {"year":"2020","title":"Distributed frameworks for approximate data analytics","authors":["G Hu - 2020"],"snippet":"… analytical queries can be expensive, eg, the Google book Ngrams dataset contains 2.2 TB of text data [17], and CommonCrawl corpus petabytes of web pages [18]. The above challenge is exacerbated when it is desirable to run different types of …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/65036/PDF/1/play/"]} {"year":"2020","title":"Distributed Training of Graph Convolutional Networks using Subgraph Approximation","authors":["A Angerd, K Balasubramanian, M Annavaram - arXiv preprint arXiv:2012.04930, 2020"],"snippet":"… The vertex label is the community a post belongs to. The features consist of an embedding of post information, created using GloVe CommonCrawl (Pennington et al., 2014). The first 20 days of posts are used for training, while the rest are used for testing and validation …","url":["https://arxiv.org/pdf/2012.04930"]} {"year":"2020","title":"Distributional and Lexical Exploration of Semantics of Derivational Morphology","authors":["UC Kunter, GN Özdemir, C Bozşahin"],"snippet":"… using Wikipedia datasets. The second one is presented in Grave et al. (2018), covering 157 languages including Turkish. Their models used Common Crawl and Wikipedia datasets and trained on fastText. The models were …","url":["http://www.academia.edu/download/63487208/Distributional_and_Lexical_Exploration_of_Semantics_of_DM20200531-40903-1x0rwv8.pdf"]} {"year":"2020","title":"Distributional Models in the Task of Hypernym Discovery","authors":["V Yadrintsev, A Ryzhova, I Sochenkov - Russian Conference on Artificial Intelligence, 2020"],"snippet":"… Most likely, the largest text corpus was used for the first model, which includes Wikipedia and Common Crawl (we do not know the exact volume of crawl-data for the Russian, but roughly 24 terabytes of plain text was used …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59535-7_25"]} {"year":"2020","title":"Distributional semantic modeling: a revised technique to train term/word vector space models applying the ontology-related approach","authors":["O Palagin, V Velychko, K Malakhov, O Shchurov - arXiv preprint arXiv:2003.03350, 2020"],"snippet":"… Accessed: 2020-03-03. [42] Firefly documentation. https://rorodata.github.io/firefly/. Accessed: 2020-03-03. [43] Common crawl. http://commoncrawl.org/. Accessed: 2020-03-03. [44] Google dataset search. https://datasetsearch …","url":["https://arxiv.org/pdf/2003.03350"]} {"year":"2020","title":"Do Neural Language Models Show Preferences for Syntactic Formalisms?","authors":["A Kulmizev, V Ravishankar, M Abdou, J Nivre - arXiv preprint arXiv:2004.14096, 2020"],"snippet":"… Che et al. (2018). These models are trained on 20 million words randomly sampled from the concatenation of WikiDump and CommonCrawl datasets for 44 different languages, including our 13 languages. Each model features …","url":["https://arxiv.org/pdf/2004.14096"]} {"year":"2020","title":"Document Representations for Fast and Accurate Retrieval of Mathematical Information","authors":["V Novotný"],"snippet":"Masaryk University Faculty of Informatics Document Representations for Fast and Accurate Retrieval of Mathematical Information Rigorous Thesis Vít Novotný Advisor: Doc. RNDr. Petr Sojka, Ph. D.
Brno, Fall 2019 Signature …","url":["https://is.muni.cz/th/x86jd/thesis-with-papers.pdf"]} {"year":"2020","title":"Domain Name System Security and Privacy: A Contemporary Survey","authors":["A Khormali, J Park, H Alasmary, A Anwar, D Mohaisen - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Domain Name System Security and Privacy: A Contemporary Survey Aminollah Khormali, Jeman Park, Hisham Alasmary, Afsah Anwar, David Mohaisen University of Central Florida Abstract The domain name system …","url":["https://arxiv.org/pdf/2006.15277"]} {"year":"2020","title":"Domain Specific Complex Sentence (DCSC) Semantic Similarity Dataset","authors":["D Chandrasekaran, V Mago - arXiv preprint arXiv:2010.12637, 2020"],"snippet":"… trained. While BERT was trained on Book Corpus and Wikipedia corpus, RoBERTa model was trained on four different corpora namely Book Corpus, Common Crawl News dataset, OpenWebText dataset and the Stories dataset …","url":["https://arxiv.org/pdf/2010.12637"]} {"year":"2020","title":"Domain-Specific Meta-Embedding with Latent Semantic Structures","authors":["Q Liu, J Lu, G Zhang, T Shen, Z Zhang, H Huang - Information Sciences, 2020"],"snippet":"… For example, GloVe is trained on aggregated global word-word co-occurrence statistics from a corpus of over 840B tokens and fastText [2] pre-trained word representations for 157 languages on Common Crawl and the Wikipedia Corpora …","url":["https://www.sciencedirect.com/science/article/pii/S002002552031029X"]} {"year":"2020","title":"DOMINANCE STYLE AND VOCAL COMMUNICATION IN NON-HUMAN PRIMATES","authors":["ZC CHEN-KRAUS, C COYE10, M EMERY… - LANGUAGE of, 2020"],"snippet":"Page 441. DOMINANCE STYLE AND VOCAL COMMUNICATION IN NON-HUMAN PRIMATES K KATIE SLOCOMBE* 1, EITHNE KAVANAGH11,, SALLY STREET2, FELIX O. ANGWELA3, THORE J. BERGMAN4, MARYJKA …","url":["https://pure.mpg.de/rest/items/item_3190925/component/file_3219601/content#page=441"]} {"year":"2020","title":"Don't Stop Pretraining: Adapt Language Models to Domains and Tasks","authors":["S Gururangan, A Marasović, S Swayamdipta, K Lo… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks Suchin Gururangan† Ana Marasovic†♦ Swabha Swayamdipta†♦ Kyle Lo† Iz Beltagy† Doug Downey† Noah A. Smith†♦ †Allen Institute for Artificial …","url":["https://arxiv.org/pdf/2004.10964"]} {"year":"2020","title":"DRIVING INTENT EXPANSION VIA ANOMALY DETECTION IN A MODULAR CONVERSATIONAL SYSTEM","authors":["NR Mallinar, TK HO - US Patent App. 16/180,613, 2020"],"snippet":"… In various embodiments, the dataset builder 365 uses one or more of word2vec, the enwiki 2015 document vectorizer, ppdb paragram sentence embeddings, common-crawl uncased GloVe word embeddings, and enwiki …","url":["http://www.freepatentsonline.com/y2020/0142959.html"]} {"year":"2020","title":"Drug-Drug Interaction Classification Using Attention Based Neural Networks","authors":["D Zaikis, I Vlahavas - 11th Hellenic Conference on Artificial Intelligence, 2020"],"snippet":"… word in a given sentence. The large English statistical model was used, which is trained with GloVe vectors on the OntoNotes 5 Common Crawl corpus and has a POS syntax accuracy of 97.22 percent. The unique POS tags …","url":["https://dl.acm.org/doi/abs/10.1145/3411408.3411461"]} {"year":"2020","title":"Dual Conditional Cross Entropy Scores and LASER Similarity Scores for the WMT20 Parallel Corpus Filtering Shared Task","authors":["F Koerner, P Koehn"],"snippet":"… sentences. 
The Pashto language model was trained on a concatenation of the CommonCrawl and Wikipedia corpora, with the CommonCrawl oversampled by a factor of 64 to produce a dataset of 9,273,763 sentences. The …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.109.pdf"]} {"year":"2020","title":"Dynamic Data Selection and Weighting for Iterative Back-Translation","authors":["ZY Dou, A Anastasopoulos, G Neubig - arXiv preprint arXiv:2004.03672, 2020"],"snippet":"Page 1. Dynamic Data Selection and Weighting for Iterative Back-Translation Zi-Yi Dou, Antonios Anastasopoulos, Graham Neubig Language Technologies Institute, Carnegie Mellon University {zdou, aanastas, gneubig}@cs.cmu.edu Abstract …","url":["https://arxiv.org/pdf/2004.03672"]} {"year":"2020","title":"DynE: Dynamic Ensemble Decoding for Multi-Document Summarization","authors":["C Hokamp, DG Ghalandari, NT Pham, J Glover - arXiv preprint arXiv:2006.08748, 2020"],"snippet":"… Ghalandari et al. (2020) presented the WCEP dataset, which is a large-scale collection of clusters of news articles with a corresponding summary, constructed using the Wikipedia Current Events Portal, with additional articles gathered from CommonCrawl …","url":["https://arxiv.org/pdf/2006.08748"]} {"year":"2020","title":"EEGdenoiseNet: A benchmark dataset for deep learning solutions of EEG denoising","authors":["H Zhang, M Zhao, C Wei, D Mantini, Z Li, Q Liu - arXiv preprint arXiv:2009.11662, 2020"],"snippet":"Page 1. EEGdenoiseNet: A benchmark dataset for deep learning solutions of EEG denoising Haoming Zhang1,†, Mingqi Zhao1,2,†, Chen Wei1, Dante Mantini2,3, Zherui Li1, Quanying Liu1,∗ September 25, 2020 1 Department …","url":["https://arxiv.org/pdf/2009.11662"]} {"year":"2020","title":"Effect of Character and Word Features in Bidirectional LSTM-CRF for NER","authors":["C Ronran, S Lee - 2020 IEEE International Conference on Big Data and …, 2020"],"snippet":"… CRF), respectively, using public word embedding, character features and word features [6,4]. We explore existing word embeddings that is distinct from the previous studies including (1) Glove 300 embedding of 42B, 840B …","url":["https://ieeexplore.ieee.org/abstract/document/9070329/"]} {"year":"2020","title":"Efficient and High-Quality Neural Machine Translation with OpenNMT","authors":["G Klein, D Zhang, C Chouteau, JM Crego, J Senellart - Proceedings of the Fourth …, 2020"],"snippet":"… Europarl v9 1,838,568 Common Crawl corpus 2,399,123 News Commentary v14 338,285 Wiki Titles v1 1,305,141 Document-split Rapid 1,531,261 ParaCrawl v3 31,358,551 Total 38,770,929 news-crawl 2007-2018 …","url":["https://www.aclweb.org/anthology/2020.ngt-1.25.pdf"]} {"year":"2020","title":"Efficient strategies for hierarchical text classification: External knowledge and auxiliary tasks","authors":["KR Rojas, G Bustamante, MAS Cabezudo, A Oncevay - arXiv preprint arXiv …, 2020"],"snippet":"… of tokens in the input document. We use pre-trained word embeddings from Common Crawl (Grave et al., 2018) for the weights of this layer, and we do not fine-tune them during training time. Encoder: It is a bidirectional GRU …","url":["https://arxiv.org/pdf/2005.02473"]} {"year":"2020","title":"Efficient Transfer Learning for Quality Estimation with Bottleneck Adapter Layer","authors":["H Yang, M Wang, N Xie, Y Qin, Y Deng - Proceedings of the 22nd Annual Conference …, 2020"],"snippet":"… BPE is used for tokenizing, where 32000 tokens are reserved. 
We use UN corpus and Common Crawl parallel corpus with the size of … 1https://github.com/pytorch/fairseq …","url":["https://www.aclweb.org/anthology/2020.eamt-1.4.pdf"]} {"year":"2020","title":"Efficiently Reusing Old Models Across Languages via Transfer Learning","authors":["T Kocmi, O Bojar - Proceedings of the 22nd Annual Conference of the …, 2020"],"snippet":"… EN - Russian 12.6M News Commentary, Yandex, and UN Corpus WMT 2012 WMT 2018 EN - French 34.3M Commoncrawl, Europarl, Giga FREN, News commentary, UN corpus WMT 2013 WMT dis. 2015 Table 2: Corpora used for each language pair …","url":["https://www.aclweb.org/anthology/2020.eamt-1.3.pdf"]} {"year":"2020","title":"Embedding Compression with Isotropic Iterative Quantization","authors":["S Liao, J Chen, Y Wang, Q Qiu, B Yuan - arXiv preprint arXiv:2001.05314, 2020"],"snippet":"… We perform experiments with the GloVe embedding (Pennington et al. 2014) and the HDC embedding (Sun et al. 2015). The GloVe embedding is trained from 42B tokens of Common Crawl data. The HDC Table 1: Experiment …","url":["https://arxiv.org/pdf/2001.05314"]} {"year":"2020","title":"Embedding Compression with Right Triangle Similarity Transformations","authors":["H Song, D Zou, L Hu, J Yuan - International Conference on Artificial Neural Networks, 2020"],"snippet":"… model. 4.1 Experimental Setup. Pre-trained Continuous Embeddings. We conduct experiments on GloVe [16] and fasttext [1]. GloVe embeddings have been trained on 42B tokens of Common Crawl data with 400k words. Fasttext …","url":["https://link.springer.com/chapter/10.1007/978-3-030-61616-8_62"]} {"year":"2020","title":"EmoDet2: Emotion Detection in English Textual Dialogue using BERT and BiLSTM Models","authors":["H Al-Omari, MA Abdullah, S Shaikh - 2020 11th International Conference on …, 2020"],"snippet":"… Moreover, we have encoded the words in the conversation using Word2vec, Glove Wiki, and Glove Common Crawl packages … The hyperparameters as follow: Dropout = 0.4, the text in Word2Vec and Glove Wiki are lowered …","url":["https://ieeexplore.ieee.org/abstract/document/9078946/"]} {"year":"2020","title":"Emotion Aided Dialogue Act Classification for Task-Independent Conversations in a Multi-modal Framework","authors":["T Saha, D Gupta, S Saha, P Bhattacharyya - Cognitive Computation"],"snippet":"… To extract textual features, a convolutional neural network (CNN) [48]–based approach is used. Pretrained GloVe [49] embeddings trained on the CommonCrawl corpus of dimension 300 have been used to represent words as word vectors …","url":["https://link.springer.com/article/10.1007/s12559-019-09704-5"]} {"year":"2020","title":"Employing distributional semantics to organize task-focused vocabulary learning","authors":["HS Ponnusamy, D Meurers - arXiv preprint arXiv:2011.11115, 2020"],"snippet":"… graph, we start with a distributional semantic vector representation of each word, which we obtain from the pre-trained model of GloVe (Pennington et al., 2014) based on the co-occurrence statistics of the words from a large …","url":["https://arxiv.org/pdf/2011.11115"]} {"year":"2020","title":"Empowering Architects and Designers: A Classification of What Functions to Accelerate in Storage","authors":["C Zou, AA Chien"],"snippet":"Empowering Architects and Designers: A Classification of What Functions to Accelerate in Storage Chen Zou chenzou@uchicago.edu University of Chicago Andrew A.
Chien achien@cs.uchicago.edu University of Chicago …","url":["https://newtraell.cs.uchicago.edu/files/tr_authentic/TR-2020-02.pdf"]} {"year":"2020","title":"End to end approach for i2b2 2012 challenge based on Cross-lingual models","authors":["EA Santamaría - 2020"],"snippet":"… Joshi et al., 2019). Unlike mBERT who has been trained on Wikipedia, XLM-RoBERT uses the CommonCrawl(Conneau et al., 2019a) corpus for its training. In this section we explain step by step our approach. First we adapt …","url":["https://addi.ehu.es/bitstream/handle/10810/48623/MAL-Edgar_Andres.pdf?sequence=1"]} {"year":"2020","title":"End-to-End Simultaneous Translation System for IWSLT2020 Using Modality Agnostic Meta-Learning","authors":["HJ Han, MA Zaidi, SR Indurthi, NK Lakumarapu, B Lee… - Proceedings of the 17th …, 2020"],"snippet":"… We evaluate our system on the MuST-C Dev set. Our parallel corpus of WMT19 consists of Europarl v9, ParaCrawl v3, Common Crawl, News Commentary v14, Wiki Titles v1 and Documentsplit Rapid for the German-English language pair …","url":["https://www.aclweb.org/anthology/2020.iwslt-1.5.pdf"]} {"year":"2020","title":"Energy-Based Models for Text","authors":["A Bakhtin, Y Deng, S Gross, M Ott, MA Ranzato… - arXiv preprint arXiv …, 2020"],"snippet":"… (2015); Kiros et al. (2015), which consists of fiction books in 16 different genres, totaling about half a billion words. • CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news …","url":["https://arxiv.org/pdf/2004.10188"]} {"year":"2020","title":"English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too","authors":["J Phang, PM Htut, Y Pruksachatkun, H Liu, C Vania… - arXiv preprint arXiv …, 2020"],"snippet":"… 5XLM-R Large (Conneau et al., 2019) is a 550m-parameter variant of the RoBERTa masked language model (Liu et al., 2019b) trained on a cleaned version of CommonCrawl on 100 languages. 6Excluded in this draft due to implementation issues. Page 5 …","url":["https://arxiv.org/pdf/2005.13013"]} {"year":"2020","title":"Enhanced-RCNN: An Efficient Method for Learning Sentence Similarity","authors":["S Peng, H Cui, N Xie, S Li, J Zhang, X Li - Proceedings of The Web Conference 2020, 2020"],"snippet":"… Base model. Model parameter size time (s/batch) BERT-Base 102.2M 0.23 ± 0.20 Enhanced-RCNN 7.7M 0.02 ± 0.01 from the 840B Common Crawl corpus [21]. We set the hidden size as 192 for all BiGRU layers. Ant Financial …","url":["https://dl.acm.org/doi/pdf/10.1145/3366423.3379998"]} {"year":"2020","title":"Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources","authors":["M Biesialska, B Rafieian, MR Costa-jussà - arXiv preprint arXiv:2005.10048, 2020"],"snippet":"… Dinu et al., 2015; Artetxe et al., 2017, 2018). Moreover, GloVe vectors for English were trained on Common Crawl (Pennington et al., 2014). Linguistic Constraints. To perform semantic specialization of word vector spaces, we …","url":["https://arxiv.org/pdf/2005.10048"]} {"year":"2020","title":"Entity-Switched Datasets: An Approach to Auditing the In-Domain Robustness of Named Entity Recognition Models","authors":["O Agarwal, Y Yang, BC Wallace, A Nenkova - arXiv preprint arXiv:2004.04123, 2020"],"snippet":"… al., 2019). For the first two, we used 300-d cased GloVe (Pennington et al., 2014) vectors trained on Common Crawl.7 For BERT, we use the public large8 uncased9 model and apply the default fine-tuning strategy. 
We use …","url":["https://arxiv.org/pdf/2004.04123"]} {"year":"2020","title":"Entrepreneurial Organizations and the Use of Strategic Silence","authors":["W Shi, M Weber - Proceedings of the 54th Hawaii International …"],"snippet":"… number of competing apps for a particular keyword), chart rankings (current ranking position for a keyword), difficulty (the popularity of apps including reviews and ratings) and traffic (eg, autosuggestion when typing in the store …","url":["https://scholarspace.manoa.hawaii.edu/bitstream/10125/71247/0506.pdf"]} {"year":"2020","title":"Establishing a New State-of-the-Art for French Named Entity Recognition","authors":["PJO Suárez, Y Dupont, B Muller, L Romary, B Sagot - LREC 2020-12th Language …, 2020"],"snippet":"… They use zero to three of the following vector representations: FastText non-contextual embeddings (Bojanowski et al., 2017), the FrELMo contextual language model ob- tained by training the ELMo architecture on the OSCAR …","url":["https://hal.inria.fr/hal-02617950/document"]} {"year":"2020","title":"Estimating educational outcomes from students' short texts on social media","authors":["I Smirnov - EPJ Data Science, 2020"],"snippet":"… We obtained significantly better results with a model that used word-embeddings (see Methods). We also find that embeddings trained on the VK corpus outperform models trained on the Wikipedia and Common Crawl corpora (Table 1). Page 6 …","url":["https://link.springer.com/content/pdf/10.1140/epjds/s13688-020-00245-8.pdf"]} {"year":"2020","title":"Estimating Mutual Information Between Dense Word Embeddings","authors":["V Zhelezniak, A Savkov, N Hammerla - Proceedings of the 58th Annual Meeting of …, 2020"],"snippet":"… Our focus here is on fastText vectors (Bojanowski et al., 2017) trained on Common Crawl (600B tokens), as previous literature suggests that among unsupervised vectors fastText yields the best performance for all tasks and …","url":["https://www.aclweb.org/anthology/2020.acl-main.741.pdf"]} {"year":"2020","title":"Estimating the influence of auxiliary tasks for multi-task learning of sequence tagging tasks","authors":["F Schröder, C Biemann"],"snippet":"Page 1. Estimating the influence of auxiliary tasks for multi-task learning of sequence tagging tasks Fynn Schröder Language Technology Group Universität Hamburg Hamburg, Germany fschroeder@informatik.uni-hamburg.de …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2020-schroeder-biemann-acl20-dataset-similarity.pdf"]} {"year":"2020","title":"Evaluating cross-lingual textual similarity on dictionary alignment problem","authors":["Y Sever, G Ercan - Language Resources and Evaluation, 2020"],"snippet":"… 2018) provides embeddings for 157 different languages trained on Wikipedia and Common Crawl Footnote 3 . 
For efficiency concerns, each embedding set of the 7 languages are pruned to the most frequent \\(500\\times 10^3\\) words in each language …","url":["https://link.springer.com/article/10.1007/s10579-020-09498-1"]} {"year":"2020","title":"Evaluating German Transformer Language Models with Syntactic Agreement Tests","authors":["K Zaczynska, N Feldhus, R Schwarzenberg…"],"snippet":"… 1 The first model which we refer to as GBERTlarge is a community model provided by the Bavarian State Library.2 It was trained on multiple German corpora including a recent Wikipedia dump, EU Bookshop corpus …","url":["http://ceur-ws.org/Vol-2624/paper7.pdf"]} {"year":"2020","title":"Evaluating Multilingual BERT for Estonian","authors":["C Kittask, K Milintsevich, K Sirts - arXiv preprint arXiv:2010.00454, 2020"],"snippet":"… and cross-lingual RoBERTa (XLM-RoBERTa) [3], which was trained on much larger CommonCrawl corpora and also includes 100 languages …","url":["https://arxiv.org/pdf/2010.00454"]} {"year":"2020","title":"Evaluating Sentence Representations for Biomedical Text: Methods and Experimental Results","authors":["NS Tawfik, MR Spruit - Journal of Biomedical Informatics, 2020"],"snippet":"… 3.2. Embedding Methods. GloVe We use the pre-trained embeddings consisting of 2.2 million vocabulary words available at https://nlp.stanford.edu/projects/glove/ which were trained on the Common Crawl (840B tokens) dataset …","url":["https://www.sciencedirect.com/science/article/pii/S1532046420300253"]} {"year":"2020","title":"Evaluating Word Embeddings on Low-Resource Languages","authors":["N Stringham, M Izbicki - Proceedings of the First Workshop on Evaluation and …, 2020"],"snippet":"… Grave et al. (2018) trained FastText embeddings on 157 languages using data from the Common Crawl project. But they were only able to explicitly evaluate 10 of these language models using the analogy task due to the …","url":["https://www.aclweb.org/anthology/2020.eval4nlp-1.17.pdf"]} {"year":"2020","title":"Evaluation of related news recommendations using document similarity methods","authors":["M Pranjic, V Podpecan, M Robnik-Šikonja, S Pollak"],"snippet":"… RoBERTa (Liu et al., 2019). It uses the sentence piece tokenizer and it is trained with the masked language model objective (MLM) on the CommonCrawl data in 100 languages, including Croatian. Similar to the mBERT, all …","url":["http://nl.ijs.si/jtdh20/pdf/JT-DH_2020_Pranjic-et-al_Evaluation-of-related-news-recommendations-using-document-similarity-methods.pdf"]} {"year":"2020","title":"Event Detection on Literature by Utilizing Word Embedding","authors":["J Chun, C Kim - International Conference on Database Systems for …, 2020"],"snippet":"… On the contrary, Neural-based methods have the limitation that they ignore semantic relationships in a text. 2.3 Facebook Pre-trained Word Vectors. We chose pre-trained word vectors published by Facebook, trained on …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59413-8_21"]} {"year":"2020","title":"Evidence Integration for Multi-hop Reading Comprehension with Graph Neural Networks","authors":["L Song, Z Wang, M Yu, Y Zhang, R Florian, D Gildea - IEEE Transactions on …, 2020"],"snippet":"
This …","url":["https://ieeexplore.ieee.org/abstract/document/9051845/"]} {"year":"2020","title":"Examining the rhetorical capacities of neural language models","authors":["Z Zhu, C Pan, M Abdalla, F Rudzicz - arXiv preprint arXiv:2010.00153, 2020"],"snippet":"… Non-contextualized word embeddings We consider two popular word embeddings here: • GloVe (Pennington et al., 2014) contains 2.2M vocabulary items and produces 300dimensional word vectors. The GloVe embedding …","url":["https://arxiv.org/pdf/2010.00153"]} {"year":"2020","title":"Experience Grounds Language","authors":["Y Bisk, A Holtzman, J Thomason, J Andreas, Y Bengio… - arXiv preprint arXiv …, 2020"],"snippet":"… (2013) trained on 1.6 billion tokens, while Pennington et al. (2014) scaled up to 840 billion tokens from Common Crawl. Recent approaches have made progress by substantially increasing the number of model parameters …","url":["https://arxiv.org/pdf/2004.10151"]} {"year":"2020","title":"Experiencers, Stimuli, or Targets: Which Semantic Roles Enable Machine Learning to Infer the Emotions?","authors":["L Oberländer, K Reich, R Klinger - arXiv preprint arXiv:2011.01599, 2020"],"snippet":"… 1For ET, 90% of the annotated experiencers are the authors of the tweets without corresponding span annotation. 2We use 42B tokens, pretrained on CommonCrawl (Pennington et al., 2014), https://nlp.stanford.edu …","url":["https://arxiv.org/pdf/2011.01599"]} {"year":"2020","title":"Experiments on Paraphrase Identification Using Quora Question Pairs Dataset","authors":["A Chandra, R Stefanus - arXiv preprint arXiv:2006.02648, 2020"],"snippet":"… matching result into a fix-length matching vector and continued to last layer of the model which is a fully connected layer. The paper use GloVe as a pretrained word vector from 840B Common Crawl corpus and apply it to Quora …","url":["https://arxiv.org/pdf/2006.02648"]} {"year":"2020","title":"Explicit Alignment Objectives for Multilingual Bidirectional Encoders","authors":["J Hu, M Johnson, O Firat, A Siddhant, G Neubig - arXiv preprint arXiv:2010.07972, 2020"],"snippet":"… Sentence Alignment Our first proposed objective encourages cross-lingual alignment of sentence 1 AMBER is trained on 26GB parallel data and 80GB monolingual Wikipedia data, while XLM-R-large is trained on 2.5TB …","url":["https://arxiv.org/pdf/2010.07972"]} {"year":"2020","title":"Explicit Sentence Compression for Neural Machine Translation","authors":["Z Li, R Wang, K Chen, M Utiyama, E Sumita, Z Zhang… - arXiv preprint arXiv …, 2019"],"snippet":"… for NMT evaluation. For the EN-DE translation task, 4.43 M bilingual sentence pairs from the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7. The newstest2013 and …","url":["https://arxiv.org/pdf/1912.11980"]} {"year":"2020","title":"Exploiting Categorization of Online News for Profiling City Areas","authors":["A Bondielli, P Ducange, F Marcelloni - 2020 IEEE Conference on Evolving and …, 2020"],"snippet":"… FastText is created by Facebook and is based on Neural Networks. Pre-trained word vectors are available for 157 languages, trained on Common Crawl and Wikipedia. 
More specifically, we chose to use the pre-trained Italian model …","url":["https://ieeexplore.ieee.org/abstract/document/9122777/"]} {"year":"2020","title":"Exploiting Class Labels to Boost Performance on Embedding-based Text Classification","authors":["A Zubiaga - arXiv preprint arXiv:2006.02104, 2020"],"snippet":"… 4.2 Word Embedding Models & Classifiers We tested four word embedding models: (1) Google's Word2Vec model (gw2v), (2) a Twitter Word2Vec model5 (tw2v) [10], (3) GloVe embeddings trained from Common Crawl (cglove) …","url":["https://arxiv.org/pdf/2006.02104"]} {"year":"2020","title":"Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning","authors":["T Shen, Y Mao, P He, G Long, A Trischler, W Chen - arXiv preprint arXiv:2004.14224, 2020"],"snippet":"Page 1. EXPLOITING STRUCTURED KNOWLEDGE IN TEXT VIA GRAPH-GUIDED REPRESENTATION LEARNING Tao Shen∗ University of Technology Sydney tao.shen@student.uts.edu.au Yi Mao, Pengcheng …","url":["https://arxiv.org/pdf/2004.14224"]} {"year":"2020","title":"Exploring Different Methods for Solving Analogies with Portuguese Word Embeddings","authors":["T Sousa, H Gonçalo Oliveira, A Alves - 9th Symposium on Languages, Applications …, 2020"],"snippet":"… from several sources. Such sources include raw text (ie, an ensemble of Google News word2vec, Common Crawl GloVe, Open Subtitles fastText) combined with the ConceptNet semantic network with retrofitting. 1 https://github …","url":["https://drops.dagstuhl.de/opus/volltexte/2020/13022/pdf/OASIcs-SLATE-2020-9.pdf"]} {"year":"2020","title":"Exploring Event Extraction Across Languages","authors":["S Prabhu - 2020"],"snippet":"Page 1. Exploring Event Extraction Across Languages Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computational Linguistics by Research by Suhan Prabhu 201525118 suhan.prabhuk@research.iiit.ac.in …","url":["http://web2py.iiit.ac.in/research_centres/publications/download/mastersthesis.pdf.b481c850852a12a8.737568616e5f66696e616c5f7468657369732e706466.pdf"]} {"year":"2020","title":"Exploring Neural Network Approaches in Automatic Personality Recognition of Filipino Twitter Users","authors":["E Tighe, O Aran, C Cheng"],"snippet":"… FastText differs by learning character-grams, as opposed to word-grams. 
We utilize the embeddings trained on Common Crawl and Wikipedia data2 – specifically, the embeddings for English and Tagalog (both 300 dimensions) …","url":["https://www.researchgate.net/profile/Edward_Tighe/publication/343189230_Exploring_Neural_Network_Approaches_in_Automatic_Personality_Recognition_of_Filipino_Twitter_Users/links/5f1ae2aea6fdcc9626ad4c4d/Exploring-Neural-Network-Approaches-in-Automatic-Personality-Recognition-of-Filipino-Twitter-Users.pdf"]} {"year":"2020","title":"Exploring Swedish & English fastText Embeddings with the Transformer","authors":["TP Adewumi, F Liwicki, M Liwicki - arXiv preprint arXiv:2007.16007, 2020"],"snippet":"… We obtain better performance in both languages on the downstream task with far smaller training data, compared to recently released, common crawl versions and character n-grams appear useful for Swedish, a morphologically rich language …","url":["https://arxiv.org/pdf/2007.16007"]} {"year":"2020","title":"Exploring the Dominance of the English Language on the Websites of EU Countries","authors":["A Giannakoulopoulos, M Pergantis, N Konstantinou… - Future Internet, 2020"],"snippet":"… For this purpose, we used information obtained from Common Crawl, a “repository of web crawl data that is universally accessible and analyzable” [34]. Among the data Common Crawl offers is an index of every available webpage …","url":["https://www.mdpi.com/1999-5903/12/4/76/pdf"]} {"year":"2020","title":"Extended Overview of CLEF HIPE 2020: Named Entity Processing on Historical Newspapers","authors":["A Flückiger, S Clematide"],"snippet":"Page 1. Extended Overview of CLEF HIPE 2020: Named Entity Processing on Historical Newspapers Maud Ehrmann1[0000−0001−9900−2193], Matteo Romanello1[0000−0002− 1890−2577], Alex Flückiger2, and Simon Clematide2[0000−0003−1365−0662] …","url":["http://ceur-ws.org/Vol-2696/paper_255.pdf"]} {"year":"2020","title":"Extended study on using pretrained language models and YiSi-1 for machine translation evaluation","authors":["C Lo - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… The differencesbetweenXLM-RandBERTare1)XLM-Ris trained on the CommonCrawl corpus which is significantly larger than the Wikipedia training data used by BERT; 2) instead of a uniform data sampling rate used in BERT …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.99.pdf"]} {"year":"2020","title":"Extending Tables using a Web Table Corpus","authors":["S Sarabchi - 2020"],"snippet":"… Lehmberg et al. [12] gathered a web table corpus containing 233 million tables from the 2015 version of the CommonCrawl2, using a table extraction method similar to that of [11]. 
The table corpus … and 2https://commoncrawl.org/about …","url":["https://era.library.ualberta.ca/items/4f9f40b8-69ba-4c24-85b4-41f17517cc59/view/dfd3938b-a7d1-4b4c-85e0-4087c36d5713/Sarabchi_Saeed_202002_MSc.pdf"]} {"year":"2020","title":"Extracting Family History of Patients From Clinical Narratives: Exploring an End-to-End Solution With Deep Learning Models","authors":["X Yang, H Zhang, X He, J Bian, Y Wu - JMIR Medical Informatics, 2020"],"snippet":"… We screened 4 different word embeddings following a similar procedure reported in our previous study [46] and found that the Common Crawl embeddings—released by Facebook and trained using the fastText on the …","url":["https://medinform.jmir.org/2020/12/e22982/"]} {"year":"2020","title":"Extracting Training Data from Large Language Models","authors":["N Carlini, F Tramer, E Wallace, M Jagielski… - arXiv preprint arXiv …, 2020"],"snippet":"… In particular, we select samples from a subset of Common Crawl6 to feed as context to the model.7 6http://commoncrawl.org/ 7It is possible there is some intersection between these two datasets, effectively allowing this strategy to “cheat” …","url":["https://arxiv.org/pdf/2012.07805"]} {"year":"2020","title":"Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation","authors":["I Chung, B Kim, Y Choi, SJ Kwon, Y Jeon, B Park… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. arXiv:2009.07453v1 [cs.LG] 16 Sep 2020 Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation Insoo Chung∗ Byeongwook Kim∗ Yoonjung Choi Se Jung Kwon Yongkweon …","url":["https://arxiv.org/pdf/2009.07453"]} {"year":"2020","title":"Facebook AI's WMT20 News Translation Task Submission","authors":["PJ Chen, A Lee, C Wang, N Goyal, A Fan… - arXiv preprint arXiv …, 2020"],"snippet":"… we use all the available monolingual data, eg NewsCrawl + CommonCrawl + Wikipedia dumps for Tamil, and CommonCrawl for Inuktitut … unconstrained track, we use Tamil monolingual data and Tamil-English mined bitext data …","url":["https://arxiv.org/pdf/2011.08298"]} {"year":"2020","title":"Factors affecting sentence similarity and paraphrasing identification","authors":["M Alian, A Awajan - International Journal of Speech Technology, 2020"],"snippet":"… Grave et al. (2018) contributed in a pre-trained word vector representation for 157 languages including Arabic. The word vectors have been trained on Wikipedia and the Common Crawl corpus using an extension of the FastText model with subword information …","url":["https://link.springer.com/article/10.1007/s10772-020-09753-4"]} {"year":"2020","title":"Fairness in AI-based Recruitment and Career Pathway Optimization","authors":["DF Mujtaba - 2020"],"snippet":"Page 1. FAIRNESS IN AI-BASED RECRUITMENT AND CAREER PATHWAY OPTIMIZATION By Dena Freshta Mujtaba A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of …","url":["http://search.proquest.com/openview/f2938ed72cda2c3b656a0db1b2be7320/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2020","title":"Fake News Data Collection and Classification: Iterative Query Selection for Opaque Search Engines with Pseudo Relevance Feedback","authors":["A Elyashar, M Reuben, R Puzis - arXiv preprint arXiv:2012.12498, 2020"],"snippet":"Page 1. 
FAKE NEWS DATA COLLECTION AND CLASSIFICATION: ITERATIVE QUERY SELECTION FOR OPAQUE SEARCH ENGINES WITH PSEUDO RELEVANCE FEEDBACK A PREPRINT Aviad Elyashar, Maor Reuben …","url":["https://arxiv.org/pdf/2012.12498"]} {"year":"2020","title":"Fake News Detection","authors":["IP Marín, D Arroyo - Conference on Complex, Intelligent, and Software …, 2020"],"snippet":"… 4. https://spacy.io/ [Last accessed 26 Jan 2020]. 5. en_core_web_lg, pre-trained English statistical models. English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl. 6. https …","url":["https://link.springer.com/chapter/10.1007/978-3-030-57805-3_22"]} {"year":"2020","title":"Fake news spreader detection using neural tweet aggregation","authors":["O Bakhteev, A Ogaltsov, P Ostroukhov"],"snippet":"… As a preprocessing step we lowercased tweets and removed stop-words and punctuation. We did not use any special preprocessing. For the word embeddings we used fastText [1] trained on Common Crawl and Wikipedia with dimension set to 100 …","url":["https://pan.webis.de/downloads/publications/papers/bakhteev_2020.pdf"]} {"year":"2020","title":"Fake News Spreader Identification in Twitter using Ensemble Modeling","authors":["A Hashemi, MR Zarei, MR Moosavi, M Taheri"],"snippet":"… Page 4. The sources used in the English model for training data are OntoNotes 51 and GloVe Common Crawl2 and the Spanish model utilizes UD Spanish AnCora v2.53, WikiNER4, OSCAR (Common Crawl)5 and Wikipedia …","url":["https://pan.webis.de/downloads/publications/papers/hashemi_2020.pdf"]} {"year":"2020","title":"Fashion-IQ 2020 Challenge 2nd Place Team's Solution","authors":["M Shin, Y Cho, S Hong - arXiv preprint arXiv:2007.06404, 2020"],"snippet":"… For training the LSTM and the GRU from scratch, we initialize the word embedding with the concatenation of three GloVe vectors2 learned from Wikipedia, Twitter, and Common Crawl that results in 900-dimensional input …","url":["https://arxiv.org/pdf/2007.06404"]} {"year":"2020","title":"Fast entity linking in noisy text environments","authors":["SM Shah, MD Conover, PN Skomoroch, MT Hayes… - US Patent 10,733,383, 2020"],"snippet":"… entry). In some embodiments, the candidate dictionary is determined by using the hyperlinks on a Wikipedia page, or a Common Crawl page to identify surface forms (the hyperlink anchor text) that point to a specific page. Each …","url":["http://www.freepatentsonline.com/10733383.html"]} {"year":"2020","title":"Fast Indexes for Gapped Pattern Matching","authors":["M Cáceres, SJ Puglisi, B Zhukova - International Conference on Current Trends in …, 2020"],"snippet":"… Fig. 2. Time to search a 2GiB subset of the Common Crawl web collection (commoncrawl.org) for 20 VLG patterns (\\(k=2\\), \\(\\delta _i,\\varDelta _i = \\langle 100,110\\rangle \\)), composed of very …","url":["https://link.springer.com/chapter/10.1007/978-3-030-38919-2_40"]} {"year":"2020","title":"FinBERT: A Pre-trained Financial Language Representation Model for Financial Text Mining","authors":["Z Liu, D Huang, K Huang, Z Li, J Zhao"],"snippet":"… sizes, totaling over 61 GB text: • English Wikipedia1 and BooksCorpus (Zhu et al., 2015), which are the original training data used to train BERT (totaling 13GB, 3.31B words); • FinancialWeb (totaling 24GB, 6.38B words), which …","url":["https://www.ijcai.org/Proceedings/2020/0622.pdf"]} {"year":"2020","title":"Finding of asymmetric relation between words","authors":["M Muraoka, T Nasukawa, KMA Salam - US Patent App.
16/287,326, 2020"],"snippet":"US20200272696A1 - Finding of asymmetric relation between words - Google Patents. Finding of asymmetric relation between words. Download PDF Info. Publication number US20200272696A1. US20200272696A1 US16/287,326 …","url":["https://patents.google.com/patent/US20200272696A1/en"]} {"year":"2020","title":"Finding the needle in the haystack: Fine-tuning transformers to classify protest events in a sea of news articles, with Bayesian uncertainty measures","authors":["C Ghai - 2020"],"snippet":"Page 1. Finding the needle in the haystack: Fine-tuning transformers to classify protest events in a sea of news articles, with Bayesian uncertainty measures Chris Ghai Master's Thesis, Spring 2020 Page 2. This master's thesis …","url":["https://www.duo.uio.no/bitstream/handle/10852/79984/1/chris_ghai_thesis.pdf"]} {"year":"2020","title":"Findings of the 2020 conference on machine translation (wmt20)","authors":["L Barrault, M Biesialska, O Bojar, MR Costa-jussà… - Proceedings of the Fifth …, 2020"],"snippet":"… Distinct words – 76,013 – 6,165 178,453 85,189 Common Crawl Parallel Corpus German ↔ English Czech ↔ English Russian ↔ English French ↔ German … Common Crawl Language Model Data English German Czech Russian Polish Sent …","url":["https://www.aclweb.org/anthology/2020.wmt-1.1.pdf"]} {"year":"2020","title":"Findings of the WMT 2020 Biomedical Translation Shared Task: Basque, Italian and Russian as New Additional Languages","authors":["R Bawden, G Di Nunzio, C Grozea, I Unanue, A Yepes… - 5th Conference on Machine …, 2020"],"snippet":"Page 1. HAL Id: hal-02986356 https://hal.inria.fr/hal-02986356 Submitted on 2 Nov 2020 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not …","url":["https://hal.inria.fr/hal-02986356/document"]} {"year":"2020","title":"Findings of the WMT 2020 shared task on parallel corpus filtering and alignment","authors":["P Koehn, V Chaudhary, A El-Kishky, N Goyal, PJ Chen… - Proceedings of the Fifth …, 2020"],"snippet":"… Noisy parallel documents and parallel sentences were sourced from the CCAligned2 dataset (El-Kishky et al., 2020a), a massive collection of cross-lingual web documents covering over 8k language pairs aligned from …","url":["https://www.aclweb.org/anthology/2020.wmt-1.78.pdf"]} {"year":"2020","title":"Fine-Grained Argument Unit Recognition and Classification","authors":["D Trautmann, J Daxenberger, C Stab, H Schütze…"],"snippet":"… Table 1). This also increases comparability with related work. The topics are general enough to have good coverage in Common Crawl … 4http://commoncrawl.org/2016/02/february-2016-crawl-archive-now-available/ 5https://www.elastic.co/products/elasticsearch …","url":["https://www.researchgate.net/profile/Dietrich_Trautmann/publication/332590723_Robust_Argument_Unit_Recognition_and_Classification/links/5e32b02ca6fdccd96576e059/Robust-Argument-Unit-Recognition-and-Classification.pdf"]} {"year":"2020","title":"Fine-grained entity type classification using GRU with self-attention","authors":["K Dhrisya, G Remya, A Mohan - International Journal of Information Technology, 2020"],"snippet":"… The dictionary for the proposed model is built with GLoVe vectors of 300 dimensions. It is a pre-trained word embedding created by utilizing common Crawl 840B. These vectors on various corpora can be downloaded from Stanford GLoVe Website.
Parameter settings …","url":["https://link.springer.com/article/10.1007/s41870-020-00499-5"]} {"year":"2020","title":"FLERT: Document-Level Features for Named Entity Recognition","authors":["S Schweter, A Akbik - arXiv preprint arXiv:2011.06993, 2020"],"snippet":"… Conneau et al. (2019). We use the xlm-roberta-large model in our experiments, trained on 2.5TB of data from a cleaned Common Crawl corpus (Wenzek et al., 2020) for 100 different languages Embeddings (+WE). For each setup …","url":["https://arxiv.org/pdf/2011.06993"]} {"year":"2020","title":"Forum Duplicate Question Detection by Domain Adaptive Semantic Matching","authors":["Z Xu, H Yuan - IEEE Access, 2020"],"snippet":"… B. MODEL IMPLEMENTATION The word embedding was initialized with 300-dimensional GloVe [33] vectors which are pretrained in the 840B Common Crawl corpus. The Embedding was set to be trainable. The …","url":["https://ieeexplore.ieee.org/iel7/6287639/8948470/09043551.pdf"]} {"year":"2020","title":"Four dimensions characterizing comprehensive trait judgments of faces","authors":["C Lin, U Keles, R Adolphs - 2020"],"snippet":"… and text classification using a neural network provided within the FastText library40; this neural network had been trained on Common Crawl data of 600 billion words to predict the identity of a word given a context. We then applied …","url":["https://www.researchsquare.com/article/rs-41215/latest.pdf"]} {"year":"2020","title":"French Contextualized Word-Embeddings with a sip of CaBeRnet: a New French Balanced Reference Corpus","authors":["M Fabre, PJO Suárez, B Sagot, ÉV de la Clergerie - CMLC-8-8th Workshop on the …, 2020","M Popa-Fabre, PJO Suárez, B Sagot… - Proceedings of the 8th …, 2020"],"snippet":"… al., 2019), we decided to include in our comparison a corpus of French text extracted from Common Crawl8. We … 8More information available at https://commoncrawl … OSCAR gathers a set of monolingual text extracted …","url":["https://hal.inria.fr/hal-02678358/document","https://www.aclweb.org/anthology/2020.cmlc-1.3.pdf"]} {"year":"2020","title":"Frequency-dependent Regularization in Constituent Ordering Preferences","authors":["Z Liu, E Morgan"],"snippet":"… a total of around 9 billion tokens. This corpus consists of web page data from both Common Crawl and Wikipedia and is automatically parsed with UDPipe (Straka & Straková, 2017). Within this corpus, each token is represented …","url":["https://www.researchgate.net/profile/Zoey_Liu2/publication/341712949_Frequency-dependent_Regularization_in_Constituent_Ordering_Preferences/links/5ecffdb292851c9c5e65d021/Frequency-dependent-Regularization-in-Constituent-Ordering-Preferences.pdf"]} {"year":"2020","title":"From Chest X-Rays to Radiology Reports: A Multimodal Machine Learning Approach","authors":["S Singh, S Karimi, K Ho-Shon, L Hamey - 2019 Digital Image Computing: Techniques …, 2019"],"snippet":"… Also, on the text side, we use the Glove [30] word embeddings having a 300-dimensional embedding vector for each word, and have been trained on a generic text corpus named Common Crawl having 42B tokens, 1.9M vocab …","url":["https://ieeexplore.ieee.org/abstract/document/8945819/"]} {"year":"2020","title":"From Dataset Recycling to Multi-Property Extraction and Beyond","authors":["T Dwojak, M Pietruszka, Ł Borchmann, J Chłędowski… - arXiv preprint arXiv …, 2020"],"snippet":"… T5. Recently proposed T5 model (Raffel et al., 2020) is a Transformer model pretrained on a cleaned version of CommonCrawl.
T5 is famous for achieving excellent performance on the SuperGLUE benchmark (Wang et al., 2019) …","url":["https://arxiv.org/pdf/2011.03228"]} {"year":"2020","title":"From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks","authors":["S Eger, Y Benz - arXiv preprint arXiv:2010.05648, 2020"],"snippet":"… The reason may be that our noises are not always natural, in the sense of having high support in large datasets such as CommonCrawl or Wikipedia, but they are still within the limits of cognitive abilities of ordinary humans …","url":["https://arxiv.org/pdf/2010.05648"]} {"year":"2020","title":"From Pixel to Patch: Synthesize Context-aware Features for Zero-shot Semantic Segmentation","authors":["Z Gu, S Zhou, L Niu, Z Zhao, L Zhang - arXiv preprint arXiv:2009.12232, 2020"],"snippet":"Page 1. 1 From Pixel to Patch: Synthesize Context-aware Features for Zero-shot Semantic Segmentation Zhangxuan Gu, Siyuan Zhou, Li Niu*, Zihan Zhao, Liqing Zhang* Abstract—Zero-shot learning has been actively studied …","url":["https://arxiv.org/pdf/2009.12232"]} {"year":"2020","title":"From Syntactic Structure to Semantic Relationship: Hypernym Extraction from Definitions by Recurrent Neural Networks Using the Part of Speech Information","authors":["Y Tan, X Wang, T Jia - International Semantic Web Conference, 2020"],"snippet":"… In recent years, much research pay attention to extracting hypernyms from larger data resources via the high precise of pattern-based methods. [25] extract hypernymy relations from the CommonCrawl web corpus using lexico-syntactic patterns …","url":["https://link.springer.com/chapter/10.1007/978-3-030-62419-4_30"]} {"year":"2020","title":"From Web Crawl to Clean Register-Annotated Corpora","authors":["V Laippala, S Rönnqvist, S Hellström, J Luotolahti… - … of the 12th Web as Corpus …, 2020"],"snippet":"… crawl or extracting data from existing crawl-based datasets, such as Common Crawl1. As … CommonCrawl is a free and openly available web crawl maintained by the CommonCrawl … Lately the Common Crawl dataset has …","url":["https://www.aclweb.org/anthology/2020.wac-1.3.pdf"]} {"year":"2020","title":"From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers","authors":["A Lauscher, V Ravishankar, I Vulić, G Glavaš - arXiv preprint arXiv:2005.00633, 2020"],"snippet":"… It is trained on the CommonCrawl-100 data (Wenzek et al., 2019) of 100 languages … Interestingly, for both high7For XLM-R, we take the reported sizes of language-specific portions of CommonCrawl-100 from Conneau et al …","url":["https://arxiv.org/pdf/2005.00633"]} {"year":"2020","title":"From Zero to Hero: On the Limitations of Zero-Shot Language Transfer with Multilingual Transformers","authors":["A Lauscher, V Ravishankar, I Vulić, G Glavaš - … of the 2020 Conference on Empirical …, 2020"],"snippet":"… sampling. XLM on RoBERTa (XLM-R). XLM-R (Conneau et al., 2020) is an instance of RoBERTa, robustly trained on a large multilingual CommonCrawl-100 (CC-100) corpus (Wenzek et al., 2019) covering 100 languages …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.363.pdf"]} {"year":"2020","title":"Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing","authors":["Z Dai, G Lai, Y Yang, QV Le - arXiv preprint arXiv:2006.03236, 2020"],"snippet":"… ablation studies. 5 Page 6.
• Large scale: Pretraining models for 500K steps with batch size 8K on the five datasets used by XLNet [3] and ELECTRA [5] (Wikipedia + Book Corpus + ClueWeb + Gigaword + Common Crawl). We will …","url":["https://arxiv.org/pdf/2006.03236"]} {"year":"2020","title":"Gated Semantic Difference Based Sentence Semantic Equivalence Identification","authors":["X Liu, Q Chen, X Wu, Y Hua, J Chen, D Li, B Tang… - IEEE/ACM Transactions on …, 2020"],"snippet":"… The word embeddings for the quora corpus are 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus [45] and for the LCQMC are 300-dimensional word vectors pre-trained from the Chinese 5[Online] …","url":["https://ieeexplore.ieee.org/abstract/document/9222233/"]} {"year":"2020","title":"Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer","authors":["J Zhao, S Mukherjee, S Hosseini, KW Chang… - arXiv preprint arXiv …, 2020"],"snippet":"… is an OCCUPATION-TITLE” where name is recognized in each language by using the corresponding Named Entity Recognition model from spaCy.5 To control for the same time period for datasets across languages …","url":["https://arxiv.org/pdf/2005.00699"]} {"year":"2020","title":"Gender Bias in Multilingual Embeddings","authors":["J Zhao, S Mukherjee, S Hosseini, KW Chang…"],"snippet":"… an OCCUPATION-TITLE” where name is recognized in each language by using the corresponding Named Entity Recognition model from spaCy.5 To control for the same time period for datasets across languages …","url":["https://www.researchgate.net/profile/Subhabrata_Mukherjee/publication/340660062_Gender_Bias_in_Multilingual_Embeddings/links/5e97428692851c2f52a6200a/Gender-Bias-in-Multilingual-Embeddings.pdf"]} {"year":"2020","title":"Gender Detection on Social Networks using Ensemble Deep Learning","authors":["K Kowsari, M Heidarysafa, T Odukoya, P Potter… - arXiv preprint arXiv …, 2020"],"snippet":"… 25d, 50d, 100d, and 200d vectors. This word embedding is trained over even bigger corpora, including Wikipedia and Common Crawl content. The objective function is as follows: f(wi − wj, ˜wk) = Pik / Pjk (2) where wi is refer to …","url":["https://arxiv.org/pdf/2004.06518"]} {"year":"2020","title":"Gender stereotype reinforcement: Measuring the gender bias conveyed by ranking algorithms","authors":["A Fabris, A Purpura, G Silvello, GA Susto - Information Processing & Management, 2020"],"snippet":"… Corrado, Dean, 2013). Most frequently, they are learnt from large text corpora available online (such as Wikipedia, Google News and Common Crawl), capturing semantic relationships of words based on their usage.
Recent work …","url":["https://arxiv.org/pdf/2009.01334"]} {"year":"2020","title":"Gender stereotypes are reflected in the distributional structure of 25 languages","authors":["M Lewis, G Lupyan - Nature Human Behaviour, 2020"],"snippet":"Cultural stereotypes such as the idea that men are more suited for paid work and women are more suited for taking care of the home and family, may contribute to gender imbalances in science, technology, engineering and …","url":["https://www.nature.com/articles/s41562-020-0918-6"]} {"year":"2020","title":"Generalisation of Cyberbullying Detection","authors":["K Richard, L Marc-André - arXiv preprint arXiv:2009.01046, 2020","MA Larochelle, R Khoury"],"snippet":"… We use FastText pre-trained on Common Crawl data featuring 300 dimensions and 2 million word vectors with subword information6 to convert the words into vector representations, of which we concatenate a 60-dimensional binary …","url":["https://arxiv.org/pdf/2009.01046","https://web.ntpu.edu.tw/~myday/doc/ASONAM2020/ASONAM2020_Proceedings/pdf/papers/047_034_296.pdf"]} {"year":"2020","title":"Generalize Sentence Representation with Self-Inference","authors":["KC Yang, HY Kao"],"snippet":"… Our model is trained with the phrases in the parse trees and tested on the whole sentence. Experimental Settings We initialize word embeddings using the pretrained FastText common-crawl vectors (Mikolov et al. 2018) and freeze the weights during training …","url":["https://www.aaai.org/Papers/AAAI/2020GB/AAAI-YangKC.7098.pdf"]} {"year":"2020","title":"Generating Categories for Sets of Entities","authors":["S Zhang, K Balog, J Callan - arXiv preprint arXiv:2008.08428, 2020"],"snippet":"… entity linking for tables and table schema to predicate matching. Ritze et al. [31] propose an iterative method for matching tables to DBpedia. They develop a manually annotated dataset for matching between a Web table corpus …","url":["https://arxiv.org/pdf/2008.08428"]} {"year":"2020","title":"Generating Diverse Conversation Responses by Creating and Ranking Multiple Candidates","authors":["YP Ruan, ZH Ling, X Zhu, Q Liu, JC Gu - Computer Speech & Language, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0885230820300048"]} {"year":"2020","title":"Generating Fact Checking Briefs","authors":["A Fan, A Piktus, F Petroni, G Wenzek, M Saeidi… - Proceedings of the 2020 …, 2020"],"snippet":"… We take the top search hit as the evidence and retrieve the text from CommonCrawl4. Finally, the generated question and retrieved evidence document is provided to the question answering model to generate an answer. 4.1 Question Generation …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.580.pdf"]} {"year":"2020","title":"Generating fake websites: WikiGen","authors":["M Longland - 2020"],"snippet":"… [2015]. This is a relatively small vector file. An alternative considered was the Common Crawl (840B tokens) vectors from Stanford NLP's GloVe [Pennington et al., 2014] but memory issues meant the smaller file was used instead …","url":["https://pdfs.semanticscholar.org/0776/aece84f01a6f593d1748657bf2ec4dec49b4.pdf"]} {"year":"2020","title":"Generating Keyword Lists Related to Topics Represented by an Array of Topic Records, for Use in Targeting Online Advertisements and Other Uses","authors":["L Palaic, MH Gross, SA Schriber - US Patent App.
16/803,214, 2020"],"snippet":"… For data gathering purposes, a custom heuristic can be used that operates on a Common Crawl Corpus. Documents gathered from a Common Crawl process might be automatically annotated with appropriate topics tags so …","url":["https://patents.google.com/patent/US20200273069A1/en"]} {"year":"2020","title":"Generating Personalized Product Descriptions from User Reviews","authors":["G Elad, K Radinsky, B Kimelfeld - 2019"],"snippet":"Page 1. Generating Personalized Product Descriptions from User Reviews Guy Elad Technion - Computer Science Department - M.Sc. Thesis MSC-2019-25 - 2019 Page 2. Technion - Computer Science Department …","url":["http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2019/MSC/MSC-2019-25.pdf"]} {"year":"2020","title":"Generating Query Suggestions for Cross-language and Cross-terminology Health Information Retrieval","authors":["PM Santos, C Teixeira Lopes - Advances in Information Retrieval: 42nd European …, 2020"],"snippet":"… The English collection is provided by the Consumer Health Search Task in the 2018 edition of the CLEF eHealth Lab2. This task uses a set of 50 English queries and a document corpus with 5,535,120 web pages acquired from a CommonCrawl dump …","url":["https://link.springer.com/content/pdf/10.1007/978-3-030-45442-5_43.pdf"]} {"year":"2020","title":"Generating Representative Headlines for News Stories","authors":["X Gu, Y Mao, J Han, J Liu, H Yu, Y Wu, C Yu, D Finnie… - arXiv preprint arXiv …, 2020"],"snippet":"… By fine-tuning the model on human-curated labels, we can combine the two sources of supervision and further improve performance. 6One can use CommonCrawl to fetch web articles. Page 5. Generating Representative Headlines for News Stories …","url":["https://arxiv.org/pdf/2001.09386"]} {"year":"2020","title":"Generative Language Modeling for Automated Theorem Proving","authors":["S Polu, I Sutskever - arXiv preprint arXiv:2009.03393, 2020"],"snippet":"… We pre-train our models on both GPT-3's post-processed version of CommonCrawl as well as a more reasoning-focused mix of Github, arXiv and Math StackExchange. 7 … 5.3 Pre-training Models are pre-trained on …","url":["https://arxiv.org/pdf/2009.03393"]} {"year":"2020","title":"Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study","authors":["D Bahri, Y Tay, C Zheng, D Metzler, C Brunk… - arXiv preprint arXiv …, 2020"],"snippet":"… 3.1 Datasets This section describes the datasets used in our experiments. • Web500M. The core corpora used in our experiments consists of a random sample of 500 million English web documents obtained from the Common Crawl1. • GPT-2-Output …","url":["https://arxiv.org/pdf/2008.13533"]} {"year":"2020","title":"Geographically-Balanced Gigaword Corpora for 50 Language Varieties","authors":["J Dunn, B Adams - Proceedings of The 12th Language Resources and …, 2020"],"snippet":"… 3. Collecting Geo-Referenced Documents The data for this paper comes from the Common Crawl,2 as processed in the Corpus of Global Language Use (henceforth, CGLU). This project includes the Common Crawl data from …","url":["https://www.aclweb.org/anthology/2020.lrec-1.308.pdf"]} {"year":"2020","title":"Geoparsing the historical Gazetteers of Scotland: accurately computing location in mass digitised texts","authors":["R Filgueira, C Grover, M Terras, B Alex - Proceedings of the 8th Workshop on …, 2020"],"snippet":"… Small size model (11MB). 
• en_core_web_md: English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl … en_core_web_lg: English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl …","url":["https://www.aclweb.org/anthology/2020.cmlc-1.4.pdf"]} {"year":"2020","title":"GeoVectors: a Linked Open Corpus of OpenStreetMap Embeddings","authors":["N Tempelmeier, S Gottschalk, E Demidova - 2020"],"snippet":"… As most of the OSM keys are in English, we chose the 300-dimensional English word vectors trained on the Common Crawl and Wikipedia [5]. Encoding: To encode an OSM entity o, we utilise the individual word em …","url":["https://openreview.net/pdf?id=EibPtOjZUn"]} {"year":"2020","title":"German's Next Language Model","authors":["B Chan, S Schweter, T Möller - arXiv preprint arXiv:2010.10906, 2020"],"snippet":"… The XLM-RoBERTa model is trained on 2.5TB of data from a cleaned Common Crawl corpus (Wenzek et al., 2020) for 100 different languages … OSCAR (Ortiz Suárez et al., 2019) is a set of monolingual corpora extracted from Common Crawl …","url":["https://arxiv.org/pdf/2010.10906"]} {"year":"2020","title":"Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks","authors":["AROKM Downey, A Rumshisky"],"snippet":"… Similarity-based approaches. The first baseline represents a given text, a question, and each of the choices as the average of 300-dimensional CommonCrawl FastText word embeddings (Bojanowski et al. 2016) of its constituent words …","url":["https://www.aaai.org/Papers/AAAI/2020GB/AAAI-RogersA.7778.pdf"]} {"year":"2020","title":"Getting Passive Aggressive About False Positives: Patching Deployed Malware Detectors","authors":["E Raff, B Filar, J Holt - arXiv preprint arXiv:2010.12080, 2020"],"snippet":"… We base our testing and results on a representative sample of industry data. Our corpus consisted of 1,101,407 Microsoft Office documents that contained macros. Similar to [33] data was collected from Common Crawl (CC) [34] and VirusTotal [35] …","url":["https://arxiv.org/pdf/2010.12080"]} {"year":"2020","title":"Getting Structured Data from the Internet","authors":["BDP Scale, JM Patel"],"snippet":"… WARC file format ..... 278 Common crawl index ..... 282 … 331 Processing parquet files for a common crawl index ..... 334 …","url":["https://link.springer.com/content/pdf/10.1007/978-1-4842-6576-5.pdf"]} {"year":"2020","title":"Give your Text Representation Models some Love: the Case for Basque","authors":["R Agerri, IS Vicente, JA Campos, A Barrena, X Saralegi… - arXiv preprint arXiv …, 2020"],"snippet":"… Common Crawl word vectors (FastText-officialcommon-crawl) were trained on Common Crawl and Wikipedia using CBOW with position-weights … train our systems to perform the following comparisons: (i) FastText official models …","url":["https://arxiv.org/pdf/2004.00033"]} {"year":"2020","title":"GLEAKE: Global and Local Embedding Automatic Keyphrase Extraction","authors":["JR Asl, JM Banda - arXiv preprint arXiv:2005.09740, 2020"],"snippet":"… doc2vec_news_dbow AP News glove.6B Wikipedia + Gigaword GloVe 50-300 [28] glove.twitter.27B Twitter 25-200 glove.840B Common Crawl 300 TABLE 1 DIFFERENT PRE-TRAINED EMBEDDINGS USED BY GLEAKE Page 5. 5 …","url":["https://arxiv.org/pdf/2005.09740"]} {"year":"2020","title":"Global Under-Resourced MEdia Translation (GoURMET)","authors":["MAAS BBC, JW BBC, B Haddow, AM Barone…"],"snippet":"Page 1.
GoURMET H2020–825299 D5.3 Initial Integration Report Global Under-Resourced MEdia Translation (GoURMET) H2020 Research and Innovation Action Number: 825299 D5.3 – Initial Integration Report Nature Report Work Package WP5 …","url":["https://gourmet-project.eu/wp-content/uploads/2020/07/GoURMET_D5_3_Initial_Integration_Report.pdf"]} {"year":"2020","title":"Going Back in Time to Find What Existed on the Web and How much has been Preserved: How much of Palestinian Web has been Archived?","authors":["T Sammar, H Khalilia - مؤتمرات الآداب والعلوم الانسانية والطبيعية, 2020"],"snippet":"… References 1. Common crawl url index. url (http://index.commoncrawl.org/). 2. International internet preservation consortium (iipc). http://www.netpreserve.org. 3. Internet archive (https://archive.org/). 4. Internet archive wayback machine. url (https://archive.org/web/) …","url":["http://proceedings.sriweb.org/akn/index.php/art/article/viewFile/410/466"]} {"year":"2020","title":"Goku's Participation in WAT 2020","authors":["D Wang, O Htun - Proceedings of the 7th Workshop on Asian Translation, 2020"],"snippet":"… Secondly, we fine-tuned on the JPO patent corpus using the mBART auto-encoder model (Liu et al., 2020), which has been pre-trained on large-scale monolingual CommonCrawl (CC) corpus in 25 languages using the BART objective (Lewis et al., 2020) …","url":["https://www.aclweb.org/anthology/2020.wat-1.16.pdf"]} {"year":"2020","title":"GoodReads Book Recommendation Service","authors":["Y Tian, V Bai, Z Doganata"],"snippet":"… Common Crawl: https://commoncrawl.org/ Models: ● Howard, Jeremy and Ruder, Sebastian. \"Universal Language Model Fine-tuning for Text Classification.\" Paper presented at the meeting of the ACL, 2018. ● Conneau …","url":["http://tianyijun.com/files/GoodReads_Recommendation.pdf"]} {"year":"2020","title":"GottBERT: a pure German Language Model","authors":["R Scheible, F Thomczyk, P Tippmann, V Jaravine… - arXiv preprint arXiv …, 2020"],"snippet":"… dbmdz BERT used as source data a German Wikipedia dump, EU Bookshop corpus, Open Subtitles, CommonCrawl, ParaCrawl and News Crawl which … than mBERT, the multilingual XLM-RoBERTa (Conneau et al., 2019) was …","url":["https://arxiv.org/pdf/2012.02110"]} {"year":"2020","title":"GPT-3 AI language tool calls for cautious optimism","authors":["Oxford Analytica - Emerald Expert Briefings"],"snippet":"… The training process leveraged this by exposing GPT-3 to historical sweeps of the internet, known as 'crawls'. One substantial component was the Common Crawl dataset, a multilingual capture of almost 1 trillion words …","url":["https://www.emerald.com/insight/content/doi/10.1108/OXAN-DB256373/full/html"]} {"year":"2020","title":"GPT-3 Creative Fiction","authors":["G Branwen - 2020"],"snippet":"… For GPT-2, I saw finetuning as doing 2 things: Fixing ignorance: missing domain knowledge. GPT-2 didn't know many things about most things—it was just a handful (1.5 billion) of parameters trained briefly on …","url":["https://www.gwern.net/GPT-3"]} {"year":"2020","title":"Grammatical Error Correction in Low Error Density Domains: A New Benchmark and Analyses","authors":["S Flachs, O Lacroix, H Yannakoudakis, M Rei… - arXiv preprint arXiv …, 2020"],"snippet":"… grammatical errors. The source texts are randomly selected from the first 18 dumps of the CommonCrawl4 dataset and represent a wide range of data seen online such as blogs, magazines, corporate or educational websites.
These …","url":["https://arxiv.org/pdf/2010.07574"]} {"year":"2020","title":"Graph Attention Network with Memory Fusion for Aspect-level Sentiment Analysis","authors":["L Yuan, J Wang, LC Yu, X Zhang - Proceedings of the 1st Conference of the Asia …, 2020"],"snippet":"Page 1. Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 27–36 December 4 - 7, 2020 …","url":["https://www.aclweb.org/anthology/2020.aacl-main.4.pdf"]} {"year":"2020","title":"Graph Policy Network for Transferable Active Learning on Graphs","authors":["S Hu, Z Xiong, M Qu, X Yuan, MA Côté, Z Liu, J Tang - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Graph Policy Network for Transferable Active Learning on Graphs Shengding Hu Tsinghua University hsd16@mails.tsinghua.edu.cn Zheng Xiong Tsinghua University harryfootball@163.com Meng Qu MILA meng.qu@umontreal.ca …","url":["https://arxiv.org/pdf/2006.13463"]} {"year":"2020","title":"Graphical User Interface Auto-Completion with Element Constraints","authors":["L Brückner - 2020"],"snippet":"Page 1. Aalto University School of Science Master's Programme in ICT Innovation Lukas Brückner Graphical User Interface Auto-Completion with Element Constraints Master's Thesis Espoo, September 25, 2020 Supervisor: Prof …","url":["https://aaltodoc.aalto.fi/bitstream/handle/123456789/47385/master_Br%C3%BCckner_Lukas_2020.pdf?sequence=1"]} {"year":"2020","title":"GraphWalker: An I/O-Efficient and Resource-Friendly Graph Analytic System for Fast and Scalable Random Walks","authors":["R Wang, Y Li, H Xie, Y Xu, JCS Lui"],"snippet":"Page 1. GraphWalker: An I/O-Efficient and Resource-Friendly Graph Analytic System for Fast and Scalable Random Walks Rui Wang1, Yongkun Li1, Hong Xie2, Yinlong Xu1, John CS Lui3 1University of Science and Technology …","url":["https://www.cse.cuhk.edu.hk/~cslui/PUBLICATION/ATC2020.pdf"]} {"year":"2020","title":"GREEK-BERT: The Greeks visiting Sesame Street","authors":["J Koutsikakis, I Chalkidis, P Malakasiotis… - arXiv preprint arXiv …, 2020"],"snippet":"… and (c) the Greek part of OSCAR [25], a clean version of Common Crawl.5 Accents and other diacritics were removed, and all words were … 5https://commoncrawl.org 6https://github.com/google-research/bert 7The …","url":["https://arxiv.org/pdf/2008.12014"]} {"year":"2020","title":"Grounded Compositional Outputs for Adaptive Language Modeling","authors":["N Pappas, P Mulcaire, NA Smith - arXiv preprint arXiv:2009.11523, 2020"],"snippet":"Page 1. Grounded Compositional Outputs for Adaptive Language Modeling Nikolaos Pappas♣ Phoebe Mulcaire♣ Noah A. Smith♣♦ ♣Paul G. Allen School of Computer Science & Engineering, University of Washington ♦Allen …","url":["https://arxiv.org/pdf/2009.11523"]} {"year":"2020","title":"Guided Generation of Cause and Effect","authors":["Z Li, X Ding, T Liu, JE Hu, B Van Durme"],"snippet":"… 3629 Page 2. Processed Common Crawl Corpus Causal Patterns Based Matching and Filtering … Thus we harvest a large causal dataset from the preprocessed large-scale English Common Crawl corpus (5.14 TB) [Buck et al., 2014] …","url":["https://www.ijcai.org/Proceedings/2020/0502.pdf"]} {"year":"2020","title":"Harbsafe-162. A Domain-Specific Data Set for the Intrinsic Evaluation of Semantic Representations for Terminological Data","authors":["S Arndt, D Schnäpp - arXiv preprint arXiv:2005.14576, 2020"],"snippet":"Page 1. 
Harbsafe-162 – A Domain-Specific Data Set for the Intrinsic Evaluation of Semantic Representations for Terminological Data Susanne Arndt, MA∗ Technische Universität Braunschweig Dieter Schnäpp, MA∗∗ Technische Universität Braunschweig …","url":["https://arxiv.org/pdf/2005.14576"]} {"year":"2020","title":"Hard-Coded Gaussian Attention for Neural Machine Translation","authors":["W You, S Sun, M Iyyer - arXiv preprint arXiv:2005.00742, 2020"],"snippet":"… 10As the full WMT14 En→Fr is too large for us to feasibly train on, we instead follow Akoury et al. (2019) and train on just the Europarl / Common Crawl subset, while evaluating using the full dev/test sets. 11https://github.com/dojoteef/synst Page 5 …","url":["https://arxiv.org/pdf/2005.00742"]} {"year":"2020","title":"Harnessing Multilinguality in Unsupervised Machine Translation for Rare Languages","authors":["X Garcia, A Siddhant, O Firat, AP Parikh - arXiv preprint arXiv:2009.11201, 2020"],"snippet":"… In contrast, for an actual low-resource language, Gujarati, WMT only provides 500 thousand lines of monolingual data (in news domain) and an additional 3.7 million lines of monolingual data from Common Crawl (noisy, general-domain) …","url":["https://arxiv.org/pdf/2009.11201"]} {"year":"2020","title":"HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection","authors":["B Mathew, P Saha, SM Yimam, C Biemann, P Goyal… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection* Binny Mathew1†, Punyajoy Saha1†, Seid Muhie Yimam2 Chris Biemann2, Pawan Goyal1, Animesh Mukherjee1 1 Indian Institute of Technology …","url":["https://arxiv.org/pdf/2012.10289"]} {"year":"2020","title":"HCA: Hierarchical Compare Aggregate model for question retrieval in community question answering","authors":["MS Zahedi, M Rahgozar, RA Zoroofi - Information Processing & Management, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S030645732030813X"]} {"year":"2020","title":"Hidden in Plain Sight: Building a Global Sustainable Development Data Catalogue","authors":["J Hodson, A Spezzatti - ICT Analysis and Applications, 2020"],"snippet":"… We iteratively re-train our model using 300-dimensional word-embedding features trained on the CommonCrawl web-scale data set 7 with the GLobal VEctors for Word Representation (GloVe) procedure (see [7]). Table 2 shows …","url":["https://link.springer.com/content/pdf/10.1007/978-981-15-8354-4.pdf#page=795"]} {"year":"2020","title":"Hierarchical models vs. transfer learning for document-level sentiment classification","authors":["J Barnes, V Ravishankar, L Øvrelid, E Velldal - arXiv preprint arXiv:2002.08131, 2020"],"snippet":"… Universal Language Model Fine-Tuning (ULMFIT): We use the AWD-LSTM architecture (Merity et al., 2018) and pretrain on Wikipedia data (or Common Crawl in the case of Norwegian) taken from the CONLL 2017 shared task (Zeman et al., 2017) …","url":["https://arxiv.org/pdf/2002.08131"]} {"year":"2020","title":"Hierarchical Multimodal Attention for End-to-End Audio-Visual Scene-Aware Dialogue Response Generation","authors":["H Le, D Sahoo, NF Chen, SCH Hoi - Computer Speech & Language, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0885230820300280"]}
{"year":"2020","title":"High Accuracy Phishing Detection Based on Convolutional Neural Networks","authors":["SY Yerima, MK Alzaylaee - 2020"],"snippet":"… Their approach first encodes the URL strings using one-hot encoding and then inputs each encoded character vector into the LSTM neurons for training and testing. Their method achieved an accuracy of 0.935 on the Common …","url":["https://dora.dmu.ac.uk/bitstream/handle/2086/19450/HAPD-CNN-paper-accepted-version.pdf?sequence=1&isAllowed=y"]} {"year":"2020","title":"HitAnomaly: Hierarchical Transformers for Anomaly Detection in System Log","authors":["S Huang, Y Liu, C Fung, R He, Y Zhao, H Yang, Z Luan - IEEE Transactions on …, 2020"],"snippet":"… matrix. Finally, we obtain the word vector of 'terminating' as [16, 2, 11]. LogRobust [10] leverages off-the-shelf word vectors, which were pre-trained on Common Crawl Corpus dataset using the Fast-Text [20] algorithm. We initialize …","url":["https://ieeexplore.ieee.org/abstract/document/9244088/"]} {"year":"2020","title":"How “BERTology” Changed the State-of-the-Art also for Italian NLP","authors":["F Tamburini - Proceedings of the Seventh Italian Conference on …, 2020"],"snippet":"… Page 2. billions of tokens. Also for GilBERTo it is available only the uncased model. • UmBERTo4: the more recent model developed explicitly for Italian, as far as we know, is UmBERTo ('Musixmatch/umberto-commoncrawl-cased-v1' – umC) …","url":["http://ceur-ws.org/Vol-2769/paper_79.pdf"]} {"year":"2020","title":"How Furiously Can Colourless Green Ideas Sleep? Sentence Acceptability in Context","authors":["JH Lau, CS Armendariz, S Lappin, M Purver, C Shu - arXiv preprint arXiv:2004.00881, 2020"],"snippet":"… BERTUCS Transformer Bidir. 340M Uncased 13GB WordPiece Wikipedia, BookCorpus XLNET Transformer Hybrid 340M Cased 126GB SentencePiece Wikipedia, BookCorpus, Giga5, ClueWeb, Common Crawl Table 1: Language models and their configurations …","url":["https://arxiv.org/pdf/2004.00881"]} {"year":"2020","title":"How Human is Machine Translationese? Comparing Human and Machine Translations of Text and Speech","authors":["J van Genabith, E Teich","Y Bizzoni, TS Juzek, C España-Bonet, KD Chowdhury… - Proceedings of the 17th …, 2020"],"snippet":"… German translation and interpreting are both from English. lines de tokens en tokens Ct Cs CommonCrawl 2,212,292 49,870,179 54,140,396 MultiUN 108,387 4,494,608 4,924,596 NewsCommentary 324,388 8,316,081 46,222,416 …","url":["http://www.sfb1102.uni-saarland.de/wp/wp-content/uploads/2020/06/IWSLT-b1-B7-final2020.pdf","https://www.aclweb.org/anthology/2020.iwslt-1.34.pdf"]} {"year":"2020","title":"How Language Shapes Prejudice Against Women: An Examination Across 45 World Languages","authors":["D DeFranza, H Mishra, A Mishra - 2020"],"snippet":"… context in which it occurs. Using text data from Wikipedia and the Common Crawl project … discussing gender issues. Wikipedia and a corpus of web crawl data from over five billion web pages, known as the Common Crawl, serve as our data source …","url":["https://psyarxiv.com/mrbcf/download?format=pdf"]} {"year":"2020","title":"How Many Pages?
Paper Length Prediction from the Metadata","authors":["E Çano, O Bojar - arXiv preprint arXiv:2010.15924, 2020"],"snippet":"… We used static word embeddings of 300 dimensions from three sources: the 6 billion tokens collection of Common Crawl4 trained with Glove [23], the 840 billion tokens collection of Common Crawl trained with Glove, and …","url":["https://arxiv.org/pdf/2010.15924"]} {"year":"2020","title":"How Much Self-attention Do We Need? Trading Attention for Feed-forward Layers","authors":["K Irie, A Gerstenberger, R Schlüter, H Ney - ICASSP, Barcelona, Spain, 2020"],"snippet":"… to 6 (in contrast to what we typically observe eg, on LibriSpeech [28]): this is in fact due to some overlap between the common crawl training subset … Then we fine-tune the model on the TED-LIUM 2 transcriptions (2 M words) …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1126/Irie-ICASSP-2020.pdf"]} {"year":"2020","title":"How Should Markup Tags Be Translated?","authors":["G Hanneman, G Dinu, AI Amazon"],"snippet":"… especially large or noisy data sets. For EN–DE, we begin with the training data released by the WMT 2020 news task, ignoring the Common Crawl and Paracrawl corpora and heavily filtering WikiMatrix. Our EN–FR training data …","url":["https://assets.amazon.science/fa/f2/640de7fd483a8c385db7a0b5c7cd/how-should-markup-tags-be-translated.pdf"]} {"year":"2020","title":"Human-in-the-Loop AI for Analysis of Free Response Facial Expression Label Sets","authors":["C Butler, H Oster, J Togelius - Proceedings of the 20th ACM International Conference …, 2020"],"snippet":"… 1. GloVe, 300-dimensional vectors trained on Common Crawl [33]: distributional model that learns word vectors by examining word co-occurrences within a text corpus with logbilinear regression, global matrix factorization and local context window methods …","url":["https://dl.acm.org/doi/abs/10.1145/3383652.3423892"]} {"year":"2020","title":"Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning","authors":["R Schuster, T Schuster, Y Meri, V Shmatikov - arXiv preprint arXiv:2001.04935, 2020"],"snippet":"Page 1. Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning* Roei Schuster Tel Aviv University † roeischuster@mail.tau.ac.il Tal Schuster CSAIL, MIT tals@csail.mit.edu Yoav Meri † 111yoav@gmail.com Vitaly …","url":["https://arxiv.org/pdf/2001.04935"]} {"year":"2020","title":"Hungarian layer: A novel interpretable neural layer for paraphrase identification","authors":["H Xiao - Neural Networks, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0893608020302653"]} {"year":"2020","title":"HW-TSC's Participation in the WMT 2020 News Translation Shared Task","authors":["D Wei, H Shang, Z Wu, Z Yu, L Li, J Guo, M Wang…"],"snippet":"… monolingual text from Common Crawl and news crawl 2018 for Km and En, respectively.
2.1.3 Ps/En Similar to Km/En, we also use the Para Crawl v5.1 (1M), Khmer and Pashto parallel data (0.03M) as bitext and select …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.31.pdf"]} {"year":"2020","title":"Hybrid Feature Model for Emotion Recognition in Arabic Text","authors":["N Alswaidan, MEB Menai - IEEE Access, 2020"],"snippet":"… 5https://github.com/minimaxir/char-embeddings 6https://unicode.org/emoji/charts/full-emoji-list.html • FastText [22]: 300-dimensional word vectors trained on Common Crawl7 using CBOW with position weights … 7https://commoncrawl …","url":["https://ieeexplore.ieee.org/iel7/6287639/8948470/09007420.pdf"]} {"year":"2020","title":"HyCoNN: Hybrid Cooperative Neural Networks for Personalized News Discussion Recommendation","authors":["J Risch, V Künstler, R Krestel"],"snippet":"… For DeepCoNN and also HyCoNN, we use 300-dimensional fastText word embeddings, which were pre-trained on the English-language Common Crawl dataset [25]. The resulting embeddings function as input to a convolutional layer that consists of n neurons …","url":["https://hpi.de/fileadmin/user_upload/fachgebiete/naumann/people/risch/risch2020hyconn.pdf"]} {"year":"2020","title":"Identifying Cognates in English-Dutch and French-Dutch by means of Orthographic Information and Cross-lingual Word Embeddings","authors":["E Lefever, S Labat, P Singh - the 12th Conference on Language Resources and …, 2020"],"snippet":"… The former approach was improved in the following way. Firstly, standard fastText word embeddings, which were pretrained on Common Crawl and Wikipedia and generated with the standard skip-gram model as proposed by Bojanowski et al …","url":["https://biblio.ugent.be/publication/8662200/file/8662201"]} {"year":"2020","title":"Identifying Phished Website Using Multilayer Perceptron","authors":["A Dev, V Jain - Advances in Distributed Computing and Machine …"],"snippet":"… Phishing Webpage Source: PhishTank, OpenPhish. Legitimate Webpage Source: Alexa, Common Crawl. The main process in the phishing webpage is to work on its features and how effectively it is handling the dataset. Each …","url":["https://link.springer.com/chapter/10.1007/978-981-15-4218-3_37"]} {"year":"2020","title":"Identifying Sensitive URLs at Web-Scale","authors":["M Srdjan, I Costas, S Georgios, N Laoutaris - 2020","S Matic, C Iordanou, G Smaragdakis, N Laoutaris - studies"],"snippet":"… We then use our classifier to search for sensitive URLs in a corpus of 1 Billion URLs collected by the Common Crawl project. We identify more than 155 millions sensitive URLs in more than 4 million domains … Automated …","url":["http://eprints.networks.imdea.org/2187/1/imc20.pdf","http://laoutaris.info/wp-content/uploads/2020/09/imc2020.pdf"]} {"year":"2020","title":"Identifying Tasks from Mobile App Usage Patterns","authors":["Y Tian, K Zhou, M Lalmas, D Pelleg - Proceedings of the 43rd International ACM …, 2020"],"snippet":"Page 1. Identifying Tasks from Mobile App Usage Patterns Yuan Tian University of Nottingham Nottingham, UK yuan.tian@nottingham.ac.uk Ke Zhou University of Nottingham Nottingham, UK ke.zhou@nottingham.ac …","url":["https://dl.acm.org/doi/abs/10.1145/3397271.3401441"]} {"year":"2020","title":"Igbo-English Machine Translation: An Evaluation Benchmark","authors":["I Ezeani, P Rayson, I Onyenwe, C Uchechukwu… - arXiv preprint arXiv …, 2020"],"snippet":"… Page 2. Published as a conference paper at ICLR 2020 texts (eg Wikipedia, CommonCrawl, local government materials, local TV/Radio stations etc).
Phase 2: Translation and correction In this phase, the 10,000 sentence pairs …","url":["https://arxiv.org/pdf/2004.00648"]} {"year":"2020","title":"IIU: Specialized Architecture for Inverted Index Search","authors":["J Heo, J Won, Y Lee, S Bharuka, J Jang, TJ Ham…"],"snippet":"Page 1. IIU: Specialized Architecture for Inverted Index Search Jun Heo∗ Jaeyeon Won∗ Yejin Lee∗ Shivam Bharuka†§ Jaeyoung Jang‡ Tae Jun Ham∗ Jae W. Lee∗ ∗Seoul National University, †Facebook, Inc., ‡Sungkyunkwan University …","url":["https://www.cs.princeton.edu/~tae/iiu_asplos2020.pdf"]} {"year":"2020","title":"Imitation Attacks and Defenses for Black-box Machine Translation Systems","authors":["E Wallace, M Stern, D Song - arXiv preprint arXiv:2004.15015, 2020"],"snippet":"… For English→German, we query the source side of the WMT14 training set (≈ 4.5M sentences).3 For Nepali→English, we query the Nepali Language Wikipedia (≈ 100,000 sentences) and approximately two million sentences from Nepali common crawl …","url":["https://arxiv.org/pdf/2004.15015"]} {"year":"2020","title":"Impact of News on the Commodity Market: Dataset and Results","authors":["A Sinha, T Khandait - arXiv preprint arXiv:2009.04202, 2020"],"snippet":"… The GloVe pre-trained word-embeddings are known to capture the meaning of a word through a high dimensional vector [22]. For this research, we used the 300-dimensional vectors which were trained on 840 billion tokens through the common crawl …","url":["https://arxiv.org/pdf/2009.04202"]} {"year":"2020","title":"Impact of sentence length on the readability of web for screen reader users","authors":["BB Kadayat, E Eika - International Conference on Human-Computer …, 2020"],"snippet":"… They used MapReduce for real-time calculation of the readability of more than a billion webpages. The datasets called Common Crawl included 61 million domain-names, 92 million PDF documents, and seven million Word documents …","url":["https://link.springer.com/chapter/10.1007/978-3-030-49282-3_18"]} {"year":"2020","title":"Improved method of word embedding for efficient analysis of human sentiments","authors":["S Sagnika, BSP Mishra, SK Meher - Multimedia Tools and Applications, 2020"],"snippet":"… Designed by Pennington [23] in 2014, it creates a word vector space by training on word-word co-occurrence counts. The models are trained on Wikipedia dumps, Gigaword 5 and Common Crawl texts, and apply …","url":["https://link.springer.com/article/10.1007/s11042-020-09632-9"]} {"year":"2020","title":"Improving Indonesian Text Classification Using Multilingual Language Model","authors":["IF Putra, A Purwarianti - arXiv preprint arXiv:2009.05713, 2020"],"snippet":"… The XLM-R Large also has substantially more parameters and was trained on a larger balanced dataset from the CommonCrawl corpus that contains 100 languages. The XLM-R variant that we use in the experiment is …","url":["https://arxiv.org/pdf/2009.05713"]} {"year":"2020","title":"IMPROVING KNOWLEDGE ACCESSIBILITY ON THE WEB","authors":["R Yu"],"snippet":"Page 1. 
IMPROVING KNOWLEDGE ACCESSIBILITY ON THE WEB – from Knowledge Base Augmentation to Search as Learning Inaugural dissertation for the attainment of the title of doctor in the Faculty of Mathematics and …","url":["https://docserv.uni-duesseldorf.de/servlets/DerivateServlet/Derivate-56199"]} {"year":"2020","title":"Improving Low Compute Language Modeling with In-Domain Embedding Initialisation","authors":["C Welch, R Mihalcea, JK Kummerfeld - arXiv preprint arXiv:2009.14109, 2020"],"snippet":"… We see that the value of additional data depends on the domain. Gigaword is also news text and is able to improve performance. The larger GloVe datasets use Wikipedia and CommonCrawl data, which is a poorer match and so does not improve performance …","url":["https://arxiv.org/pdf/2009.14109"]} {"year":"2020","title":"Improving Network Security through Collaborative Sharing","authors":["CS Ardi - 2020"],"snippet":"Page 1. IMPROVING NETWORK SECURITY THROUGH COLLABORATIVE SHARING by Calvin Satiawan Ardi A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL UNIVERSITY OF SOUTHERN CALIFORNIA …","url":["http://search.proquest.com/openview/9de70dfceaa27a03d073c17f5f071579/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2020","title":"Improving Personal Health Mention Detection on Twitter Using Permutation Based Word Representation Learning","authors":["PI Khan, I Razzak, A Dengel, S Ahmed - International Conference on Neural …, 2020"],"snippet":"… In: Invited Talk at the SIGIR 2012 Workshop on Open-Source Information Retrieval (2012). 24. Common Crawl: Common crawl corpus (2019). http://commoncrawl.org. Copyright information. © Springer Nature …","url":["https://link.springer.com/chapter/10.1007/978-3-030-63830-6_65"]} {"year":"2020","title":"Improving Ranking in Document based Search Systems","authors":["RRK Menon, J Kaartik, ETK Nambiar, AK TK, A Kumar - 2020 4th International …, 2020"],"snippet":"… The pre-trained word embedding models used were: 1. Fasttext i. – Wiki News 1M 16B Tokens ii. – Common Crawl 2M 600B Tokens 2. GloVe 2.2M 840B Tokens B. Evaluation Metrics The standard metrics include Precision, Recall, and F1 score …","url":["https://ieeexplore.ieee.org/abstract/document/9143047/"]} {"year":"2020","title":"Increasing Accessibility of Electronic Theses and Dissertations (ETDs) Through Chapter-level Classification","authors":["PM Jude - 2020"],"snippet":"Page 1. Increasing Accessibility of Electronic Theses and Dissertations (ETDs) Through Chapter-level Classification Palakh Mignonne Jude Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University …","url":["https://vtechworks.lib.vt.edu/bitstream/handle/10919/99294/Jude_P_T_2020.pdf?sequence=1"]} {"year":"2020","title":"INDEXING OF BIG TEXT DATA AND SEARCHING IN THE INDEXED DATA","authors":["BD KOZÁK"],"snippet":"… enhanced documents. Input data In [14, 8], the input data was CommonCrawl and Wikipedia. The English wikipedia … 8 Page 15. CommonCrawl3 is a project that maintains an open repository of web crawl data.
For the practical part of …","url":["https://dspace.vutbr.cz/bitstream/handle/11012/192492/final-thesis.pdf?sequence=3"]} {"year":"2020","title":"Indic-Transformers: An Analysis of Transformer Language Models for Indian Languages","authors":["K Jain, A Deshpande, K Shridhar, F Laumann, A Dash - arXiv preprint arXiv …, 2020"],"snippet":"… The Open Super-large Crawled ALMAnaCH coRpus (OSCAR) dataset [48] is a filtered version of the CommonCrawl dataset and has monolingual corpora for 166 languages. Prior to training, we normalize the OSCAR dataset for …","url":["https://arxiv.org/pdf/2011.02323"]} {"year":"2020","title":"IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages","authors":["D Kakwani, A Kunchukuttan, S Golla, A Bhattacharyya…"],"snippet":"… The OSCAR project (Ortiz Suarez et al., 2019), a recent processing of CommonCrawl, also contains much less data for most Indian languages than our crawls. The CCNet and C4 projects also provide tools to …","url":["https://indicnlp.ai4bharat.org/papers/arxiv2020_indicnlp_corpus.pdf"]} {"year":"2020","title":"IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding","authors":["B Wilie, K Vincentio, GI Winata, S Cahyawijaya, X Li… - arXiv preprint arXiv …, 2020"],"snippet":"… Page 5. Dataset # Words # Sentences Size Style Source OSCAR (Ortiz Suárez et al., 2019) 2,279,761,186 148,698,472 14.9 GB mixed OSCAR CoNLLu Common Crawl (Ginter et al., 2017) 905,920,488 77,715,412 6.1 GB mixed LINDAT/CLARIAH-CZ …","url":["https://arxiv.org/pdf/2009.05387"]} {"year":"2020","title":"Inducing Language-Agnostic Multilingual Representations","authors":["W Zhao, S Eger, J Bjerva, I Augenstein - arXiv preprint arXiv:2008.09112, 2020"],"snippet":"… XLM-R Contextualized word embeddings (Conneau et al., 2019) are pre-trained on the CommonCrawl corpora of 100 languages, which contain more monolingual data than Wikipedia corpora, with 1) a vocabulary size …","url":["https://arxiv.org/pdf/2008.09112"]} {"year":"2020","title":"Inductive Learning on Commonsense Knowledge Graph Completion","authors":["B Wang, G Wang, J Huang, J You, J Leskovec… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Inductive Learning on Commonsense Knowledge Graph Completion Bin Wang1, Guangtao Wang2, Jing Huang2, Jiaxuan You3, Jure Leskovec3, C.-C. Jay Kuo1 1 University of Southern California 2 JD AI Research …","url":["https://arxiv.org/pdf/2009.09263"]} {"year":"2020","title":"Inexpensive Domain Adaptation of Pretrained Language Models: A Case Study on Biomedical Named Entity Recognition","authors":["N Poerner, U Waltinger, H Schütze - arXiv preprint arXiv:2004.03354, 2020"],"snippet":"… 1 Introduction Pretrained Language Models such as BERT (Devlin et al., 2019) have spearheaded advances on many NLP tasks. Usually, PTLMs are pretrained on unlabeled general-domain and/or mixed-domain text, such …","url":["https://arxiv.org/pdf/2004.03354"]} {"year":"2020","title":"Inexpensive Domain Adaptation of Pretrained Language Models: Case Studies on Biomedical NER and Covid-19 QA","authors":["N Poerner, U Waltinger, H Schütze"],"snippet":"… Pretrained Language Models (PTLMs) such as BERT (Devlin et al., 2019) have spearheaded advances on many NLP tasks.
Usually, PTLMs are pretrained on unlabeled general-domain and/or mixed-domain text, such …","url":["https://web.iiit.ac.in/~rizwan.ali/papers/828.pdf"]} {"year":"2020","title":"Infosys Machine Translation System for WMT20 Similar Language Translation Task","authors":["K Rathinasamy, A Singh, B Sivasambagupta… - Proceedings of WMT, 2020"],"snippet":"… data. 2.2.2 Synthetic data CommonCrawl n-grams raw monolingual files are processed to remove sentences with invalid characters, strip leading and trailing whitespaces, and remove duplicate sentences. 3 System Overview …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.52.pdf"]} {"year":"2020","title":"iNLTK: Natural Language Toolkit for Indic Languages","authors":["G Arora - arXiv preprint arXiv:2009.12534, 2020"],"snippet":"… iNLTK results were compared against results reported in (Kunchukuttan et al., 2020) for pre-trained embeddings released by the FastText project trained on Wikipedia (FT-W) (Bojanowski et al., 2016), Wiki+CommonCrawl …","url":["https://arxiv.org/pdf/2009.12534"]} {"year":"2020","title":"INSET: Sentence Infilling with INter-SEntential Transformer","authors":["Y Huang, Y Zhang, O Elachqar, Y Cheng - Proceedings of the 58th Annual Meeting of …, 2020"],"snippet":"… For the effectiveness of human evaluation, we use the simplest strategy to mask sentences. The Recipe dataset is obtained from (https://commoncrawl.org), where the metadata is formatted according to Schema.org (https://schema.org/Recipe) …","url":["https://www.aclweb.org/anthology/2020.acl-main.226.pdf"]} {"year":"2020","title":"Integrating Geospatial Data and Social Media in Bidirectional Long-Short Term Memory Models to Capture Human Nature Interactions","authors":["A Larkin, P Hystad - The Computer Journal, 2020"],"snippet":"… language processing [32]. Tweet texts were transformed into word vector arrays using the Stanford GloVe (Global Vectors for Word Representation) Common Crawl dictionary (https://nlp.stanford.edu/projects/glove/). The GloVe …","url":["https://academic.oup.com/comjnl/advance-article-abstract/doi/10.1093/comjnl/bxaa094/5893915"]} {"year":"2020","title":"Intelligent phishing detection scheme using deep learning algorithms","authors":["MA Adebowale, KT Lwin, MA Hossain - Journal of Enterprise Information …, 2020"],"snippet":"… Half of the data set consisted of phishing sites from PhishTank, which is a site that is used as phishing URL depository, and half of the data set was comprised of legitimate sites from Common Crawl, a corpus of web crawl data …","url":["https://www.emerald.com/insight/content/doi/10.1108/JEIM-01-2020-0036/full/html"]} {"year":"2020","title":"Intermediate Training of BERT for Product Matching","authors":["R Peeters, C Bizer, G Glavaš - small"],"snippet":"… uct Corpus for Large-Scale Product Matching [26]. These datasets are derived from schema.org annotations from thousands of webshops extracted from the Common Crawl. Relying on schema.org annotations of product identifiers …","url":["http://data.dws.informatik.uni-mannheim.de/largescaleproductcorpus/data/v2/papers/DI2KG2020_Peeters.pdf"]} {"year":"2020","title":"Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve","authors":["O Agarwal, Y Yang, BC Wallace, A Nenkova - arXiv preprint arXiv:2004.04564, 2020"],"snippet":"… representations are concatenated.
We use 300 dimensional cased GloVe (Pennington et al., 2014) vectors trained on Common Crawl. We use the IO labeling scheme and evaluate the systems via micro-F1, at the token level. We use …","url":["https://arxiv.org/pdf/2004.04564"]} {"year":"2020","title":"Interpretable & Time-Budget-Constrained Contextualization for Re-Ranking","authors":["S Hofstätter, M Zlabinger, A Hanbury - arXiv preprint arXiv:2002.01854, 2020"],"snippet":"… The first section contains the traditional baselines; the second contains the neural re-ranking baselines; in the third section we report the results of our TK model with three … 42B CommonCrawl lower-cased: https://nlp.stanford.edu/projects/glove/ …","url":["https://arxiv.org/pdf/2002.01854"]} {"year":"2020","title":"Introduction to Cloud Computing and Amazon Web Services (AWS)","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… 5 examples. IAM and S3 sections are necessary for Chapters 6 and 7 since we will be using data compiled by a nonprofit called common crawl which is only publicly available on S3 through AWS open registry. You will have …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_3"]} {"year":"2020","title":"Introduction to Common Crawl Datasets","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"The Common Crawl Foundation (https://commoncrawl.org/) is a 501(c)(3) nonprofit involved in providing open access web crawl data going back to over eight years. They perform monthly web crawls which cover over 25 billion pages for each month. This …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_6"]} {"year":"2020","title":"Introduction to Web Scraping","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… We will introduce natural language processing algorithms in Chapter 4, and we will put them into action in Chapters 6 and 7 on a Common Crawl dataset. The next step is loading the cleaned data from the preceding step into an appropriate database …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_1"]} {"year":"2020","title":"Is Everything Fine, Grandma? Acoustic and Linguistic Modeling for Robust Elderly Speech Emotion Recognition","authors":["G Sogancıoglu, O Verkholyak, H Kaya, D Fedotov… - INTERSPEECH, Shanghai …, 2020","G Soğancıoğlu, O Verkholyak, H Kaya, D Fedotov… - arXiv preprint arXiv …, 2020"],"snippet":"… We use pre-trained 100-dimensional English and German word embeddings [23], which are trained on Common Crawl, and finetune the pretrained model on our dataset … https://cloud.google.com/translate http://commoncrawl.org/ … according to its POS …","url":["https://arxiv.org/pdf/2009.03432","https://indico2.conference4me.psnc.pl/event/35/contributions/3140/attachments/1218/1261/Wed-SS-1-4-12.pdf"]} {"year":"2020","title":"Is language modeling enough? Evaluating effective embedding combinations","authors":["R Schneider, T Oberhauser, P Grundmann, FA Gers… - 2020"],"snippet":"… 2.1. Universal Text Embeddings Recently, researchers explore universal text embeddings trained on extensive Web corpora, such as the Common Crawl (Mikolov et al., 2018; Radford et al., 2019), the billion …","url":["https://eprints.soton.ac.uk/438613/1/LREC20_LM_TM_27_1_.pdf"]} {"year":"2020","title":"Is MAP Decoding All You Need?
The Inadequacy of the Mode in Neural Machine Translation","authors":["B Eikema, W Aziz - arXiv preprint arXiv:2005.10283, 2020"],"snippet":"… For English-Nepali we also use a translated version of the Penn Treebank and for English-Sinhala we additionally use Open Subtitles (Lison et al., 2018). We use a filtered crawl of Wikipedia and Common Crawl released in Guzmán et al …","url":["https://arxiv.org/pdf/2005.10283"]} {"year":"2020","title":"Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings","authors":["KG Schmahl, TJ Viering, S Makrodimitris, AN Jahfari… - Proceedings of the Fourth …, 2020"],"snippet":"… These categories have shown significant bias towards male or female words in embeddings from Google News corpora [Mikolov et al., 2013a], Google Books [Jones et al., 2020], as well as a 'Common Crawl' corpus [Caliskan et al., 2017] …","url":["https://www.aclweb.org/anthology/2020.nlpcss-1.11.pdf"]} {"year":"2020","title":"It's the Best Only When It Fits You Most: Finding Related Models for Serving Based on Dynamic Locality Sensitive Hashing","authors":["L Zhou, Z Wang, A Das, J Zou - arXiv preprint arXiv:2010.09474, 2020"],"snippet":"… BAIR CORD-19 LSUN Bedroom iNaturalist (iNat) 2017 ImageNet OpenImagesV4 Wikipedia 1 Billion Word Benchmark CommonCrawl Multilingual Wikipedia Natural Questions … CelebA HQ iMet Collection 2019 …","url":["https://arxiv.org/pdf/2010.09474"]} {"year":"2020","title":"Italian Transformers Under the Linguistic Lens","authors":["A Miaschi, G Sarti, D Brunato, F Dell'Orletta… - Proceedings of the Seventh …, 2020"],"snippet":"… For instance, we can notice that, for both the probing models, features related to the distribution of syntactic relations (SyntacticDep) are better predicted by GePpeTto, while GilBERTo and UmBERTo-Commoncrawl are the best …","url":["http://ceur-ws.org/Vol-2769/paper_56.pdf"]} {"year":"2020","title":"JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation","authors":["Z Mao, F Cromieres, R Dabre, H Song, S Kurohashi - arXiv preprint arXiv:2005.03361, 2020"],"snippet":"… Mono Ja Common Crawl 22M En News Crawl 22M Ru News Crawl 22M … 5.1.2.
Monolingual data We use monolingual data containing 22M Japanese, 22M English and 22M Russian sentences randomly sub-sampled from Common Crawl dataset and News crawl dataset …","url":["https://arxiv.org/pdf/2005.03361"]} {"year":"2020","title":"Joint Multiclass Debiasing of Word Embeddings","authors":["R Popović, F Lemmerich, M Strohmaier - arXiv preprint arXiv:2003.11520, 2020"],"snippet":"… As in previous studies [7], evaluation was done on three pretrained Word Embedding models with vector dimension of 300: FastText (English webcrawl and Wikipedia, 2 million words), GloVe (Common Crawl, Wikipedia …","url":["https://arxiv.org/pdf/2003.11520"]} {"year":"2020","title":"Joint translation and unit conversion for end-to-end localization","authors":["G Dinu, P Mathur, M Federico, S Lauly, Y Al-Onaizan - arXiv preprint arXiv …, 2020","G Dinu, P Mathur, M Federico, S Lauly, Y Al-Onaizan - AWS Amazon"],"snippet":"… Europarl (Koehn, 2005) and news commentary data from WMT En→De shared task 2019 totalling 2.2 million sentences. Standard translation test sets do not have, however, enough examples of unit conversions and in fact corpora …","url":["https://arxiv.org/pdf/2004.05219","https://assets.amazon.science/b2/a7/e1ada6104b3587401b30ccc8637a/joint-translation-and-unit-conversion-for-end-to-end-localization.pdf"]} {"year":"2020","title":"KBPearl: a knowledge base population system supported by joint entity and relation linking","authors":["X Lin, H Li, H Xin, Z Li, L Chen - Proceedings of the VLDB Endowment, 2020"],"snippet":"KBPearl: A Knowledge Base Population System Supported by Joint Entity and Relation Linking Xueling Lin, Haoyang Li, Hao Xin, Zijian Li, Lei Chen Department of Computer Science and Engineering The Hong Kong …","url":["https://dl.acm.org/doi/pdf/10.14778/3384345.3384352"]} {"year":"2020","title":"Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation","authors":["W Zhang, X Li, Y Yang, R Dong, G Luo - Future Internet, 2020"],"snippet":"Recently, the pretraining of models has been successfully applied to unsupervised and semi-supervised neural machine translation. A cross-lingual language model uses a pretrained masked language model to initialize the …","url":["https://www.mdpi.com/1999-5903/12/12/215/pdf"]} {"year":"2020","title":"Kernel compositional embedding and its application in linguistic structured data classification","authors":["H Ganji, MM Ebadzadeh, S Khadivi - Knowledge-Based Systems, 2020"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0950705120300460"]} {"year":"2020","title":"Key Phrase Classification in Complex Assignments","authors":["M Ravikiran - arXiv preprint arXiv:2003.07019, 2020"],"snippet":"… Corpus and English Wikipedia used in BERT was found to be useful for training. The additional data included Common Crawl News dataset (76 GB), Web text corpus (38 GB) and Stories from Common Crawl (31 GB).
This coupled …","url":["https://arxiv.org/pdf/2003.07019"]} {"year":"2020","title":"Keynote speaker","authors":["M Benjamin"],"snippet":"…","url":["https://asling.org/tc42/"]} {"year":"2020","title":"Keyphrase Extraction as Sequence Labeling Using Contextualized Embeddings","authors":["D Sahrawat, D Mahata, H Zhang, M Kulkarni, A Sharma… - Advances in Information …, 2020"],"snippet":"… We also use 300 dimensional fixed embeddings from Glove [20], Word2Vec [19], and FastText [13] (common-crawl, wiki-news). We also compare the proposed architecture against four popular baselines …","url":["https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148038/"]} {"year":"2020","title":"KGvec2go--Knowledge Graph Embeddings as a Service","authors":["J Portisch, M Hladik, H Paulheim - arXiv preprint arXiv:2003.05809, 2020"],"snippet":"… knowledge graphs. 4.3. WebIsALOD The WebIsA database (Seitner et al., 2016) is a data set which consists of hypernymy relations extracted from the Common Crawl, a downloadable copy of the Web. The extraction was …","url":["https://arxiv.org/pdf/2003.05809"]} {"year":"2020","title":"KIT's IWSLT 2020 SLT Translation System","authors":["NQ Pham, F Schneider, TN Nguyen, TL Ha, TS Nguyen… - Proceedings of the 17th …, 2020"],"snippet":"… Table 2: Text Training Data Dataset Sentences TED Talks (TED) 220K Europarl (EPPS) 2.2M CommonCrawl 2.1M Rapid 1.21M ParaCrawl 25.1M OpenSubtitles 12.6M WikiTitle 423K Back-translated News 26M … 3 Simultaneous Speech Translation …","url":["https://www.aclweb.org/anthology/2020.iwslt-1.4.pdf"]} {"year":"2020","title":"KLEJ: Comprehensive Benchmark for Polish Language Understanding","authors":["P Rybak, R Mroczkowski, J Tracz, I Gawlik - arXiv preprint arXiv:2005.00630, 2020"],"snippet":"… word vectors. To evaluate their impact on KLEJ tasks, we initialize word embeddings with fastText (Bojanowski et al., 2016) trained on Common Crawl and Wikipedia for Polish language (Grave et al., 2018). 4.1.3 ELMo ELMo …","url":["https://arxiv.org/pdf/2005.00630"]} {"year":"2020","title":"KLUMSy@ KIPoS: Experiments on Part-of-Speech Tagging of Spoken Italian","authors":["T Proisl, G Lapesa"],"snippet":"… The PAISÀ corpus of Italian texts from the web (Lyding et al., 2014), the text of the Italian Wikimedia dumps, ie Wiki(pedia|books|news|versity|voyage), as extracted by Wikipedia Extractor, and the Italian subset of OSCAR …","url":["http://ceur-ws.org/Vol-2765/paper140.pdf"]} {"year":"2020","title":"Knowledge Augmented Aspect Category Detection for Aspect-based Sentiment Analysis","authors":["K Martinen - 2019"],"snippet":"MASTER THESIS Knowledge Augmented Aspect Category Detection for Aspect-based Sentiment Analysis Kai Martinen 01.12.2019 University of Hamburg MIN-Faculty Department of Computer Science Language Technologies …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2019-ma-martinen.pdf"]} {"year":"2020","title":"Knowledge Efficient Deep Learning for Natural Language Processing","authors":["H Wang - arXiv preprint arXiv:2008.12878, 2020"],"snippet":"
Knowledge Efficient Deep Learning for Natural Language Processing by Hai Wang A thesis submitted in partial fulfillment for the degree of Doctor of Philosophy in Computer Science at the Toyota Technological Institute …","url":["https://arxiv.org/pdf/2008.12878"]} {"year":"2020","title":"Knowledge Graphs Evolution and Preservation--A Technical Report from ISWS 2019","authors":["N Abbas, K Alghamdi, M Alinam, F Alloatti, G Amaral… - arXiv preprint arXiv …, 2020"],"snippet":"Knowledge Graphs Evolution and Preservation A Technical Report from ISWS 2019 December 23, 2020 Bertinoro, Italy … Authors Main Editors Valentina Anita Carriero …","url":["https://arxiv.org/pdf/2012.11936"]} {"year":"2020","title":"KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding","authors":["J Ham, YJ Choe, K Park, I Choi, H Soh - arXiv preprint arXiv:2004.03289, 2020"],"snippet":"… 20 days). We also use XLM-R (Conneau and Lample, 2019), a publicly available cross-lingual language model that was pre-trained on 2.5TB of Common Crawl corpora in 100 languages including Korean (54GB). Note that …","url":["https://arxiv.org/pdf/2004.03289"]} {"year":"2020","title":"LAMBERT: Layout-Aware language Modeling using BERT for information extraction","authors":["Ł Garncarek, R Powalski, T Stanisławek, B Topolski… - arXiv preprint arXiv …, 2020"],"snippet":"… Dataset pages EDGAR 119 088 RVL-CDIP 90 054 Common Crawl 389 469 cTDaR 782 private 151 074 Total 750 467 Table 1: Sizes of training datasets … Common Crawl PDFs This is a dataset produced by downloading PDF …","url":["https://arxiv.org/pdf/2002.08087"]} {"year":"2020","title":"Language model domain adaptation for automatic speech recognition","authors":["A Prasad, P Motlicek, A Nanchen - 2020"],"snippet":"… By exploring and exploiting various datasets like Common Crawl, Europarl, news and TEDLIUM and by experimenting different techniques in training a model, we achieve the goal of adapting a general purpose LM to a domain like talks …","url":["https://infoscience.epfl.ch/record/275402"]} {"year":"2020","title":"Language Models and Word Sense Disambiguation: An Overview and Analysis","authors":["D Loureiro, K Rezaee, MT Pilehvar… - arXiv preprint arXiv …, 2020"],"snippet":"Language Models and Word Sense Disambiguation: An Overview and Analysis Daniel Loureiro∗ LIAAD - INESC TEC Department of Computer Science - FCUP University of Porto, Portugal Kiamehr Rezaee∗ Department …","url":["https://arxiv.org/pdf/2008.11608"]} {"year":"2020","title":"Language Models are Few-Shot Learners","authors":["TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan… - arXiv preprint arXiv …, 2020"],"snippet":"… of our held-out validation set as an accurate measure of overfitting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity. Details of the first two …","url":["https://arxiv.org/pdf/2005.14165"]} {"year":"2020","title":"Language Models are Open Knowledge Graphs","authors":["C Wang, X Liu, D Song - arXiv preprint arXiv:2010.11967, 2020"],"snippet":"… In fact, these pre-trained LMs automatically acquire factual knowledge from large-scale corpora (eg, BookCorpus (Zhu et al., 2015), Common Crawl (Brown et al., 2020)) via pre-training.
The learned knowledge in pre-trained LMs is the key to the current success …","url":["https://arxiv.org/pdf/2010.11967"]} {"year":"2020","title":"Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling","authors":["S Bhosale, K Yee, S Edunov, M Auli - arXiv preprint arXiv:2011.07164, 2020"],"snippet":"… Big architecture. The model is trained on de-duplicated Romanian CommonCrawl data consisting of 623M sentences or 21.7B words after normalization and tokenization (Conneau et al., 2019; Wenzek et al., 2020). The German …","url":["https://arxiv.org/pdf/2011.07164"]} {"year":"2020","title":"Language-agnostic BERT Sentence Embedding","authors":["F Feng, Y Yang, D Cer, N Arivazhagan, W Wang - arXiv preprint arXiv:2007.01852, 2020"],"snippet":"… The sentences are filtered using a sentence … https://commoncrawl.org/ https://www.wikipedia.org/ Long lines are usually JavaScript or attempts at SEO … Finally, we pretrain on common crawl which is much larger, albeit …","url":["https://arxiv.org/pdf/2007.01852"]} {"year":"2020","title":"Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training","authors":["O Agarwal, H Ge, S Shakeri, R Al-Rfou - arXiv preprint arXiv:2010.12688, 2020"],"snippet":"… We use only one annotator at the moment but are expanding this evaluation to multiple annotators. … such as Wikipedia or common crawl. KGs are a rich source of factual information that can serve as additional succinct information …","url":["https://arxiv.org/pdf/2010.12688"]} {"year":"2020","title":"Large-Scale Analysis of HTTP Response Headers","authors":["C Leyers, J Paytosh, N Worthy - 2020"],"snippet":"… The data come from the Common Crawl's monthly web crawls that collect responses from what we can consider to be the entire internet …","url":["https://digitalcommons.winthrop.edu/source/SOURCE_2020/allpresentationsandperformances/101/"]} {"year":"2020","title":"Latte-Mix: Measuring Sentence Semantic Similarity with Latent Categorical Mixtures","authors":["M Li, H Bai, L Tan, K Xiong, J Lin - arXiv preprint arXiv:2010.11351, 2020"],"snippet":"Latte-Mix: Measuring Sentence Semantic Similarity with Latent Categorical Mixtures Minghan Li*1, 2 , He Bai1, 3 , Luchen Tan1 , Kun Xiong1 , Ming Li1, 3 , Jimmy Lin1, 3 1RSVP.ai, 2University of Toronto, 3David R. Cheriton …","url":["https://arxiv.org/pdf/2010.11351"]} {"year":"2020","title":"LEAPME: Learning-based Property Matching with Embeddings","authors":["D Ayala Hernández, IC Hernández Salmerón… - ArXiv.org, arXiv …, 2020","D Ayala, I Hernández, D Ruiz, E Rahm - arXiv preprint arXiv:2010.01951, 2020"],"snippet":"… To compute embeddings, we use the pre-trained GloVe approach [43], specifically for the uncased Common Crawl corpus that includes 300-dimensional vectors for 1.9 million words, promising a good coverage …","url":["https://arxiv.org/pdf/2010.01951","https://idus.us.es/bitstream/handle/11441/105071/1/LEAPME%20Learning%20based%20Property%20Matching%20with%20Embeddings.pdf?sequence=1"]} {"year":"2020","title":"Learning Accurate Integer Transformer Machine-Translation Models","authors":["E Wu - arXiv preprint arXiv:2001.00926, 2020"],"snippet":"… Tensor2Tensor v1.12 English-to-German translation task (translate_ende_wmt32k_packed).
This dataset has 4.6 million sentence pairs drawn from three WMT18 [Bojar et al., 2018a] parallel corpora: News Commentary V13, Europarl V7, and Common Crawl …","url":["https://arxiv.org/pdf/2001.00926"]} {"year":"2020","title":"Learning and Evaluating Emotion Lexicons for 91 Languages","authors":["S Buechel, S Rücker, U Hahn - arXiv preprint arXiv:2005.05672, 2020"],"snippet":"… We use the fastText embedding models from Grave et al. (2018) trained for 157 languages on the respective WIKIPEDIA and the respective part of COMMONCRAWL. These resources not only greatly facilitate our work …","url":["https://arxiv.org/pdf/2005.05672"]} {"year":"2020","title":"Learning Dynamic Knowledge Graphs to Generalize on Text-Based Games","authors":["A Adhikari, X Yuan, MA Côté, M Zelinka, MA Rondeau… - arXiv preprint arXiv …, 2020"],"snippet":"Learning Dynamic Knowledge Graphs to Generalize on Text-Based Games Ashutosh Adhikari * 1 Xingdi Yuan * 2 Marc-Alexandre Côté * 2 Mikuláš Zelinka 3 Marc-Antoine Rondeau 2 Romain Laroche 2 Pascal Poupart …","url":["https://arxiv.org/pdf/2002.09127"]} {"year":"2020","title":"Learning Engineering Properties with Bag-of-Tricks. For the Automated Evaluation of a Piping Design","authors":["WC Tan, KH Chua, CB Yan, IM Chen - 2020 IEEE 16th International Conference on …, 2020"],"snippet":"… and T. Mikolov, “Learning word vectors for 157 languages,” in Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018), 2018. [28] [Online]. Available: https://commoncrawl.org/","url":["https://ieeexplore.ieee.org/abstract/document/9217001/"]} {"year":"2020","title":"LEARNING FROM MULTIMODAL WEB DATA","authors":["JM Hessel - 2020"],"snippet":"… (2018); Roemmele et al. (2011); De Marneffe et al. (2019); Clark et al. (2019). https://commoncrawl.org/ … model; they achieve high performance on several video understanding tasks (eg, retrieval), and …","url":["https://jmhessel.com/files/2020/phd_thesis.pdf"]} {"year":"2020","title":"Learning Geometric Word Meta-Embeddings","authors":["P Jawanpuria, NTV Dev, A Kunchukuttan, B Mishra - arXiv preprint arXiv:2004.09219, 2020"],"snippet":"… GloVe (Pennington et al., 2014): has 1 917 494 word embeddings trained on 42B tokens of web data from the common crawl. • fastText (Bojanowski et al., 2017): has 2 000 000 word embeddings trained on common crawl …","url":["https://arxiv.org/pdf/2004.09219"]} {"year":"2020","title":"Learning hierarchical relationships for object-goal navigation","authors":["Y Qiu, A Pal, HI Christensen"],"snippet":"Learning hierarchical relationships for object-goal navigation Yiding Qiu ∗ UC San Diego yiqiu@eng.ucsd.edu Anwesan Pal ∗ UC San Diego a2pal@eng.ucsd.edu Henrik I.
Christensen UC San Diego hichristensen@eng.ucsd.edu …","url":["https://www.researchgate.net/profile/Anwesan_Pal/publication/346061932_Learning_hierarchical_relationships_for_object-goal_navigation/links/5fb9b05fa6fdcc6cc659d1b2/Learning-hierarchical-relationships-for-object-goal-navigation.pdf"]} {"year":"2020","title":"Learning to Evaluate Translation Beyond English: BLEURT Submissions to the WMT Metrics 2020 Shared Task","authors":["T Sellam, A Pu, HW Chung, S Gehrmann, Q Tan… - arXiv preprint arXiv …, 2020"],"snippet":"… Details of MBERT-WMT pre-training We trained MBERT-WMT model with an MLM loss (Devlin et al., 2019), using a combination of public datasets: Wikipedia, the WMT 2019 News Crawl (Barrault et al.), the C4 variant of Com …","url":["https://arxiv.org/pdf/2010.04297"]} {"year":"2020","title":"Learning to Segment Actions from Observation and Narration","authors":["D Fried, JB Alayrac, P Blunsom, C Dyer, S Clark… - arXiv preprint arXiv …, 2020"],"snippet":"Learning to Segment Actions from Observation and Narration Daniel Fried‡ Jean-Baptiste Alayrac† Phil Blunsom† Chris Dyer† Stephen Clark† Aida Nematzadeh† †DeepMind, London, UK ‡Computer Science Division, UC Berkeley …","url":["https://arxiv.org/pdf/2005.03684"]} {"year":"2020","title":"Learning to summarize from human feedback","authors":["N Stiennon, L Ouyang, J Wu, DM Ziegler, R Lowe… - arXiv preprint arXiv …, 2020"],"snippet":"Learning to summarize from human feedback Nisan Stiennon∗ Long Ouyang∗ Jeff Wu∗ Daniel M. Ziegler∗ Ryan Lowe∗ Chelsea Voss∗ Alec Radford Dario Amodei Paul Christiano∗ OpenAI Abstract As language …","url":["https://arxiv.org/pdf/2009.01325"]} {"year":"2020","title":"Learning unbiased zero-shot semantic segmentation networks via transductive transfer","authors":["H Liu, Y Wang, J Zhao, G Yang, F Lv - arXiv preprint arXiv:2007.00515, 2020"],"snippet":"… IV. EXPERIMENTS Following [7], we use the concatenation of two different word vectors, ie word2vec trained on Google News [12] and fastText trained on Common Crawl [12], to construct the semantic space shared by source and target classes …","url":["https://arxiv.org/pdf/2007.00515"]} {"year":"2020","title":"Learning User Representations for Open Vocabulary Image Hashtag Prediction","authors":["T Durand - Proceedings of the IEEE/CVF Conference on Computer …, 2020"],"snippet":"… We train our model using ADAM [26] during 20 epochs with a start learning rate 5e-5. We use ResNet-50 [22] as the ConvNet and GloVe embeddings [34] as pre-trained word embeddings. GloVe was trained on Common …","url":["http://openaccess.thecvf.com/content_CVPR_2020/papers/Durand_Learning_User_Representations_for_Open_Vocabulary_Image_Hashtag_Prediction_CVPR_2020_paper.pdf"]} {"year":"2020","title":"Learning Word and Sub-word Vectors for Amharic (Less Resourced Language)","authors":["A Eshetu, G Teshome, T Abebe"],"snippet":"… The large text collection from Wikipedia and common crawl are commonly used data source to train and learn word vectors (Al-Rfou et al., 2013; Bojanowski et al … (2014) released GloVe models trained on Wikipedia …","url":["https://www.academia.edu/download/64390049/39IJAERS-08202035-Learning.pdf"]} {"year":"2020","title":"Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization","authors":["M Farahani, M Gharachorloo, M Manthouri - arXiv preprint arXiv:2012.11204, 2020"],"snippet":"… T5, on the other hand, is a unified Seq2Seq framework that employs Text-to-Text format to address NLP text-based problems.
A multilingual variation of the T5 model is called mT5 [16] that covers 101 different …","url":["https://arxiv.org/pdf/2012.11204"]} {"year":"2020","title":"Leveraging Structured Metadata for Improving Question Answering on the Web","authors":["X Du, A Hassan, A Fourney, R Sim, P Bennett… - … of the 1st Conference of the …, 2020"],"snippet":"… website content. The Web Data Commons project (Mühleisen and Bizer, 2012) estimates that 0.9 billion HTML pages out of the 2.5 billion pages (37.1%) in the Common Crawl web corpus contain structured metadata. Figure …","url":["https://www.aclweb.org/anthology/2020.aacl-main.55.pdf"]} {"year":"2020","title":"LIG-Health at Adhoc and Spoken IR Consumer Health Search: expanding queries using UMLS and FastText.","authors":["P Mulhem, GG Saez, A Mannion, D Schwab, J Frej - Conference and Labs of the …, 2020"],"snippet":"… The FastText embedding vector of a word is the sum of the vectors of its component ngrams. We used the pre-trained word vectors for English language, trained on Common Crawl and Wikipedia using FastText. The features of the model used are as follows; …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_129.pdf"]} {"year":"2020","title":"LIMSI@ WMT 2020","authors":["SA Rauf, JC Rosales, I Paris, PM Quang, S Paris…"],"snippet":"… Domain Corpus sents. words words (en) (de) web Paracrawl 50,875 978 919 economy Tilde EESC 2,858 61 58 news Commoncrawl 2,399 51 47 Tilde rapid 940 20 19 News commentary 361 8 8 tourism Tilde tourism 7 …","url":["http://statmt.org/wmt20/pdf/2020.wmt-1.86.pdf"]} {"year":"2020","title":"Linguistic Structure Guided Context Modeling for Referring Image Segmentation","authors":["F Zhang, J Han","T Hui, S Liu, S Huang, G Li, S Yu, F Zhang, J Han"],"snippet":"… rate. CNN is fixed during training. We use batch size 1 and stop training after 700K iterations. GloVe word embeddings [30] pretrained on Common Crawl with 840B tokens are used to replace randomly initialized ones. For fair …","url":["http://colalab.org/media/paper/Linguistic_Structure_Guided_Context_Modeling_for_Referring_Image_Segmentation.pdf","https://link.springer.com/content/pdf/10.1007/978-3-030-58607-2_4.pdf"]} {"year":"2020","title":"Linguistically-aware Attention for Reducing the Semantic-Gap in Vision-Language Tasks","authors":["G KV, A Nambiar, KS Srinivas, A Mittal - arXiv preprint arXiv:2008.08012, 2020"],"snippet":"… The pre-trained word-to-vector networks such as Glove [29] and Bert [30] are inexpensive and rich in making linguistic correlations (since they are already trained on a large textual corpus such as Common Crawl and Wikipedia 2014) …","url":["https://arxiv.org/pdf/2008.08012"]} {"year":"2020","title":"LNMap: Departures from Isomorphic Assumption in Bilingual Lexicon Induction Through Non-Linear Mapping in Latent Space","authors":["T Mohiuddin, MS Bari, S Joty - arXiv preprint arXiv:2004.13889, 2020"],"snippet":"… English, Italian, and German embeddings were trained on WacKy crawling corpora using CBOW (Mikolov et al., 2013b), while Spanish and Finnish embeddings were trained on WMT News Crawl and Common Crawl, respectively. 4.2 Baseline Methods …","url":["https://arxiv.org/pdf/2004.13889"]} {"year":"2020","title":"Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation","authors":["M Moradshahi, G Campagna, SJ Semnani, S Xu… - arXiv preprint arXiv …, 2020"],"snippet":"
Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation Mehrad Moradshahi Giovanni Campagna Sina J. Semnani Silei Xu Monica S. Lam Computer Science Department Stanford University …","url":["https://arxiv.org/pdf/2010.05106"]} {"year":"2020","title":"Localizing Q&A Semantic Parsers for Any Language in a Day","authors":["M Moradshahi, G Campagna, S Semnani, S Xu, M Lam - Proceedings of the 2020 …, 2020"],"snippet":"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 5970–5983, November 16–20, 2020. © 2020 Association for Computational Linguistics … Localizing Open-Ontology …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.481.pdf"]} {"year":"2020","title":"Locally Constructing Product Taxonomies from Scratch Using Representation Learning","authors":["M Kejriwal, RK Selvam, CC Ni, N Torzec"],"snippet":"… The WDC schema.org project, which relies on the webpages in the Common Crawl, is able to automatically extract schema.org data from webpages due to its unique syntax and make it available as a dataset in Resource Description Framework (RDF) …","url":["https://web.ntpu.edu.tw/~myday/doc/ASONAM2020/ASONAM2020_Proceedings/pdf/papers/080_098_507.pdf"]} {"year":"2020","title":"LOREM: Language-consistent Open Relation Extraction from Unstructured Text","authors":["T Harting, S Mesbah, C Lofi"],"snippet":"… Since these sentences are automatically tagged, we do expect a higher noise level than in the manually tagged test sets. For the language-individual model, we use FastText word embeddings [11] which are trained on Common Crawl and Wikipedia dataset …","url":["https://pure.tudelft.nl/portal/files/69221720/2020_WWW_LOREM.pdf"]} {"year":"2020","title":"Lost in Embedding Space: Explaining Cross-Lingual Task Performance with Eigenvalue Divergence","authors":["H Dubossarsky, I Vulić, R Reichart, A Korhonen - arXiv preprint arXiv:2001.11136, 2020"],"snippet":"… We prefer Wikipedia as the main embedding training corpus over the larger Common Crawl corpus, because the text in Wikipedia is much cleaner or even hand-curated, and adheres to the rules of standard language (Grave et al., 2018) …","url":["https://arxiv.org/pdf/2001.11136"]} {"year":"2020","title":"Low-Resource Knowledge-Grounded Dialogue Generation","authors":["X Zhao, W Wu, C Tao, C Xu, D Zhao, R Yan - arXiv preprint arXiv:2002.10348, 2020"],"snippet":"Published as a conference paper at ICLR 2020 LOW-RESOURCE KNOWLEDGE-GROUNDED DIALOGUE GENERATION Xueliang Zhao1,2, Wei Wu3, Chongyang Tao1, Can Xu3, Dongyan Zhao1,2, Rui Yan1,2,4∗ 1Wangxuan …","url":["https://arxiv.org/pdf/2002.10348"]} {"year":"2020","title":"Low-Resource Text Classification via Cross-lingual Language Model Fine-tuning","authors":["X Li, Z Li, J Sheng, W Slamu"],"snippet":"… XLM-R shows the possibility of training one model for many languages while not sacrificing per-language performance. It is trained on 2.5TB of CommonCrawl data, in 100 languages and uses a large vocabulary size of …","url":["http://www.cips-cl.org/static/anthology/CCL-2020/CCL-20-092.pdf"]} {"year":"2020","title":"LREC 2020 Workshop Language Resources and Evaluation Conference 11–16 May 2020","authors":["M Kupietz, H Lüngen, I Pisetta"],"snippet":"
LREC 2020 Workshop Language Resources and Evaluation Conference 11–16 May 2020 8th Workshop on Challenges in the Management of Large Corpora (CMLC-8) PROCEEDINGS Editors: Piotr Bański, Adrien Barbaresi, Simon Clematide …","url":["https://ids-pub.bsz-bw.de/files/9811/Banski_Barbaresi_Clematide_Kupietz_Luengen_Pisetta_Proceedings_LREC_2020.pdf"]} {"year":"2020","title":"Machine Bias and Fundamental Rights","authors":["D Amilevičius - Smart Technologies and Fundamental Rights, 2020"],"snippet":"…","url":["https://brill.com/view/book/edcoll/9789004437876/BP000019.xml"]} {"year":"2020","title":"Machine Translation for English–Inuktitut with Segmentation, Data Acquisition and Pre-Training","authors":["C Roest, L Edman, G Minnema, K Kelly, J Spenader… - Proceedings of the Fifth …, 2020"],"snippet":"… 2), we train XLM models with the News Crawl data for English and Common Crawl data for Inuktitut, as specified in Table 2. We also use Hansards and Newsdevtrain oversampled 5 times for parallel data. We try both tagging …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.29.pdf"]} {"year":"2020","title":"Machine Translation in Natural Language Processing by Implementing Artificial Neural Network Modelling Techniques: An Analysis","authors":["FA Khan, A Abubakar - International Journal on Perceptive and Cognitive …, 2020"],"snippet":"… The experiment for the proposed model has followed with BERT, to BookCorpus [35] and Wikipedia for English as an initializing point for pretraining. Similarly, other text includes, Giga5 (16Gb) ClueWeb 2012-B and Common Crawl respectively …","url":["https://journals.iium.edu.my/kict/index.php/IJPCC/article/download/134/96"]} {"year":"2020","title":"Machine Translation Reference-less Evaluation using YiSi-2 with Bilingual Mappings of Massive Multilingual Language Model","authors":["C Lo, S Larkin - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… The differences between XLM-R and BERT are 1) XLM-R is trained on the CommonCrawl corpus which is significantly larger than the Wikipedia training data used by BERT; 2) instead of a uniform data sampling rate used in BERT …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.100.pdf"]} {"year":"2020","title":"Machine Translation System Selection from Bandit Feedback","authors":["J Naradowsky, X Zhang, K Duh - arXiv preprint arXiv:2002.09646, 2020"],"snippet":"… Specifically, we include OpenSubtitles2018 [Lison and Tiedemann, 2016] and WMT 2017 [Bojar et al., 2017], which contains data from eg parliamentary proceedings (Europarl, UN), political/economic news, and web-crawled parallel corpus (Common Crawl) …","url":["https://arxiv.org/pdf/2002.09646"]} {"year":"2020","title":"MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer","authors":["J Pfeiffer, I Vulić, I Gurevych, S Ruder - arXiv preprint arXiv:2005.00052, 2020"],"snippet":"… It is a Transformer-based model pretrained for one hundred languages on large cleaned Common Crawl corpora (Wenzek et al., 2019).
For efficiency purposes, we use the XLM-R Base configuration as the basis for all of our experiments …","url":["https://arxiv.org/pdf/2005.00052"]} {"year":"2020","title":"Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation","authors":["N Reimers, I Gurevych - arXiv preprint arXiv:2004.09813, 2020"],"snippet":"Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation Nils Reimers and Iryna Gurevych Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science …","url":["https://arxiv.org/pdf/2004.09813"]} {"year":"2020","title":"MantisTable SE: an Efficient Approach for the Semantic Table Interpretation","authors":["M Cremaschi, R Avogadro, A Barazzetti, D Chieregato - Semantic Web Challenge on …, 2020"],"snippet":"… Web Tables: The WebTables system [1] extracts 14.1 billion HTML tables and finds 154 million are high-quality tables (1.1%); – Web Tables: Lehmberg et al. [5] extract 233 million content tables from Common Crawl 2015 (2.25 …","url":["http://ceur-ws.org/Vol-2775/paper8.pdf"]} {"year":"2020","title":"Mapping crime descriptions to law articles using deep learning","authors":["M Vink, N Netten, MS Bargh, S van den Braak… - Proceedings of the 13th …, 2020"],"snippet":"… This is a popular word embedding created by Facebook and available in many languages. The FastText embeddings are trained on Wikipedia texts and the data from the common crawl project. The word embeddings have a vector size of 300 …","url":["https://dl.acm.org/doi/abs/10.1145/3428502.3428507"]} {"year":"2020","title":"Mapping Languages: The Corpus of Global Language Use","authors":["J Dunn"],"snippet":"… language) and 156 countries (again with over 1 million words from each country), all distilled from Common Crawl web data … region: (i) the number of sites indexed by the Common Crawl; (ii) the population's degree of access …","url":["https://publicdata.canterbury.ac.nz/Research/Geocorpus/Documentation/!Paper.Corpus_of_Global_Language_Use.pdf"]} {"year":"2020","title":"Mapping the market for remanufacturing: An application of “Big Data” analytics","authors":["JQF Neto, M Dutordoir - International Journal of Production Economics, 2020"],"snippet":"… The vectors are created with Global Vectors for Word Representation (GloVe), one of the most well-known word embedding methods (Pennington et al., 2014), and are based on a data set obtained from Common Crawl, a nonprofit …","url":["https://www.sciencedirect.com/science/article/pii/S092552732030181X"]} {"year":"2020","title":"MASK: A flexible framework to facilitate de-identification of clinical texts","authors":["N Milosevic, G Kalappa, H Dadafarin, M Azimaee… - arXiv preprint arXiv …, 2020"],"snippet":"… The first of these approaches used GLoVe (Global Vector) word embeddings [14]. We used GLoVe embeddings trained on common crawl data containing 840 billion tokens, 2.2 million unique tokens in vocabulary, and 300-dimensional vectors …","url":["https://arxiv.org/pdf/2005.11687"]} {"year":"2020","title":"Masked ELMo: An evolution of ELMo towards fully contextual RNN language models","authors":["G Senay, E Salin - arXiv preprint arXiv:2010.04302, 2020"],"snippet":"… It should be noted that ELMo 5.5B is trained on a larger corpus than ELMo and Masked ELMo (Wikipedia: 1.9B and the common crawl from WMT 2008-2012: 3.6B). Moreover, a BERT baseline (BERT*) trained on the same …","url":["https://arxiv.org/pdf/2010.04302"]} {"year":"2020","title":"Massive vs.
Curated Embeddings for Low-Resourced Languages: the Case of Yorùbá and Twi","authors":["J Alabi, K Amponsah-Kaakyire, D Adelani… - Proceedings of The 12th …, 2020"],"snippet":"… The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovahs Witnesses site are also repositories for multilingual …","url":["https://www.aclweb.org/anthology/2020.lrec-1.335.pdf"]} {"year":"2020","title":"Massively Multilingual Document Alignment with Cross-lingual Sentence-Mover's Distance","authors":["A El-Kishky, F Guzmán - arXiv preprint arXiv:2002.00761, 2020"],"snippet":"… selected for evaluation. Baseline Methods. For comparison, we implemented two existing and intuitive document scoring baselines previously evaluated on this URL-Aligned CommonCrawl dataset [11]. The first method dubbed …","url":["https://arxiv.org/pdf/2002.00761"]} {"year":"2020","title":"Master Thesis: Developing a Cross-Lingual Named Entity Recognition Model","authors":["J Podolak, P Zeinert - 2020"],"snippet":"Master Thesis: Developing a Cross-Lingual Named Entity Recognition Model Jowita Podolak (jopo@itu.dk), Philine Zeinert (phze@itu.dk) June 1, 2020 Course Code: KISPECI1SE Abstract To build a Cross …","url":["https://www.derczynski.com/itu/docs/xling-ner_jopo_phze.pdf"]} {"year":"2020","title":"Matching Job Applicants to Free Text Job Ads Using Semantic Networks and Natural Language Inference","authors":["A Thun - 2020"],"snippet":"IN DEGREE PROJECT COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS, STOCKHOLM SWEDEN 2020 Matching Job Applicants to Free Text Job Ads Using Semantic Networks and Natural Language Inference ANTON THUN …","url":["https://www.diva-portal.org/smash/get/diva2:1467916/FULLTEXT01.pdf"]} {"year":"2020","title":"MDD@ AMI: Vanilla Classifiers for Misogyny Identification","authors":["S El Abassi, S Nisioi - Proceedings of Sixth Evaluation Campaign of Natural …, 2020"],"snippet":"… CBOW embeddings pre-trained on Wikipedia and OSCAR (Common Crawl). The second run is trained on English glove embeddings that surprisingly contain the representation of more than half of our Italian …","url":["http://ceur-ws.org/Vol-2765/paper149.pdf"]} {"year":"2020","title":"MEASURING DIVERGENT THINKING ORIGINALITY WITH HUMAN RATERS AND TEXT-MINING MODELS: A PSYCHOMETRIC COMPARISON OF METHODS","authors":["D Dumas, P Organisciak, M Doherty"],"snippet":"MEASURING DIVERGENT THINKING ORIGINALITY WITH HUMAN RATERS AND TEXT-MINING MODELS: A PSYCHOMETRIC COMPARISON OF METHODS …","url":["https://www.researchgate.net/profile/Denis_Dumas/publication/339364072_Measuring_Divergent_Thinking_Originality_with_Human_Raters_and_Text-Mining_Models_A_Psychometric_Comparison_of_Methods/links/5e4d686892851c7f7f46b607/Measuring-Divergent-Thinking-Originality-with-Human-Raters-and-Text-Mining-Models-A-Psychometric-Comparison-of-Methods.pdf"]} {"year":"2020","title":"Measuring prominence of scientific work in online news as a proxy for impact","authors":["J Ravenscroft, A Clare, M Liakata - arXiv preprint arXiv:2007.14454, 2020"],"snippet":"… In our task we employ pre-trained GloVe feature embeddings trained on the Common Crawl dataset, a multi-petabyte archive of content scraped from the world wide web containing 42 billion tokens and a vocabulary 1.9 million words …","url":["https://arxiv.org/pdf/2007.14454"]} {"year":"2020","title":"Media-Analytics.
org: A Resource to Research Language Usage by News Media Outlets","authors":["D Rozado - ITM Web of Conferences, 2020"],"snippet":"… news media outlets. News articles textual content are available in outlet-specific domains and Internet cache repositories such as the Internet Archive Wayback Machine, Google Cache and Common Crawl. Articles' headlines …","url":["https://www.itm-conferences.org/articles/itmconf/pdf/2020/03/itmconf_ictessh2020_03004.pdf"]} {"year":"2020","title":"Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?","authors":["S Hisamoto, M Post, K Duh"],"snippet":"… CommonCrawl subcorpus … We now describe how Carol prepares the data for Alice and Bob. First, Carol selects 4 subcorpora for the training data of Alice, namely CommonCrawl, Europarl v7, News Commentary v13, and Rapid 2016 …","url":["http://www.cs.jhu.edu/~kevinduh/t/membership-inference.pdf"]} {"year":"2020","title":"Meta-Learning for Few-Shot NMT Adaptation","authors":["A Sharaf, H Hassan, H Daumé III - arXiv preprint arXiv:2004.02745, 2020"],"snippet":"… In all cases, the baseline machine translation system is a neural English to German (En-De) transformer model (Vaswani et al., 2017), initially trained on 5.2M sentences filtered from the standard parallel data …","url":["https://arxiv.org/pdf/2004.02745"]} {"year":"2020","title":"Method and apparatus for improved automatic subtitle segmentation using an artificial neural network model","authors":["P WILKEN, E Matusov - US Patent App. 16/876,780, 2020"],"snippet":"… These data included all other publicly available training data, including ParaCrawl, CommonCrawl, EUbookshop, JRCAcquis, EMEA, and other corpora from the OPUS collection … This may be done to avoid oversampling …","url":["https://patentimages.storage.googleapis.com/d1/56/2b/7b6e0c087c851d/US20200364402A1.pdf"]} {"year":"2020","title":"Method and system for interactive keyword optimization for opaque search engines","authors":["R Puzis, A ELYASHAR, M REUBEN - US Patent App. 16/840,538, 2020"],"snippet":"… 2018]. The model was trained on Common Crawl (http://commoncrawl.org/) and Wikipedia (https://www.wikipedia.org/) using fastText library (https://fasttext.cc/). For the distance measure, the simple Euclidean distance was used …","url":["https://patents.google.com/patent/US20200327120A1/en"]} {"year":"2020","title":"Method for automatically generating a wrapper for extracting web data, and a computer system","authors":["G Gottlob, E SALLINGER, R FAYZRAKHMANOV… - US Patent App. 16/630,485, 2020"],"snippet":"US20200167393A1 - Method for automatically generating a wrapper for extracting web data, and a computer system - Google Patents …","url":["https://patents.google.com/patent/US20200167393A1/en"]} {"year":"2020","title":"Methods for morphology learning in low (er)-resource scenarios","authors":["T Bergmanis - 2020"],"snippet":"This thesis has been submitted in fulfilment of the requirements for a postgraduate degree (eg PhD, MPhil, DClinPsychol) at the University of Edinburgh. Please note the following terms and conditions of use: This work …","url":["https://era.ed.ac.uk/bitstream/handle/1842/37115/Bergmanis2020_Redacted.pdf?sequence=3&isAllowed=y"]} {"year":"2020","title":"Metrics and tools for exploring toxicity in social media","authors":["PMFN da Silva - 2020"],"snippet":"
FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO Metrics and tools for exploring toxicity in social media Pedro Silva Mestrado Integrado em Engenharia Informática e Computação Supervisor: Sérgio …","url":["https://repositorio-aberto.up.pt/bitstream/10216/128545/2/412412.pdf"]} {"year":"2020","title":"Mis-shapes, Mistakes, Misfits: An Analysis of Domain Classification Services","authors":["P Vallina, V Le Pochat, Á Feal, M Paraschiv, J Gamba…"],"snippet":"… service. Their popularity is further reflected by the fact that 47% of the 4.4M domains are indexed in the Chrome User Experience Report [71] and 0.5% by Common Crawl [72], both generated between August and October 2019 …","url":["https://lepoch.at/files/domain-classification-imc20.pdf"]} {"year":"2020","title":"Mitigating Bias in Deep Nets with Knowledge Bases: the Case of Natural Language Understanding for Robots","authors":["M Mensio, E Bastianelli, I Tiddi, G Rizzo"],"snippet":"… The connections in green represent highway connections between the first and the third layer. over the Common Crawl resource … For this reason, it can intrinsically provide an explanation for the model behavior, as it summarizes a much … http://commoncrawl.org …","url":["http://ceur-ws.org/Vol-2600/paper20.pdf"]} {"year":"2020","title":"Mitigating Gender Bias in Machine Learning Data Sets","authors":["S Leavy, G Meaney, K Wade, D Greene - arXiv preprint arXiv:2005.06898, 2020"],"snippet":"… evaluation of system accuracy and learned associations in machine learning technologies that underlie many search and recommendation systems [9]. Implicit Association Tests (IATs) were found to be effective in uncovering …","url":["https://arxiv.org/pdf/2005.06898"]} {"year":"2020","title":"Modeling Recurring Concepts in Single-label and Multi-label Streams","authors":["Z Ahmadi"],"snippet":"Modeling Recurring Concepts in Single-label and Multi-label Streams A thesis submitted for the degree of DN at the Department of Physics, Mathematics and Computer Science at the Johannes …","url":["https://publications.ub.uni-mainz.de/theses/volltexte/2019/100003220/pdf/100003220.pdf"]} {"year":"2020","title":"Modeling remotely collected speech data: Applications for psychiatry","authors":["TB Holmlund - 2020"],"snippet":"… To base the analysis on a corpus with a wide variety of animal-word sources, we used a set of pretrained word vectors calculated from approximately 42 billion tokens from the entire internet, courtesy of the Common Crawl project (Pennington et al., 2014) …","url":["https://munin.uit.no/bitstream/handle/10037/17098/paper_III.pdf?sequence=8"]} {"year":"2020","title":"Modeling the Music Genre Perception across Language-Bound Cultures","authors":["EV Epure, G Salha, M Moussallam, R Hennequin - arXiv preprint arXiv:2010.06325, 2020"],"snippet":"… representations as described next. Multilingual Static Word Embeddings. The classical word embeddings we study are the multilingual fastText word vectors trained on Wikipedia and Common Crawl (Grave et al., 2018).
The model is an …","url":["https://arxiv.org/pdf/2010.06325"]} {"year":"2020","title":"Modeling Word Formation in English–German Neural Machine Translation","authors":["M Weller-Di Marco, A Fraser - Proceedings of the 58th Annual Meeting of the …, 2020"],"snippet":"… We compare four training settings: small (248,730 sentences: newscommentary), large2M (1,956,444 sentences: Europarl + news-commentary), large4M (4,116,215 sentences: Europarl + news-commentary + …","url":["https://www.aclweb.org/anthology/2020.acl-main.389.pdf"]} {"year":"2020","title":"Moral Concerns are Differentially Observable in Language","authors":["B Kennedy, M Atari, AM Davani, J Hoover, A Omrani… - 2020"],"snippet":"… lexical semantic relatedness. We compute text representations by averaging GloVe word embedding vectors (Pennington et al., 2014) which were trained on text from the Common Crawl. … https://github.com/lda-project/lda …","url":["https://psyarxiv.com/uqmty/download?format=pdf"]} {"year":"2020","title":"Moral Framing and Ideological Bias of News","authors":["K Lerman - … : 12th International Conference, SocInfo 2020, Pisa …","N Mokhberian, A Abeliuk, P Cummings, K Lerman - arXiv preprint arXiv:2009.12979, 2020"],"snippet":"… let V_m+ and V_m− denote the word sets for the two poles; then the semantic axis corresponding to this MF dimension is: A_m = mean(V_m+) − mean(V_m−) (1). For the computations of this part, the embeddings of words are obtained from the pretrained GloVe …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=6tIBEAAAQBAJ&oi=fnd&pg=PA206&dq=commoncrawl&ots=3yMGfdMsr5&sig=MyXXhdRNotNI087l9_nvaNHNjhI","https://arxiv.org/pdf/2009.12979"]} {"year":"2020","title":"Morphological and pseudomorphological effects in English visual word processing: How much can we attribute the statistical structure of the language?","authors":["P Stevens, D Plaut"],"snippet":"… Real-valued 300-dimensional semantic vectors generated from the Common Crawl internet text corpus were converted to 200-dimensional binary vectors using a binary multidimensional scaling algorithm (Rohde, 2002) …","url":["https://cognitivesciencesociety.org/cogsci20/papers/0399/0399.pdf"]} {"year":"2020","title":"Morphological Skip-Gram: Using morphological knowledge to improve word representation","authors":["F Santos, H Macedo, T Bispo, C Zanchetting - arXiv preprint arXiv:2007.10055, 2020"],"snippet":"… Keeping the quality of word embeddings and decreasing training time is very important because usually, a corpus to training embeddings is composed of 1B tokens. For example, the Common Crawl corpora contain 820B tokens …","url":["https://arxiv.org/pdf/2007.10055"]} {"year":"2020","title":"mT5: A massively multilingual pre-trained text-to-text transformer","authors":["A Roberts, A Barua, A Siddhant, C Raffel, L Xue… - 2021","L Xue, N Constant, A Roberts, M Kale, R Al-Rfou… - arXiv preprint arXiv …, 2020"],"snippet":"… In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages … mC4 comprises natural text in 101 languages drawn from the public Common Crawl web scrape …","url":["https://arxiv.org/pdf/2010.11934","https://research.google/pubs/pub50316/"]} {"year":"2020","title":"Multi-Label Sentiment Analysis on 100 Languages with Dynamic Weighting for Label Imbalance","authors":["SF Yilmaz, EB Kaynak, A Koç, H Dibeklioğlu, SS Kozat - arXiv preprint arXiv …, 2020"],"snippet":"… 2, we use XLM-RoBERTa pretrained tokenizer and pretrained model [17].
XLM-RoBERTa is pretrained on CommonCrawl corpora of 100 different languages. We first tokenize the input sentence si into subword units via …","url":["https://arxiv.org/pdf/2008.11573"]} {"year":"2020","title":"Multi-model transfer and optimization for cloze task","authors":["J Tang, L Ling, C Ma, H Zhang, J Huang - … on Artificial Intelligence and Robotics 2020, 2020"],"snippet":"… Transformer Enc. PLM ≈ BERT WikiEn+BookCorpus+Giga5 +ClueWeb+Common Crawl RoBERTa Transformer Enc. MLM 355M BookCorpus+CCNews +OpenWebText+STORIES XLM-R Transformer Enc. MLM 550M CommonCrawl ALBERT Transformer Enc …","url":["https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11574/115740T/Multi-model-transfer-and-optimization-for-cloze-task/10.1117/12.2579412.short"]} {"year":"2020","title":"Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity","authors":["I Vulić, S Baker, EM Ponti, U Petti, I Leviant, K Wing… - arXiv preprint arXiv …, 2020"],"snippet":"Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity https://multisimlex.com/ Ivan Vulić ∗♠ LTL, University of Cambridge Simon Baker ∗♠ LTL, University of Cambridge …","url":["https://arxiv.org/pdf/2003.04866"]} {"year":"2020","title":"Multilingual AMR-to-Text Generation","authors":["A Fan, C Gardent - Proceedings of the 2020 Conference on Empirical …, 2020"],"snippet":"… 4.1 Data Pretraining For encoder pretraining on silver AMR, we take thirty million sentences from the English portion of CCNET (Wenzek et al., 2019), a cleaned version of Common Crawl (an open source version of the web) …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.231.pdf"]} {"year":"2020","title":"Multilingual Denoising Pre-training for Neural Machine Translation","authors":["Y Liu, J Gu, N Goyal, X Li, S Edunov, M Ghazvininejad… - arXiv preprint arXiv …, 2020"],"snippet":"… 2 Multilingual Denoising Pre-training We use a large-scale common crawl (CC) corpus (§2.1) to pre-train BART models (§2.2). Our experiments in the later sections involve finetuning a range of models pre-trained on different subsets of the CC languages (§2.3) …","url":["https://arxiv.org/pdf/2001.08210"]} {"year":"2020","title":"Multilingual Dependency Parsing from Universal Dependencies to Sesame Street","authors":["J Nivre - International Conference on Text, Speech, and …, 2020"],"snippet":"… pre-trained models provided by Che et al. [3], who train ELMo on 20 million words randomly sampled from raw WikiDump and Common Crawl datasets for 44 languages. For BERT, we employ the pretrained multilingual cased …","url":["https://link.springer.com/chapter/10.1007/978-3-030-58323-1_2"]} {"year":"2020","title":"Multilingual Factual Knowledge Retrieval from Pretrained Language Models","authors":["Z Jiang, A Anastasopoulos, J Araki, H Ding, G Neubig - arXiv preprint arXiv …, 2020"],"snippet":"X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models Zhengbao Jiang†, Antonios Anastasopoulos♣,∗, Jun Araki‡, Haibo Ding‡, Graham Neubig† †Language Technologies Institute …","url":["https://arxiv.org/pdf/2010.06189"]} {"year":"2020","title":"Multilingual Legal Information Retrieval System for Mapping Recitals and Normative Provisions","authors":["AK JOHN - Legal Knowledge and Information Systems: JURIX …, 2020"],"snippet":"… cc/docs/en/crawl-vectors.html): pre-trained on Common Crawl and Wikipedia.
We used the word-average method, which divides the sum of the word embeddings in a legal norm by the norm length. The embedding di- mension size was set to 128 …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=fy4NEAAAQBAJ&oi=fnd&pg=PA123&dq=commoncrawl&ots=BUOuNY0sH2&sig=nWyRhj_URXdbdS30qkO5Z2kfHcY"]} {"year":"2020","title":"Multilingual Probing Tasks for Word Representations","authors":["G Gül Şahin, C Vania, I Kuznetsov, I Gurevych - Computational Linguistics"],"snippet":"Page 1. Computational Linguistics Just Accepted MS. https://doi.org/10.1162/ COLI_a_00376 © Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license LINSPECTOR Multilingual Probing Tasks for Word Representations …","url":["https://www.mitpressjournals.org/doi/pdf/10.1162/COLI_a_00376"]} {"year":"2020","title":"Multilingual Stance Detection in Tweets: The Catalonia Independence Corpus","authors":["E Zotova, R Agerri, M Nuñez, G Rigau - … of The 12th Language Resources and …, 2020"],"snippet":"… Initial experimentation showed that the Common Crawl5 models performed better for our particular task. The Common Crawl models are trained using a Continuous Bag-of-Words (CBOW) architecture with position-weights and …","url":["https://www.aclweb.org/anthology/2020.lrec-1.171.pdf"]} {"year":"2020","title":"Multilingual Stance Detection: The Catalonia Independence Corpus","authors":["E Zotova, R Agerri, M Nuñez, G Rigau - arXiv preprint arXiv:2004.00050, 2020"],"snippet":"… task. The Common Crawl models are trained using a Continuous Bag-of-Words (CBOW) architecture with position-weights and 300 di- mensions on a vocabulary of 2M words … information. 5http://commoncrawl.org/ Page 4. 3.5 …","url":["https://arxiv.org/pdf/2004.00050"]} {"year":"2020","title":"Multilingual Translation with Extensible Multilingual Pretraining and Finetuning","authors":["Y Tang, C Tran, X Li, PJ Chen, N Goyal, V Chaudhary… - arXiv preprint arXiv …, 2020"],"snippet":"… challenging to train from scratch. In contrast, monolingual data exists even for low resource languages, particularly in resources such as Wikipedia or Commoncrawl, a version of the web. Thus, leveraging this monolingual …","url":["https://arxiv.org/pdf/2008.00401"]} {"year":"2020","title":"Multilingual Unsupervised Sentence Simplification","authors":["L Martin, A Fan, É de la Clergerie, A Bordes, B Sagot - arXiv preprint arXiv …, 2020"],"snippet":"… We mine a large quantity of paraphrases from the Common Crawl using libraries such as LASER (Artetxe et al., 2018) and faiss (Johnson et al … CCNET is an extraction of Common Crawl,2 an open source snapshot of the web …","url":["https://arxiv.org/pdf/2005.00352"]} {"year":"2020","title":"Multilingual Zero-shot Constituency Parsing","authors":["T Kim, S Lee - arXiv preprint arXiv:2004.13805, 2020"],"snippet":"Page 1. Multilingual Zero-shot Constituency Parsing Taeuk Kim and Sang-goo Lee Department of Computer Science and Engineering Seoul National University, Seoul, Korea {taeuk,sglee}@europa.snu.ac.kr Abstract Zero-shot …","url":["https://arxiv.org/pdf/2004.13805"]} {"year":"2020","title":"MultiMix: A Robust Data Augmentation Strategy for Cross-Lingual NLP","authors":["MS Bari, MT Mohiuddin, S Joty - arXiv preprint arXiv:2004.13240, 2020"],"snippet":"… samples around each selected sample. XLM-R is a multilingual LM that is trained on massive multilingual corpora (2.5 TB of refined Common-Crawl data in 100 languages) with a masked LM (MLM) objective. 
We chose XLM-R …","url":["https://arxiv.org/pdf/2004.13240"]} {"year":"2020","title":"Multiple Knowledge GraphDB (MKGDB)","authors":["S Faralli, P Velardi, F Yusifli - Proceedings of The 12th Language Resources and …, 2020"],"snippet":"… Crawl11 Web corpus. 7https://www.objectivity.com/products/ thingspan/ thingspanfeatures/ 8https://titan.thinkaurelius.com/ 9https://neo4j.com/ 10the Neo4j platform provides an interface for the development and inclusion of …","url":["https://www.aclweb.org/anthology/2020.lrec-1.283.pdf"]} {"year":"2020","title":"Multiscale System for Alzheimer's Dementia Recognition through Spontaneous Speech","authors":["E Edwards, C Dognin, B Bollepalli, M Singh…"],"snippet":"… Deep Random Forest Setting: We extract features using three pre-trained embeddings: Word2Vec (CBOW) with subword information [29] (pre-trained on Common Crawl), GloVe [30] pre-trained on Common Crawl and Sent2Vec …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3302/attachments/1227/1271/Wed-SS-1-6-9.pdf"]} {"year":"2020","title":"MuSe 2020 Challenge and Workshop: Multimodal Sentiment Analysis, Emotion-target Engagement and Trustworthiness Detection in Real-life Media: Emotional Car …","authors":["L Stappen, A Baird, G Rizos, P Tzirakis, X Du, F Hafner… - Proceedings of the 1st …, 2020"],"snippet":"… 4.3 Language FastText [5] is a library for efficient learning of word embeddings. It is based on the skipgram model where a vector representation is associated to each character n-gram. The model is trained on the English Common Crawl corpus (600B tokens) …","url":["https://dl.acm.org/doi/abs/10.1145/3423327.3423673"]} {"year":"2020","title":"MuSe 2020--The First International Multimodal Sentiment Analysis in Real-life Media Challenge and Workshop","authors":["L Stappen, A Baird, G Rizos, P Tzirakis, X Du, F Hafner… - arXiv preprint arXiv …, 2020"],"snippet":"… 4.3 Language FastText [5] is a library for efficient learning of word embeddings. It is based on the skipgram model where a vector representation is associated to each character n-gram. The model is trained on the English Common Crawl corpus (600B tokens) …","url":["https://arxiv.org/pdf/2004.14858"]} {"year":"2020","title":"MWPD2020: Semantic Web Challenge on Mining the Web of HTML-embedded Product Data","authors":["Z Zhang, C Bizer, R Peeters, A Primpeli"],"snippet":"Page 1. MWPD2020: Semantic Web Challenge on Mining the Web of HTML-embedded Product Data Ziqi Zhang1[0000−0002−8587−8618], Christian Bizer2[0000−0003−2367−0237], Ralph Peeters2[0000−0003 …","url":["http://ceur-ws.org/Vol-2720/paper1.pdf"]} {"year":"2020","title":"Névszói kötőhangzók variabilitásának korpuszalapú vizsgálata Corpus-based analysis of the variability of linking vowels in nouns and adjectives","authors":["R Péter, L Dániel"],"snippet":"… 3.1 Corpus The corpus on which we conducted our measurements is the prepublished version of the Webcorpus 2 (Nemeskey, 2020). It is based on the Common Crawl webcorpus, which is a collection of pages …","url":["https://hlt.bme.hu/media/pdf/thesis_levai_ma.pdf"]} {"year":"2020","title":"Named Entity Recognition for Code-Mixed Indian Corpus using Meta Embedding","authors":["R Priyadharshini, BR Chakravarthi, M Vegupatti… - 2020 6th International …, 2020"],"snippet":"… IV. EXPERIMENT We use FastText word embedding trained from Common Crawl and Wikipedia [30] for English and Hindi-Devanagari script (native script for Hindi). 
We also add the English Twitter GloVe word embeddings since the NER data is from Twitter …","url":["https://ieeexplore.ieee.org/abstract/document/9074379/"]} {"year":"2020","title":"Naming unrelated words reliably predicts creativity","authors":["JA Olson, J Nahas, D Chmoulevitch, ME Webb - PsyArXiv. December, 2020"],"snippet":"… We trained the GloVe model with the Common Crawl corpus, which contains the text of billions of web pages … We chose the GloVe algorithm and the Common Crawl corpus; this combination correlates best with …","url":["https://psyarxiv.com/qvg8b/download/?format=pdf"]} {"year":"2020","title":"Narrative Origin Classification of Israeli-Palestinian Conflict Texts","authors":["J Wei, E Santos Jr - The Thirty-Third International Flairs Conference, 2020"],"snippet":"… For training and testing, we converted text inputs into nu- merical representations using 300-dimensional distributed embeddings pre-trained on the Common Crawl database with the GloVe method (Pennington, Socher, and Manning 2014) …","url":["https://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS20/paper/download/18443/17596"]} {"year":"2020","title":"Natural Language Correction-Thesis Proposal","authors":["J Náplava"],"snippet":"… considered. This table displays top 15 languages with worst results on dictionary baseline. a new pipeline utilizing both clean data from Wikipedia and also not that clean data from general web (utilizing CommonCrawl corpus). The …","url":["http://ufal.mff.cuni.cz/~zabokrtsky/pgs/thesis_proposal/jakub-naplava-proposal.pdf"]} {"year":"2020","title":"Natural Language Generation using Transformer Network in an Open-domain Setting","authors":["AAM Gopinath, P Bhattacharyya","D Varshney, A Ekbal, GP Nagaraja, M Tiwari… - International Conference on …, 2020"],"snippet":"… The embeddings used in our model are trained on Common Crawl dataset with 840B tokens and 2.2M vocab. We use 300-dimensional sized vectors. 3.3 Baseline Models We formulate our task of response generation as a machine translation problem …","url":["https://link.springer.com/chapter/10.1007/978-3-030-51310-8_8","https://www.researchgate.net/profile/Deeksha-Varshney/publication/342238636_Natural_Language_Generation_Using_Transformer_Network_in_an_Open-Domain_Setting/links/60432b74299bf1e0785aff2f/Natural-Language-Generation-Using-Transformer-Network-in-an-Open-Domain-Setting.pdf"]} {"year":"2020","title":"Natural Language Processing (NLP) and Text Analytics","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"In the preceding chapters, we have solely relied on the structure of the HTML documents themselves to scrape information from them, and that is a powerful method to extract information.","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_4"]} {"year":"2020","title":"Natural Language Processing Model for Managing Maintenance Requests in Buildings","authors":["Y Bouabdallaoui, Z Lafhaj, P Yim, L Ducoulombier… - Buildings, 2020"],"snippet":"… In order to overcome the limited corpus of vocabulary in our dataset, a pre-trained word embedding was used. 
It is based on the FastText model [44] and trained on a large corpus of French vocabulary from Wikipedia and Common Crawl [45] …","url":["https://www.mdpi.com/2075-5309/10/9/160/pdf"]} {"year":"2020","title":"Natural Language Transfer Learning for Physiological Textual Similarity","authors":["V Awatramani, P Gupta - 2020 10th International Conference on Cloud …, 2020"],"snippet":"… Moreover, RoBERTa is trained over 160 GB of text that includes English Wikipedia and BooksCorporus used earlier in BERT and additionally, CommonCrawl News (CC-NEWS) dataset consisting of 63 million articles …","url":["https://ieeexplore.ieee.org/abstract/document/9058216/"]} {"year":"2020","title":"Naver Labs Europe's Participation in the Robustness, Chat, and Biomedical Tasks at WMT 2020","authors":["A Bérard, V Nikoulina, I Calapodescu, J Philip - … of the Fifth Conference on Machine …, 2020"],"snippet":"… Corpus Sents Docs Paracrawl 33.9M – Rapid2019 965k 48.3k Europarl 1.75M 6.7k Commoncrawl 1.97M – Wikimatrix 5.68M – Wikititles … of large monolingual English and German datasets (100M lines in total per language …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.57.pdf"]} {"year":"2020","title":"Nearest Neighbor Machine Translation","authors":["U Khandelwal, A Fan, D Jurafsky, L Zettlemoyer… - arXiv preprint arXiv …, 2020"],"snippet":"… The parallel sentences are mined from cleaned monolingual commoncrawl data created using the ccNet pipeline (Wenzek et al., 2019) … with only the kNN distribution (λ = 1) with beam size 1, retrieving k = 8 neighbors from …","url":["https://arxiv.org/pdf/2010.00710"]} {"year":"2020","title":"NestMSA: a new multiple sequence alignment algorithm","authors":["M Kayed, AA Elngar - The Journal of Supercomputing, 2020"],"snippet":"Multiple sequence alignment (MSA) is a core problem in many applications. Various optimization algorithms such as genetic algorithm and particle swarm opti.","url":["https://link.springer.com/article/10.1007/s11227-020-03206-0"]} {"year":"2020","title":"Neural Aspect-based Text Generation","authors":["H Hayashi - 2020"],"snippet":"Page 1. November 25, 2020 DRAFT Thesis Proposal Neural Aspect-based Text Generation Hiroaki Hayashi November 25, 2020 Language Technologies Institute School of Computer Science Carnegie Mellon University Pittsburgh, PA 15123 Thesis Committee …","url":["https://hiroakih.me/thesis_proposal.pdf"]} {"year":"2020","title":"Neural Databases","authors":["J Thorne, M Yazdani, M Saeidi, F Silvestri, S Riedel… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Neural Databases James Thorne University of Cambridge Facebook AI jt719@cam.ac.uk Majid Yazdani Facebook AI myazdani@fb.com Marzieh Saeidi Facebook AI marzieh@fb.com Fabrizio Silvestri Facebook AI fsilvestri@fb.com …","url":["https://arxiv.org/pdf/2010.06973"]} {"year":"2020","title":"NEURAL MACHINE TRANSLATION WITH UNIVERSAL VISUAL REPRESENTATION","authors":["Z Li, H Zhao"],"snippet":"… We used newsdev2016 as the dev set and newstest2016 as the test set. 
2) For the EN-DE translation task, 4.43M bilingual sentence pairs of the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7 …","url":["https://www.researchgate.net/profile/Zhuosheng_Zhang4/publication/339375656_Neural_Machine_Translation_with_Universal_Visual_Representation/links/5e4e27a4299bf1cdb938db20/Neural-Machine-Translation-with-Universal-Visual-Representation.pdf"]} {"year":"2020","title":"Neural Simultaneous Speech Translation Using Alignment-Based Chunking","authors":["P Wilken, T Alkhouli, E Matusov, P Golik - arXiv preprint arXiv:2005.14489, 2020"],"snippet":"Page 1. arXiv:2005.14489v1 [cs.CL] 29 May 2020 Neural Simultaneous Speech Translation Using Alignment-Based Chunking Patrick Wilken, Tamer Alkhouli, Evgeny Matusov, Pavel Golik Applications Technology (AppTek), Aachen …","url":["https://arxiv.org/pdf/2005.14489"]} {"year":"2020","title":"Neural Text Segmentation and Its Application to Sentiment Analysis","authors":["J Li, B Chiu, S Shang, L Shao - IEEE Transactions on Knowledge and Data …, 2020"],"snippet":"Page 1. 1041-4347 (c) 2020 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/9051834/"]} {"year":"2020","title":"Neural Word Embeddings for Sentiment Analysis","authors":["B Naderalvojoud - 2020"],"snippet":"Page 1. 1 Page 2. NEURAL WORD EMBEDDINGS FOR SENTIMENT ANALYSIS DUYGU ANALİZİ İÇİN SİNİRSEL SÖZCÜK ÖZ YERLEŞİKLERİ BEHZAD NADERALVOJOUD PROF. DR. EBRU SEZER Supervisor Submitted to …","url":["http://www.openaccess.hacettepe.edu.tr:8080/xmlui/bitstream/handle/11655/22784/10350958.pdf?sequence=1"]} {"year":"2020","title":"NewB: 200,000+ Sentences for Political Bias Detection","authors":["J Wei - arXiv preprint arXiv:2006.03051, 2020"],"snippet":"… When inputting sentences into the model, we tokenize each sentence at the word-level and convert each word into a vector using 300-dimensional distributed embeddings trained on the Common Crawl database with the GloVe method (Pennington et al., 2014) …","url":["https://arxiv.org/pdf/2006.03051"]} {"year":"2020","title":"News topic classification as a first step towards diverse news recommendation","authors":["O De Clercq, L De Bruyne, V Hoste - Computational Linguistics in the Netherlands …, 2020"],"snippet":"… The RobBERT model was trained on the Dutch part of the 39GB OSCAR corpus, a part of the Common Crawl corpus (Suárez et al. 2019). As sub-word token input, BERTje uses WordPiece, whereas RobBERT uses byte-level Byte Pair Encoding (BPE) …","url":["https://www.clinjournal.org/clinj/article/download/103/92"]} {"year":"2020","title":"NLNDE at CANTEMIST: Neural Sequence Labeling and Parsing Approaches for Clinical Concept Extraction","authors":["L Lange, X Dai, H Adel, J Strötgen - arXiv preprint arXiv:2010.12322, 2020"],"snippet":"… In particular, we use pre-trained fastText embeddings [22] that were trained on articles from Wikipedia and the Common Crawl, as well as domain-specific fastText embeddings [23] that were pretrained on articles of the Spanish …","url":["https://arxiv.org/pdf/2010.12322"]} {"year":"2020","title":"NLP North at WNUT-2020 Task 2: Pre-training versus Ensembling for Detection of Informative COVID-19 English Tweets","authors":["AG Møller, R van der Goot, B Plank - Proceedings of the 6th Workshop on Noisy …, 2020"],"snippet":"… twitter.
ArXiv, abs/2005.07503. Sebastian Nagel. 2016. https://commoncrawl.org/2016/10/news-dataset-available/. Dat Quoc Nguyen, Thanh Vu, Afshin Rahimi, Mai Hoang Dao, Linh The Nguyen, and Long Doan. 2020. WNUT …","url":["http://www.robvandergoot.com/doc/wnut2020.pdf"]} {"year":"2020","title":"No computation without representation: Avoiding data and algorithm biases through diversity","authors":["C Kuhlman, L Jackson, R Chunara - arXiv preprint arXiv:2002.11836, 2020"],"snippet":"… Page 3. Dataset Description Sensitive Attribute race gender age other Adult: US Census income data [36]. [43, 57, 72, 77] [2, 3, 25, 27, 43, 57, 64, 77] [57] [2, 57] Common Crawl: Occupation biographies [42]. [34] Comm …","url":["https://arxiv.org/pdf/2002.11836"]} {"year":"2020","title":"Noise Pollution in Hospital Readmission Prediction: Long Document Classification with Reinforcement Learning","authors":["L Xu, J Hogan, RE Patzer, JD Choi - arXiv preprint arXiv:2005.01259, 2020"],"snippet":"… Averaged Word Embedding For the averaged word embedding encoder (AWE; Section 4.2), embeddings generated by FastText trained on the Common Crawl and the English Wikipedia with the 300 dimension is …","url":["https://arxiv.org/pdf/2005.01259"]} {"year":"2020","title":"Not All Swear Words Are Used Equal: Attention over Word n-grams for Abusive Language Identification","authors":["HJ Jarquín-Vásquez, M Montes-y-Gómez… - Mexican Conference on …, 2020","L Villasenor-Pineda - Pattern Recognition: 12th Mexican Conference, MCPR …"],"snippet":"… On the other hand, for word representation we used pre-trained fastText embeddings [3], trained with subword information on Common Crawl. Table 2. Proposed attention-based deep neural network hyperparameters. Layer …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=UO_rDwAAQBAJ&oi=fnd&pg=PA282&dq=commoncrawl&ots=4BGirDzhwW&sig=0HRX8HVtoXlgmfN76pD6wgHh1_c","https://link.springer.com/chapter/10.1007/978-3-030-49076-8_27"]} {"year":"2020","title":"NOTICE TO BORROWERS","authors":["C Li"],"snippet":"Page 1. DISTRIBUTION AGREEMENT In presenting this thesis as a partial fulfillment of the requirements for an advanced degree from Emory University, I agree that the Library of the University shall make it available for inspection …","url":["https://franklicm.github.io/files/thesis_final.pdf"]} {"year":"2020","title":"Novel Entity Discovery from Web Tables","authors":["S Zhang, E Meij, K Balog, R Reinanda - arXiv preprint arXiv:2002.00206, 2020"],"snippet":"Page 1. Novel Entity Discovery from Web Tables Shuo Zhang Bloomberg London, United Kingdom szhang611@bloomberg.net Edgar Meij Bloomberg London, United Kingdom emeij@bloomberg.net Krisztian Balog University …","url":["https://arxiv.org/pdf/2002.00206"]} {"year":"2020","title":"Novel Opinion mining System for Movie Reviews","authors":["AH AbdulHafiz - International Journal of Intelligent Systems and …, 2020"],"snippet":"… We have adopted the Word2Vec feature representation, the CSG in particular, in our work. It has a pre-trained word vector for the English language trained on 1 million common crawl and Wikipedia documents. Word2vec is a two-layer neural net …","url":["https://151.80.211.128/IJISAE/article/download/1090/621"]} {"year":"2020","title":"NRC Systems for the 2020 Inuktitut–English News Translation Task","authors":["R Knowles, D Stewart, S Larkin, P Littell - Proceedings of the Fifth Conference on …, 2020"],"snippet":"… org/wmt20/translation-task.html Page 3.
157 Wiki Titles or Common Crawl Inuktitut data.9 We incorporated the news portion of the development data in training our models to alleviate the domain mismatch issue (Section 5.1). 4 Preprocessing and Postprocessing …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.13.pdf"]} {"year":"2020","title":"NUIG-DSI at the WebNLG+ challenge: Leveraging Transfer Learning for RDF-to-text generation","authors":["N Pasricha, M Arcan, P Buitelaar - Proceedings of the 3rd WebNLG Workshop on …, 2020"],"snippet":"… middle) and reference lexicalisation (bottom). (Vaswani et al., 2017) and is pre-trained using unsupervised learning on a large corpus of unlabeled data obtained from the Web using the Common Crawl project. It is trained using …","url":["https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.15.pdf"]} {"year":"2020","title":"ODArchive–Creating an Archive for Structured Data from Open Data Portals","authors":["T Weber, J Mitöhner"],"snippet":"… We deem this project particularly useful as a resource for experiments on real-world structured data: to name an example, while large corpora of tabular data from Web tables have been made available via CommonCrawl [6], the …","url":["https://aic.ai.wu.ac.at/~polleres/publications/webe-etal-2020ISWC.pdf"]} {"year":"2020","title":"On Finding Similar Verses from the Holy Quran using Word Embeddings","authors":["S Saeed, S Haider, Q Rajput - 2020 International Conference on Emerging Trends in …, 2020"],"snippet":"… It is an English multitask Convolution Neural Network (CNN)[4] trained on OntoNotes[14], with GloVe[11] vectors trained on Common Crawl[3]. It contains 685,000 keys, 20,000 unique words and each word …","url":["https://ieeexplore.ieee.org/abstract/document/9080691/"]} {"year":"2020","title":"On Multilingual Word Embeddings & their applications in machine translation","authors":["N Jain"],"snippet":"… crosslingual signals. The hard/challenging dataset comprise of English-Italian, English-German, English-Finnish and English-Spanish pairs. These embeddings are trained on Wacky crawling corpora/common crawl corpora. We notice …","url":["https://naman-ntc.github.io/data/Seminar.pdf"]} {"year":"2020","title":"On revealing shared conceptualization among open datasets","authors":["M Bogdanović, N Veljković, MF Gligorijević, D Puflović… - Journal of Web Semantics, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S1570826820300573"]} {"year":"2020","title":"On the comparability of Pre-trained Language Models","authors":["M Aßenmacher, C Heumann - arXiv preprint arXiv:2001.00781, 2020"],"snippet":"… Wikipedia data sets are available in the Tensorflow Datasets-module3. CommonCrawl Among other resources, Yang et al. (2019) used data from CommonCrawl … CommonCrawl https://commoncrawl.org/ Unclear Fully available XLNet ClueWeb 2012-B Callan et al …","url":["https://arxiv.org/pdf/2001.00781"]} {"year":"2020","title":"On the diminishing return of labeling clinical reports","authors":["JB Lamare, T Olatunji, L Yao - arXiv preprint arXiv:2010.14587, 2020"],"snippet":"… linear classifiers. Recent NLP advance pushes the envelope much further by leveraging web-scale data – for instance, the Common Crawl project 1 that produces 20TB of textual data from the Internet each month.
To cope …","url":["https://arxiv.org/pdf/2010.14587"]} {"year":"2020","title":"On the Effectiveness of Behavior-Based Ransomware Detection","authors":["J Han, Z Lin, DE Porter - International Conference on Security and Privacy in …, 2020"],"snippet":"… To measure whether partial encryption is effective at withholding user data, we collected 200 different PDF documents from the web using Common Crawl Document Download [4]. We choose PDF documents with a minimum of 10 pages …","url":["https://link.springer.com/chapter/10.1007/978-3-030-63095-9_7"]} {"year":"2020","title":"On the evaluation of retrofitting for supervised short-text classification","authors":["K GHAZI, A TCHECHMEDJIEV, S HARISPE…"],"snippet":"… we considered the 300-dimensional word vectors: (i) Paragram [16], learned from the text content in the paraphrase database PPDB, (ii) Glove [22] learned from Wikipedia and Common Crawl data, (iii) MUSE, a fastText embedding …","url":["http://ceur-ws.org/Vol-2708/donlp2.pdf"]} {"year":"2020","title":"On the impact of publicly available news and information transfer to financial markets","authors":["M Jazbec, B Pásztor, F Faltings, N Antulov-Fantulin… - arXiv preprint arXiv …, 2020"],"snippet":"… To address ihttps://commoncrawl.org iiDetailed statistics about the Common Crawl can found here: https://commoncrawl.github.io/cc-crawl-statistics iiiWe omitted the domain www.nbonews.com. While the most frequently occurring …","url":["https://arxiv.org/pdf/2010.12002"]} {"year":"2020","title":"On the importance of pre-training data volume for compact language models","authors":["V Micheli, M D'Hoffschmidt, F Fleuret - arXiv preprint arXiv:2010.03813, 2020"],"snippet":"… OSCAR 2 (Ortiz Suárez et al., 2019) is a large-scale multilingual open source collection of corpora ob- tained by language classification and filtering of the Common Crawl corpus 3. The whole French part amounts to 138 GB …","url":["https://arxiv.org/pdf/2010.03813"]} {"year":"2020","title":"On the Language Neutrality of Pre-trained Multilingual Representations","authors":["J Libovický, R Rosa, A Fraser - arXiv preprint arXiv:2004.05160, 2020"],"snippet":"… XLM-RoBERTa. Conneau et al. (2019) claim that the original mBERT is under-trained and train a similar model on a larger dataset that consists of two terabytes of plain text extracted from CommonCrawl (Wenzek et al., 2019) …","url":["https://arxiv.org/pdf/2004.05160"]} {"year":"2020","title":"On the Persistence of Persistent Identifiers of the Scholarly Web","authors":["M Klein, L Balakireva - arXiv preprint arXiv:2004.03011, 2020"],"snippet":"… These findings were confirmed in a large scale study by Thompson and Jian [16] based on two samples of the web taken from Common Crawl6 datasets … Thompson, HS, Tong, J.: Can common crawl reliably track persistent identifier (PID) use over time …","url":["https://arxiv.org/pdf/2004.03011"]} {"year":"2020","title":"On the synthesis of metadata tags for HTML files","authors":["P Jiménez, JC Roldán, FO Gallego, R Corchuelo - Software: Practice and Experience"],"snippet":"… Recently, an analysis of the 32.04 million domains in the November 2019 Common Crawl has revealed that only 11.92 million domains provide metadata tags,1 which clearly argues for a method that helps software agents deal with the documents provided by the remaining …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2886"]} {"year":"2020","title":"On using Product-Specific Schema. 
org from Web Data Commons: An Empirical Set of Best Practices","authors":["R Kiran Selvam, M Kejriwal - arXiv e-prints, 2020","RK Selvam, M Kejriwal - arXiv preprint arXiv:2007.13829, 2020"],"snippet":"… on e-commerce websites. The Web Data Commons (WDC) project has extracted schema.org data at scale from webpages in the Common Crawl and made it available as an RDF 'knowledge graph' at scale. The portion of this …","url":["https://arxiv.org/pdf/2007.13829","https://ui.adsabs.harvard.edu/abs/2020arXiv200713829K/abstract"]} {"year":"2020","title":"On-The-Fly Information Retrieval Augmentation for Language Models","authors":["H Wang, D McAllester - Proceedings of the First Joint Workshop on Narrative …, 2020"],"snippet":"… News etc. For language modelling we use the NY Times portion because it is written by native English speakers. Since GPT 2.0 is trained on Common Crawl which contains news collections started from 2008. To avoid testing …","url":["https://www.aclweb.org/anthology/2020.nuse-1.14.pdf"]} {"year":"2020","title":"One Belt, One Road, One Sentiment? A Hybrid Approach to Gauging Public Opinions on the New Silk Road Initiative","authors":["JK Chandra, E Cambria, A Nanetti"],"snippet":"… ABSA. We used the Common Crawl GloVe version [44], a pre-trained 300-dimension vector representation database of 840 billion tokens and 2.2 million vocabulary, to convert our preprocessed tweets into word embeddings …","url":["https://sentic.net/one-belt-one-road-one-sentiment.pdf"]} {"year":"2020","title":"Open Information Extraction as Additional Source for Kazakh Ontology Generation","authors":["N Khairova, S Petrasova, O Mamyrbayev, K Mukhsina - Asian Conference on …, 2020"],"snippet":"… also for many others. For example, an experiment was conducted in [19] for assessing the adequacy of measuring the factual density of 50 randomly selected Spanish documents in the CommonCrawl corpus. In a recent study …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41964-6_8"]} {"year":"2020","title":"Open Intent Extraction from Natural Language Interactions","authors":["N Vedula, N Lipka, P Maneriker, S Parthasarathy - Proceedings of The Web …, 2020"],"snippet":"… 2A commercial Customer Relationship Management (CRM) software. Implementation: We use the 300-dimensional GloVe embeddings [50] pre-trained on the Common Crawl dataset3, and character embeddings as per Ma et al [42] …","url":["https://dl.acm.org/doi/pdf/10.1145/3366423.3380268"]} {"year":"2020","title":"Open science-based framework to reveal open data publishing: an experience from using Common Crawl","authors":["A Correa, I Fernandes - ELPUB 24rd edition of the International Conference on …, 2020"],"snippet":"The publishing of open data is considered a key element for civic participation paving the way to the 'public value', a term which underpins the social contribution. 
A result of that can be seen through the popularity of data portals published all around …","url":["https://hal.archives-ouvertes.fr/hal-02544245/document"]} {"year":"2020","title":"Open source speech recognition on edge devices","authors":["R Peinl, B Rizk, R Szabad - 2020 10th International Conference on Advanced …, 2020"],"snippet":"… To make the comparison as fair as possible we used the KenLM 6-gram language model for all ASR models except DS2, which came with its own word-level language model trained on CommonCrawl (en-00) from the Common Crawl Corpus7 …","url":["https://ieeexplore.ieee.org/abstract/document/9208978/"]} {"year":"2020","title":"Open-Domain Question Answering Goes Conversational via Question Rewriting","authors":["R Anantha, S Vakulenko, Z Tu, S Longpre, S Pulman… - arXiv preprint arXiv …, 2020"],"snippet":"… relevant pages with randomly sampled web pages that constitute 1% of the Common Crawl dataset identified … Wayback Machine and 9.9M random web pages from the Common Crawl dataset … 3 https://commoncrawl.org/2019 …","url":["https://arxiv.org/pdf/2010.04898"]} {"year":"2020","title":"Optimal Subarchitecture Extraction For BERT","authors":["A de Wynter, DJ Perry - arXiv preprint arXiv:2010.10499, 2020"],"snippet":"… In order to have a sufficiently diverse dataset to pre-train Bort, we combined corpora obtained from Wikipedia7, Wiktionary8, OpenWebText (Gokaslan and Cohen, 2019), UrbanDictionary9, One Billion Words (Chelba et al., 2014) …","url":["https://arxiv.org/pdf/2010.10499"]} {"year":"2020","title":"Optimizing Distributed Computing Systems via Machine Learning","authors":["H Wang - 2020"],"snippet":"Page 1. Optimizing Distributed Computing Systems via Machine Learning by Hao Wang A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy Graduate Department of Electrical …","url":["https://tspace.library.utoronto.ca/bitstream/1807/103710/1/Wang_Hao_202011_PhD_thesis.pdf"]} {"year":"2020","title":"OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings","authors":["S Dev, T Li, JM Phillips, V Srikumar - arXiv preprint arXiv:2007.00049, 2020"],"snippet":"… Our code for reproducing experiments will be released upon publication. Debiasing contextualized embeddings. The operations above are described for a noncontextualized embedding; we use one of the largest such …","url":["https://arxiv.org/pdf/2007.00049"]} {"year":"2020","title":"Out-of-the-Box and into the Ditch? Multilingual Evaluation of Generic Text Extraction Tools","authors":["A Barbaresi, G Lejeune - Proceedings of the 12th Web as Corpus Workshop, 2020"],"snippet":"… Recently, approaches using the CommonCrawl1 have flourished as they allow for faster download and processing by skipping (or more precisely outsourcing) the crawling phase (Habernal et al., 2016; Schäfer, 2016) … 1https://commoncrawl.org …","url":["https://www.aclweb.org/anthology/2020.wac-1.2.pdf"]} {"year":"2020","title":"Overview of the CLEF eHealth 2020 task 2: consumer health search with ad hoc and spoken queries","authors":["L Goeuriot, Z Liu, G Pasi, GG Saez, M Viviani, C Xu - … Notes of Conference and Labs of …, 2020"],"snippet":"… 2.1 Documents The 2018 CLEF eHealth Consumer Health Search document collection was used in this year's IR challenge. As detailed in [17], this collection consists of web pages acquired from the CommonCrawl. 
An …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_260.pdf"]} {"year":"2020","title":"Overview of the CLEF eHealth Evaluation Lab 2020","authors":["C Xu - … IR Meets Multilinguality, Multimodality, and Interaction …"],"snippet":"… Task 2. The 2018 CLEF eHealth Consumer Health Search document collection was used in this year's IR challenge. As detailed in [14], this collection consists of web pages acquired from the CommonCrawl. An initial list of websites was identified for acquisition …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA255&dq=commoncrawl&ots=BCbV87DfTS&sig=y0JIuiOfa-DKv4aFVa2ZXlCYBic"]} {"year":"2020","title":"Overview of the seventh Dialog System Technology Challenge: DSTC7","authors":["LF D'Haro, K Yoshino, C Hori, TK Marks… - Computer Speech & …, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0885230820300012"]} {"year":"2020","title":"Overview of the Transformer-based Models for NLP Tasks","authors":["A Gillioz, J Casas, E Mugellini, O Abou Khaled"],"snippet":"… they contain. The tokenization is done with SentencePiece [15]. In a few cases, for example, in [16], the authors only used a subset of those datasets (eg Stories [17] is a subset of CommonCrawl dataset). IV. BENCHMARKS …","url":["https://annals-csis.org/Volume_21/drp/pdf/20.pdf"]} {"year":"2020","title":"Overview of Touché 2020: Argument Retrieval","authors":["A Bondarenko, M Fröbe, M Beloucif, L Gienapp… - Working Notes Papers of the …, 2020","H Wachsmuth, M Potthast, M Hagen - … IR Meets Multilinguality, Multimodality, and Interaction …","Y Ajjour, A Panchenko, C Biemann, B Stein… - Experimental IR Meets …"],"snippet":"… Stab et al.[26] retrieve documents from the Common Crawl 3 and then use a topic-dependent neural network to extract arguments from the retrieved documents … This method is shown to outperform several 3 http://commoncrawl.org. Page 398 …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA384&dq=commoncrawl&ots=BCbV87DfTS&sig=590FLiaMF0QksqxDhusBxoZQuK4","https://link.springer.com/content/pdf/10.1007/978-3-030-58219-7.pdf#page=395","https://webis.de/downloads/publications/papers/stein_2020v.pdf"]} {"year":"2020","title":"ParaCrawl: Web-Scale Acquisition of Parallel Corpora","authors":["M Banón, P Chen, B Haddow, K Heafield, H Hoang…"],"snippet":"… In an exploratory study, only 5% of a collection of web pages with useful content were found in CommonCrawl. This may have improved with recent more extensive crawls by CommonCrawl but there is still a strong argument for targeted crawling. 4 Crawling …","url":["https://www.neural.mt/papers/edinburgh/paracrawl.pdf"]} {"year":"2020","title":"Parallelograms revisited: Exploring the limitations of vector space models for simple analogies","authors":["JC Peterson, D Chen, TL Griffiths - Cognition, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page.
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0010027720302596"]} {"year":"2020","title":"Pedro Javier Ortiz Suárez1, 3 [0000− 0003− 0343− 8852], Yoann Dupont2","authors":["G Lejeune, T Tian"],"snippet":"… For the fixed word embeddings we used the Common Crawl-based FastText embeddings [10] originally trained by Facebook as opposed to the embeddings provided by the HIPE shared task, as we obtained better dev …","url":["http://ceur-ws.org/Vol-2696/paper_203.pdf"]} {"year":"2020","title":"Persistent metadata catalog","authors":["GS Mcpherson, Y Mikhaylyuta, TD Baker, RJ Cole - US Patent 10,853,356, 2020"],"snippet":"… sources. In one embodiment metadata catalog service 120 may host metadata for a collection of data sets that are available in the public domain, such as the 1000 genome, NASA NEX, and the Common Crawl Corpus data sets …","url":["https://www.freepatentsonline.com/10853356.html"]} {"year":"2020","title":"Phishing Detection Using Machine Learning Technique","authors":["J Rashid, T Mahmood, MW Nisar, T Nazir - 2020 First International Conference of …, 2020"],"snippet":"… and June 2017. In particular, we selected 5000 phishing web pages, and all web pages are more stable, especially based on URLs. The fish tank is based entirely on the Alexa URL and Common Crawl archives. B. Step 2 Vocabulary …","url":["https://ieeexplore.ieee.org/abstract/document/9283771/"]} {"year":"2020","title":"PhishingLine: Hybrid Phishing Classifier with Logo Detection","authors":["K Vohra - 2020"],"snippet":"… 29 5.5 Web Capture . . . . . 30 5.5.1 WARC . . . . . 30 5.5.2 Common Crawl . . . . . 30 5.6 Client Server Architecture . . . . . 30 5.6.1 TCP …","url":["https://www.ka.beer/pdf/project.pdf"]} {"year":"2020","title":"Phonemer at WNUT-2020 Task 2: Sequence Classification Using COVID Twitter BERT and Bagging Ensemble Technique based on Plurality Voting","authors":["A Wadhawan - arXiv preprint arXiv:2010.00294, 2020"],"snippet":"… Table 2. 4.2 System Settings For training the CNN, LSTM and BiLSTMs, word vectors for english language pre-trained on Common Crawl2 and Wikipedia3 are downloaded4 and used using FastText5 library. These word vectors …","url":["https://arxiv.org/pdf/2010.00294"]} {"year":"2020","title":"Photo Stream Question Answer","authors":["W Zhang, S Tang, Y Cao, J Xiao, S Pu, F Wu, Y Zhuang - Proceedings of the 28th …, 2020"],"snippet":"Page 1. Photo Stream Question Answer Wenqiao Zhang Zhejiang University wenqiaozhang@zju.edu.cn Siliang Tang* Zhejiang Universety siliang@zju.edu.cn Yanpeng Cao,Jun Xiao Zhejiang University caoyp,junx@zju.edu.cn …","url":["https://dl.acm.org/doi/abs/10.1145/3394171.3413745"]} {"year":"2020","title":"PMap: Ensemble Pre-training Models for Product Matching","authors":["N Kertkeidkachorn, R Ichise"],"snippet":"… In this section, we explain the pre-train models and how to fine-tune them. 5 http://webdatacommons.org/largescaleproductcorpus/v2/index. html 6 http://webdatacommons.org/structureddata/ 7 https://commoncrawl …","url":["http://ceur-ws.org/Vol-2720/paper2.pdf"]} {"year":"2020","title":"PoKED: A Semi-Supervised System for Word Sense Disambiguation","authors":["F Wei"],"snippet":"Page 1. 
PoKED: A Semi-Supervised System for Word Sense Disambiguation Feng Wei 1 Abstract In this paper, we propose a semi-supervised neural system, named Position-wise Orthogonal Knowledge-Enhanced Disambiguator …","url":["https://proceedings.icml.cc/static/paper_files/icml/2020/1929-Paper.pdf"]} {"year":"2020","title":"Practical Data Science for Information Professionals","authors":["D Stuart - 2020"],"snippet":"Page 1. Practical Data Science for Information Professionals Page 2. Every purchase of a Facet book helps to fund CILIP's advocacy, awareness and accreditation programmes for information professionals. Page 3. Practical Data …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=2TjzDwAAQBAJ&oi=fnd&pg=PP3&dq=commoncrawl&ots=kN5yF0ztQ5&sig=n_c9XhBzAQWnpO6G9vyAev9qRJY"]} {"year":"2020","title":"Pragmatic Aspects of Discourse Production for the Automatic Identification of Alzheimer's Disease","authors":["A Pompili, A Abad, DM de Matos, IP Martins - IEEE Journal of Selected Topics in …, 2020"],"snippet":"… For this purpose, we rely on a pre-trained model of word vector representations containing 2 million word vectors, in 300 dimensions, trained with fastText on Common Crawl [42]. In the process of converting a sentence into its vector …","url":["https://ieeexplore.ieee.org/abstract/document/8963723/"]} {"year":"2020","title":"Pre-indexing Pruning Strategies","authors":["S Altin, R Baeza-Yates, BB Cambazoglu - International Symposium on String …, 2020"],"snippet":"… 4 Experimental Setup. 4.1 Document Collection. As web document collection, we mostly use the open source web collection provided by Common Crawl, CC, in November 2017 … 5 Experimental Results. 5.1 Common Crawl and BM25 …","url":["https://link.springer.com/chapter/10.1007/978-3-030-59212-7_13"]} {"year":"2020","title":"Pre-trained Models for Natural Language Processing: A Survey","authors":["X Qiu, T Sun, Y Xu, Y Shao, N Dai, X Huang - arXiv preprint arXiv:2003.08271, 2020"],"snippet":"Page 1. .Invited Review . Pre-trained Models for Natural Language Processing: A Survey Xipeng Qiu*, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai & Xuanjing Huang School of Computer Science, Fudan University, Shanghai …","url":["https://arxiv.org/pdf/2003.08271"]} {"year":"2020","title":"Pre-training Polish Transformer-based Language Models at Scale","authors":["S Dadas, M Perełkiewicz, R Poświata - arXiv preprint arXiv:2006.04229, 2020"],"snippet":"… the size of the training corpus. 2) We proposed a method for collecting and pre-processing the data from the Common Crawl database to obtain clean, high-quality text corpora. 
3) We conducted a comprehensive evaluation …","url":["https://arxiv.org/pdf/2006.04229"]} {"year":"2020","title":"Pre-training via Leveraging Assisting Languages and Data Selection for Neural Machine Translation","authors":["H Song, R Dabre, Z Mao, F Cheng, S Kurohashi… - arXiv preprint arXiv …, 2020"],"snippet":"… Additionally, we used Common Crawl4 monolingual corpora for pre-training … We filled the CurrentDistribution line by line in Common Crawl file if the ratio of the length of current line had been less than the ratio of this length in target length distribution …","url":["https://arxiv.org/pdf/2001.08353"]} {"year":"2020","title":"Pre-training via Leveraging Assisting Languages for Neural Machine Translation","authors":["H Song, R Dabre, Z Mao, F Cheng, S Kurohashi…"],"snippet":"… We used Common Crawl3 monolingual corpora for pre-training … We created a shared sub-word vocabulary using Japanese and English data from ASPEC mixing with Japanese, English, Chinese and French data from Common Crawl …","url":["https://shyyhs.github.io/files/ACL2020SRW_Song_paper.pdf"]} {"year":"2020","title":"Predicting Consumers' Brand Sentiment Using Text Analysis on Reddit","authors":["P Cen"],"snippet":"… with an F-score of 86.25 for NER tasks. The en_core_web_md model is an English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl. For NER tasks, it assigns various labels to identified entities, such as “CARDINAL,” “ORG,” …","url":["https://repository.upenn.edu/cgi/viewcontent.cgi?article=1097&context=joseph_wharton_scholars"]} {"year":"2020","title":"Predicting Themes within Complex Unstructured Texts: A Case Study on Safeguarding Reports","authors":["A Edwards, D Rogers, J Camacho-Collados… - arXiv preprint arXiv …, 2020"],"snippet":"… Pre-trained word embeddings We leverage two pre-trained 300-dimensional word embedding models: Word2vec (Mikolov et al., 2013) trained on Google news dataset and fastText (Bojanowski et al., 2017) trained with subword information on Common Crawl …","url":["https://arxiv.org/pdf/2010.14584"]} {"year":"2020","title":"Predicting Twitter Engagement With Deep Language Models","authors":["M Volkovs, Z Cheng, M Ravaut, H Yang, K Shen…"],"snippet":"… text comprehension tasks. The majority of published language models are pre-trained on large text corpora such as Wikipedia or CommonCrawl, that typically contain longer and properly worded pieces of text. Tweets on the …","url":["http://www.cs.toronto.edu/~mvolkovs/recsys2020_challenge.pdf"]} {"year":"2020","title":"Prior Guided Feature Enrichment Network for Few-Shot Segmentation","authors":["Z Tian, H Zhao, M Shu, Z Yang, R Li, J Jia - IEEE Annals of the History of Computing, 2020"],"snippet":"… Therefore the prior is not applicable and we only verify FEM on the baseline with VGG-16 backbone in the zero-shot setting. Structural Change. Embeddings of Word2Vec [24] and FastText [22] are trained …","url":["https://www.computer.org/csdl/journal/tp/5555/01/09154595/1lZzPRFhQqY"]} {"year":"2020","title":"Privacy at Scale: Introducing the PrivaSeer Corpus of Web Privacy Policies","authors":["M Srinath, S Wilson, CL Giles - arXiv preprint arXiv:2004.11131, 2020"],"snippet":"… As a consequence, 2https://commoncrawl.org/ Page 3 … Thus, we selected those URLs which had the word “privacy” or the words “data” and “protection” from the Common Crawl URL archive.
We were able to extract 3.9 million URLs that fit this selection criterion …","url":["https://arxiv.org/pdf/2004.11131"]} {"year":"2020","title":"Privacy Policies over Time: Curation andAnalysis of a Million-Document Dataset","authors":["R Amos, G Acar, E Lucherini, M Kshirsagar… - arXiv preprint arXiv …, 2020"],"snippet":"… app privacy policy URLs [23]. Concurrent to this work, Srinath et al. contribute PrivaSeer, a dataset of over 1 million English privacy policies extracted from May 2019 Common Crawl data [14]. Our work advances this area of …","url":["https://arxiv.org/pdf/2008.09159"]} {"year":"2020","title":"Privacy-Preserving Passive DNS","authors":["P Papadopoulos, N Pitropakis, WJ Buchanan, O Lo… - Computers, 2020"],"snippet":"… These sources include but are not limited to Public Blacklists, the Alexa ranking, the Common Crawl project, and various Top Level Domain (TLD) zone files. This system's output is a refined dataset that can be …","url":["https://www.mdpi.com/2073-431X/9/3/64/pdf"]} {"year":"2020","title":"Privacy-Preserving Visual Content Tagging using Graph Transformer Networks","authors":["XS Vu, DT Le, C Edlund, L Jiang, HD Nguyen"],"snippet":"… that local knowledge can be derived from data observations including label semantics or multimedia content se- mantics (eg, optical character recognition); whereas, global knowledge can be drawn from publicly available corpora …","url":["https://people.cs.umu.se/sonvx/files/ACMMM2020_SGTN_CAMREADY_1.pdf"]} {"year":"2020","title":"Probing Task-Oriented Dialogue Representation from Language Models","authors":["CS Wu, C Xiong - arXiv preprint arXiv:2010.13912, 2020"],"snippet":"… is to maximize left-to-right generation likelihood. To ensure diverse and nearly unlimited text sources, they use Common Crawl to obtain 8M documents as its training data. Budzianowski and Vulic (2019) trained GPT2 on task …","url":["https://arxiv.org/pdf/2010.13912"]} {"year":"2020","title":"Probing Tasks for Noised Back-Translation","authors":["N Spring - 2020"],"snippet":"Page 1. Bachelor's thesis presented to the Faculty of Arts and Social Sciences of the University of Zurich for the degree of Bachelor of Arts UZH Probing Tasks for Noised Back-Translation Author: Nicolas Spring Student …","url":["https://www.cl.uzh.ch/dam/jcr:34ea0877-26f8-405b-88a5-1191536986db/spring_ba_probing_tasks.pdf"]} {"year":"2020","title":"Probing Text Models for Common Ground with Visual Representations","authors":["G Ilharco, R Zellers, A Farhadi, H Hajishirzi - arXiv preprint arXiv:2005.00619, 2020"],"snippet":"… GloVe embeddings (Pennington et al., 2014). For such, we use embeddings trained on 840 billion tokens of web data from Common Crawl, with dL = 300 and a vocabulary size of 2.2 million2. 
Models trained on text and images …","url":["https://arxiv.org/pdf/2005.00619"]} {"year":"2020","title":"Programming in Natural Language with fuSE: Synthesizing Methods from Spoken Utterances Using Deep Natural Language Understanding","authors":["S Weigelt, V Steurer, T Hey, WF Tichy - Proceedings of the 58th Annual Meeting of …, 2020"],"snippet":"… on the Common Crawl dataset4 by Facebook Research (Mikolov et al., 3Note that we do not discuss the influence of varying epoch numbers, since we used early stopping, ie the training stops when the validation loss stops …","url":["https://www.aclweb.org/anthology/2020.acl-main.395.pdf"]} {"year":"2020","title":"Projecting Heterogeneous Annotations for Named Entity Recognition","authors":["R Agerri, G Rigau","R Agerri, G Rigau - Proceedings of the Iberian Languages Evaluation …, 2020"],"snippet":"… Page 3. of Common Crawl text … The biggest update that XLM-Roberta offers is a significantly increased amount of training data, 2.5TB of Common Crawl clean data [6]. As for BERT, in this paper we use the base version of XLM-RoBERTa …","url":["http://ceur-ws.org/Vol-2664/capitel_paper2.pdf","https://ragerri.github.io/files/ixaera-capitel2020.pdf"]} {"year":"2020","title":"Projecting named entity recognizers from resource-rich to resource-poor languages without annotated or parallel corpora","authors":["J Hou"],"snippet":"Page 1. Projecting named entity recognizers from resource-rich to resourcepoor languages without annotated or parallel corpora Hou, Jue Helsinki October 20, 2019 UNIVERSITY OF HELSINKI Department of Computer …","url":["https://helda.helsinki.fi/bitstream/handle/10138/310012/Jue_Hou-Master_s_Thesis-v2.1.pdf?sequence=2&isAllowed=y"]} {"year":"2020","title":"PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data","authors":["D Carmo, M Piau, I Campiotti, R Nogueira, R Lotufo - arXiv preprint arXiv:2008.09144, 2020"],"snippet":"… The original T5 vocabulary uses the SentencePiece library [7] using English, German, French, and Romanian web pages from Common Crawl.1 We use a similar procedure to create our Portuguese vocabulary: we train …","url":["https://arxiv.org/pdf/2008.09144"]} {"year":"2020","title":"PublishInCovid19 at the FinSBD-2 Task: Sentence and List Extraction in Noisy PDF Text Using a Hybrid Deep Learning and Rule-Based Approach","authors":["J Singh - Proceedings of the Second Workshop on Financial …, 2020"],"snippet":"… For the pretrained word embeddings, we use GloVE 6 which are trained on large Common Crawl dataset and can effectively represent 84 billion cased tokens. 
To train the BERT for our task, we fine-tune a pre-trained model, namely bert-base-cased …","url":["https://www.aclweb.org/anthology/2020.finnlp-1.pdf#page=63"]} {"year":"2020","title":"Punctuation Prediction in Spontaneous Conversations: Can We Mitigate ASR Errors with Retrofitted Word Embeddings?","authors":["Ł Augustyniak, P Szymanski, M Morzy, P Zelasko… - arXiv preprint arXiv …, 2020"],"snippet":"… In this work, we use pre-trained GloVe embeddings trained on the Common Crawl dataset consisting of 2.6 billion textual documents … In our case, the punctuation in conversational transcripts is substantially different from the …","url":["https://arxiv.org/pdf/2004.05985"]} {"year":"2020","title":"PyChain: A Fully Parallelized PyTorch Implementation of LF-MMI for End-to-End ASR","authors":["Y Shao, Y Wang, D Povey, S Khudanpur - arXiv preprint arXiv:2005.09824, 2020"],"snippet":"… We hope that our experience with PYCHAIN will inspire other efforts to build next-generation hybrid ASR tools. 812k hours AM train set and common crawl LM. 9Data augmentation and pre-trained on LibriSpeech …","url":["https://arxiv.org/pdf/2005.09824"]} {"year":"2020","title":"Quality and Relevance Metrics for Selection of Multimodal Pretraining Data","authors":["R Rao, S Rao, E Nouri, D Dey, A Celikyilmaz, B Dolan - Proceedings of the IEEE/CVF …, 2020"],"snippet":"… The GloVe vectors used are pretrained on 840 billion tokens from Common Crawl. Let o ∈ Oi be the set of objects de- tected by the RCNN for a given image i, w ∈ d be the set 0 100 200 300 400 500 ConceptualCaptions Ngram …","url":["http://openaccess.thecvf.com/content_CVPRW_2020/papers/w56/Rao_Quality_and_Relevance_Metrics_for_Selection_of_Multimodal_Pretraining_Data_CVPRW_2020_paper.pdf"]} {"year":"2020","title":"Quality Estimation for Machine Translation with Multi-granularity Interaction⋆","authors":["K Tian, J Zhang"],"snippet":"… (7) 4 Experiments 4.1 Dataset The bilingual parallel corpus that we use for pre-trained multilingual BERT is officially released by the WMT17 Shared Task: Machine Translation of News1, in- cluding Europarl v7, Common Crawl …","url":["http://sc.cipsc.org.cn/mt/conference/2020/papers/T20-1005.pdf"]} {"year":"2020","title":"Quality Evaluation","authors":["JM Gomez-Perez, R Denaux, A Garcia-Silva - A Practical Guide to Hybrid Natural …, 2020"],"snippet":"… Besides the embeddings trained by us, we also include, as part of our study, several pre-trained embeddings, notably the GloVe embeddings for CommonCrawl— code glove_840B provided by Stanford11 —fastText …","url":["https://link.springer.com/chapter/10.1007/978-3-030-44830-1_7"]} {"year":"2020","title":"Query focused abstractive summarization using BERTSUM model","authors":["DM Abdullah - 2020"],"snippet":"… Conneau et al. (2019) have introduced a multilingual masked language model from Facebook AI. This model has been trained on 2.5 TB of newly created clean CommonCrawl (Wenzek et al., 2019) data in 100 languages. The model has shown state-of-the-art results …","url":["https://opus.uleth.ca/bitstream/handle/10133/5760/ABDULLAH_DEEN_MOHAMMAD_MSC_2020.pdf?sequence=1"]} {"year":"2020","title":"Question Answering for Comparative Questions with GPT-2","authors":["B Sievers"],"snippet":"… https://www.microsoft.com/en-us/research/blog/ turing-nlg-a-17-billion-parameter-languagemodel-by-microsoft/, accessed: 6.3.2020 3. 
Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic chatnoir: search engine for the clueweb and the common crawl …","url":["http://ceur-ws.org/Vol-2696/paper_213.pdf"]} {"year":"2020","title":"Question Answering When Knowledge Bases are Incomplete","authors":["C Pradel, D Sileo, Á Rodrigo, A Peñas, E Agirre - International Conference of the …, 2020","E Agirre - … IR Meets Multilinguality, Multimodality, and Interaction …"],"snippet":"… with bag of word embeddings. We use FastText CommonCrawl word embeddings [10] 4 and a max pooling to produce the continuous bag of word representations of table columns and the question text. The column bag of words …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA43&dq=commoncrawl&ots=BCbV87DfTS&sig=kVIo_AYLn9xgMPpxB-rDuk1jzEg","https://link.springer.com/chapter/10.1007/978-3-030-58219-7_4"]} {"year":"2020","title":"Question Type Classification Methods Comparison","authors":["T Seidakhmetov - arXiv preprint arXiv:2001.00571, 2020"],"snippet":"… The GLoVe vectors were pre-trained using 840 billion tokens from Common Crawl, and each token is mapped into a 300-dimensional vector [3]. Xembeddings = GloveEmbedding( Xword) ∈ RNxDword where Dword is a number of dimensions of a word vector …","url":["https://arxiv.org/pdf/2001.00571"]} {"year":"2020","title":"Questioning the Use of Bilingual Lexicon Induction as an Evaluation Task for Bilingual Word Embeddings","authors":["B Marie, A Fujita"],"snippet":"… gual word embeddings. In fact, this corpus was significantly smaller than the Wikipedia corpora for all the other languages, and than the Finnish Common Crawl corpus used to train Finnish Vecmap-emb. Another finding is …","url":["https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/P5-14.pdf"]} {"year":"2020","title":"REALTOXICITYPROMPTS: Evaluating Neural Toxic Degeneration in Language Models","authors":["S Gehman, S Gururangan, M Sap, Y Choi, NA Smith - arXiv preprint arXiv …, 2020","SGS Gururangan, MSY Choi, NA Smith"],"snippet":"… GPT-2 (specifically, GPT-2-small; Radford et al., 2019), is a similarly sized model pretrained on OPENAI-WT, which contains 40GB of English web text and is described in §6.7 GPT-3 (Brown et al., 2020) is pretrained on a mix …","url":["https://arxiv.org/pdf/2009.11462","https://homes.cs.washington.edu/~msap/pdfs/gehman2020realtoxicityprompts.pdf"]} {"year":"2020","title":"Recent Trends in the Use of Deep Learning Models for Grammar Error Handling","authors":["M Naghshnejad, T Joshi, VN Nair - arXiv preprint arXiv:2009.02358, 2020"],"snippet":"Page 1. 1 Recent Trends in the Use of Deep Learning Models for Grammar Error Handling Mina Naghshnejad1, Tarun Joshi, and Vijayan N. Nair Corporate Model Risk, Wells Fargo2 Abstract Grammar error handling (GEH) is …","url":["https://arxiv.org/pdf/2009.02358"]} {"year":"2020","title":"Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation","authors":["AC Stickland, X Li, M Ghazvininejad - arXiv preprint arXiv:2004.14911, 2020"],"snippet":"Page 1. 
Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation Asa Cooper Stickland♣ Xian Li♠ ♣ University of Edinburgh, ♠ Facebook AI a.cooper.stickland@ed.ac.uk, {xianl,ghazvini}@fb.com Marjan Ghazvininejad♠ Abstract …","url":["https://arxiv.org/pdf/2004.14911"]} {"year":"2020","title":"ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning","authors":["W Yu, Z Jiang, Y Dong, J Feng - arXiv preprint arXiv:2002.04326, 2020"],"snippet":"Page 1. Published as a conference paper at ICLR 2020 RECLOR: AREADING COMPREHENSION DATASET REQUIRING LOGICAL REASONING Weihao Yu∗, Zihang Jiang∗, Yanfei Dong & Jiashi Feng National University …","url":["https://arxiv.org/pdf/2002.04326"]} {"year":"2020","title":"Recognai's Working Notes for CANTEMIST-NER Track","authors":["DC Fidalgo, D Vila-Suero, FA Montes - … of the Iberian Languages Evaluation Forum …, 2020"],"snippet":"… light-weight solution regarding the pretrained component. For this reason we chose the pretrained Spanish word vectors provided by FastText[4]. These vectors encompass 2 million words that were trained on Common Crawl4 …","url":["http://ceur-ws.org/Vol-2664/cantemist_paper4.pdf"]} {"year":"2020","title":"Reducing Language Biases in Visual Question Answering with Visually-Grounded Question Encoder","authors":["G KV, A Mittal - arXiv preprint arXiv:2007.06198, 2020"],"snippet":"Page 1. Reducing Language Biases in Visual Question Answering with Visually-Grounded Question Encoder Gouthaman KV and Anurag Mittal Indian Institute of Technology Madras, India {gkv,amittal}@cse.iitm.ac.in Abstract …","url":["https://arxiv.org/pdf/2007.06198"]} {"year":"2020","title":"Referring Image Segmentation via Cross-Modal Progressive Comprehension","authors":["S Huang, T Hui, S Liu, G Li, Y Wei, J Han, L Liu, B Li - … of the IEEE/CVF Conference on …, 2020"],"snippet":"Page 1. Referring Image Segmentation via Cross-Modal Progressive Comprehension Shaofei Huang1,2∗ Tianrui Hui1,2∗ Si Liu3† Guanbin Li4 Yunchao Wei5 Jizhong Han1,2 Luoqi Liu6 Bo Li3 1 Institute of Information Engineering …","url":["http://openaccess.thecvf.com/content_CVPR_2020/papers/Huang_Referring_Image_Segmentation_via_Cross-Modal_Progressive_Comprehension_CVPR_2020_paper.pdf"]} {"year":"2020","title":"Refinement of Unsupervised Cross-Lingual Word Embeddings","authors":["M Biesialska, MR Costa-jussà - arXiv preprint arXiv:2002.09213, 2020"],"snippet":"… Finnish. Monolingual embeddings of 300 di- mensions were created using Word2Vec3 [18] and were trained on WMT News Crawl (Spanish), WacKy crawling corpora (English, German), and Common Crawl (Finnish). To evaluate …","url":["https://arxiv.org/pdf/2002.09213"]} {"year":"2020","title":"ReINTEL: A Multimodal Data Challenge for Responsible Information Identification on Social Network Sites","authors":["DT Le, XS Vu, ND To, HQ Nguyen, TT Nguyen, L Le… - arXiv preprint arXiv …, 2020"],"snippet":"… Word2VecVN (Vu, 2016) x Trained on 7GB texts of Vietnamese news FastText (Vietnamese version) (Joulin et al., 2016) x Trained on Vietnamese texts of the CommonCrawl corpus ETNLP (Vu et al., 2019) x Trained on 1GB texts of Vietnamese Wikipedia …","url":["https://arxiv.org/pdf/2012.08895"]} {"year":"2020","title":"Related Tasks can Share! A Multi-task Framework for Affective language","authors":["KS Deep, MS Akhtar, A Ekbal, P Bhattacharyya - arXiv preprint arXiv:2002.02154, 2020"],"snippet":"… 3. 
Character-level embeddings2: Character-level embeddings are trained over common crawl glove corpus providing 300 dimensional vectors for each character (used in case if word is not present in other two embeddings) …","url":["https://arxiv.org/pdf/2002.02154"]} {"year":"2020","title":"Relational and Fine-Grained Argument Mining","authors":["D Trautmann, M Fromm, V Tresp, T Seidl, H Schütze - Datenbank-Spektrum, 2020"],"snippet":"… Crowdworkers had the task of selecting argumentative spans for a given set of topics and topic related sentences. The sentences were from textual data extracted from Common Crawl Footnote 6 for a predefined list of eight …","url":["https://link.springer.com/article/10.1007/s13222-020-00341-z"]} {"year":"2020","title":"Relational Databases and SQL Language","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… itself. We are mainly discussing PostgreSQL here so that you can scale up to 64 TB if you decide to index large portions of common crawl datasets for creating a backlinks and news database in Chapters 6 and 7, respectively …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_5"]} {"year":"2020","title":"Representation learning for input classification via topic sparse autoencoder and entity embedding","authors":["D Li, J Zhang, P Li - US Patent App. 16/691,554, 2020"],"snippet":"US20200184339A1 - Representation learning for input classification via topic sparse autoencoder and entity embedding - Google Patents. Representation learning for input classification via topic sparse autoencoder and entity embedding. Download PDF Info …","url":["https://patents.google.com/patent/US20200184339A1/en"]} {"year":"2020","title":"Reproducible Extraction of Cross-lingual Topics (rectr)","authors":["CH Chan, J Zeng, H Wessler, M Jungblut, K Welbers… - … Methods and Measures, 2020"],"snippet":"… 17 Researchers at Facebook Research created these aligned word embeddings by first training fastText word embeddings (Bojanowski et al., 2017) using Wikipedia and Common Crawl articles of each individual …","url":["https://www.tandfonline.com/doi/abs/10.1080/19312458.2020.1812555"]} {"year":"2020","title":"Research Challenges in Designing Differentially Private Text Generation Mechanisms","authors":["O Feyisetan, A Aggarwal, Z Xu, N Teissier - arXiv preprint arXiv:2012.05403, 2020"],"snippet":"… A natural way to “bias” an exponential mechanism without changing its privacy properties is to modulate it with a public “prior” µ(z). For example, such a prior can be constructed over a publicly available corpus such as Wikipedia or Common Crawl …","url":["https://arxiv.org/pdf/2012.05403"]} {"year":"2020","title":"Residual Energy-Based Models for Text Generation","authors":["Y Deng, A Bakhtin, M Ott, A Szlam, MA Ranzato - arXiv preprint arXiv:2004.11714, 2020"],"snippet":"… Page 6. Published as a conference paper at ICLR 2020 genres, totaling about half a billion words. The latter is a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016), which totals around 16 Billion words …","url":["https://arxiv.org/pdf/2004.11714"]} {"year":"2020","title":"Rethinking embedding coupling in pre-trained language models","authors":["HW Chung, T Févry, H Tsai, M Johnson, S Ruder - arXiv preprint arXiv:2010.12821, 2020"],"snippet":"Page 1. Preprint. Under review. 
RETHINKING EMBEDDING COUPLING IN PRE-TRAINED LANGUAGE MODELS Hyung Won Chung∗† Google Research hwchung@google.com Thibault Févry∗† thibaultfevry@gmail …","url":["https://arxiv.org/pdf/2010.12821"]} {"year":"2020","title":"Rethinking Evaluation in ASR: Are Our Models Robust Enough?","authors":["T Likhomanenko, Q Xu, V Pratap, P Tomasello, J Kahn… - arXiv preprint arXiv …, 2020"],"snippet":"… only; for TL – both train transcriptions and provided LM data. We also train a 4-gram LM on Common Crawl (CC) data with 200k top words and pruning of all 3,4-grams appearing once. Perplexity of all LMs is shown in Table 2 …","url":["https://arxiv.org/pdf/2010.11745"]} {"year":"2020","title":"Retrieving Comparative Arguments using Deep Pre-trained Language Models and NLU","authors":["V Chekalina, A Panchenko"],"snippet":"… ChatNoir is an Elasticsearch-based5 engine providing access to nearly 3 billion web pages from ClueWeb and Common Crawl corpora … ACM. 2. J. Bevendorff, B. Stein, M. Hagen, and M. Potthast. Elastic ChatNoir: Search …","url":["http://ceur-ws.org/Vol-2696/paper_210.pdf"]} {"year":"2020","title":"Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT","authors":["A Chronopoulou, D Stojanovski, A Fraser - arXiv preprint arXiv:2009.07610, 2020"],"snippet":"… We use 68M En sentences from NewsCrawl, 2.4M Mk and 4M Sq, both from CommonCrawl and Wikipedia … Second, in En- De, we use high-quality corpora for both languages (NewsCrawl), whereas Mk and Sq are trained on low-quality CommonCrawl data …","url":["https://arxiv.org/pdf/2009.07610"]} {"year":"2020","title":"Review of the Recent Techniques for Learning Commonsense Knowledge applied to the Winograd Schema Challenge","authors":["A Koleva"],"snippet":"… 4 A Combined Approach Prakash et al. [2], to the best of our knowledge, are the first ones to combine methods from KRR with methods from ML such as language models. They propose a framework in which four different …","url":["http://ecai2020.eu/papers/1513_paper.pdf"]} {"year":"2020","title":"Review rating prediction framework using deep learning","authors":["BH Ahmed, AS Ghabayen - Journal of Ambient Intelligence and Humanized …, 2020"],"snippet":"… word representation) embedding. There are several Glove embeddings from different sources, such as Twitter, Wikipedia or the common crawl. We utilized we utilize the Glove embedding trained by (Pennington et al. 2014) on …","url":["https://link.springer.com/article/10.1007/s12652-020-01807-4"]} {"year":"2020","title":"Revisiting Round-Trip Translation for Quality Estimation","authors":["J Moon, H Cho, EL Park - arXiv preprint arXiv:2004.13937, 2020"],"snippet":"Page 1. Revisiting Round-Trip Translation for Quality Estimation Jihyung Moon Naver Papago Hyunchang Cho Naver Papago {jihyung.moon, hyunchang.cho, lucypark}@navercorp.com Eunjeong L. Park Naver Papago Abstract …","url":["https://arxiv.org/pdf/2004.13937"]} {"year":"2020","title":"RobBERT: a Dutch RoBERTa-based Language Model","authors":["P Delobelle, T Winters, B Berendt - arXiv preprint arXiv:2001.06286, 2020"],"snippet":"… 3.1 Data We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus (Ortiz Suárez et al., 2019). 
This Dutch corpus …","url":["https://arxiv.org/pdf/2001.06286"]} {"year":"2020","title":"Robust Cross-lingual Embeddings from Parallel Sentences","authors":["A Sabet, P Gupta, JB Cordonnier, R West, M Jaggi - arXiv preprint arXiv:1912.12481, 2019"],"snippet":"… MUSE 0.38 0.30 0.74 0.64 RCSLS 0.38 0.30 0.74 0.64 FASTTEXTCommon Crawl 0.49 0.32 0.75 0.57 BIVEC 0.40 0.36 0.70 0.60 … We also include FASTTEXT monolingual vectors trained on CommonCrawl data (Grave et …","url":["https://arxiv.org/pdf/1912.12481"]} {"year":"2020","title":"Robust Prediction of Punctuation and Truecasing for Medical ASR","authors":["M Sunkara, S Ronanki, K Dixit, S Bodapati, K Kirchhoff - Proceedings of the First …, 2020","MSSRK Dixit, SBK Kirchhoff - ACL 2020, 2020"],"snippet":"… But just like any other model, these Language Models are biased by their training data. In particular, they are typically trained on data that is easily available in large quantities on the internet eg Wikipedia, CommonCrawl etc …","url":["https://www.aclweb.org/anthology/2020.nlpmc-1.8.pdf","https://www.aclweb.org/anthology/2020.nlpmc-1.pdf#page=65"]} {"year":"2020","title":"Robust Prediction of Punctuation and Truecasing for Medical ASR","authors":["M Sunkara, S Ronanki, K Dixit, S Bodapati, K Kirchhoff - arXiv preprint arXiv …, 2020"],"snippet":"… But just like any other model, these Language Models are biased by their training data. In particular, they are typically trained on data that is easily available in large quantities on the internet eg Wikipedia, CommonCrawl etc …","url":["https://arxiv.org/pdf/2007.02025"]} {"year":"2020","title":"Russian-English Bidirectional Machine Translation System","authors":["A Xv, W Chao - Ariel"],"snippet":"… For the monolingual data we use English and Russian Newscrawl as well as a filtered part of Commoncrawl in Russian … Russian is relatively smaller than that of English, we have to augment the Newscrawl data for Russian …","url":["http://statmt.org/wmt20/pdf/2020.wmt-1.35.pdf"]} {"year":"2020","title":"Samsung R&D Institute Poland submission to WMT20 News Translation Task","authors":["M Krubinski, M Chochowski, B Boczek, M Koszowski… - Proceedings of the Fifth …, 2020"],"snippet":"… Pretraining a complete encoder-decoder model allows for later direct fine-tuning on the translation ob- jective, with parallel corpora. In our experiment, we sampled 250M sentences from CommonCrawl for Czech, English and Polish (ie 750M in total) …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.16.pdf"]} {"year":"2020","title":"SandiDoc at CLEF 2020-Consumer Health Search: AdHoc IR Task","authors":["S Seneviratne, E Daskalaki, M Zakir - Conference and Labs of the Evaluation (CLEF) …, 2020"],"snippet":"… respectively. 2 Resources 2.1 Dataset The document collection used in the document retrieval task was acquired by the common crawl dump of 2018-19. This included web pages of the formats such as HTML, XHTML, XML. 
The …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_160.pdf"]} {"year":"2020","title":"SardiStance@ EVALITA2020: Overview of the Task on Stance Detection in Italian Tweets","authors":["AT Cignarella, M Lai, C Bosco, V Patti, P Rosso - … of the 7th Evaluation Campaign of …, 2020"],"snippet":"… In particular, they trained three classifiers based respectively on SENTIPOLC 2016 (Barbieri et al., 2016) for sentiment analysis classification, on HaSpeeDe 2018 (Bosco et al., 2018) 4https://huggingface.co/Musixmatch/ umberto-commoncrawl-cased-v1 …","url":["http://ceur-ws.org/Vol-2765/paper159.pdf"]} {"year":"2020","title":"SberQuAD–Russian Reading Comprehension Dataset: Description and Analysis","authors":["P Braslavski - … IR Meets Multilinguality, Multimodality, and Interaction …"],"snippet":"… We tokenized text using spaCy. 12 To initialize the embedding layer for BiDAF, DocQA, DrQA, and R-Net we use Russian casesensitive fastText embeddings trained on Common Crawl and Wikipedia. 13 This initialization is used for both questions and paragraphs …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA3&dq=commoncrawl&ots=BCbV87DfTS&sig=E3mWPymZbBAvDLn8azUpJimpaMs"]} {"year":"2020","title":"Scalable Cross Lingual Pivots to Model Pronoun Gender for Translation","authors":["K Webster, E Pitler - arXiv preprint arXiv:2006.08881, 2020"],"snippet":"… glish was most recently contested at WMT'132 (Bojaretal., 2013), which offered participants 14,980,513 sentence pairs from Eu- roparl3 (Habash et al., 2017), Common Crawl (Smith et al., 2013), the United Nations cor …","url":["https://arxiv.org/pdf/2006.08881"]} {"year":"2020","title":"Scalable, Multi-Constraint, Complex-Objective Graph Partitioning","authors":["GM Slota, C Root, K Devine, K Madduri… - IEEE Transactions on …, 2020"],"snippet":"Page 1. Scalable, Multi-Constraint, Complex-Objective Graph Partitioning GeorgeM.Slota ,CameronRoot,KarenDevine ,KameshMadduri , and Sivasankaran Rajamanickam Abstract—We introduce XTRAPULP, a distributed-memory …","url":["https://ieeexplore.ieee.org/abstract/document/9115834/"]} {"year":"2020","title":"Scaling Laws for Neural Language Models","authors":["J Kaplan, S McCandlish, T Henighan, TB Brown… - arXiv preprint arXiv …, 2020","OEIAY Need"],"snippet":"01/23/20 - We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with mode...","url":["https://arxiv.org/pdf/2001.08361","https://deepai.org/publication/scaling-laws-for-neural-language-models"]} {"year":"2020","title":"Scientific Question Answering with AAN","authors":["K Mueller"],"snippet":"… have easy to understand language as well as little background knowledge required. They then used web data provided by Common Crawl to find supporting web documents from which to draw information and construct …","url":["https://zoo.cs.yale.edu/classes/cs290/19-20a/mueller.keaton.kim6/Keaton_Mueller_CPSC290_Final_Report.pdf"]} {"year":"2020","title":"Scones: Towards Conversational Authoring of Sketches","authors":["F Huang, D Ha, E Schoop, J Canny"],"snippet":"… detail in Section 4.1. For each text token t, we use a 300-dimensional GLoVe vector trained on 42B tokens from the Common Crawl dataset [22] to semantically represent these words in the instructions. 
To train the Transformer …","url":["http://people.eecs.berkeley.edu/~eschoop/docs/scones.pdf"]} {"year":"2020","title":"Scoring Dimension-Level Job Performance From Narrative Comments: Validity and Generalizability When Using Natural Language Processing","authors":["AB Speer - Organizational Research Methods, 2020"],"snippet":"Performance appraisal narratives are qualitative descriptions of employee job performance. This data source has seen increased research attention due to the ability to efficiently derive insights u...","url":["https://journals.sagepub.com/doi/abs/10.1177/1094428120930815"]} {"year":"2020","title":"SDRS: A new lossless dimensionality reduction for text corpora","authors":["IV de Mendizabal, V Basto-Fernandes, E Ezpeleta… - Information Processing & …, 2020"],"snippet":"… Clueweb 12, Web (html) pages, English, unknown, 870M. Common Crawl Data, Web (html) pages, multilingual, 100% spam, 9 Billion in 2014 and increasing. YouTube Comments Dataset, Youtube comments, multilingual, 7% spam, 6M …","url":["https://www.sciencedirect.com/science/article/pii/S0306457319314694"]} {"year":"2020","title":"Searching the Web for Cross-lingual Parallel Data","authors":["A El-Kishky, P Koehn, H Schwenk - Proceedings of the 43rd International ACM SIGIR …, 2020"],"snippet":"… and Tokenization – CommonCrawl Preprocessing Code • Open-source Code for Generating Cross-lingual Datasets – Code for Generating High-quality Monolingual Data from CommonCrawl – Code for Generating …","url":["https://dl.acm.org/doi/abs/10.1145/3397271.3401417"]} {"year":"2020","title":"SEDAR: a Large Scale French-English Financial Domain Parallel Corpus","authors":["A Ghaddar, P Langlais - Proceedings of The 12th Language Resources and …, 2020"],"snippet":"… It contains 40.8M sentence pairs extracted from five datasets that cover various domains: EUROPARL V7 (Koehn, 2005), UNITED NATIONS CORPUS (Eisele and Chen, 2010), COMMON CRAWL CORPUS, NEWS COMMENTARY, and 109 FRENCH-ENGLISH corpus …","url":["https://www.aclweb.org/anthology/2020.lrec-1.442.pdf"]} {"year":"2020","title":"Seeking Meaning: Examining a Cross-situational Solution to Learn Action Verbs Using Human Simulation Paradigm","authors":["Y Zhang, A Amatuni, E Cain, C Yu"],"snippet":"… of words in a given corpus. We used the GloVe model pretrained on 840B tokens of Common Crawl text to create semantic distance measures (Pennington, Socher & Manning, 2014). We discovered that both semantic knowledge …","url":["https://cognitivesciencesociety.org/cogsci20/papers/0705/0705.pdf"]} {"year":"2020","title":"Self-training Improves Pre-training for Natural Language Understanding","authors":["J Du, E Grave, B Gunel, V Chaudhary, O Celebi, M Auli… - arXiv preprint arXiv …, 2020"],"snippet":"… As a large-scale external bank of unannotated sentences, we extract and filter text from CommonCrawl 1 (Wenzek et al., 2019) … CommonCrawl data contains a wide variety of domains and text styles which makes it a good general-purpose corpus …","url":["https://arxiv.org/pdf/2010.02194"]} {"year":"2020","title":"Semantic image retrieval","authors":["T Berg, PN Belhumeur - US Patent 10,769,502, 2020","T Berg, PN Belhumeur - US Patent App. 16/999,616, 2020"],"snippet":"… GloVe. Common Crawl is a public repository of web crawl data (eg, blogs, news, and comments) available on the internet in the commoncrawl.org domain, the entire contents of which is hereby incorporated by reference. 
GloVe …","url":["https://patents.google.com/patent/US20200380320A1/en","https://www.freepatentsonline.com/10769502.html"]} {"year":"2020","title":"Semantic Matching: Dynamic Composition of Matcher Ensembles for Ontology Alignment","authors":["A Vennesland - 2020"],"snippet":"Page 1. ISBN 978-82-326-4842-9 (printed ver.) ISBN 978-82-326-4843- 6 (electronic ver.) ISSN 1503-8181 Doctoral theses at NTNU, 2020:247 Audun Vennesland Semantic Matching Dynamic Composition of Matcher …","url":["https://ntnuopen.ntnu.no/ntnu-xmlui/bitstream/handle/11250/2674337/Audun%20Vennesland_PhD.pdf?sequence=1"]} {"year":"2020","title":"Semantic Networks for Engineering Design: A Survey","authors":["J Han, S Sarica, F Shi, J Luo - arXiv preprint arXiv:2012.07060, 2020"],"snippet":"… relations No Pre-trained word2vec (Mikolov et al., 2013) Unsupervised Google News Cosine similarity No Pre-trained GloVe (Pennington et al., 2014) Unsupervised Wikipedia, Gigaword, Common Crawl Cosine similarity No B-Link …","url":["https://arxiv.org/pdf/2012.07060"]} {"year":"2020","title":"Semantic Norm Extrapolation is a Missing Data Problem","authors":["B Snefjella, I Blank - 2020"],"snippet":"Page 1. DRAFT Running head: SEMANTIC NORM EXTRAPOLATION 1 Semantic Norm Extrapolation is a Missing Data Problem Bryor Snefjella & Idan Blank University of California, Los Angeles Department of Psychology Author Note …","url":["https://psyarxiv.com/y2gav/download?format=pdf"]} {"year":"2020","title":"Semantic Recommendations of Books Using Recurrent Neural Networks","authors":["M Nitu, S Ruseti, M Dascalu, S Tomescu - Ludic, Co-design and Tools Supporting Smart …"],"snippet":"… We conducted the experiments using pre-trained FastText embeddings for Romanian language. The embedding model consists of 2 million word vectors trained on Common Crawl and Wikipedia (approx. 600 billion tokens). Page 239. 240 M. Nitu et al. Fig …","url":["https://link.springer.com/content/pdf/10.1007/978-981-15-7383-5.pdf#page=234"]} {"year":"2020","title":"Semantic-Based Algorithm for Scoring Alternative Uses Tests (AUT)","authors":["C Stevenson"],"snippet":"… Page 6. Each response was mapped to a word vector in 300 dimensions using a fastText pretrained model for Dutch. This model was trained on Wikipedia and Common Crawl data, using CBOW model with character …","url":["http://modelingcreativity.org/blog/wp-content/uploads/2020/07/Tsai_Y_BDS_Thesis_report_11695986_PML.pdf"]} {"year":"2020","title":"Semantical Search Term Clustering for Performance Prediction","authors":["R Coenders"],"snippet":"Page 1. Eindhoven University of Technology MASTER Semantical search term clustering for performance prediction Coenders, R. 
Award date: 2019 Link to publication Disclaimer This document contains a student thesis (bachelor's …","url":["https://research.tue.nl/files/139495213/Thesis_RikCoenders_Aug2019.pdf"]} {"year":"2020","title":"Semi-autonomous methodology to validate and update customer needs database through text data analytics","authors":["AM Bigorra, O Isaksson, M Karlberg - International Journal of Information …, 2020"],"snippet":"… Two different pre-trained word vectors based on 1 and 2 million English words from Wikipedia and Common Crawl are considered and they are referred along the rest of the presented paper as emb1 and emb2, respectively 3 …","url":["https://www.sciencedirect.com/science/article/pii/S0268401219300817"]} {"year":"2020","title":"SemSeq: A Regime for Training Widely-Applicable Word-Sequence Encoders","authors":["H Tsuyuki, TY Ogawa, HTB Kobayashi - … : 16th International Conference of the Pacific …, 2020"]} {"year":"2020","title":"Sense Inventories for Arabic Texts","authors":["M Alian, A Awajan"],"snippet":"… E. Fasttext pre-trained embeddings Arabic Fasttext embeddings are provided by Grave et al. [16]. These embeddings are resulted from training on Wikipedia and Common Crawl corpus. They have used an extension of the Fasttext model with subword information …","url":["https://www.researchgate.net/profile/Marwah_Alian/publication/346785930_Sense_Inventories_for_Arabic_Texts/links/5fd0a6a745851568d14da099/Sense-Inventories-for-Arabic-Texts.pdf"]} {"year":"2020","title":"Sentence Matching with Deep Self-attention and Co-attention Features","authors":["Z Wang, D Yan - 2020","Z Wang, D Yan - … Conference on Knowledge Science, Engineering and …, 2021"],"snippet":"… 4.1 Implementation Details. In our experiments, word embedding vectors are initialized with 300d GloVe vectors pre-trained from the 840B Common Crawl corpus. Embeddings of out of the vocabulary of GloVe is initialized …","url":["https://link.springer.com/chapter/10.1007/978-3-030-82147-0_45","https://openreview.net/pdf?id=EEV7-ruXM5H"]} {"year":"2020","title":"Sentence-Embedding and Similarity via Hybrid Bidirectional-LSTM and CNN Utilizing Weighted-Pooling Attention","authors":["D HUANG, A AHMED, SY ARAFAT, KI RASHID… - IEICE TRANSACTIONS on …, 2020"],"snippet":"… bag-of-words architecture. • A GloVe is a 300-dimensional word embedding model learned on aggregated global word co-occurrence statistics from Common Crawl (840 billion to- kens) [32]. 4.2 Datasets The datasets are concisely …","url":["https://www.jstage.jst.go.jp/article/transinf/E103.D/10/E103.D_2018EDP7410/_pdf"]} {"year":"2020","title":"Sentiment Analysis Approach Based on Combination of Word Embedding Techniques","authors":["I Kaibi, H Satori - Embedded Systems and Artificial Intelligence, 2020"],"snippet":"… The fastText pre-trained word vectors is a high-quality word representation for 157 languages, two sources of data are used to train fastText pre-trained models: the free online encyclopedia Wikipedia and data from the common crawl project …","url":["https://link.springer.com/chapter/10.1007/978-981-15-0947-6_76"]} {"year":"2020","title":"Sentiment Analysis based Multi-person Multi-criteria Decision Making Methodology: Using Natural Language Processing and Deep Learning for Decision Aid","authors":["C Zuheros, E Martínez-Cámara, E Herrera-Viedma… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. 
arXiv:2008.00032v1 [cs.CL] 31 Jul 2020 Sentiment Analysis based Multi-person Multi-criteria Decision Making Methodology: Using Natural Language Processing and Deep Learning for Decision Aid Cristina Zuheros1 …","url":["https://arxiv.org/pdf/2008.00032"]} {"year":"2020","title":"Sentiment analysis for customer relationship management: an incremental learning approach","authors":["N Capuano, L Greco, P Ritrovato, M Vento - Applied Intelligence, 2020"],"snippet":"… text corpora. In particular, Universal Dependencies and WikiNER corpora [31] were used for Italian, while OntoNotes [32] and Common Crawl Footnote 3 corpora were used for English. The classification model. Once the WEs …","url":["https://link.springer.com/article/10.1007/s10489-020-01984-x"]} {"year":"2020","title":"Sentiment Analysis for Hinglish Code-mixed Tweets by means of Cross-lingual Word Embeddings","authors":["P Singh, E Lefever - LREC 2020–4th Workshop on Computational …, 2020"],"snippet":"… This can probably be attributed to the quality of the monolingual embeddings, since the English embeddings were trained on the vast Common Crawl data while the Code-Mixed embeddings were trained on a little more than 100,000 scraped tweets …","url":["https://biblio.ugent.be/publication/8662137/file/8662140"]} {"year":"2020","title":"Sentiment Aware Word Embeddings Using Refinement and Senti-Contextualized Learning Approach","authors":["B Naderalvojoud, EA Sezer - Neurocomputing, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0925231220304811"]} {"year":"2020","title":"Sentiment detection with FedMD: Federated Learning via Model Distillation","authors":["PTG Momcheva"],"snippet":"… value due to the nature of tweets - they are short, have little context and contain misspelled and shortened words, all of which stands in general contrast to the GloVe training data and logic, which was based on structured …","url":["http://ceur-ws.org/Vol-2656/paper24.pdf"]} {"year":"2020","title":"Seq2Seq Models for Recommending Short Text Conversations","authors":["J Torres, C Vaca, L Terán, CL Abad - Expert Systems with Applications, 2020"],"snippet":"… to a lower-dimensional representation ( w ∈ R d ). For the initialization of the word embeddings, we use the pre-trained vectors provided by Mikolov, Grave, Bojanowski, Puhrsch, and Joulin (2018), which consist of 2 …","url":["https://www.sciencedirect.com/science/article/pii/S0957417420300956"]} {"year":"2020","title":"Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue","authors":["B Kim, J Ahn, G Kim - arXiv preprint arXiv:2002.07510, 2020"],"snippet":"Page 1. Published as a conference paper at ICLR 2020 SEQUENTIAL LATENT KNOWLEDGE SELECTION FOR KNOWLEDGE-GROUNDED DIALOGUE Byeongchang Kim Jaewoo Ahn Gunhee Kim Department of Computer …","url":["https://arxiv.org/pdf/2002.07510"]} {"year":"2020","title":"Sequential Neural Networks for Noetic End-to-End Response Selection","authors":["Q Chen, W Wang - Computer Speech & Language, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. 
Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S088523082030005X"]} {"year":"2020","title":"Sequential Transfer Learning for Event Detection and Key Sentence Extraction","authors":["A Ollagnier, H Williams"],"snippet":"… Training is performed on the base version trained on cased text. XLNet [22] 12-layer, 768-hidden, 12-heads 110M parameters Pre-trained models are based on English texts from Wikipedia, BooksCorpus, Giga5, ClueWeb, and Common Crawl …","url":["https://www.researchgate.net/profile/Ollagnier_Anais/publication/344440133_Sequential_Transfer_Learning_for_Event_Detection_and_Key_Sentence_Extraction/links/5f75ad1a299bf1b53e0397ce/Sequential-Transfer-Learning-for-Event-Detection-and-Key-Sentence-Extraction.pdf"]} {"year":"2020","title":"Shopify in Germany: An analysis of a Canadian e-commerce platform's marketing strategy and activities in an international market.","authors":["K Howe-Patterson, I Schuiling"],"snippet":"Page 1. Available at: http://hdl.handle.net/2078.1/thesis:25661 [Downloaded 2020/09/25 at 09:14:22 ] \"Shopify in Germany: An analysis of a Canadian e-commerce platform's marketing strategy and activities in an international …","url":["https://dial.uclouvain.be/downloader/downloader.php?pid=thesis%3A25661&datastream=PDF_01&cover=cover-mem"]} {"year":"2020","title":"Sidecar: Augmenting Word Embedding Models with Expert Knowledge","authors":["M Lemay, D Shapiro, MK MacPherson, K Yee… - Future of Information and …, 2020"],"snippet":"… spaCy's en_core_web_lg GloVe model 8 , trained on Common Crawl 9. Facebook's crawl-300d-2M fastText model 10 , also trained on Common Crawl. All three models produce vectors of size 300. To generate vectors for the …","url":["https://link.springer.com/chapter/10.1007/978-3-030-39442-4_39"]} {"year":"2020","title":"Sights, titles and tags: mining a worldwide photo database for sightseeing","authors":["A Luberg, J Pindis, T Tammet"],"snippet":"… Explosion AI spaCy pretrained model [7]. We use a medium sized web code model. The model is based on Common Crawl and OntoNotes 5 [22] sources … Facebook fastText pretrained model [8]. The model is pretrained on Common Crawl and Wikipedia data …","url":["http://wims2020.sigappfr.org/wp-content/uploads/2020/06/WIMS'20/p149-Luberg.pdf"]} {"year":"2020","title":"SimAlign: High Quality Word Alignments without Parallel Training Data using Static and Contextualized Embeddings","authors":["MJ Sabet, P Dufter, H Schütze - arXiv preprint arXiv:2004.08728, 2020"],"snippet":"… In addition, we use XLM-RoBERTa base (Conneau et al., 2019), which is pretrained on 100 languages on CommonCrawl data. We denote alignments obtained using the embeddings from the i-th layer by XLM-R[i] …","url":["https://arxiv.org/pdf/2004.08728"]} {"year":"2020","title":"Similarity judgment within and across categories: A comprehensive model comparison","authors":["R Richie, S Bhatia - 2020"],"snippet":"… Google News 100B 300 None Magnitude Librarya fastText 600B Common Crawl FastText with Continuous Bag of Words (CBOW) … GloVe 840B Common Crawl GloVe Common Crawl 840B 300 None Magnitude Library Glove 840B Common Crawl, Paragram …","url":["https://psyarxiv.com/5pa9r/download"]} {"year":"2020","title":"Simulation Induces Durable, Extensive Changes to Self-knowledge","authors":["J Rubin-McGregor, Z Zhao, D Tamir - PsyArXiv. December, 2020"],"snippet":"Page 1. 
SIMULATION CHANGES SELF-KNOWLEDGE 1 1 2 3 4 5 6 Simulation Induces Durable, Extensive Changes to Self-Knowledge 7 Jordan Rubin-McGregora, Zidong Zhaoa, and Diana Tamira 8 aDepartment of …","url":["https://psyarxiv.com/m2wgk/download/?format=pdf"]} {"year":"2020","title":"SINAI at eHealth-KD Challenge 2020: Combining Word Embeddings for Named Entity Recognition in Spanish Medical Records","authors":["P López-Úbedaa, JM Perea-Ortegab…"],"snippet":"… we have used two specific pre-trained word embeddings: BETO [28], which follows a BERT model trained on a big Spanish corpus, and XLM-RoBERTa [29], which were generated by using a large multilingual language model …","url":["http://ceur-ws.org/Vol-2664/eHealth-KD_paper7.pdf"]} {"year":"2020","title":"Siva at WNUT-2020 Task 2: Fine-tuning Transformer Neural Networks for Identification of Informative Covid-19 Tweets","authors":["S Sai - Proceedings of the Sixth Workshop on Noisy User …, 2020"],"snippet":"… The base version of RoBERTa has 125M parameters, and the large version has 355M parameters. XLM-RoBERTa XLM-RoBERTa(Conneau et al., 2019) is a multilingual model trained on 2.5 TB data from CommonCrawl. This …","url":["https://www.aclweb.org/anthology/2020.wnut-1.45.pdf"]} {"year":"2020","title":"SJTU-NICT's Supervised and Unsupervised Neural Machine Translation Systems for the WMT20 News Translation Task","authors":["Z Li, H Zhao, R Wang, K Chen, M Utiyama, E Sumita - arXiv preprint arXiv …, 2020"],"snippet":"… In the supervised PL→EN translation direction, we based on the XLM framework to pre-train a Polish language model using common crawl and news crawl monolingual data, and proposed the XLM enhanced NMT model …","url":["https://arxiv.org/pdf/2010.05122"]} {"year":"2020","title":"SMAN: Stacked Multi-Modal Attention Network for Cross-Modal Image-Text Retrieval","authors":["BR Loss"],"snippet":"Page 1. warwick.ac.uk/lib-publications Manuscript version: Author's Accepted Manuscript The version presented in WRAP is the author's accepted manuscript and may differ from the published version or Version of Record. Persistent …","url":["https://pdfs.semanticscholar.org/7588/90bef9a1a85a25a1f6831a58f00a462476af.pdf"]} {"year":"2020","title":"SML: Semantic Meta-learning for Few-shot Semantic Segmentation","authors":["AK Pambala, T Dutta, S Biswas - arXiv preprint arXiv:2009.06680, 2020"],"snippet":"… Word2vec (Mikolov et al. 2013) is trained on Google News dataset (Wang, Ye, and Gupta 2018) which contains 3-million words; (2) FastText (Joulin et al. 2016) is trained on Common-Crawl dataset (Mikolov et al. 2018). We use these …","url":["https://arxiv.org/pdf/2009.06680"]} {"year":"2020","title":"SNK@ DANKMEMES: Leveraging Pretrained Embeddings for Multimodal Meme Detection","authors":["S Fiorucci - Proceedings of Seventh Evaluation Campaign of …, 2020"],"snippet":"… et al., 2018). Word embeddings are trained on Common Crawl and Wikipedia, using CBOW with positionweights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We calculated the …","url":["http://ceur-ws.org/Vol-2765/paper121.pdf"]} {"year":"2020","title":"Social biases in word embeddings and their relation to human cognition","authors":["A Caliskan, M Lewis - 2020"],"snippet":"… state-of-the-art word embeddings is the vast amount of training data available. GloVe is trained on 840 billion tokens and more than 2 million unique words of Common Crawl data which is a crawl of the entire world wide web. 
Similarly, Word2vec is trained on a …","url":["https://psyarxiv.com/d84kg/download?format=pdf"]} {"year":"2020","title":"Social Media Attributions in the Context of Water Crisis","authors":["R Sarkar, H Sarkar, S Mahinder, AR KhudaBukhsh - arXiv preprint arXiv:2001.01697, 2020"],"snippet":"… used embedding in this preprocessing step. We used the 300 dimensional GloVe model trained on 840 billion tokens of the CommonCrawl corpus, having a vocabulary size of 2.2 million. While calculating the embedding of …","url":["https://arxiv.org/pdf/2001.01697"]} {"year":"2020","title":"Sociolinguistic Properties of Word Embeddings","authors":["A Arseniev-Koehler, JG Foster - SocArXiv. August, 2020"],"snippet":"… These studies use large, commonly available pre-trained embeddings or their training corpora, such as Google News, web data (Common Crawl), and Google Books … They replicated results using a pretrained model on Common Crawl data …","url":["https://osf.io/b8kud/download"]} {"year":"2020","title":"Software for creating and analyzing semantic representations","authors":["FÅ Nielsen, LK Hansen - Statistical Semantics, 2020"],"snippet":"… This package provides models for the tagger, parser, named-entity recognizer and distributional semantic vectors trained on OntoNotes Release 5 and the Common Crawl dataset … 10 K–50 K. 300. 29 languages. GloVe. Common …","url":["https://link.springer.com/chapter/10.1007/978-3-030-37250-7_3"]} {"year":"2020","title":"Spoken words as biomarkers: using machine learning to gain insight into communication as a predictor of anxiety","authors":["G Demiris, KL Corey Magan, D Parker Oliver… - Journal of the American …, 2020"],"snippet":"… The validity of using cosine distance in an embedding space to measure text similarity depends largely on how well the embedding space represents the semantic concepts present in the text. In our case, the word embeddings …","url":["https://academic.oup.com/jamia/advance-article-abstract/doi/10.1093/jamia/ocaa049/5831105"]} {"year":"2020","title":"SPONTANEOUS STEREOTYPE CONTENT: MEASUREMENT AIMING TOWARD THEORETICAL INTEGRATION AND DISCOVERY","authors":["G Nicolas Ferreira - 2020","GN Ferreira - 2020"],"snippet":"Page 1. SPONTANEOUS STEREOTYPE CONTENT: MEASUREMENT AIMING TOWARD THEORETICAL INTEGRATION AND DISCOVERY GANDALF NICOLAS FERREIRA A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN …","url":["http://search.proquest.com/openview/41d33da8e87d459690442733f719668f/1?pq-origsite=gscholar&cbl=18750&diss=y","https://dataspace.princeton.edu/bitstream/88435/dsp01zp38wg55d/1/NicolasFerreira_princeton_0181D_13366.pdf"]} {"year":"2020","title":"Stanza: A Python Natural Language Processing Toolkit for Many Human Languages","authors":["P Qi, Y Zhang, Y Zhang, J Bolton, CD Manning - arXiv preprint arXiv:2003.07082, 2020"],"snippet":"… For the character-level language models in the NER component, we pretrained them on a mix of the Common Crawl and Wikipedia dumps, and the news corpora released by the WMT19 Shared Task (Barrault et al., 2019), with …","url":["https://arxiv.org/pdf/2003.07082"]} {"year":"2020","title":"STIL--Simultaneous Slot Filling, Translation, Intent Classification, and Language Identification: Initial Results using mBART on MultiATIS++","authors":["JGM FitzGerald - arXiv preprint arXiv:2010.00760, 2020"],"snippet":"… The mBART.cc25 model was trained on 25 languages for 500k steps using a 1.4 TB corpus of scraped website data taken from Common Crawl (Wenzek et al., 2019). 
The model was trained to reconstruct masked tokens and to rearrange scrambled sentences …","url":["https://arxiv.org/pdf/2010.00760"]} {"year":"2020","title":"STILTool: A Semantic Table Interpretation evaLuation Tool","authors":["E Jimenez-Ruiz, A Maurino - The Semantic Web: ESWC 2020 Satellite Events …","M Cremaschi, A Siano, R Avogadro, E Jimenez-Ruiz…"],"snippet":"… In order to size the spread of tabular data, 2.5 M tables have been identified within the Common Crawl repository1 [3]. The current snapshot of Wikipedia contains more than 3.23 M tables from more than 520k Wikipedia articles …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=C0UIEAAAQBAJ&oi=fnd&pg=PA61&dq=commoncrawl&ots=OcUKD8orbe&sig=5EUZjTQOLRGuwqaWXWRmrck1S50","https://preprints.2020.eswc-conferences.org/posters_demos/paper_293.pdf"]} {"year":"2020","title":"Structured deep neural network with low complexity","authors":["S Liao - 2020"],"snippet":"Page 1. STRUCTURED DEEP NEURAL NETWORK WITH LOW COMPLEXITY By SIYU LIAO A dissertation submitted to the School of Graduate Studies Rutgers, The State University of New Jersey in partial fulfillment of the …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/64996/PDF/1/"]} {"year":"2020","title":"Study and Creation of Datasets for Comparative Questions Classification","authors":["S Stahlhacke"],"snippet":"… The data used by the system is a preprocessed version of the Common Crawl Text Corpus8, which crawled from the world wide web … Which one is better suited for me, Xbox One or PS4? 8https://commoncrawl.org/ 4 Page 11. CHAPTER 1. INTRODUCTION …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2020-ma-stahlhacke.pdf"]} {"year":"2020","title":"Studying the Evolution of Greek Words via Word Embeddings","authors":["V Barzokas, E Papagiannopoulou, G Tsoumakas - 11th Hellenic Conference on …, 2020"],"snippet":"… Despite the limited size of the Greek corpus compared to Common Crawl and Wikipedia used for the pre-trained fastText embeddings, we didn't detect any notable difference in the quality of our models in comparison with the pre-trained one …","url":["https://dl.acm.org/doi/abs/10.1145/3411408.3411425"]} {"year":"2020","title":"Substance over Style: Document-Level Targeted Content Transfer","authors":["A Hegel, S Rao, A Celikyilmaz, B Dolan - arXiv preprint arXiv:2010.08618, 2020"],"snippet":"Page 1. Substance over Style: Document-Level Targeted Content Transfer Allison Hegel1∗ Sudha Rao2 Asli Celikyilmaz2 Bill Dolan2 1Lexion, Seattle, WA, USA 2Microsoft Research, Redmond, WA, USA allison@lexion.ai {sudhra,aslicel,billdol}@microsoft.com Abstract …","url":["https://arxiv.org/pdf/2010.08618"]} {"year":"2020","title":"Subword Segmentation and a Single Bridge Language Affect Zero-Shot Neural Machine Translation","authors":["A Rios, M Müller, R Sennrich - arXiv preprint arXiv:2011.01703, 2020","AR Gonzales, M Müller, R Sennrich - Proceedings of the Fifth Conference on …, 2020"],"snippet":"… Page 3. 
530 corpora training dev test Language Pairs with English: de↔en Commoncrawl, Europarl-v9, Wikititles-v1 5M 250 2000 cs↔en Europarl-v9, CzEng1.7 5M 250 2000 fr↔en Commoncrawl, Europarl-v7 …","url":["https://arxiv.org/pdf/2011.01703","https://www.aclweb.org/anthology/2020.wmt-1.64.pdf"]} {"year":"2020","title":"Suggesting Citations for Wikidata Claims based on Wikipedia's External References","authors":["P Curotto, A Hogan"],"snippet":"… Offline: Given that some Wikidata items do not have an associated Wikipedia article, that many Wikipedia articles have few references, etc., it would be interesting to develop a broader corpus with more documents from the Web, perhaps from the Common Crawl …","url":["http://aidanhogan.com/docs/wikidata-references.pdf"]} {"year":"2020","title":"Supervised Understanding of Word Embeddings","authors":["HZ Yerebakan, P Bhatia, Y Shinagawa"],"snippet":"… In our experiments, we have used scikit-learn linear logistic regression model with a positive class weight of 2 to enhance the effect of positive words. We have used top 250k words of Fasttext Common Crawl word …","url":["https://rcqa-ws.github.io/papers/paper8.pdf"]} {"year":"2020","title":"Surface pattern-enhanced relation extraction with global constraints","authors":["H Jiang, JT Liu, S Zhang, D Yang, Y Xiao, W Wang - Knowledge and Information …, 2020"],"snippet":"Relation extraction is one of the most important tasks in information extraction. The traditional works either use sentences or surface patterns (ie, the.","url":["https://link.springer.com/article/10.1007/s10115-020-01502-y"]} {"year":"2020","title":"Survey on RNN and CRF models for de-identification of medical free text","authors":["JL Leevy, TM Khoshgoftaar, F Villanustre - Journal of Big Data, 2020"],"snippet":"The increasing reliance on electronic health record (EHR) in areas such as medical research should be addressed by using ample safeguards for patient privacy. These records often tend to be big data, and given that a significant …","url":["https://journalofbigdata.springeropen.com/articles/10.1186/s40537-020-00351-4"]} {"year":"2020","title":"SYMPTOM EXTRACTION FROM ATRIAL FIBRILLATION PATIENT CLINICAL NOTES USING DEEP LEARNING","authors":["TET van Putten"],"snippet":"Page 1. Eindhoven University of Technology MASTER Symptom extraction from atrial fibrillation patient clinical notes using deep learning van Putten, TE Award date: 2020 Link to publication Disclaimer This document contains …","url":["https://pure.tue.nl/ws/portalfiles/portal/163432620/Master_Thesis_Tim_van_Putten.pdf"]} {"year":"2020","title":"Syntax Role for Neural Semantic Role Labeling","authors":["Z Li, H Zhao, S He, J Cai - arXiv preprint arXiv:2009.05737, 2020"],"snippet":"Page 1. Syntax Role for Neural Semantic Role Labeling Zuchao Li Shanghai Jiao Tong University Department of Computer Science and Engineering charlee@sjtu. edu.cn Hai Zhao∗ Shanghai Jiao Tong University Department …","url":["https://arxiv.org/pdf/2009.05737"]} {"year":"2020","title":"System and method for model derivation for entity prediction","authors":["FI Wyss, A Ganapathiraju, P Buduguppa - US Patent App. 16/677,989, 2020"],"snippet":"… The dense representation can be used to capture the contextual information. 
For an NER system, the information can be encoded in the form of “world knowledge' by using a corpus such as the Wikipedia corpus or Google's common crawl data …","url":["https://patents.google.com/patent/US20200151248A1/en"]} {"year":"2020","title":"Systematic Mapping on Embedded Semantic Markup Validated with Data Mining Techniques","authors":["R Navarrete, C Montenegro, L Recalde - … Conference on Applied Human Factors and …, 2020"],"snippet":"… Markup format: microdata, rdfa, jsonld. Approach of the Research: ads, commerce, commoncrawl, crawl, deploy, education, egovernment, entity, error, extract, extraction, fix, government, learning, lod, mistake, owl, pld, plds, rdf, video, wdc, webdatacommons …","url":["https://link.springer.com/chapter/10.1007/978-3-030-51328-3_53"]} {"year":"2020","title":"SYSTEMS AND METHODS FOR LEARNING USER REPRESENTATIONS FOR OPEN VOCABULARY DATA SETS","authors":["T Durand, G Mori - US Patent App. 16/826,215, 2020"],"snippet":"Systems and methods adapted for training a machine learning model to predict data labels are described. The approach includes receiving a first data set comprising first data objects and associated fi.","url":["https://www.freepatentsonline.com/y2020/0302340.html"]} {"year":"2020","title":"TabEAno: Table to Knowledge Graph Entity Annotation","authors":["P Nguyen, N Kertkeidkachorn, R Ichise, H Takeda - arXiv preprint arXiv:2010.01829, 2020"],"snippet":"… Note that, tables in this study refer to relational vertical tables. A 3 Open Data Vision: https://opendatabarometer.org 4 Common Crawl: http://commoncrawl.org/ arXiv:2010.01829v1 [cs.AI] 5 Oct 2020 Page 2. 2 Phuc Nguyen et al …","url":["https://arxiv.org/pdf/2010.01829"]} {"year":"2020","title":"TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data","authors":["P Yin, G Neubig, W Yih, S Riedel - arXiv preprint arXiv:2005.08314, 2020"],"snippet":"… Page 5. interesting avenue for future work. Specifically, we collect tables and their surrounding NL text from English Wikipedia and the WDC WebTable Corpus (Lehmberg et al., 2016), a large-scale table collection from CommonCrawl …","url":["https://arxiv.org/pdf/2005.08314"]} {"year":"2020","title":"Tagging Reading Comprehension Materials with Document Extraction Attention Networks","authors":["B Sun, Y Zhu, R Xiao, Y Xiao, YG Wei - IEEE Transactions on Learning …, 2020"],"snippet":"Page 1. 1939-1382 (c) 2020 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/9079601/"]} {"year":"2020","title":"Tagging with Weak Labels. Paper presented at AAAI Conference on","authors":["E Simpson, J Pfeiffer, I Gurevych"],"snippet":"… For FAMULUS, we use 300-dimensional German fastText embeddings (Grave et al. 2018), and for NER and PICO we use 300-dimensional English GloVe 3 embeddings trained on 840 billion tokens from Common Crawl. 
To …","url":["https://research-information.bris.ac.uk/files/225055068/AAAI_Low_Resource_Sequence_Tagging_with_Weak_Labels.pdf"]} {"year":"2020","title":"Tailored retrieval of health information from the web for facilitating communication and empowerment of elderly people","authors":["M Alfano, B Lenzitti, D Taibi, M Helfert - 2020"],"snippet":"… contain all Microformat, Microdata and RDFa (Resource Description Framework in Attributes) data extracted from the open repository of Web crawl data named Common Crawl (CC)9 … 8 http://webdatacommons.org/ 9 …","url":["http://doras.dcu.ie/24469/2/ICT4AWE_2020_40_CR.pdf"]} {"year":"2020","title":"Target driven visual navigation exploiting object relationships","authors":["Y Qiu, A Pal, HI Christensen - arXiv preprint arXiv:2003.06749, 2020"],"snippet":"… For the word embeddings, we used the 300-D GloVe vectors pretrained on 840 billion tokens of Common Crawl [49]. The A3C model is based on [50], and the model hyperparameters used were: learning rate …","url":["https://arxiv.org/pdf/2003.06749"]} {"year":"2020","title":"Targeted Poisoning Attacks on Black-Box Neural Machine Translation","authors":["C Xu, J Wang, Y Tang, F Guzman, BIP Rubinstein… - arXiv preprint arXiv …, 2020"],"snippet":"… 3We find that the crawling services commonly used for parallel data collection, eg, Common Crawl (commoncrawl.org), are also fetching news articles from self-publishing sources like blogs (eg, with a subdomain of blogspot.com) …","url":["https://arxiv.org/pdf/2011.00675"]} {"year":"2020","title":"Tell and guess: cooperative learning for natural image caption generation with hierarchical refined attention","authors":["W Zhang, S Tang, J Su, J Xiao, Y Zhuang - Multimedia Tools and Applications, 2020"],"snippet":"… Implementation details. We use Tensorflow to implement our model and its variants. Given a textual caption, we employ the word2vec model (ie, GloVe word embedding [22] ) which is pre-trained on the Common Crawl dataset [25] …","url":["https://link.springer.com/article/10.1007/s11042-020-08832-7"]} {"year":"2020","title":"Tell Me Why You Feel That Way: Processing Compositional Dependency for Tree-LSTM Aspect Sentiment Triplet Extraction (TASTE)","authors":["A Sutherland, S Bensch, T Hellström, S Magg… - International Conference on …, 2020"],"snippet":"… aligned}$$. (6). $$\\begin{aligned}&{h}_{j} = o_j \\odot tanh(c_j). \\end{aligned}$$. (7). Words in a sentence are represented as Word Embeddings from the pre-trained Common-Crawl 840 B data 2 before they are fed to the DTLSTM. To …","url":["https://link.springer.com/chapter/10.1007/978-3-030-61609-0_52"]} {"year":"2020","title":"Tencent AI Lab machine translation systems for the WMT20 chat translation task","authors":["L Wang, Z Tu, X Wang, L Ding, L Ding, S Shi - Proceedings of the Fifth Conference on …, 2020"],"snippet":"… Out-of-domain Parallel Data The participants are allowed to use all the training data in the News shared task.4 Thus, we combine six corpora including Euporal, ParaCrawl, CommonCrawl, TildeRapid, NewsCommentary and WikiMatrix …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.60.pdf"]} {"year":"2020","title":"Testing pre-trained Transformer models for Lithuanian news clustering","authors":["L Stankevičius, M Lukoševičius - arXiv preprint arXiv:2004.03461, 2020"],"snippet":"… Specifically, we will use well known baselines – multilingual BERT and recently published XLM-R, trained on more than two terabytes of filtered CommonCrawl data. 
We chose clustering task to also try to advance the field of data mining …","url":["https://arxiv.org/pdf/2004.03461"]} {"year":"2020","title":"Text as data: a machine learning-based approach to measuring uncertainty","authors":["R Nyman, P Ormerod - arXiv preprint arXiv:2006.06457, 2020"],"snippet":"… The authors assemble a very large corpus of words from various sources. We use the one described on the GloVe website as Common Crawl (glove.42B.300d.zip). A co-occurrence matrix is constructed, which describes …","url":["https://arxiv.org/pdf/2006.06457"]} {"year":"2020","title":"Text Classification: Exploiting the Social Network","authors":["SBM Alkhereyf"],"snippet":"Page 1. Text Classification: Exploiting the Social Network Sakhar Alkhereyf Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy under the Executive Committee of the Graduate School of …","url":["https://academiccommons.columbia.edu/doi/10.7916/d8-t2jv-sb09/download"]} {"year":"2020","title":"Text mining-based construction site accident classification using hybrid supervised machine learning","authors":["MY Cheng, D Kusoemo, RA Gosno - Automation in Construction, 2020"],"snippet":"… There are various pre-trained databases in the GloVe website that is open for public, such as Wikipedia database consists of 6 billion words and 100 dimension, Common Crawl database consists of 42 billion words and 300 …","url":["https://www.sciencedirect.com/science/article/pii/S092658051931341X"]} {"year":"2020","title":"Text-based classification of interviews for mental health--juxtaposing the state of the art","authors":["JV Wouts - arXiv preprint arXiv:2008.01543, 2020"],"snippet":"… Model name Pretrain corpus Tokenizer type Acc Sentiment analysis belabBERT Common Crawl Dutch (non-shuffled) BytePairEncoding 95.92∗ % RobBERT Common Crawl Dutch (shuffled) BytePairEncoding 94.42 …","url":["https://arxiv.org/pdf/2008.01543"]} {"year":"2020","title":"TextSETTR: Label-Free Text Style Extraction and Tunable Targeted Restyling","authors":["P Riley, N Constant, M Guo, G Kumar, D Uthus… - arXiv preprint arXiv …, 2020"],"snippet":"… Furthermore, we demonstrate that a single model trained on unlabeled Common Crawl data is capable of transferring along multiple dimensions including dialect, emotiveness, formality, politeness, and sentiment. 1 INTRODUCTION …","url":["https://arxiv.org/pdf/2010.03802"]} {"year":"2020","title":"TF-CR: Weighting Embeddings for Text Classification","authors":["A Zubiaga - arXiv preprint arXiv:2012.06606, 2020"],"snippet":"… Page 6. • cglove: GloVe embeddings trained from Common Crawl. • wglove: GloVe embeddings trained from Wikipedia.6 We use two different classifiers for these experiments, SVM and Logistic Regression, which are known …","url":["https://arxiv.org/pdf/2012.06606"]} {"year":"2020","title":"The 2019 BBN Cross-lingual Information Retrieval System","authors":["DK Le Zhang, W Hartmann, M Srivastava, L Tarlin… - LREC 2020 Language Resources …","L Zhang, D Karakos, W Hartmann, M Srivastava… - Proceedings of the …, 2020"],"snippet":"… 4.1. 
Training Data The primary data source for constructing MT models is parallel data from the build language pack, augmented with a variety of web data, such as CommonCrawl2 and open parallel corpus (Tiedemann …","url":["http://www.lrec-conf.org/proceedings/lrec2020/workshops/CLSSTS2020/CLSSTS-2020.pdf#page=49","https://www.aclweb.org/anthology/2020.clssts-1.8.pdf"]} {"year":"2020","title":"The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020)","authors":["TC Ferreira, C Gardent, C van der Lee, N Ilinykh… - Proceedings of the 3rd …, 2020"],"snippet":"… 3.3 Mono-task, Bilingual Approaches cuni-ufal. The mBART model (Liu et al., 2020) is pre-trained for multilingual denoising on the large-scale multilingual CC25 corpus extracted from Common Crawl, which contains …","url":["https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.7.pdf"]} {"year":"2020","title":"THE ABILITY OF WORD EMBEDDINGS TO CAPTURE WORD SIMILARITIES","authors":["M Toshevska, F Stojanovska, J Kalajdjieski"],"snippet":"… architectures [25]. In our experiments, we have used pre-trained models both trained with subword information on Wikipedia 2017 (16B tokens) and trained with subword information on Common Crawl (600B tokens)4. 2https …","url":["http://www.academia.edu/download/63915170/120200714-10552-nn915u.pdf"]} {"year":"2020","title":"The ADAPT Centre's neural MT systems for the WAT 2020 document-level translation task","authors":["W Jooste, R Haque, A Way - 2020"],"snippet":"… Finally, source-language monolingual data with n-grams similar to that of the documents in the test set was mined from the Common Crawl Corpus6 to be used as a source-side original synthetic corpus (SOSC) for fine-tuning the NMT model parameters …","url":["http://doras.dcu.ie/25205/1/WAT_2020.pdf"]} {"year":"2020","title":"The afrl wmt20 news-translation systems","authors":["J Gwinnup, T Anderson - Proceedings of the Fifth Conference on Machine …, 2020"],"snippet":"… Page 2. 207 corpus unfiltered lines filtered lines percent remain commoncrawl 723,256 655,069 90.57% newscommentaryv15 319,242 286,947 89.88% yandex 1,000,000 901,318 90.13 … 2013. Dirt cheap webscale parallel text from the common crawl …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.20.pdf"]} {"year":"2020","title":"The Art of Reproducible Machine Learning","authors":["V Novotný - RASLAN 2020 Recent Advances in Slavonic Natural …, 2020"],"snippet":"… We then use the initializations to reproduce 14 the results of Mikolov et al.(2018)[19, Table 2] us- ing the subword cbow model of Bojanowski et al.(2017)[2] and the 2017 English Wikipedia 15 training corpus (4% of the …","url":["https://nlp.fi.muni.cz/raslan/raslan20.pdf#page=63"]} {"year":"2020","title":"The birth of Romanian BERT","authors":["SD Dumitrescu, AM Avram, S Pyysalo - arXiv preprint arXiv:2009.08712, 2020"],"snippet":"… In total, the OPUS corpus contains around 4GB of Romanian text. OSCAR OSCAR (Ortiz Suárez et al., 2019), or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language …","url":["https://arxiv.org/pdf/2009.08712"]} {"year":"2020","title":"The Case For Alternative Web Archival Formats To Expedite The Data-To-Insight Cycle","authors":["X Wang, Z Xie - arXiv preprint arXiv:2003.14046, 2020"],"snippet":"… Large-scale, comprehensive web archiving initiatives include the Internet Archive [8], the Common Crawl [16], and many programs at national libraries and archives. 
These … 5.2 Data We chose to use Common Crawl's web …","url":["https://arxiv.org/pdf/2003.14046"]} {"year":"2020","title":"The Challenge of Diacritics in Yoruba Embeddings","authors":["TP Adewumi, F Liwicki, M Liwicki - arXiv preprint arXiv:2011.07605, 2020"],"snippet":"… (2018) are tabulated: Wiki, U_Wiki, C3 & CC, representing embeddings from the cleaned Wikipedia dump, its undiacritized (normalized) version, the diacritized data from Alabi et al. (2020) and the Common Crawl embedding by Grave et al. (2018), respectively …","url":["https://arxiv.org/pdf/2011.07605"]} {"year":"2020","title":"The Danish Gigaword Project","authors":["L Strømberg-Derczynski, R Baglini, MH Christiansen… - arXiv preprint arXiv …, 2020"],"snippet":"… Similarly, other huge monolithic datasets such as the Common Crawl Danish data suffer from large amounts of non-Danish content, possibly due to the pervasive confusion between Danish and Norwegian Bokmål … Common …","url":["https://arxiv.org/pdf/2005.03521"]} {"year":"2020","title":"The ELTE. DH Pilot Corpus–Creating a Handcrafted Gigaword Web Corpus with Metadata","authors":["B Indig, Á Knap, Z Sárközi-Lindner, M Timári, G Palkó - … of the 12th Web as Corpus …, 2020"],"snippet":"… Nowadays, large corpora are utilising the Common Crawl archive like the OSCAR corpus (Ortiz Suárez et al., 2019) with 5.16 billion (2.33 … All of these corpora – except the ones based on Common Crawl – have the same …","url":["https://www.aclweb.org/anthology/2020.wac-1.5.pdf"]} {"year":"2020","title":"The Emergence, Advancement and Future of Textual Answer Triggering","authors":["KN Acheampong, W Tian, EB Sifah… - Science and Information …, 2020"],"snippet":"… A similar observation is realized (\\(A_1 \\approx 0.8986\\); \\(A_2 \\approx 0.7352\\); difference in margin \\(\\approx 0.1634 \\)) when the model the 300-dimensional word vectors trained on Common Crawl with GloVe from spaCy is used …","url":["https://link.springer.com/chapter/10.1007/978-3-030-52246-9_50"]} {"year":"2020","title":"The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability","authors":["S Dev - arXiv preprint arXiv:2011.12465, 2020"],"snippet":"… 39 5 RMSE variation with word frequency in (a) GloVe Wiki to GloVe Common Crawl and (b) word2vec to GloVe evaluated for Wiki dataset. All words were used for tests in lower case as listed in the table. . . . . 40 …","url":["https://arxiv.org/pdf/2011.12465"]} {"year":"2020","title":"The NiuTrans Machine Translation Systems for WMT20","authors":["Y Zhang, Z Wang, R Cao, B Wei, W Shan, S Zhou… - Proceedings of the Fifth …, 2020"],"snippet":"… mentary, Common Crawl , TED Talks 4 Japanese monolingual data corpus about 1.7 billion. After the data filter, 12 million parallel data was left and 11 million selected by the neural language model was used as training data …","url":["https://www.aclweb.org/anthology/2020.wmt-1.37.pdf"]} {"year":"2020","title":"The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings","authors":["B Mathew, S Sikdar, F Lemmerich, M Strohmaier - arXiv preprint arXiv:2001.09876, 2020"],"snippet":"… (2) GloVe embeddings [27]3 trained on Web data from Common Crawl … 3We used the Common Crawl embeddings with 42B tokens: https://nlp. 
stanford.edu/ projects/glove/ 4The datasets are available here: https://github …","url":["https://arxiv.org/pdf/2001.09876"]} {"year":"2020","title":"The POLUSA Dataset: 0.9 M Political News Articles Balanced by Time and Outlet Popularity","authors":["L Gebhard, F Hamborg - arXiv preprint arXiv:2005.14024, 2020"],"snippet":"… RoBERTa: A Robustly Optimized BERT Pretraining Ap- proach. arXiv: 1907.11692 [cs] [6] Sebastian Nagel. 2016. Common Crawl – News Dataset Available. Retrieved May 8, 2020 from https://commoncrawl.org/2016/10 …","url":["https://arxiv.org/pdf/2005.14024"]} {"year":"2020","title":"The presence of occupational structure in online texts based on word embedding NLP models","authors":["Z Kmetty, J Koltai, T Rudas - arXiv preprint arXiv:2005.08612, 2020"],"snippet":"… We used three pre-trained vector spaces in the analysis. The first vector model we used was trained on the English language texts of the Common Crawl (CC) corpus1, a huge web archive … 2016) 1 http://commoncrawl.org 2 …","url":["https://arxiv.org/pdf/2005.08612"]} {"year":"2020","title":"The role of affective meaning, semantic associates, and orthographic neighbours in modulating the N400 in single words","authors":["F Blomberg, M Roll, J Frid, M Lindgren, M Horne - The Mental Lexicon, 2020"],"snippet":"… of fastText compared to other popular implementations (such as Word2Vec and Glove) is that it already has a model for Swedish trained on millions of words taken from the Swedish version of the free online encyclopedia …","url":["https://www.jbe-platform.com/content/journals/10.1075/ml.19021.blo"]} {"year":"2020","title":"The Two-Pass Softmax Algorithm","authors":["M Dukhan, A Ablavatski - arXiv preprint arXiv:2001.04438, 2020"],"snippet":"Page 1. The Two-Pass Softmax Algorithm Marat Dukhan ∗1,2 and Artsiom Ablavatski1 1Google Research 2Georgia Institute of Technology Abstract The softmax (also called softargmax) function is widely used in machine learning …","url":["https://arxiv.org/pdf/2001.04438"]} {"year":"2020","title":"The University of Edinburgh's English-Tamil and English-Inuktitut Submissions to the WMT20 News Translation Task","authors":["R Bawden, A Birch, R Dobreva, A Oncevay… - 5th Conference on Machine …, 2020"],"snippet":"… The only additional monolingual Inuktitut data was 163k sentences of common-crawl data, which we backtranslated for the English→Inuktitut system … Synthetic (from en Europarl) en-iu 650k Synthetic (from en News …","url":["https://hal.archives-ouvertes.fr/hal-02981153/document"]} {"year":"2020","title":"The University of Edinburgh's submission to the German-to-English and English-to-German Tracks in the WMT 2020 News Translation and Zero-shot Translation …","authors":["U Germann - 2020"],"snippet":"… High-quality parallel data Europarl ca. 1.79 M Rapid ca. 1.45 M News Commentary ca. 0.35 M Crawled parallel data ParaCrawl 5.1 ca. 34.37 M CommonCrawl ca. 2.40 M WikiMatrix ca. 6.22 M WikiTitles ca. 1.38 M Monolingual crawled news data German ca …","url":["http://statmt.org/wmt20/pdf/2020.wmt-1.18.pdf"]} {"year":"2020","title":"The University of Helsinki and Aalto University submissions to the WMT 2020 news and lowresource translation tasks","authors":["Y Scherrer, SA Grönroos, S Virpioja - the Fifth Conference on Machine Translation, 2020"],"snippet":"… NewsDiscuss 2019 2 000 000 1 000 000 CommonCrawl 80 244 80 244 … In terms of monolingual Inuktitut data, besides the unaligned NH data, the organizers only provided a CommonCrawl dump. 
This corpus was again backtranslated to English and filtered …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.134.pdf"]} {"year":"2020","title":"The University of Helsinki submission to the IWSLT2020 Offline Speech Translation Task","authors":["R Vázquez, M Aulamo, U Sulubacak, J Tiedemann - The 17th International …, 2020"],"snippet":"… filter out noisy translations. OpenSubtitles2018, which consists of subtitle translations, and corpora gathered by crawling the internet, Common Crawl and ParaCrawl, are especially likely to contain noisy data. For filtering the …","url":["https://tuhat.helsinki.fi/ws/files/137272405/uni_helsinki_submission_iwslt_2020.pdf"]} {"year":"2020","title":"The Unreasonable Effectiveness of Machine Learning in Moldavian versus Romanian Dialect Identification","authors":["M Găman, RT Ionescu - arXiv preprint arXiv:2007.15700, 2020"],"snippet":"… We note that these representations are learned from Romanian corpora, such as the corpus for contemporary Romanian language (CoRoLa) (Mititelu, Tufis, and Irimia 2018; Pais and Tufis 2018), Common Crawl (CC) and Wikipedia (Grave et al …","url":["https://arxiv.org/pdf/2007.15700"]} {"year":"2020","title":"The Voice and Speech Processing within Language Technology Applications: Perspective of the Russian Data Protection Law","authors":["I Ilin"],"snippet":"… of collecting, systematizing and annotating language data are various language datasets such as Open Subtitles43, the Common Crawl da- taset44, the … 43 Available at: https://www.opensubtitles.org/ru (accessed: 18.05.2020) …","url":["https://www.researchgate.net/profile/Ilya_Ilin/publication/345237982_The_Voice_and_Speech_Processing_within_Language_Technology_Application_Perspective_of_the_Russian_Data_Protection_Law/links/5fa11750458515b7cfb5ce68/The-Voice-and-Speech-Processing-within-Language-Technology-Application-Perspective-of-the-Russian-Data-Protection-Law.pdf"]} {"year":"2020","title":"The Volctrans Machine Translation System for WMT20","authors":["L Wu, X Pan, Z Lin, Y Zhu, M Wang, L Li - arXiv preprint arXiv:2010.14806, 2020"],"snippet":"… We use all parallel data available: Eu- roparl v10, ParaCrawl v5.1, Common Crawl corpus, News Commentary v15, Wiki Titles v2, Tilde Rapid corpus and WikiMatrix corpus … Each part contains 10M common crawl sentences and 3M Newscrawl sentences …","url":["https://arxiv.org/pdf/2010.14806"]} {"year":"2020","title":"TheNorth@ HaSpeeDe 2: BERT-based Language Model Fine-tuning for Italian Hate Speech Detection","authors":["E Lavergne, R Saini, G Kovács, K Murphy"],"snippet":"… ERT. AlBERTo was pretrained on TWITA, that is a collection of Italian tweets (Polignano et al., 2019b). UmBERTo was pretrained on Commoncrawl ITA exploiting OSCAR Italian large corpus (Parisi et al., 2020). Finally, PoliBERT …","url":["http://ceur-ws.org/Vol-2765/paper135.pdf"]} {"year":"2020","title":"This is a post-peer-review, pre-copyedit version of an article in press in Motivation and Emotion. The final authenticated version is available online at: http://dx. doi. org …","authors":["JS Pang, H Ring"],"snippet":"… datasets. Based on these experiments we decided to use Facebook's FastText subword embeddings of 300 dimensions trained on Common Crawl (600 billion tokens).5 This is the set of pre-trained vectors that we …","url":["http://www.academia.edu/download/63571037/Pang_Ring-2020-Automating_implicit_motive_coding-ME_AAM.pdf"]} {"year":"2020","title":"This is a post-peer-review, pre-copyedit version of an article in press in Motivation and Emotion. 
The final authenticated version will be available online at: http://dx. doi …","authors":["JS Pang, H Ring"],"snippet":"… datasets. Based on these experiments we decided to use Facebook's FastText subword embeddings of 300 dimensions trained on Common Crawl (600 billion tokens).5 This is the set of pre-trained vectors that we …","url":["https://osf.io/b7d96/download"]} {"year":"2020","title":"Tight Integrated End-to-End Training for Cascaded Speech Translation","authors":["P Bahar, T Bieschke, R Schlüter, H Ney - arXiv preprint arXiv:2011.12167, 2020"],"snippet":"… For MT training on En→De, we utilize the parallel data allowed for the IWSLT 2020. After filtering the noisy corpora, namely ParaCrawl, CommonCrawl, Rapid and OpenSubtitles2018, we end up with almost 27M bilingual text sentences …","url":["https://arxiv.org/pdf/2011.12167"]} {"year":"2020","title":"Tilde at WMT 2020: News Task Systems","authors":["R Krišlauks, M Pinnis - arXiv preprint arXiv:2010.15423, 2020"],"snippet":"… translation. In order to make use of the Polish CommonCrawl corpus, we scored sentences using the in-domain language models and selected top-scoring sentences as additional monolingual data for back-translation. Many …","url":["https://arxiv.org/pdf/2010.15423"]} {"year":"2020","title":"Tired of Topic Models? Clusters of Pretrained Word Embeddings Make for Fast and Good Topics too!","authors":["S Sia, A Dalmia, SJ Mielke - arXiv preprint arXiv:2004.14914, 2020"],"snippet":"… 0.177 FastText 2B (Wikipedia) -0.561 -0.657 -0.419 0.225 0.142 0.196 -0.382 -0.187 0.212 0.235 0.240 0.253 Glove 840B (Common Crawl) -0.436 -0.111 -0.299 0.182 0.213 0.155 -0.043 0.179 0.233 0.219 0.237 0.240 BERT …","url":["https://arxiv.org/pdf/2004.14914"]} {"year":"2020","title":"To BERT or Not to BERT: Comparing Task-specific and Task-agnostic Semi-Supervised Approaches for Sequence Tagging","authors":["K Bhattacharjee, M Ballesteros, R Anubhai, S Muresan… - arXiv preprint arXiv …, 2020","KBMBR Anubhai, S Muresan, JMFLY Al, OA AI"],"snippet":"… Cloze (Baevski et al., 2019) and BERT-MRC+DSC (Li et al., 2019) are SOTA baselines for CONLL-2003 and CONLL-2012, respectively, for this task. Baevski et al. (2019) also use subsampled Common Crawl and News Crawl …","url":["https://arxiv.org/pdf/2010.14042","https://assets.amazon.science/79/37/7a3f91804693baaaadc5062a9821/to-bert-or-not-to-bert-comparing-task-specific-and-task-agnostic-semi-supervised-approaches-for-sequence-tagging.pdf"]} {"year":"2020","title":"Tohoku-AIP-NTT at WMT 2020 News Translation Task","authors":["S Kiyono, T Ito, R Konno, M Morishita, J Suzuki - … of the Fifth Conference on Machine …, 2020"],"snippet":"… 2.2 Monolingual Corpus The origins of the monolingual corpus in our system are the Europarl, NewsCommentary, and en- tire NewsCrawl (2008-2019) corpora for English and German, and the Europarl …","url":["http://www.statmt.org/wmt20/pdf/2020.wmt-1.12.pdf"]} {"year":"2020","title":"Topics in Sequence-to-Sequence Learning for Natural Language Processing","authors":["R Aharoni"],"snippet":"Page 1. Topics in Sequence-to-Sequence Learning for Natural Language Processing Roee Aharoni Ph.D. Thesis Submitted to the Senate of Bar-Ilan University Ramat Gan, Israel May 2020 Page 2. This work was …","url":["http://www.roeeaharoni.com/Phd_Thesis.pdf"]} {"year":"2020","title":"Touché: First Shared Task on Argument Retrieval","authors":["A Bondarenko, M Hagen, M Potthast, H Wachsmuth…"],"snippet":"… Systems. pp. 44–52 (2012) 2. 
Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl. In: Proceedings of the 40th European Conference on IR Research (ECIR). pp …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2020-bondarenkoetal-ecir-touche.pdf"]} {"year":"2020","title":"Toward building recommender systems for the circular economy: Exploring the perils of the European Waste Catalogue","authors":["G van Capelleveen, C Amrit, H Zijm, DM Yazan, A Abdi - Journal of Environmental …"],"snippet":"","url":["https://www.sciencedirect.com/science/article/pii/S0301479720313554"]} {"year":"2020","title":"Towards Context-Aware Opinion Summarization for Monitoring Social Impact of News","authors":["A Ramón-Hernández, A Simón-Cuevas, MMG Lorenzo… - Information, 2020"],"snippet":"… Specifically, those vectors are generated by using the word2vec pre-trained model included in the es_core_news_md model of the spaCy library, which includes 300-dimensional vectors trained using FastText CBOW on Wikipedia …","url":["https://www.mdpi.com/2078-2489/11/11/535/pdf"]} {"year":"2020","title":"Towards countering hate speech against journalists on social media","authors":["P Charitidis, S Doropoulos, S Vologiannidis… - Online Social Networks and …, 2020"],"snippet":"","url":["https://www.sciencedirect.com/science/article/pii/S2468696420300124"]} {"year":"2020","title":"Towards Effective Utilization of Pretrained Language Models—Knowledge Distillation from BERT","authors":["L Liu - 2020"],"snippet":"… For non-contextual embeddings, there are multiple pre-trained word vectors, such as word2vec trained on Google News, GloVe [60] trained on Wikipedia/Gigaword/Common Crawl, and fastText[67] trained on Wikipedia/Common Crawl …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/16225/Liu_Linqing.pdf?sequence=3"]} {"year":"2020","title":"Towards Efficient and Reproducible Natural Language Processing","authors":["J Dodge - 2020"],"snippet":"… multiple epochs is standard). For example, the July 2019 Common Crawl contains 242 TB of uncompressed data,8 so even storing the data is expensive … 7https://opensource.google.com/projects/open-images-dataset …","url":["https://www.lti.cs.cmu.edu/sites/default/files/dodge%2C%20jesse%20-%20May%202020.pdf"]} {"year":"2020","title":"Towards Generalized Neural Semantic Parsing","authors":["P Yin - 2020"],"snippet":"Page 1. April 27, 2020 DRAFT Thesis Proposal Towards Generalized Neural Semantic Parsing Pengcheng Yin April 27, 2020 Language Technologies Institute School of Computer Science Carnegie Mellon University Pittsburgh, PA 15123 Thesis Committee …","url":["http://pcyin.me/thesis_proposal.pdf"]} {"year":"2020","title":"Towards IP-based Geolocation via Fine-grained and Stable Webcam Landmarks","authors":["Z Wang, Q Li, J Song, H Wang, L Sun - Proceedings of The Web Conference 2020, 2020"],"snippet":"Page 1.
Towards IP-based Geolocation via Fine-grained and Stable Webcam Landmarks Zhihao Wang Institute of Information Engineering Chinese Academy of Sciences School of Cyber Security, University of Chinese Academy …","url":["https://dl.acm.org/doi/pdf/10.1145/3366423.3380216"]} {"year":"2020","title":"Towards Orthographic and Grammatical Clinical Text Correction: a First Approach","authors":["S Lima López - 2020"],"snippet":"… Their application to GEC is based on the idea that correct sequences are bound to have a higher probability score than incorrect ones. They are very dependent on the data that is used to build them, and so large corpora …","url":["https://addi.ehu.es/bitstream/handle/10810/48624/MAL-Salvador_Lima.pdf?sequence=1"]} {"year":"2020","title":"Towards Useful Word Embeddings","authors":["V Novotný, M Štefánik, D Lupták, P Sojka"],"snippet":"… The size of our dataset is only 4% of the Common Crawl dataset used by Mikolov et al … We will also train our word vector models using larger corpora such as Common Crawl to enable meaningful comparison to sota results. Acknowledgments …","url":["https://www.fi.muni.cz/usr/sojka/papers/raslan-2020-novotny-stefanik-luptak-sojka.pdf"]} {"year":"2020","title":"Towards Visual Dialog for Radiology","authors":["O Kovaleva, C Shivade, S Kashyap, K Kanjaria, J Wu… - Proceedings of the 19th …, 2020"],"snippet":"… the models, (b) domain-independent GloVe Common Crawl embeddings (Pennington et al., 2014), and (c) domain-specific fastText embeddings trained by (Romanov and Shivade, 2018). The latter are initialized with GloVe …","url":["https://www.aclweb.org/anthology/2020.bionlp-1.6.pdf"]} {"year":"2020","title":"Traceability Support for Multi-Lingual Software Projects","authors":["Y Liu, J Lin, J Cleland-Huang - arXiv preprint arXiv:2006.16940, 2020"],"snippet":"Page 1. Traceability Support for Multi-Lingual Software Projects Yalin Liu, Jinfeng Lin, Jane Cleland-Huang University of Notre Dame Notre Dame, IN yliu26@nd.edu, jlin6@nd.edu,JaneHuang@nd.edu ABSTRACT Software …","url":["https://arxiv.org/pdf/2006.16940"]} {"year":"2020","title":"Tracing the emergence of gendered language in childhood","authors":["B Prystawski, E Grant, A Nematzadeh, SWS Lee…"],"snippet":"… We used three commonly-used sets of pre-trained word em- beddings: the word2vec embeddings trained on the Google News corpus (Mikolov et al., 2013a), the GloVe embeddings trained on the Common Crawl corpus, and …","url":["https://cognitivesciencesociety.org/cogsci20/papers/0190/0190.pdf"]} {"year":"2020","title":"Train Hard, Finetune Easy: Multilingual Denoising for RDF-to-Text Generation","authors":["Z Kasner, O Dušek - Proceedings of the 3rd WebNLG Workshop on Natural …, 2020"],"snippet":"… al., 2019). Adopting BART's objective and architecture, mBART (Liu et al., 2020) is pre-trained on the large-scale CC25 corpus extracted from Common Crawl, which contains data in 25 languages (Wenzek et al., 2020). The …","url":["https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.20.pdf"]} {"year":"2020","title":"Transformer based Deep Intelligent Contextual Embedding for Twitter sentiment analysis","authors":["U Naseem, I Razzak, K Musial, M Imran - Future Generation Computer Systems, 2020"],"snippet":"","url":["https://www.sciencedirect.com/science/article/pii/S0167739X2030306X"]}
{"year":"2020","title":"Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals","authors":["M Popel, M Tomkova, J Tomek, Ł Kaiser, J Uszkoreit… - Nature Communications, 2020"],"snippet":"The quality of human translation was long thought to be unattainable for computer translation systems. In this study, we present a deep-learning system, CUBBITT, which challenges this view. In a context-aware blind evaluation …","url":["https://www.nature.com/articles/s41467-020-18073-9"]} {"year":"2020","title":"Translation Artifacts in Cross-lingual Transfer Learning","authors":["M Artetxe, G Labaka, E Agirre - arXiv preprint arXiv:2004.04721, 2020"],"snippet":"… We first collect the premises from a filtered version of CommonCrawl (Buck et al., 2014), taking a subset of 5 websites that represent a diverse set of genres: a newspaper, an economy forum, a celebrity magazine, a literature blog, and a consumer magazine …","url":["https://arxiv.org/pdf/2004.04721"]} {"year":"2020","title":"Translation System and Method","authors":["N Bertoldi, D Caroselli, MA Farajian, M Federico… - US Patent App. 16/118,273, 2020"],"snippet":"US20200073947A1 - Translation System and Method - Google Patents. Translation System and Method. Download PDF Info. Publication number US20200073947A1. US20200073947A1 US16/118,273 US201816118273A US2020073947A1 …","url":["https://patents.google.com/patent/US20200073947A1/en"]} {"year":"2020","title":"TransQuest: Translation Quality Estimation with Cross-lingual Transformers","authors":["T Ranasinghe, C Orasan, R Mitkov - arXiv preprint arXiv:2011.01536, 2020"],"snippet":"… to acquire. Instead, XLM-R trains RoBERTa(Liu et al., 2019) on a huge, multilingual dataset at an enormous scale: unlabelled text in 104 languages is extracted from CommonCrawl datasets, totalling 2.5TB of text. It is trained …","url":["https://arxiv.org/pdf/2011.01536"]} {"year":"2020","title":"Triclustering in Big Data Setting","authors":["D Egurnov, DI Ignatov, D Tochilkin - arXiv preprint arXiv:2010.12933, 2020"],"snippet":"Page 1. Triclustering in Big Data Setting Dmitry Egurnov, Dmitry I. Ignatov, and Dmitry Tochilkin Abstract In this paper, we describe versions of triclustering algorithms adapted for efficient calculations in distributed environments …","url":["https://arxiv.org/pdf/2010.12933"]} {"year":"2020","title":"Triple E-Effective Ensembling of Embeddings and Language Models for NER of Historical German.","authors":["S Schweter, L März"],"snippet":"… We use the FastText embeddings trained on Wikipedia (FastText Wiki) and Common Crawl (FastText CC) in a ”classic” word embeddings manner, that means we do not use subwords … BPE MultiBPEmb Wikipedia < 7000 …","url":["http://ceur-ws.org/Vol-2696/paper_173.pdf"]} {"year":"2020","title":"TULIP: A Five-Star Table and List-from Machine-Readable to Machine-Understandable Systems","authors":["J Nandakwang, P Chongstitvatana - Linked Open Data-Applications, Trends and …, 2020"],"snippet":"Currently, Linked Data is increasing at a rapid rate as the growth of the Web.
Aside from new information that has been created exclusively as Semantic Web-ready, part of them comes from the transformation of existing structural …","url":["https://www.intechopen.com/online-first/tulip-a-five-star-table-and-list-from-machine-readable-to-machine-understandable-systems"]} {"year":"2020","title":"TweetBERT: A Pretrained Language Representation Model for Twitter Text Analysis","authors":["MMA Qudar, V Mago - arXiv preprint arXiv:2010.11091, 2020"],"snippet":"… It has been pre-trained on an extremely large, five different types of corpora: BookCorpus, English Wikipedia, CC-News (collected from CommonCrawl News) dataset, OpenWebText, a WebText corpus [23], and Stories, a dataset containing story-like content [23] …","url":["https://arxiv.org/pdf/2010.11091"]} {"year":"2020","title":"Two-Level Transformer and Auxiliary Coherence Modeling for Improved Text Segmentation","authors":["G Glavaš, S Somasundaran - arXiv preprint arXiv:2001.00891, 2020"],"snippet":"… In all our experiments we use 300dimensional monolingual FASTTEXT word embeddings pretrained on the Common Crawl corpora of respective languages: EN, CS, FI, and TR.9 We induce a cross-lingual word embedding …","url":["https://arxiv.org/pdf/2001.00891"]} {"year":"2020","title":"UHH-LT & LT2 at SemEval-2020 Task 12: Fine-Tuning of Pre-Trained Transformer Networks for Offensive Language Detection","authors":["G Wiedemann, SM Yimam, C Biemann - arXiv preprint arXiv:2004.11493, 2020"],"snippet":"… languages at once (Conneau et al., 2019). The model itself is equivalent to RoBERTa, but the training data consists of texts from more than 100 languages filtered from the CommonCrawl1 dataset. ALBERT – A Lite BERT for Self …","url":["https://arxiv.org/pdf/2004.11493"]} {"year":"2020","title":"Uncertainty-Aware Machine Support for Paper Reviewing on the Interspeech 2019 Submission Corpus","authors":["L Stappen, G Rizos, M Hasan, T Hain, BW Schuller"],"snippet":"… compressed to 300 dimensions. FastText and GloVe are based on the Common Crawl (1.9 M unique words, 840 B tokens) and Word2Vec on the GoogleNews (3 M unique words, 100 B total) dataset. We additionally experimented …","url":["https://indico2.conference4me.psnc.pl/event/35/contributions/3133/attachments/305/328/Tue-1-9-1.pdf"]} {"year":"2020","title":"Underlying Cause of Death Identification from Death Certificates using Reverse Coding to Text and a NLP Based Deep Learning Approach","authors":["V Della Mea, MH Popescu, K Roitero - Informatics in Medicine Unlocked, 2020"],"snippet":"… XLM-R (a variation of XLM), trained on one hundred languages using more than two terabytes of filtered CommonCrawl data, outperformed multilingual BERT (mBERT) on a variety of cross-lingual benchmarks [5]. 3.3.4. 
XLNet …","url":["https://www.sciencedirect.com/science/article/pii/S2352914820306067"]} {"year":"2020","title":"Understanding phishers' strategies of mimicking uniform resource locators to leverage phishing attacks: A machine learning approach","authors":["JS Tharani, NAG Arachchilage - arXiv preprint arXiv:2007.00489, 2020"],"snippet":"… webpage data downloaded from PhishTank2, OpenPhish3 and Legitimate ones are downloaded from Alexa4 and Common Crawl5 According … (4) 4 https://www.alexa.com/topsites/category/Computers/Internet/OntheW eb/W …","url":["https://arxiv.org/pdf/2007.00489"]} {"year":"2020","title":"Understanding Word Embeddings and Language Models","authors":["JM Gomez-Perez, R Denaux, A Garcia-Silva - A Practical Guide to Hybrid Natural …, 2020"],"snippet":"… 1) pre-trained contextualized word embeddings (ELMo), (2) pre-trained context-independent word embeddings learnt from Common Crawl (fastText), Twitter … Another version of this classifier using in addition fastText embeddings …","url":["https://link.springer.com/chapter/10.1007/978-3-030-44830-1_3"]} {"year":"2020","title":"UniBO@ KIPoS: Fine-tuning the Italian “BERTology” for PoS-tagging Spoken Data","authors":["F Tamburini"],"snippet":"… project. Also for GilBERTo it is available only the uncased model. • UmBERTo4: the more recent model de- veloped explicitly for Italian, as far as we know, is UmBERTo ('Musixmatch/umbertocommoncrawl-cased-v1' – umC). As …","url":["http://ceur-ws.org/Vol-2765/paper94.pdf"]} {"year":"2020","title":"UninaStudents@ SardiStance: Stance Detection in Italian Tweets-Task A","authors":["M Moraca, G Sabella, S Morra - Proceedings of the 7th Evaluation Campaign of …, 2020"],"snippet":"… As Master students, we approached these NLP topics for the first time. Therefore, we are aware 5https://huggingface.co/Musixmatch/umbertocommoncrawl-cased-v1 that our results are not at the state of the art in the field. However …","url":["http://ceur-ws.org/Vol-2765/paper146.pdf"]} {"year":"2020","title":"Unit Test Case Generation with Transformers","authors":["M Tufano, D Drain, A Svyatkovskiy, SK Deng… - arXiv preprint arXiv …, 2020"],"snippet":"… It has been pre-trained on the Common Crawl dataset [32] constituting nearly a trillion words, an expanded version of the WebText [33] dataset, two internet-based books corpora (Books1 and Books2), and English-language Wikipedia …","url":["https://arxiv.org/pdf/2009.05617"]} {"year":"2020","title":"Uniting Plain Language, Cognitive Fluency, and Believability","authors":["SI Johnson - 2020"],"snippet":"… considering at least four linguistic features (Romanyshyn, 2018). Similar to Randall (2019), Romanyshyn (2018) considers the frequency of a word using Common Crawl, a large corpus of web content. Taking this approach one step further, she lemmatizes the 1 …","url":["http://search.proquest.com/openview/9b0f9b3644e2372cabfc4aedb4849573/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2020","title":"UNITOR@ Sardistance2020: Combining Transformer-based Architectures and Transfer Learning for Robust Stance Detection","authors":["S Giorgioni, M Politi, S Salman, D Croce, R Basili - … of the 7th Evaluation Campaign of …, 2020"],"snippet":"… with 3. 2https://huggingface.co/Musixmatch/ umberto-commoncrawlcased-v1 3We discarded the few available messages with mixed po- larity, to simplify the final classification task. Irony Detection. 
We speculate …","url":["http://ceur-ws.org/Vol-2765/paper99.pdf"]} {"year":"2020","title":"Unsupervised Cross-Lingual Part-of-Speech Tagging for Truly Low-Resource Scenarios","authors":["R Eskander, S Muresan, M Collins - Proceedings of the 2020 Conference on …, 2020"],"snippet":"… essential when the domain of the training data is different from the one of the pre-trained em- beddings, which is the case in our learning setup, where we use the Bible data for training, while the XLM-R model is trained on text …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.391.pdf"]} {"year":"2020","title":"Unsupervised Cross-lingual Representation Learning for Speech Recognition","authors":["A Conneau, A Baevski, R Collobert, A Mohamed… - arXiv preprint arXiv …, 2020"],"snippet":"… For comparison with [29] only, we train 4-gram n-gram language models on CommonCrawl data [25, 50] for Assamese (140MiB of text data), Swahili (2GiB), Tamil (4.8GiB) and Lao (763MiB); for this experiment only we report word error rate (WER). 3.2 Training details …","url":["https://arxiv.org/pdf/2006.13979"]} {"year":"2020","title":"Unsupervised Domain Clusters in Pretrained Language Models","authors":["R Aharoni, Y Goldberg - arXiv preprint arXiv:2004.02105, 2020"],"snippet":"… exact requirements from such data with respect to all the aforementioned aspects. On top of that, domain labels are usually unavailable – eg in large-scale web-crawled data like Common Crawl1 which was recently used to …","url":["https://arxiv.org/pdf/2004.02105"]} {"year":"2020","title":"Unsupervised Evaluation of Human Translation Quality","authors":["Y Zhou, D Bollegala - 2019"],"snippet":"… publicly available monolingual word embeddings. Specifically, we first use the monolingual word embeddings, which are trained on Wikipedia and Common Crawl using fastText (Grave et al., 2018). 
Because our dataset contains …","url":["https://pdfs.semanticscholar.org/2735/60e715ee0dfaae2dee75fbb7484f811816d2.pdf"]} {"year":"2020","title":"Unsupervised Label Refinement Improves Dataless Text Classification","authors":["Z Chu, K Stratos, K Gimpel - arXiv preprint arXiv:2012.04194, 2020"],"snippet":"… We use the 300 dimensional GloVe vectors7 trained on Common Crawl.8 We experiment with two distance functions when using GloVe: cosine and L2 … 7http://nlp.stanford.edu/ data/glove.840B.300d.zip 8https://commoncrawl.org/ ROBERTA Dual Encoder …","url":["https://arxiv.org/pdf/2012.04194"]} {"year":"2020","title":"Unsupervised Question Decomposition for Question Answering","authors":["E Perez, P Lewis, W Yih, K Cho, D Kiela"],"snippet":"… Specifically, by leveraging >10M questions from Common Crawl, we learn to map from the distribution of multi-hop questions to the distribution of single-hop subquestions … We retrieve candidates from a corpus of 10M simple …","url":["https://rcqa-ws.github.io/papers/paper9.pdf"]} {"year":"2020","title":"UPB at GermEval-2020 Task 3: Assessing Summaries for German Texts using BERTScore and Sentence-BERT","authors":["A Paraschiv"],"snippet":"… bert-base-german-europeana-uc 2 Uncased Europeana newspapers bert-base-germanuc2 Uncased Wikipedia, Subtitles, News, Commoncrawl literary-german-bert3 Uncased German Fiction Literature bert-adapted-german-press4 Uncased Newspapers …","url":["http://ceur-ws.org/Vol-2624/germeval-task3-paper2.pdf"]} {"year":"2020","title":"Upgrading the Newsroom: An Automated Image Selection System for News Articles","authors":["F Liu, R Lebret, D Orel, P Sordet, K Aberer - arXiv preprint arXiv:2004.11449, 2020"],"snippet":"Page 1. 1 Upgrading the Newsroom: An Automated Image Selection System for News Articles FANGYU LIU∗, Language Technology Lab (LTL), University of Cambridge, United Kingdom RÉMI LEBRET, Distributed Information …","url":["https://arxiv.org/pdf/2004.11449"]} {"year":"2020","title":"UR NLP@ HaSpeeDe 2 at EVALITA 2020: Towards Robust Hate Speech Detection with Contextual Embeddings","authors":["J Hoffmann, U Kruschwitz"],"snippet":"… XLM-R is based on XLM and RoBERTa. It is trained on data covering 100 languages in a very large (2TB) CommonCrawl. Transformer document embeddings are obtained from (the large version of) XLM-R. In addition Page 3 …","url":["http://ceur-ws.org/Vol-2765/paper105.pdf"]} {"year":"2020","title":"Urban Dictionary Embeddings for Slang NLP Applications","authors":["S Wilson, W Magdy, B McGillivray, K Garimella… - Proceedings of The 12th …, 2020"],"snippet":"… with the goal of producing generally applicable word embeddings, many popular pre-trained word embeddings have been fit to large and diverse corpora of text from the web such as the Common Crawl.3 In … 2 http://smash …","url":["https://www.aclweb.org/anthology/2020.lrec-1.586.pdf"]} {"year":"2020","title":"URL-based Phishing Attack Detection by Convolutional Neural Networks","authors":["J Nowak, M Korytkowski, P Najgebauer, M Wozniak…"],"snippet":"… The database downloaded during the article writing contained 10,604 records. 
To obtain legitimate websites, the second part of the training dataset was downloaded from the Common Crawl Foundation (http://commoncrawl.org/) …","url":["http://ajiips.com.au/papers/V15.2/v15n2_64-71.pdf"]} {"year":"2020","title":"Using Natural Language Preprocessing Architecture (NLPA) for Big Data Text Sources","authors":["M Novo-Lourés, R Pavón, R Laza, D Ruano-Ordas… - Scientific Programming, 2020"],"snippet":"","url":["https://www.hindawi.com/journals/sp/2020/2390941/"]} {"year":"2020","title":"Using Natural Language Processing to Identify Similar Patent Documents","authors":["J Navrozidis, H Jansson - LU-CS-EX, 2020"],"snippet":"Page 1. MASTER'S THESIS 2020 Using Natural Language Processing to Identify Similar Patent Documents Hannes Jansson, Jakob Navrozidis ISSN 1650-2884 LU-CS-EX: 2020-05 DEPARTMENT OF COMPUTER …","url":["https://lup.lub.lu.se/student-papers/record/9008699/file/9026407.pdf"]} {"year":"2020","title":"Using Probabilistic Soft Logic to Improve Information Extraction in the Legal Domain","authors":["B Kirsch, S Giesselbach, T Schmude, M Völkening…"],"snippet":"… spaCy Classifier: This architecture is based on a CNN with mean pooling and a final feed-forward layer. The network is fed with pretrained word embeddings trained on the German Wikipedia and the German common crawl (Ortiz Suárez et al., 2019).9 …","url":["http://ceur-ws.org/Vol-2738/LWDA2020_paper_29.pdf"]} {"year":"2020","title":"Using Publisher Partisanship for Partisan News Detection","authors":["CL Yeh"],"snippet":"Page 1. Using Publisher Partisanship for Partisan News Detection A Comparison of Performance between Annotation Levels Chia-Lun Yeh Page 2. Using Publisher Partisanship for Partisan News Detection A …","url":["https://pdfs.semanticscholar.org/604f/233a21249d44085e41e7415ed9741fc69d5e.pdf"]} {"year":"2020","title":"Using Sentences as Semantic Representations in Large Scale Zero-Shot Learning","authors":["YL Cacheux, HL Borgne, M Crucianu - arXiv preprint arXiv:2010.02959, 2020"],"snippet":"… For the same reason, we used FastText and Glove models pre-trained on Common Crawl We used a 300-dimension version for all three … Fasttext: https://fasttext.cc/docs/en/englishvectors.html (version trained on Common Crawl with 600B tokens, no subword information) …","url":["https://arxiv.org/pdf/2010.02959"]} {"year":"2020","title":"Using Word Embeddings to Learn a Better Food Ontology. Front","authors":["J Youn, T Naravane, I Tagkopoulos - Artif.
Intell, 2020"],"snippet":"… Wikinews Wikipedia 2017 + UMBC webbase + statmt.org 0.313 2.98 Crawl Common Crawl 0.317 3.00 Word2vec (Mikolov …","url":["https://pdfs.semanticscholar.org/1c47/eb747f27eab42bc8e9e9ded83dd784eadf4c.pdf"]} {"year":"2020","title":"ValNorm: A New Word Embedding Intrinsic Evaluation Method Reveals Valence Biases are Consistent Across Languages and Over Decades","authors":["A Toney, A Caliskan - arXiv preprint arXiv:2006.03950, 2020"],"snippet":"… We choose six widely used pre-trained word embedding sets, listed in Table 2, to compare ValNorm's performance on different algorithms (GloVe, fastText, word2vec) and training corpora (Common Crawl, Wikipedia, OpenSubtitles …","url":["https://arxiv.org/pdf/2006.03950"]} {"year":"2020","title":"Vandalism Detection in Crowdsourced Knowledge Bases","authors":["S Heindorf - 2019"],"snippet":"… Manual OpenStreetMap, Uniprot, WordNet, MusicBrainz, IMDb Wikipedia, WikiHow, YouTube Wikia/FANDOM, StackExchange, Quora, Yahoo Answers DBpedia, YAGO, NELL dblp, BabelNet Internet Archive, Common Crawl NASA …","url":["https://pdfs.semanticscholar.org/e70f/b288ceb09fc244554a274f31cd1217663027.pdf"]} {"year":"2020","title":"Variational Transformers for Diverse Response Generation","authors":["Z Lin, GI Winata, P Xu, Z Liu, P Fung - arXiv preprint arXiv:2003.12738, 2020"],"snippet":"… embeddings. The first is EMBFT (Liu et al., 2016) that calculates the average of word embeddings in a sentence using FastText (Mikolov et al., 2018) which is trained with Common Crawl and Wikipedia data. We use FastText …","url":["https://arxiv.org/pdf/2003.12738"]} {"year":"2020","title":"VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation","authors":["F Luo, W Wang, J Liu, Y Liu, B Bi, S Huang, F Huang… - arXiv preprint arXiv …, 2020"],"snippet":"… We adopt the same 250K vocabulary that is also used by XLM-R (Conneau et al., 2019) and mBART (Liu et al., 2020b). Pre-Training Datasets For monolingual training datasets, we reconstruct Common-Crawl Corpus used in XLM-R (Conneau et al., 2019) …","url":["https://arxiv.org/pdf/2010.16046"]} {"year":"2020","title":"Video Question Answering on Screencast Tutorials","authors":["W Zhao, S Kim, N Xu, H Jin"],"snippet":"… visual cues, and graph embeddings. All the models have the word embeddings initialized with the 300-dimensional pretrained fastText [Bojanowski et al., 2017] vectors on Common Crawl dataset. The convolutional layer in …","url":["https://www.ijcai.org/Proceedings/2020/0148.pdf"]} {"year":"2020","title":"Visual and Textual Deep Feature Fusion for Document Image Classification","authors":["S Bakkali, Z Ming, M Coustaty, M Rusinol - Proceedings of the IEEE/CVF Conference …, 2020"],"snippet":"… FastText algorithm we used was pretrained on 2 million word vectors trained on Common Crawl (600B tokens), and uses 1,999,996 word vectors. Bert: Bert [11] is a contextualized bidirectional word embedding based on the transformer architecture …","url":["http://openaccess.thecvf.com/content_CVPRW_2020/papers/w34/Bakkali_Visual_and_Textual_Deep_Feature_Fusion_for_Document_Image_Classification_CVPRW_2020_paper.pdf"]} {"year":"2020","title":"Visual Relations Augmented Cross-modal Retrieval","authors":["Y Guo, J Chen, H Zhang, YG Jiang - … of the 2020 International Conference on …, 2020"],"snippet":"… vector with the corresponding visual feature.
The label embedding vector is obtained with a learnable embedding layer initialized by GloVe [25] that pre-trained on the Common-Crawl dataset. Given a set of object categories …","url":["https://dl.acm.org/doi/pdf/10.1145/3372278.3390709"]} {"year":"2020","title":"Visualizing and Interpreting RNN Models in URL-based Phishing Detection","authors":["T Feng, C Yue - Proceedings of the 25th ACM Symposium on Access …, 2020"],"snippet":"… The legitimate URLs came from the Common Crawl (www.commoncrawl.org) open web searching database, while the phishing URLs came from the popular PhishTank (www.phishtank.com) phishing website repository. In …","url":["https://dl.acm.org/doi/pdf/10.1145/3381991.3395602"]} {"year":"2020","title":"Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers","authors":["TZ Xiao, AN Gomez, Y Gal - arXiv preprint arXiv:2006.08344, 2020"],"snippet":"… The following datasets were used in our experiments: • WMT EN ↔ DE: The training set for translation tasks between English (EN) and German (DE) composed of news-commentary-v13 with 284k sentences pairs, wmt13 …","url":["https://arxiv.org/pdf/2006.08344"]} {"year":"2020","title":"Weakly-Supervised Multi-Level Attentional Reconstruction Network for Grounding Textual Queries in Videos","authors":["Y Song, J Wang, L Ma, Z Yu, J Yu - arXiv preprint arXiv:2003.07048, 2020"],"snippet":"… For each second we uniformly sample 16 frames as input to C3D, and obtain a 4096-dimentional visual feature from fc6 layer. Each word from the query is represented by GloVe [22] word embedding vector pre-trained on Common Crawl …","url":["https://arxiv.org/pdf/2003.07048"]} {"year":"2020","title":"Web Crawl Processing on Big Data Scale","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… We got this domain ranks file and column names from the common crawl blog post (https://commoncrawl.org/2020/02/host-and-domain-level-webgraphs-novdecjan-2019-2020/); they publish new domain ranks about four …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_7"]} {"year":"2020","title":"Web Table Extraction, Retrieval, and Augmentation: A Survey","authors":["S Zhang, K Balog - ACM Transactions on Intelligent Systems and …, 2020"],"snippet":"… Table corpora Type #tables Source WDC 2012 Web Table Corpus Web tables 147M Web crawl (Common Crawl) WDC 2015 Web Table Corpus Web tables 233M Web crawl (Common Crawl) Dresden Web Tables Corpus …","url":["https://dl.acm.org/doi/abs/10.1145/3372117"]} {"year":"2020","title":"Webis at TREC 2019: Decision Track","authors":["A Bondarenko, M Fröbe, V Kasturia, M Völske, B Stein…"],"snippet":"… In Proceedings of SIGIR 2017. 1419–1420. [2] Janek Bevendorff, Benno Stein, Matthias Hagen, and Martin Potthast. 2018. Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl. In Proceedings of ECIR 2018.
820–824 …","url":["https://trec.nist.gov/pubs/trec28/papers/Webis.D.pdf"]} {"year":"2020","title":"Webis at TREC 2020: Health Misinformation Track","authors":["A Bondarenko, M Fröbe, S Günther, M Hagen… - 2020","J Bevendorff, A Bondarenko, M Fröbe, S Günther…"],"snippet":"… During retrieval, we used ChatNoir's existing weighting scheme for the two Common Crawl snapshots, which combines BM25 scores of multiple fields … we relax the precondition of documents' 1We have indexed a 2015 and …","url":["https://webis.de/downloads/publications/papers/stein_2020zb.pdf","https://webis.de/downloads/publications/slides/stein_2020zb.pdf"]} {"year":"2020","title":"Webly Supervised Semantic Embeddings for Large Scale Zero-Shot Learning","authors":["YL Cacheux, A Popescu, HL Borgne - arXiv preprint arXiv:2008.02880, 2020"],"snippet":"… prototypes. These embeddings are extracted from generic large scale text collections such as Wikipedia [21,34] or Common Crawl [6,33] … use. For GloVe on ImageNet, the model pretrained on Common Crawl has the best performance …","url":["https://arxiv.org/pdf/2008.02880"]} {"year":"2020","title":"WeChat Neural Machine Translation Systems for WMT20","authors":["F Meng, J Yan, Y Liu, Y Gao, X Zeng, Q Zeng, P Li… - arXiv preprint arXiv …, 2020"],"snippet":"… Commentary, Common Crawl and Gigaword corpus. The English monolingual data includes News crawl, News discussions, Europarl v10, News Commentary, Common Crawl, Wiki dumps and the Gigaword corpus. After …","url":["https://arxiv.org/pdf/2010.00247"]} {"year":"2020","title":"WEFE: The Word Embeddings Fairness Evaluation Framework","authors":["P Badilla, F Bravo-Marquez, J Pérez"],"snippet":"… The following are the pre-trained em- bedding models that we consider: 1) conceptnet, 2) fasttextwikipedia, 3) glove-twitter, 4) glove-wikipedia, 5) lexveccommoncrawl, 6) word2vec-googlenews, and 7) word2vecgender-hard …","url":["https://felipebravom.com/publications/ijcai2020.pdf"]} {"year":"2020","title":"WEmbSim: A Simple yet Effective Metric for Image Captioning","authors":["N Sharif, L White, M Bennamoun, W Liu, SAA Shah - arXiv preprint arXiv:2012.13137, 2020"],"snippet":"… Page 5. TABLE I. Name Source Dims Corpus Corpus Size Vocabulary Size GloVE 840B [12] 300 Common Crawl 8.4 · 10^11 2 · 10^6 Word2vec [6] 300 Google News (100B) 1.0 · 10^11 3 · 10^6 FastText [13] 300 Wikipedia 4.0 · 10^9 3 · 10^6 …","url":["https://arxiv.org/pdf/2012.13137"]} {"year":"2020","title":"What determines the order of adjectives in English? Comparing efficiency-based theories using dependency treebanks","authors":["R Futrell, W Dyer, G Scontras - Proceedings of the 58th Annual Meeting of the …, 2020"],"snippet":"… sklearn.cluster.KMeans applied to a pretrained set of 1.9 million 300-dimension GloVe vectors2 generated from the Common Crawl corpus … Table 1a shows the accuracies of our predictors in predicting held-out …","url":["https://www.aclweb.org/anthology/2020.acl-main.181.pdf"]} {"year":"2020","title":"What Sparks Joy: The AffectVec Emotion Database","authors":["S Raji, G de Melo - Proceedings of The Web Conference 2020, 2020"],"snippet":"… We consider the cosine similarity of word– emotion pairs in word2vec trained on the Google News corpus [18], GloVe [26] trained on Twitter (200-dim.) and CommonCrawl (840B, 300-dim.), as well as the counterfitted vectors by Mrksic et al. [24]. Results …","url":["https://dl.acm.org/doi/pdf/10.1145/3366423.3380068"]} {"year":"2020","title":"What the [MASK]?
Making Sense of Language-Specific BERT Models","authors":["D Nozza, F Bianchi, D Hovy - arXiv preprint arXiv:2003.02912, 2020"],"snippet":"… OSCAR (Open Super-large Crawled Almanach coRpus) (Or- tiz Suárez et al., 2019) is a huge multilingual corpus obtained by filtering the Common Crawl corpus, which is a parallel multilingual corpus comprised of crawled documents from the internet …","url":["https://arxiv.org/pdf/2003.02912"]} {"year":"2020","title":"When and Why is Unsupervised Neural Machine Translation Useless?","authors":["Y Kim, M Graça, H Ney - arXiv preprint arXiv:2004.10581, 2020","YKM Graça, H Ney - 22nd Annual Conference of the European Association …"],"snippet":"… However, for low-resource language pairs, it is difficult to match the data domain of both sides on a large scale. For example, our monolingual data for Kazakh is mostly from Wikipedia and Common Crawl, while the English data is solely from News Crawl …","url":["https://arxiv.org/pdf/2004.10581","https://www.aclweb.org/anthology/2020.eamt-1.pdf#page=55"]} {"year":"2020","title":"When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models","authors":["B Muller, A Anastasopoulos, B Sagot, D Seddah - arXiv preprint arXiv:2010.12858, 2020"],"snippet":"… al., 2019). OSCAR is a corpus extracted from a Common Crawl Web snapshot.3 It provides a significant 2Also see the discussion in Section §3.2 on the script distributions in mBERT. 3http://commoncrawl.org/ Language (iso …","url":["https://arxiv.org/pdf/2010.12858"]} {"year":"2020","title":"When do Word Embeddings Accurately Reflect Surveys on our Beliefs About People?","authors":["K Joseph, JH Morgan - arXiv preprint arXiv:2004.12043, 2020"],"snippet":"Page 1. When do Word Embeddings Accurately Reflect Surveys on our Beliefs about People? Kenneth Joseph Computer Science and Engineering University at Buffalo Buffalo, NY, 14226 kjoseph@buffalo.edu Jonathan H. Morgan …","url":["https://arxiv.org/pdf/2004.12043"]} {"year":"2020","title":"When Does Unsupervised Machine Translation Work?","authors":["K Marchisio, K Duh, P Koehn - arXiv preprint arXiv:2004.05516, 2020"],"snippet":"… “News crawl” (News) and “Common Crawl” (CC) settings determine whether the system can flexibly handle diverse datasets. Specifics of the datasets used are described in subsequent subsections … UN = United Nations …","url":["https://arxiv.org/pdf/2004.05516"]} {"year":"2020","title":"Which* BERT? A Survey Organizing Contextualized Encoders","authors":["P Xia, S Wu, B Van Durme - arXiv preprint arXiv:2010.00854, 2020"],"snippet":"… Raffel et al. (2019) curate a 745GB subset of Common Crawl (CC),10 which starkly contrasts with the 13GB used in BERT … 9https://sites.google.com/ view/ sustainlp2020/shared-task 10https://commoncrawl.org/ scrapes publicly …","url":["https://arxiv.org/pdf/2010.00854"]} {"year":"2020","title":"Who is asking? humans and machines experience","authors":["M Klein, L Balakireva, H Shankar"],"snippet":"… licenses/by/4.0/). Similarly, the motivation behind the recent study by Thompson and Jian [14] based on two Common Crawl samples of the web was to quantify the use of HTTP DOIs versus URLs of landing pages. 
They found …","url":["https://osf.io/pgxc3/download"]} {"year":"2020","title":"Why are events important and how to compute them in geospatial research?","authors":["M Yuan"],"snippet":"… GPT-3 is a gigantic neural network with 175 billion input parameters and 96 layers of transformer decoders, each of which has 1.8 billion parameters, and is pre-trained with 45TB (499 billion tokens) compressed data from five …","url":["https://www.josis.org/index.php/josis/article/viewFile/723/300"]} {"year":"2020","title":"Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity","authors":["T Isbister, M Sahlgren - arXiv preprint arXiv:2009.03116, 2020"],"snippet":"… et al., 2018). We use the CBOW model that has been trained on Common Crawl and Wikipedia.6 As with Word2Vec, the vectors for sentences are obtained by averaging the embedding vector for each word. BERT: Deep Transformer …","url":["https://arxiv.org/pdf/2009.03116"]} {"year":"2020","title":"Why Overfitting Isn't Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries","authors":["M Zhang, Y Fujinuma, MJ Paul, J Boyd-Graber"],"snippet":"… We align English embeddings with six target languages: German (DE), Spanish (ES), French (FR), Italian (IT), Japanese (JA), and Chinese (ZH). We use 300-dimensional fastText vectors trained on Wikipedia and Common Crawl (Grave et al., 2018) …","url":["http://users.umiacs.umd.edu/~mozhi/pdf/retrofit.pdf"]} {"year":"2020","title":"Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types","authors":["D Rozado - PLOS ONE, 2020"],"snippet":"… This work systematically analyzed 3 popular word embeddings methods: Word2vec (Skip-gram) [4], Glove [9] and FastText [10], externally pretrained on a wide array of corpora such as Google News, Wikipedia, Twitter or Common Crawl …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0231189"]} {"year":"2020","title":"WikiAsp: A Dataset for Multi-domain Aspect-based Summarization","authors":["H Hayashi, P Budania, P Wang, C Ackerson… - arXiv preprint arXiv …, 2020"],"snippet":"… sections of Wikipedia from referenced web pages. Following the WikiSum data generation script,3 we first crawled cited references covered by CommonCrawl for each Wikipedia article. We then recover all the sections4 of …","url":["https://arxiv.org/pdf/2011.07832"]} {"year":"2020","title":"Will it Unblend?","authors":["Y Pinter, CL Jacobs, J Eisenstein - arXiv preprint arXiv:2009.09123, 2020"],"snippet":"Page 1. Will it Unblend? Yuval Pinter School of Interactive Computing Georgia Institute of Technology Atlanta, GA, USA uvp@gatech.edu Cassandra L. 
Jacobs Department of Psychology University of Wisconsin Madison, WI, USA cjacobs2@wisc.edu …","url":["https://arxiv.org/pdf/2009.09123"]} {"year":"2020","title":"Word associations and the distance properties of context-aware word embeddings","authors":["MA Rodriguez, P Merlo - Proceedings of the 24th Conference on Computational …, 2020"],"snippet":"… However, in this work, we used the pre-trained FASTTEXT embeddings provided by the official site of FASTTEXT, that we expressly do not modify.3 The embeddings are trained on 600-billion tokens from …","url":["https://www.aclweb.org/anthology/2020.conll-1.30.pdf"]} {"year":"2020","title":"Word Embedding Evaluation for Sinhala","authors":["D Lakmal, S Ranathunga, S Peramuna, I Herath - Proceedings of The 12th Language …, 2020"],"snippet":"… Common Crawl can be considered as a precious starting point for building a cleaned large corpus for … Common Crawl monthly dataset only contains 0.007% of content in Sinhala4, however, this amount is still … 4https …","url":["https://www.aclweb.org/anthology/2020.lrec-1.231.pdf"]} {"year":"2020","title":"WORD EMBEDDINGS IN ROMANIAN FOR THE RETAIL BANKING DOMAIN","authors":["I RAICU, N BOITOUT, R BOLOGA, MG STURZA"],"snippet":"… In addition, Facebook released a year later another version of FastText pre-trained word embeddings, trained on Common Crawl and Wikipedia [4]. Another pre-trained word embeddings in Romanian can be found at …","url":["https://www.researchgate.net/profile/Irina_Raicu2/publication/341553193_WORD_EMBEDDINGS_IN_ROMANIAN_FOR_THE_RETAIL_BANKING_DOMAIN/links/5ec6d768a6fdcc90d68c8596/WORD-EMBEDDINGS-IN-ROMANIAN-FOR-THE-RETAIL-BANKING-DOMAIN.pdf"]} {"year":"2020","title":"Word Embeddings Inherently Recover the Conceptual Organization of the Human Mind","authors":["V Swift - arXiv preprint arXiv:2002.10284, 2020"],"snippet":"… Sub-word information was incorporated on the basis of n-grams (length = 5), with a window size of 5 and 10 negatives, and a step size of .05. The English model was trained on a Common Crawl corpus comprised of English text from 2.96 billion webpages …","url":["https://arxiv.org/pdf/2002.10284"]} {"year":"2020","title":"Word meaning in minds and machines","authors":["BM Lake, GL Murphy - arXiv preprint arXiv:2008.01766, 2020"],"snippet":"… is illustrated in Figure 1A. CBOW has been trained on tremendous corpora; for instance, in this article, we analyze a large-scale CBOW model trained on the Common Crawl corpus of 630 billion words. CBOW learns a word …","url":["https://arxiv.org/pdf/2008.01766"]} {"year":"2020","title":"Word Representations for Named Entity Recognition","authors":["R Agerri"],"snippet":"… Transformers: Bertin (Gigaword+Wikipedia), XLM-RoBERTa (Common Crawl) and mBERT (Wikipedia + books) • Project annotations (various strategies) Page 54 … BETO (various sources) – XLM-RoBERTa (Common Crawl 2.5TB) – mBERT (Wikipedia + books) …","url":["https://cit-ai.net/archive/CitAI_Seminar_11Nov20_Agerri.pdf"]} {"year":"2020","title":"Word Representations for Neural Network Based Myanmar Text-to-Speech System","authors":["AM Hlaing, WP Pa"],"snippet":"… In [21], the size of word vectors is small, and it contains about 55K entries for Myanmar language and can be downloaded from the link [23]. 
In [22], the word vectors are trained on Common Crawl and Wikipedia using fastText …","url":["http://www.inass.org/2020/2020043023.pdf"]} {"year":"2020","title":"Word Rotator's Distance","authors":["S Yokoi, R Takahashi, R Akama, J Suzuki, K Inui - Proceedings of the 2020 …, 2020"],"snippet":"Page 1. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 2944–2960, November 16–20, 2020. c 2020 Association for Computational Linguistics 2944 Word Rotator's Distance …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.236.pdf"]} {"year":"2020","title":"Word Rotator's Distance: Decomposing Vectors Gives Better Representations","authors":["S Yokoi, R Takahashi, R Akama, J Suzuki, K Inui - arXiv preprint arXiv:2004.15003, 2020"],"snippet":"Page 1. Word Rotator's Distance: Decomposing Vectors Gives Better Representations Sho Yokoi1 Ryo Takahashi1,2 Reina Akama1,2 Jun Suzuki1,2 Kentaro Inui1,2 1 Tohoku University 2 RIKEN {yokoi, ryo.t, reina.a, jun.suzuki, inui}@ecei.tohoku.ac.jp Abstract …","url":["https://arxiv.org/pdf/2004.15003"]} {"year":"2020","title":"Word Sense Disambiguation for 158 Languages using Word Embeddings Only","authors":["V Logacheva, D Teslenko, A Shelmanov, S Remus… - arXiv preprint arXiv …, 2020"],"snippet":"… The contributions of our work are the following: 1The full list languages is available at fasttext.cc and includes English and 157 other languages for which embeddings were trained on a combination of Wikipedia and CommonCrawl texts …","url":["https://arxiv.org/pdf/2003.06651"]} {"year":"2020","title":"Word2Sent: A new learning sentiment‐embedding model with low dimension for sentence level sentiment classification","authors":["M Kasri, M Birjali, A Beni‐Hssane - Concurrency and Computation: Practice and …"],"snippet":"Abstract Word embedding models become an increasingly important method that embeds words into a high dimensional space. These models have been widely utilized to extract semantic and syntactic feat...","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.6149"]} {"year":"2020","title":"Words Matter: Gender, Jobs and Applicant Behavior in India","authors":["S Chaturvedi, K Mahajan, Z Siddique - 2020"],"snippet":"… Pennington et al., 2014). The 300 dimensional pretrained word vectors have been obtained by training the algorithm on web data from common crawl, and comprise 2.2 million unique words. Cosine similarity between any …","url":["https://www.dse.univr.it/documenti/Seminario/documenti/documenti102498.pdf"]} {"year":"2020","title":"Words, constructions and corpora: Network representations of constructional semantics for Mandarin space particles","authors":["ACH Chen - Corpus Linguistics and Linguistic Theory, 2020"],"snippet":"","url":["https://www.degruyter.com/view/journals/cllt/ahead-of-print/article-10.1515-cllt-2020-0012/article-10.1515-cllt-2020-0012.xml"]} {"year":"2020","title":"Wrestling with Complexity in Computational Social Science: Theory, Estimation and Representation","authors":["S de Marchi - The SAGE Handbook of Research Methods in Political …, 2020"]} {"year":"2020","title":"WT5?! Training Text-to-Text Models to Explain their Predictions","authors":["S Narang, C Raffel, K Lee, A Roberts, N Fiedel… - arXiv preprint arXiv …, 2020"],"snippet":"… 1997; Ruder, 2017).
In Raffel et al. (2019), this framework was used to pre-train Transformer (Vaswani et al., 2017) models on a large collection of unlabeled text drawn from the Common Crawl web scrape. We use the result …","url":["https://arxiv.org/pdf/2004.14546"]} {"year":"2020","title":"X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models","authors":["Z Jiang, A Anastasopoulos, J Araki, H Ding, G Neubig - Proceedings of the 2020 …, 2020"],"snippet":"Page 1. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 5943–5959, November 16–20, 2020. c 2020 Association for Computational Linguistics 5943 X-FACTR: Multilingual …","url":["https://www.aclweb.org/anthology/2020.emnlp-main.479.pdf"]} {"year":"2020","title":"XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation","authors":["Y Liang, N Duan, Y Gong, N Wu, F Guo, W Qi, M Gong… - arXiv preprint arXiv …, 2020"],"snippet":"… 2.1.2 Large Corpus (LC) Multilingual Corpus Following Wenzek et al. (2019), we construct a clean version of Common Crawl (CC)3 as the multilingual corpus … 2https://github.com/ attardi/wikiextractor. 3https://commoncrawl.org/. available in English …","url":["https://arxiv.org/pdf/2004.01401"]} {"year":"2020","title":"Xiaomi's Submissions for IWSLT 2020 Open Domain Translation Task","authors":["Y Sun, M Guo, X Li, J Cui, B Wang - Proceedings of the 17th International Conference …, 2020"],"snippet":"… And for unconstrained submission, we choose the largescale amounts of Commoncrawl Chinese10 and Japanese11 dataset as additional monolingual data for training LMs and executing BT to enhance our NMT systems …","url":["https://www.aclweb.org/anthology/2020.iwslt-1.18.pdf"]} {"year":"2020","title":"YNU OXZ@ HaSpeeDe 2 and AMI: XLM-RoBERTa with Ordered Neurons LSTM for classification task at EVALITA 2020","authors":["X Ou, H Li - Proceedings of Sixth Evaluation Campaign of Natural …, 2020"],"snippet":"… scale multi-language pre-training model. It can be un- derstood as a combination of XLM and RoBER- Ta. It is trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. Because the training of the model …","url":["http://ceur-ws.org/Vol-2765/paper93.pdf"]} {"year":"2020","title":"Zero Shot Domain Generalization","authors":["U Maniyar, AA Deshmukh, U Dogan… - arXiv preprint arXiv …, 2020"],"snippet":"… Thus using semantic space helps us in the visual classification task. We use word embeddings of classes - in particular, simple GloVe embeddings [28] trained on Common Crawl corpus - as the semantic space in this work …","url":["https://arxiv.org/pdf/2008.07443"]} {"year":"2020","title":"Zero-shot semantic segmentation using relation network","authors":["Y Zhang - 2020"],"snippet":"Page 1. University of Jyväskylä Faculty of Information Technology Yindong Zhang Zero-shot Semantic Segmentation using Relation Network Master's thesis of information technology May 28, 2020 Page 2. i Author: Yindong …","url":["https://jyx.jyu.fi/bitstream/handle/123456789/69720/1/URN%3ANBN%3Afi%3Ajyu-202006043976.pdf"]}